{ "course": "Natural_Language_Processing", "course_id": "CO3085", "schema_version": "material.v1", "slides": [ { "page_index": 0, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_001.png", "page_index": 0, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:11:28+07:00" }, "raw_text": "Chapter 1: Introduction to Natural Language Processing The study of natural language 1.1 Natural language is studied in fields: Linguistics, Psycholinguistics, Philosophy Computational Linguistics. - Linguistics studies structure of language: - Why certain combinations of words form sentences but others do not. Why a sentence can have some meanings but not others Exp: I saw the wood by the saw (saw: cua, cai cua, nhin (tho'i qk)" }, { "page_index": 1, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_002.png", "page_index": 1, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:11:32+07:00" }, "raw_text": "Chapter 1: Introduction to Natural Language Processing 1.1 The study of natural language Psycholinguistics study: - how people identify the appropriate structure of a sentence and when they decide on the appropriate meaning for words . Philosophy considers how words can mean any thing at all and how they identify objects in the world. Philosophiers consider also what it means to have beliefs, goals, and intentions and how these cognitive capabilities relate to language." 
}, { "page_index": 2, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_003.png", "page_index": 2, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:11:35+07:00" }, "raw_text": "Chapter 1: Introduction to Natural Language Processing Computational Linguistics is to develop and create computational models of language by algorithms and data structure from Computer Science and utilizing advantages of above fields 1.2 Applications of Natural Language Understanding - Course Objective is to provide learning to create language understanding models, that are able to implement in specific domains. The course focuses on respect of natural language processing" }, { "page_index": 3, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_004.png", "page_index": 3, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:11:40+07:00" }, "raw_text": "Chapter 1: Introduction to Natural Language Processing 1.2 Applications of Natural Language Understanding - Two main applications: + Text based application; Dialogue based application Text based application - Retrieving appropriate documents on specific topics from text database: - Extracting information from messages, articles on certain topics; - Translating documents from one language to another; - Summarizing texts for specific purposes; - Question Answering systems." 
}, { "page_index": 4, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_005.png", "page_index": 4, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:11:45+07:00" }, "raw_text": "1.2.1 Text based application l.2.2.1 Information Extraction (IE) Lecture Information extraction (IE) systems Slides from the . Find and understand limited relevant parts of texts; Stanford gather information from many pieces of text; Coursera course produce a structured representation of relevant by Dan information: Jurafsky an . relations (in the database sense) d Christoph er Manning . a knowledge base Goals: 1. Organize information so that it is useful to people, 2. ] Put information in a semantically precise form that allows further inferences to be made by computer algorithms" }, { "page_index": 5, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_006.png", "page_index": 5, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:11:49+07:00" }, "raw_text": "1.2.1.1 Information Extraction (IE) Low-level information extraction Is now available in applications like Apple or Google mail and web indexing The Los Altos Robotics Board of Directors is having a potluck dinner Friday January 6,2012 and the upcoming Botball 1 Create New iCal Event. and FRC(MVHS agle Strike Robotics Show This Date in iCal.. seasons. You are of these dinners three years back and it was a Copy Often seems to be based on regular expressions and name lists." 
}, { "page_index": 6, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_007.png", "page_index": 6, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:11:53+07:00" }, "raw_text": "1.2.1.1 Information Extraction (IE) Low-level information extraction Google bhp billiton headquarters Search About 123,000 results0.23 seconds) Everything Best guess for BHP Billiton Ltd. Headquarters is Melbourne,London Mentioned on at least 9 websites including wikipedia.org,bhpbilliton.com and Images bhpbilliton.com-Feedback Maps BHP Billiton-Wikipedia,the free encyclopedia Videos en.wikipedia.org/wiki/BHP_Billiton Merger of BHP &Billiton 2001(creation of a DLC).Headquarters,Melbourne, News Australia(BHP Billiton Limited and BHP Billiton Group LondonUnited Kingdom... History-Corporate affairs-Operations-Accidents Shopping" }, { "page_index": 7, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_008.png", "page_index": 7, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:11:57+07:00" }, "raw_text": "1.2.1.1 Information Extraction (IE) Named Entity Recognition (NER) A very important sub-task: find and classify names in text, for example: Person The decision by the independent MP Andrew Date Wilkie to withdraw his support for the minority Location Labor government sounded dramatic but it should Organi- not further threaten its stability. 
When, after the zation 2010 election, Wilkie, Rob Oakeshott, Tony Windsor and the Greens agreed to support Labor. they gave just two guarantees: confidence and supply." }, { "page_index": 8, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_009.png", "page_index": 8, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:12:00+07:00" }, "raw_text": "1.2.1.1 Information Extraction (IE) The Named Entity Recognition Task Task: Predict entities in a text Foreign ORG Ministry ORG Standard spokesman 0 evaluation Shen PER is per entity Guofang PER not per token told 0 Reuters ORG" }, { "page_index": 9, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_010.png", "page_index": 9, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:12:03+07:00" }, "raw_text": "1.2.1 Text based application 1.2.1.2 Information retrieval Goal: find documents re/evantto an information need from a large document set Info. 
need Query IR Retrieval system Document Answer list collection" }, { "page_index": 10, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_011.png", "page_index": 10, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:12:13+07:00" }, "raw_text": " http://www.google.com.au - Google - Microsoft.. Fox Eile Edit View Tools Help Favorites >> Back Search Example Googl le http://www.google.com.au - Google Search: TREC conference X Australia File Edit View Favorites Tools Help Search > Back x e Favorites Web Imaces Groups News more Web Images Groups News more_ TREC conference Google TREC conference Google Search I'm Feeling Lucky Search: O the web O pages from Australia Search: the web O pages from Australia Web Results 1-10 of about 78,900 for TREC conference.(0.08 seconds) Tip: Save time by hitting the return key instead of clicking on \"search\" Text REtrieval Conference TRECHome Page An annual information retrieval conference and competition,the purpose of which is to support and further research within the information retrieval ... trec.nist.gov/-4k -13 Mar 2005 - Cached -Similar pages Text REtrieval Conference (TREC Overview Google ...The Text REtrieval Conference TREC).co-sponsored by the National Institute of Standards and Technology (NIST) and US Department of Defense, trec.nist.gov/overview.html -5k -Cached-Similarpages [More results from trec.nist.gov] Overview of the first TREC conference ...The first Text REtrieval Conference (TREC-1 was held in early November Web 1992 and was attended by about 100 people working in the 25 participating groups ... portal. 
acm.org/citation.cfm?id=160692-Similarpages Investigative Report On the TREC Trail ...A quick look at the TREC conference questions and the standards of success imposed...The 2004 TREC conference will be held this November at NIST 11" }, { "page_index": 11, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_012.png", "page_index": 11, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:12:15+07:00" }, "raw_text": "1.2.1.1 Information Extraction (IE) Maximum entropy Markov models (MEMMs) or Conditional Markov models 12" }, { "page_index": 12, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_013.png", "page_index": 12, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:12:21+07:00" }, "raw_text": "Suppose we have a sequence of observations O1, ...On that we seek to tag with the labels S1,..S. that maximize the conditional probability P(S1,...Sn/ O1,..On). In a MEMM, this probability is factored into Markov transition probabilities,where the probability of transitioning to a particular label depends only on the observation at that position and the previous position's labell n P(Si,...,SnO1,...,On)=IP(StSt-1,Ot). 
t=1 Each of these transition probabilities comes from the same general distribution each possible label value of the previous label s',the probability of a certain label s is modeled in the same way as a maximum entropy classifier: 1 P(ss',o)=Ps(so)= exp Z(o,s') a Here, the fa (o,s) are real-valued or categorical feature-functions, and Z(o, s') is a normalization term ensuring that the distribution sums to one. This form for the distribution corresponds to the maximum entropy probability distribution satisfying the constraint that the empirical expectation for the feature is equal to the expectation given the model: Ee[fa(o,s)] = Ep[fa(o,s)] for all a. The parameters a can be estimated using generalized iterative scaling. Furthermore, a variant of the Baum-Welch algorithm,which is used for training HMMs, can be used to estimate parameters when training data has incomplete or missing labels" }, { "page_index": 13, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_014.png", "page_index": 13, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:12:27+07:00" }, "raw_text": "Sequence problems Many problems in NLP have data which is a sequence of characters, words, phrases, lines, or sentences ... We can think of our task as one of labeling each item B B B B B B VBG NN IN DT NN IN NN mn fB Xf T iX ut g J$ gJ 1 Chasing opportunity in an age of upheaval Word segmentation POS tagging Q Text 0 0 ORG A PERS 0 ORG Q segmen- future of News A Murdoch discusses Corp. 
A tation A Named entity recognition Q A" }, { "page_index": 14, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_015.png", "page_index": 14, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:12:32+07:00" }, "raw_text": "MEMM inference in systems For a Conditional Markov Model (CMM) a.k.a. a Maximum Entropy Markov Model (MEMM), the classifier makes a single decision at a time conditioned on evidence from observations and previous decisions A larger space of sequences is usually explored via search Decision Point Wo 22.6 W+1 % -3 -2 -1 0 W.1 fell DT NNP VBD ??? ??? T.1 VBD The Dow fell 22.6 % T.1-T.2 NNP.VBD hasDigit? true Ratnaparkhi 1996; Toutanova et al. 2003, etc." }, { "page_index": 15, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_016.png", "page_index": 15, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:12:37+07:00" }, "raw_text": "Example: POS Tagging Scoring individual labeling decisions is no more complex than standard classification decisions We have some assumed labels to use for prior positions We use features of those and the observed data (which can include current previous, and next words) to predict the current label Features Decision Point Wo 22.6 W+1 -3 % -2 -1 0 +1 W.1 fell DT NNP VBD ??? ??? T.1 VBD The Dow fell 22.6 % T.1-T.2 NNP.VBD hasDigit? true Ratnaparkhi 1996; Toutanova et al. 
2003, etc.)" }, { "page_index": 16, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_017.png", "page_index": 16, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:12:42+07:00" }, "raw_text": "Example: POS Tagging POs tagging Features can include: Current, previous, next words in isolation or together. Previous one, two, three tags Word-internal features: word types, suffixes, dashes, etc Features Decision Point Wo 22.6 W+1 -3 % -2 -1 0 +1 W.1 fell DT NNP VBD ??? ??? T.1 VBD The Dow fell 22.6 % T.1-T.2 NNP.VBD hasDigit? true Ratnaparkhi 1996; Toutanova et al. 2003, etc.)" }, { "page_index": 17, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_018.png", "page_index": 17, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:12:46+07:00" }, "raw_text": "Inference in Systems Sequence Level Sequence Model Inference Sequence Data Local Level Local Label Label Feature Data Optimization Extraction Features Smoothing Features Maximum Entropy Conjugate Quadratic Models Gradient Penalties" }, { "page_index": 18, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_019.png", "page_index": 18, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:12:50+07:00" }, 
"raw_text": "Greedy Inference (1) Sequence Model Best Sequence Greedy inference: We just start at the left, and use our classifier at each position to assign a label The classifier can depend on previous labeling decisions as well as observed data Advantages : . Fast, no extra memory requirements . Very easy to implement . With rich features including observations to the right, it may perform quite well Disadvantage: Greedy. We make commit errors we cannot recover from" }, { "page_index": 19, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_020.png", "page_index": 19, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:12:55+07:00" }, "raw_text": "Greedy Inference (2) Tim kiém töt nhät - däu tién: Tim kiém tót nhät - dau tién (best-first search) lä tim kiém theo bé röng duoc huóng dan böi ham dänh giä. Nhung nó khäc vói tim kiém theo bé röng chö,trong tim kiém theo be röng ta lan luot phat trién tat cá cäc dinh múc hién tai dé sinh ra cäc dinh múc tiép theo, cön trong tim kiém töt nhát - dau tién ta chon dinh dé phät trién la dinh tót nhät duoc xäc dinh bi ham dänh gia (tüc la dinh có giä tri ham danh gia la nhö nhat), dinh nay có thé ö múrc hien tai hoac ö cac múc tren. Xét khöng gian trang thäi duoc biéu dién bi dö thi trong hinh 2.2, trong dó trang thái ban dau lä A trang thai ket thüc la B. Giä tri cüa ham danh gia la cäc só ahi canh mi dinh. 
20 20 15 5 6 12 8 10 10 8 12 3 H Hinh 2.2Dö th khöng gian trang thäi Hinh?3CAvimkimItnhAt-dAu1iAn" }, { "page_index": 20, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_021.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_021.png", "page_index": 20, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:12:59+07:00" }, "raw_text": "Greedy Inference (3) Tim kiém leo döi Tim kiém leo di (hill-climbing search) lá tim kiém theo dö sau duo'c huóng dan böi ham dánh gia Song khác vói tim kiém theo dö sau, khi ta phát trién möt dinh u thi buóc tiép theo, ta chon trong s cäc dinh con cua u, dinh có nhiéu húa hen nhat dé phat trién, dinh nay duoc xäc dinh böi ham dänh gia. Ta lai xét dö thi khöng gian trang thái trong hinh 2.2. Quá trinh tim kiém leo di dugc tién hánh nhu sau. au tién phät trién dinh A sinh ra cäc dinh con C, D, E. Trong cäc dinh nay chon D dé phät trién, va nó sinh ra cäc dinh con B, G. Qua trinh tim kiém két thüc. Cay tim kiém leo di duoc cho trong hinh 2.4 20 15 10 8 8 Hinh 2.4Cäy tim kiém leo döi" }, { "page_index": 21, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_022.png", "page_index": 21, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:13:02+07:00" }, "raw_text": "Beam Inference (1) Sequence Model Best Sequence Beam inference: . At each position keep the top k complete sequences. Extend each sequence in each local way. . The extensions compete for the k slots at the next position. Advantages : . 
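The greedy, left-to-right inference described above can be sketched in a few lines (the scorer below is a made-up toy, standing in for a trained local classifier such as the MEMM):

```python
def greedy_decode(words, labels, score):
    """Left-to-right greedy inference: at each position commit to the best label
    given the observation and the previous decision; errors are irreversible."""
    prev, out = "<s>", []
    for w in words:
        best = max(labels, key=lambda s: score(prev, w, s))
        out.append(best)
        prev = best
    return out

def toy_score(prev, word, label):
    """Hypothetical local scores for tagging 'the dow fell'."""
    table = {("<s>", "the", "DT"): 2.0, ("DT", "dow", "NNP"): 2.0, ("NNP", "fell", "VBD"): 2.0}
    return table.get((prev, word, label), 0.0)

tags = greedy_decode(["the", "dow", "fell"], ["DT", "NNP", "VBD", "NN"], toy_score)
```

Each position is a single classification, which is why this is fast and memory-free; but once a label is chosen there is no way to revise it in the light of later evidence.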
Fast; beam sizes of 3-5 are almost as good as exact inference in many cases; - Easy to implement (no dynamic programming required). Disadvantage: inexact; the globally best sequence can fall off the beam." }, { "page_index": 22, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_023.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_023.png", "page_index": 22, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:13:08+07:00" }, "raw_text": "Beam Inference (2) Beam search: beam search is like breadth-first search in that it expands the nodes of one level before expanding the nodes of the next level. However, while breadth-first search expands all the nodes of a level, beam search expands only the k best nodes (as determined by the evaluation function). Hence in beam search, at most k nodes are expanded at any level, whereas in breadth-first search the number of nodes to expand at depth d is b^d (b being the branching factor). Consider again the state-space graph in Figure 2.2, and choose k = 2. The beam search tree is shown in Figure 2.5; the underlined nodes are the ones chosen for expansion at each level. [Figure 2.5: the beam search tree.]" }, { "page_index": 23, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_024.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_024.png", "page_index": 23, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:13:14+07:00" }, "raw_text": "Beam Inference (3) Beam search with k = 2: at each step, beam search takes the k highest-scoring results. [Figure: a step-by-step simulation of the beam search algorithm with k = 2 over hypotheses A-E, with log-scores such as -6.5, -13.4, -21.3, -54.6.]" }, { "page_index": 24, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_025.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_025.png", "page_index": 24, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:13:19+07:00" }, "raw_text": "Another sequence model: Conditional Random Fields (CRFs) A whole-sequence conditional model rather than a chaining of local models: P(c | d, lambda) = exp(sum_i lambda_i f_i(c, d)) / sum_{c'} exp(sum_i lambda_i f_i(c', d)). The space of c' is now the space of sequences, but if the features f_i remain local, the conditional sequence likelihood can be calculated exactly using dynamic programming. Training is slower, but CRFs avoid causal-competition biases. These (or a variant using a max-margin criterion) are seen as the state of the art these days, but in practice usually work much the same as MEMMs."
}, { "page_index": 25, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_026.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_026.png", "page_index": 25, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:13:22+07:00" }, "raw_text": "1.2.1 Text based application Translation 1947 When ! look at the article in Russian, I say to myself: it is really written in English, but has been coded in some CMSC 723 7 strange symbols. I will now LING 723 /INST procced to decode 725 Slides & figure credits: Philipp Koehn mt-class.org WarrenWeaver" }, { "page_index": 26, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_027.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_027.png", "page_index": 26, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:13:24+07:00" }, "raw_text": "Translation Rule based systems 1950s - 1960s Approach Build dictionaries Write transformation rules Refine, refine, refine Meteo system for weather forecasts s (1976) Systran (1968), ..." 
}, { "page_index": 27, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_028.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_028.png", "page_index": 27, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:13:28+07:00" }, "raw_text": "Translation A statistical Machine translation 1988 A statistical Approach to Machine translation 1990s: increased research Mid 2000s: phrase-based MT (Moses, Google Translate) Around 2010: commercial viability Since mid 2010s: neural network models" }, { "page_index": 28, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_029.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_029.png", "page_index": 28, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:13:31+07:00" }, "raw_text": "Translation MT History: Hype vs. 
Reality [Chart: hype vs. reality curves over 1950-2010, from the Georgetown experiment through expert systems / 5th-generation AI and statistical MT to neural MT.]" }, { "page_index": 29, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_030.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_030.png", "page_index": 29, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:13:34+07:00" }, "raw_text": "Translation The Vauquois Triangle [Diagram: analysis of the source language rises through lexical, syntactic, and semantic transfer toward an interlingua at the apex, with generation descending to the target language.]" }, { "page_index": 30, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_031.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_031.png", "page_index": 30, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:13:37+07:00" }, "raw_text": "Translation Learning from Data Sicherheit -> security: 14,516; Sicherheit -> safety: 10,015; Sicherheit -> certainty: 334. Counts in a parallel corpus (aka bitext); here, the European Parliament corpus." }, { "page_index": 31, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_032.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_032.png", "page_index": 31, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:13:40+07:00" }, "raw_text": "Translation Word Alignment [Figure: a word-alignment matrix between the English sentence michael assumes that he will stay in the house and its German translation.]" }, { "page_index": 32, "chapter_num": 1, "source_file":
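From bitext counts like those shown above for German Sicherheit, a lexical translation probability can be estimated by relative frequency, t(e | f) = count(f, e) / count(f). A minimal sketch (the function name is made up for illustration):

```python
def lexical_translation_probs(counts):
    """Relative-frequency estimate t(e | f) = count(f, e) / count(f)."""
    total = sum(counts.values())
    return {e: c / total for e, c in counts.items()}

# counts for Sicherheit from the European Parliament parallel-corpus figures above
t = lexical_translation_probs({"security": 14516, "safety": 10015, "certainty": 334})
```

With these counts, security receives roughly 0.58 of the probability mass; estimates like this are the building blocks that word-alignment and phrase-based models refine further.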
"/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_033.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_033.png", "page_index": 32, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:13:43+07:00" }, "raw_text": "Translation Phrase-based Models Input segmented in phrases Each phrase is translated in output language Phrases are reordered hat john spiel natuerlich spass am of course john has fun with the game" }, { "page_index": 33, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_034.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_034.png", "page_index": 33, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:13:47+07:00" }, "raw_text": " I am learning at the university Ti dang hoc trong/ ö tru'ng dai hoc learning am at the university ti dang hoc truo'ng dai hoc" }, { "page_index": 34, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_035.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_035.png", "page_index": 34, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:13:51+07:00" }, "raw_text": "Translation Neural MT CS> the house 1s b1g Input Word Embeddings Left-to-Right Recurrent NN Right-to-Left Recurrent NN Attention Input Context Hidden State OutputWord Predictions Error Given Output Words Output Word Embedding das Haus ist groB " }, { "page_index": 35, "chapter_num": 1, "source_file": 
"/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_036.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_036.png", "page_index": 35, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:13:53+07:00" }, "raw_text": "Translation Learning from Data What is most fluent? A problem for translation A problem of translation A problem in translation A language modeling problem!" }, { "page_index": 36, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_037.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_037.png", "page_index": 36, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:13:58+07:00" }, "raw_text": "Purdue Online Writinq Lab: Summarizing http://owl.enqlish.purdue.edu/ Summarizing, Paraphrasing, and Quoting: http://mciu.org/sp.jvweb/sumparquo.html English Language Center Study Zone: Why do it? http://web2.uvcs.uvic.ca/elc/studyzone/41 0/readinq/index.htm Comprehension: To reduce information to essential ideas in order to: Understand and learn important information Communication: To reduce information to essential ideas in order to: Expand the breadth or depth of your writing" }, { "page_index": 37, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_038.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_038.png", "page_index": 37, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:14:00+07:00" }, "raw_text": "The Process... 
Using the \"MIDAS Touch!\"" }, { "page_index": 38, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_039.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_039.png", "page_index": 38, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:14:03+07:00" }, "raw_text": "M Main idea: Identify main idea from TOPIC SENTENCE (if there is one) or use BASIC SIGNAL WORDS I D S Simplify, categorize, and label important information" }, { "page_index": 39, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_040.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_040.png", "page_index": 39, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:14:07+07:00" }, "raw_text": "Establishing a focus... The main idea is the most important information or concept in a text or statement. Sometimes the main idea is explicit: sometimes it is implied. Not all information is equal: some of it clearly is more important than the rest. Templeton,1997" }, { "page_index": 40, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_041.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_041.png", "page_index": 40, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:14:10+07:00" }, "raw_text": "Using basic signal words... WHO? WHAT? WHERE? (subject) (action) (location) WHEN? WHy? HOW? 
(time) (reason) (process)" }, { "page_index": 41, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_042.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_042.png", "page_index": 41, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:14:12+07:00" }, "raw_text": "(Diagram: Main Idea with three Supporting Details)" }, { "page_index": 42, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_043.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_043.png", "page_index": 42, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:14:14+07:00" }, "raw_text": "Topic Sentences... The TOPIC SENTENCE is usually the first sentence of the paragraph. It gives the reader an idea of what the paragraph is going to be about." }, { "page_index": 43, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_044.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_044.png", "page_index": 43, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:14:18+07:00" }, "raw_text": "Topic Sentences cont. However, the TOPIC SENTENCE may not always be so clearly stated, and it can come in the middle or end of a paragraph, not just its beginning. Regardless, all TOPIC SENTENCES are supported by sentences that give details to develop the MAIN IDEA." 
}, { "page_index": 44, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_045.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_045.png", "page_index": 44, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:14:21+07:00" }, "raw_text": "Example paragraphs... A tornado is a powerful, twisting It begins high in the air, windstorm. : among the winds of a giant storm cloud. People who have watched a tornado's howling winds reach down from the sky have said it's the most frightening thing the United States, these windstorms are called twisters or cyclones. 10" }, { "page_index": 45, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_046.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_046.png", "page_index": 45, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:14:24+07:00" }, "raw_text": "Main idea and supporting detaiks Tornado is powerful, twisting windstorm Also called Part of giant Frightening twister storm cloud or cyclone" }, { "page_index": 46, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_047.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_047.png", "page_index": 46, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:14:27+07:00" }, "raw_text": "Extractive summarization & Abstractive summarization Extractive summarization create the summary from phrases or sentences in the source document(s) 
Abstractive summarization: express the ideas in the source documents using (at least in part) different words" }, { "page_index": 47, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_048.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_048.png", "page_index": 47, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:14:31+07:00" }, "raw_text": "Summarization: Three Stages 1. content selection: choose sentences to extract from the document 2. information ordering: choose an order to place them in the summary 3. sentence realization: clean up the sentences (Pipeline diagram: Document -> Sentence Segmentation -> Sentence Extraction and Sentence Simplification (Content Selection) -> Information Ordering -> Sentence Realization -> Summary)" }, { "page_index": 48, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_049.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_049.png", "page_index": 48, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:14:34+07:00" }, "raw_text": "Chapter 1: Introduction to Natural Language Processing 1.2 Applications of Natural Language Understanding Dialogue based application - Question and Answering system; - Automated Customer Service over phone; - Tutoring system; - Spoken language control of a machine; - General cooperative problem-solving systems. Note: One difference between a text-based system and a dialogue-based system is how language is used. 
The language used is different: a system based on dialogue needs to participate actively in order to maintain a natural, smooth-flowing dialogue" }, { "page_index": 49, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_050.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_050.png", "page_index": 49, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:14:39+07:00" }, "raw_text": "Question and Answering One of the oldest NLP tasks (punched card systems in 1961) Simmons, Klein, McConlogue. 1964. Indexing and Dependency Logic for Answering English Questions. American Documentation 15:30, 196-204. Question: What do worms eat? Potential Answers: Worms eat grass. Horses with worms eat grass. Birds eat worms. Grass is eaten by worms. (Dependency trees shown for the question and each candidate answer)" }, { "page_index": 50, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_051.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_051.png", "page_index": 50, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:14:43+07:00" }, "raw_text": "Question and Answering Apple's Siri (screenshot) Do I need an umbrella tomorrow in San Francisco? Yes, San Francisco should get rain tomorrow. (Weekly forecast shown)" }, { "page_index": 51, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_052.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": 
"/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_052.png", "page_index": 51, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:14:47+07:00" }, "raw_text": "Question and Answering WolframAlpha computational. knowledge engine how many calories are in two slices of banana cream pie? 8 =ExamplesRandom Assuming any type of pie,banana cream l Use pie, banana cream,prepared from recipe or pie,banana cream,no-bake type, prepared from mix instead Input interpretation: amount 2 slices pie total calories type banana cream Average result: Show details 702 Cal (dietary Calories" }, { "page_index": 52, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_053.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_053.png", "page_index": 52, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:14:50+07:00" }, "raw_text": "Question and Answering Paradigms for QA IR-based approaches TREC; IBM Watson; Google Knowledge-based and Hybrid approaches IBM Watson; Apple Siri; Wolfram Alpha; True Knowledge Evi" }, { "page_index": 53, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_054.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_054.png", "page_index": 53, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:14:54+07:00" }, "raw_text": "Question and Answering IR-based Factoid QA Docum Docl Document Indexing Answer Passage Question Retrieval Processing Answer Document Passage Query Formulation Retrieval Relevant Retrieval passages Processing Question Docs Answer Type 
Detection" }, { "page_index": 54, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_055.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_055.png", "page_index": 54, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:14:58+07:00" }, "raw_text": "Question and Answering IR-based Question Answering (Google screenshot) Where is the Louvre Museum located? About 904,000 results (0.30 seconds). Best guess for Louvre Museum Location is Paris, France. Mentioned on at least 7 websites including wikipedia.org, answers.com and east-buc.k12.ia.us. Top result: Musée du Louvre - Wikipedia, the free encyclopedia, en.wikipedia.org/wiki/Musée du Louvre: Musée du Louvre is located in Paris. Location within Paris. Established, 1793. Location: Palais Royal, Musée du Louvre, 75001 Paris, France. Type, Art museum ... 
Louvre Palace - List of works in the Louvre - Category:Musée du Louvre" }, { "page_index": 55, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_056.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_056.png", "page_index": 55, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:15:01+07:00" }, "raw_text": "Question and Answering IR-based Factoid QA QUESTION PROCESSING Detect question type, answer type, focus, relations Formulate queries to send to a search engine PASSAGE RETRIEVAL Retrieve ranked documents Break into suitable passages and rerank ANSWER PROCESSING Extract candidate answers Rank candidates" }, { "page_index": 56, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_057.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_057.png", "page_index": 56, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:15:04+07:00" }, "raw_text": "Question and Answering Knowledge-based approaches (Siri) Build a semantic representation of the query Times, dates, locations, entities, numeric quantities Map from this semantics to query structured data or resources Geospatial databases Ontologies (Wikipedia infoboxes, DBpedia, WordNet, Yago) Restaurant review sources and reservation services Scientific databases" }, { "page_index": 57, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_058.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_058.png", "page_index": 57, "language": 
"en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:15:08+07:00" }, "raw_text": "Question and Answering Hybrid approaches (IBM Watson) Build a shallow semantic representation of the query Generate answer candidates using IR methods Augmented with ontologies and semi-structured data Score each candidate using richer knowledge sources Geospatial databases Temporal reasoning Taxonomical classification" }, { "page_index": 58, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_059.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_059.png", "page_index": 58, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:15:10+07:00" }, "raw_text": "1.2.2 DIALOG Slides from Jiwei Li Computer Science Department Stanford University" }, { "page_index": 59, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_060.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_060.png", "page_index": 59, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:15:13+07:00" }, "raw_text": "1.2.2 DIALOG Does Siri really understand language Colbert: Write the show Siri: What would you like to search for? Colbert: I don't want to search for anything! I want to write the show! Siri: Searching the Web for \"search for anything. I want to write the shuffle.\" [...] Colbert: For the love of God, the cameras are on, give me something? Siri: What kind of place are you looking for? Camera stores or churches?" 
}, { "page_index": 60, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_061.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_061.png", "page_index": 60, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:15:16+07:00" }, "raw_text": "1.2.2 DIALOG Reminiscent of Eliza (1966) User: You are like my father in some ways. ELIzA: What resemblance do you see User: You are not very aggressive but I think you don't want me to notice that. ELIzA: What makes you think I am not aggressive? User: You don't argue with me. ELIzA: Why do you think I don't argue with you? User: You are afraid of me. ELIzA: Does it please you to believe I am afraid of you? Slide From Bill MacCartney" }, { "page_index": 61, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_062.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_062.png", "page_index": 61, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:15:22+07:00" }, "raw_text": "1.2.2 DIALOG System EUGENE GOODSTMAN Eugene Goostman (veselov et al.2014 cuoene coostnmnan TCREATLHBLN THEWORLD ou know What do you want to b e whenyou grow M Scott: Which is bigger, a shoebox or Mount Everest? Eugene: I can't make a choice right now. I should think it out Iater. And I forgot to ask you where you are from.. Scott: How many legs does a camel have? Eugene: Something between 2 and 4. Maybe, three? :-))) By the way, I still don't know your specialty - or, possibly, I've missed it? Scott: How many legs does a millipede have? Eugene: Just two, but Chernobyl mutants may have them up to five. 
I know you are supposed to trick me." }, { "page_index": 62, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_063.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_063.png", "page_index": 62, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:15:24+07:00" }, "raw_text": "1.2.2 DIALOG Why is building a chatbot hard? 1. Computers need to understand what you ask 2. Computers need to generate coherent, meaningful sequences in response to what you ask, which requires domain knowledge, discourse knowledge, world knowledge" }, { "page_index": 63, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_064.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_064.png", "page_index": 63, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:15:29+07:00" }, "raw_text": "1.2.2 DIALOG Background Goal-Oriented Tasks: restaurant, online ticket booking (Levin et al., 1997; Walker 2000; Young et al., 2013). Chit-chat (Ritter et al., 2010; Sordoni et al., 2015; Vinyals and Le, 2015)" }, { "page_index": 64, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_065.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_065.png", "page_index": 64, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:15:31+07:00" }, "raw_text": "1.2.2 DIALOG Outline 1. Mutual Information for Response Generation. (Chitchat) 2. 
How to preserve Speaker Consistency (Chitchat) 3. Reinforcement learning for Response Generation (Chitchat) 4. Teaching a bot to ask questions (Goal-oriented)" }, { "page_index": 65, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_066.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_066.png", "page_index": 65, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:15:35+07:00" }, "raw_text": "1.2.2 DIALOG Seq2Seq Models for Response Generation Sutskever et al., 2014; Jean et al., 2014; Luong et al., 2015 Loss = -log P(target | source) Source: Input Messages Target: Responses (Encoder-decoder diagram: source: how are you ? eos -> Encoding -> Decoding -> target: I'm fine EOS)" }, { "page_index": 66, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_067.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_067.png", "page_index": 66, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:15:39+07:00" }, "raw_text": "1.2.2 DIALOG Mutual Information for Response Generation. Li et al., A Diversity-Promoting Objective Function for Neural Conversation Models (to appear, NAACL, 2016) \"I don't know\" problem (Sordoni et al., 2015; Serban et al., 2015) Exp: Do you love me? -> I don't know what you are talking about. 
About 30% of all generated responses" }, { "page_index": 67, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_068.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_068.png", "page_index": 67, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:15:42+07:00" }, "raw_text": "1.2.2 DIALOG Speaker Consistency Li et al., 2016. A Persona-Based Neural Conversation Model. Speaker Consistency: When were you born? In 1942. When was your mother born? In 1966" }, { "page_index": 68, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_069.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_069.png", "page_index": 68, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:15:46+07:00" }, "raw_text": "1.2.2 DIALOG Persona seq2seq model (Diagram: encoder input: where do you live EOS; speaker embedding for Bob; decoder output: in uk EOS; word embeddings and persona embeddings, with nearby words: uk, london, sydney; great, good, okay; live, stay; monday, tuesday)" }, { "page_index": 69, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_070.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_070.png", "page_index": 69, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:15:49+07:00" }, "raw_text": "1.2.2 Dialog Interaction Seq2Seq model (Diagram: encoder input: where do you live EOS; decoder output: in uk)" }, { "page_index": 70, "chapter_num": 1, "source_file": 
"/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_071.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_071.png", "page_index": 70, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:15:52+07:00" }, "raw_text": "1.2.2 DIALOG Simulation How old are you ? i'm 16,why are you asking ? I thought you were 12 . What made you think so ? I don't know what you are talking about . You don't know what you are saying. I don't know what you are talking about ." }, { "page_index": 71, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_072.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_072.png", "page_index": 71, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:15:57+07:00" }, "raw_text": "Chapter 1: Introduction to Natural Language Processing 1.3 Evaluating Natural Language Understanding Systems Evaluating: Black box and Glass box Black box evaluation: evaluating system in design demand. Evaluating system performance without looking inside to see how it works Glass box evaluation: determining various subcomponents of a system and testing each one. 
The evaluation covers the union of all subcomponents of the system" }, { "page_index": 72, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_073.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_073.png", "page_index": 72, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:16:01+07:00" }, "raw_text": "Chapter 1: Introduction to Natural Language Processing 1.4 The Different Levels of Language Analysis Language structure on different levels: Phonetics and phonology concern how words are related to the sounds that realize them. Morphology concerns how words are constructed from more basic meaning units called morphemes. Exp: friendly (adjective) = friend (noun) + ly. Syntax concerns how words can be put together to form correct sentences, and determines what structural role each word plays in the sentence and what phrases are subparts of what other phrases." }, { "page_index": 73, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_074.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_074.png", "page_index": 73, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:16:05+07:00" }, "raw_text": "Chapter 1: Introduction to Natural Language Processing 1.4 The Different Levels of Language Analysis Semantics concerns what words mean and how these meanings combine in sentences to form sentence meanings. This is the study of context-independent meaning - the meaning a sentence has regardless of the context in which it is used. Pragmatics concerns how sentences are used in different situations and how use affects the interpretation of the sentence. 
Discourse concerns how the immediately preceding sentences affect interpretation; this is important for interpreting pronouns and temporal aspect. World knowledge includes the general knowledge about the structure of the world that language users must have" }, { "page_index": 74, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_075.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_075.png", "page_index": 74, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:16:09+07:00" }, "raw_text": "Chapter 1: Introduction to Natural Language Processing 1.5 Representation and Understanding Understanding natural language requires the computer to build semantic representations of sentences and texts. Languages for semantic representation: mathematical tools and logics. They represent meaning exactly and simply: if a sentence has many meanings, it will have correspondingly many representations." }, { "page_index": 75, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_076.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_076.png", "page_index": 75, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:16:12+07:00" }, "raw_text": "Chapter 1: Introduction to Natural Language Processing 1.5 Representation and Understanding levels: Syntax: syntactic structure of sentence - The Logical Form: semantic representation of sentence - The Final Meaning Representation + Knowledge Representation (KR). The goal of this level is to map the syntactic structure and logical form of the sentence into KR. 
The language of this level is FOPC (First Order Predicate Calculus)" }, { "page_index": 76, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_077.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_1/slide_077.png", "page_index": 76, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:16:16+07:00" }, "raw_text": "Chapter 1: Introduction to Natural Language Processing (Flow diagram: Words (input) -> Parsing (lexicon and grammar) -> Syntactic Structure and Logical Form -> Contextual Interpretation (discourse context) -> Final Meaning -> Application Reasoning (application context) -> Utterance Planning -> Syntactic Structure and Logical Form of response -> Realization -> Words (response))" }, { "page_index": 77, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_2/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_2/slide_001.png", "page_index": 77, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:16:18+07:00" }, "raw_text": "BK Linguistic Background An Outline of English Syntax CSE Ho Chi Minh City University of Technology 2023.01" }, { "page_index": 78, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_2/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_2/slide_002.png", "page_index": 78, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:16:20+07:00" }, "raw_text": "Outline 1 Words 2 Simple noun phrases 3 Verb phrases and simple sentences 4 Noun phrases revisited 
5 Adjective Phrases 6 Adverbial Phrases BK TP.HCM" }, { "page_index": 79, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_2/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_2/slide_003.png", "page_index": 79, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:16:22+07:00" }, "raw_text": "Words Word is a basic unit of linguistic structure, which is formed from more basic units. There are two ways that new words are formed: Inflectional form, Derivational form. Inflectional form: root + suffix. Exp: write+s = writes. Derivational form: a new word is formed. Exp: friend+ly = friendly (adj); friend+liness = friendliness (noun). BK TP.HCM CSE Linguistic Background An Outline of English Syntax 2023.01 1/17" }, { "page_index": 80, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_2/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_2/slide_004.png", "page_index": 80, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:16:25+07:00" }, "raw_text": "Word Category Words are divided into different categories based on the word's role: the word's contribution to the meaning of the phrase in which the word occurs, or the syntactic structure in which the word may play a role. Example: green (adjective), book (noun) <-> the green (noun) is lighter than the other; book (modifier) worm (con mọt sách) - 4 main categories: noun, adjective, verb, 
adverb. Besides these there are pronoun, preposition, article, etc. Word phrase: head word + word components. The main word phrases are: noun phrase, verb phrase, adjective phrase, preposition phrase. BK TP.HCM CSE Linguistic Background An Outline of English Syntax 2023.01 2/17" }, { "page_index": 81, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_2/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_2/slide_005.png", "page_index": 81, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:16:28+07:00" }, "raw_text": "Simple noun phrases The simplest noun phrase has only one word: a pronoun (she, he). Noun phrases contain proper nouns and common nouns (count, mass). Words preceding the head are the specifier and qualifier. BK TP.HCM CSE Linguistic Background An Outline of English Syntax 2023.01 3/17" }, { "page_index": 82, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_2/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_2/slide_006.png", "page_index": 82, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:16:30+07:00" }, "raw_text": "Verb phrases and simple sentences Mood of sentence: assert, query, command.
A sentence has a noun phrase (NP), the subject, followed by a verb phrase (VP), the predicate. BK TP.HCM CSE Linguistic Background An Outline of English Syntax 2023.01 4/17" }, { "page_index": 83, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_2/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_2/slide_007.png", "page_index": 83, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:16:33+07:00" }, "raw_text": "Verb A verb occurs in one of five forms: base, simple present, simple past, present participle, past participle. Verbs can be divided into 3 classes: main, auxiliary, modal. A verb has 2 forms: active, passive. Some verb forms are constructed from a verb and an additional word called a particle, exp: put on, look up. Tense system: simple present, simple past, simple future, present perfect, present progressive. BK TP.HCM CSE Linguistic Background An Outline of English Syntax 2023.01 5/17" }, { "page_index": 84, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_2/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_2/slide_008.png", "page_index": 84, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:16:37+07:00" }, "raw_text": "Verb (cont.) Transitive and intransitive verbs. An intransitive verb may stand alone with no complement (run, laugh).
Transitive verbs have a complement form that requires a noun phrase to follow the verb. Transitive verbs allow another form of the verb group, called the passive form. Example: I was hit by them (active form: they hit me). Some verbs allow two noun phrases (NPs) to follow them in the sentence, such as give, find; the first NP is the direct object, the second is the indirect object. The indirect object may appear with a preposition. BK TP.HCM CSE Linguistic Background An Outline of English Syntax 2023.01 6/17" }, { "page_index": 85, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_2/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_2/slide_009.png", "page_index": 85, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:16:40+07:00" }, "raw_text": "Verb (cont.) Example: \"Jack gives the book to me\"; \"Jack found a key for me\". Some verb forms are constructed from a verb and an additional word called a particle. Particles generally overlap with the class of prepositions, such as: up, in, over, out. Example: put on, take out. The sharp distinction between particle and preposition arises when the object of the verb is a pronoun; a particle may precede or follow the object NP, while this is not allowed with a preposition.
BK TP.HCM CSE Linguistic Background An Outline of English Syntax 2023.01 7/17" }, { "page_index": 86, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_2/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_2/slide_010.png", "page_index": 86, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:16:43+07:00" }, "raw_text": "Clausal Complements Many verbs allow clauses as complements. Clauses share most of the same properties of sentences: they have a subject, indicate tense, and occur in passivized forms. One clause form consists of a sentence form preceded by the word that. A clause form may involve the infinitive form of the verb. The VP[inf] is a simple clause, a VP starting in the infinitive form. Another important class of clauses are sentences with a complementizer wh-word: who, which, what, where, why, whether. BK TP.HCM CSE Linguistic Background An Outline of English Syntax 2023.01 8/17" }, { "page_index": 87, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_2/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_2/slide_011.png", "page_index": 87, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:16:45+07:00" }, "raw_text": "Prepositional Phrase complements Many verbs require complements that involve specific prepositional phrases (PP).
Example: the verb give takes the complements NP and PP with preposition to: \"Jack gave the book (NP) to the library (PP)\". There are prepositional phrases that indicate location and ones that indicate a path of motion. BK TP.HCM CSE Linguistic Background An Outline of English Syntax 2023.01 9/17" }, { "page_index": 88, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_2/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_2/slide_012.png", "page_index": 88, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:16:48+07:00" }, "raw_text": "Noun phrases revisited Complex noun phrases contain sentences or verb phrases as sub-components. Many nouns may take complements. Many of these require the complement as a prepositional phrase. Example: the noun love has a complement form PP[of]: \"their love of France\". Many nouns take the infinitive VP form as complement.
Example: \"his desire to release the guinea pig\". BK TP.HCM CSE Linguistic Background An Outline of English Syntax 2023.01 10/17" }, { "page_index": 89, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_2/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_2/slide_013.png", "page_index": 89, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:16:52+07:00" }, "raw_text": "Nouns Nouns may take the S[inf] as complement. Example: \"my hope for John to open the case again\". Noun phrases can also be built out of clauses: a that clause (S[that]) can be used as the subject of a sentence: \"That John had the ring was surprising\". Forms of verb phrase VP[inf], S[inf] and the gerundive VP[ing] and S[ing] can also serve as noun phrases. Example: \"To own a car would be delightful\"; \"For us to complete a project on time would be unprecedented\". BK TP.HCM CSE Linguistic Background An Outline of English Syntax 2023.01 11/17" }, { "page_index": 90, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_2/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_2/slide_014.png", "page_index": 90, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:16:54+07:00" }, "raw_text": "Nouns Example: - \"Giving up the game was unfortunate\"; - \"John's giving up the game caused a riot\".
Relative clauses involve sentence forms used as modifiers in noun phrases. These relative clauses often begin with a relative pronoun such as who, which, what, and so on. Example: \"The man who gave Bill the money...\"; \"The rug that George gave to Mary...\"; \"The man whom George gave the money to...\". BK TP.HCM CSE Linguistic Background An Outline of English Syntax 2023.01 12/17" }, { "page_index": 91, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_2/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_2/slide_015.png", "page_index": 91, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:16:58+07:00" }, "raw_text": "Adjective Phrases More complex adjective phrases are also possible, as adjectives take many of the same complement forms that occur with verbs. This includes specific prepositional phrases such as PP[with], PP[at], and also S[that], VP[inf]. Example: \"Jack was pleased with the pizza\"; \"Jack was angry at the committee\"; \"Jack was angry that he was left behind\"; \"Jack seemed willing to lead the chorus\". BK TP.HCM CSE Linguistic Background An Outline of English Syntax 2023.01 13/17" }, { "page_index": 92, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_2/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_2/slide_016.png", "page_index": 92, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:17:00+07:00" }, "raw_text": "Adjective Phrases (cont.)
Adjective phrases may take a degree modifier preceding the head. Example: \"very angry\"; \"somewhat fond of Mary\"; \"far too heavy\"; \"much more desperate\". BK TP.HCM CSE Linguistic Background An Outline of English Syntax 2023.01 14/17" }, { "page_index": 93, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_2/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_2/slide_017.png", "page_index": 93, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:17:03+07:00" }, "raw_text": "Adverbial Phrases An adverb may occur in several positions in a sentence: in the sentence initial position, in the verb sequence, and in the sentence final position. Such adverbial phrases, however, usually cannot occur except in the sentence initial or sentence final position. BK TP.HCM CSE Linguistic Background An Outline of English Syntax 2023.01 15/17" }, { "page_index": 94, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_2/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_2/slide_018.png", "page_index": 94, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:17:06+07:00" }, "raw_text": "Adverbial Phrases (cont.) Adverbial phrases (ADVPs) are usefully classified by function rather than by syntactic form.
We can consider manner, temporal, duration, location, degree, and frequency adverbial phrases. Temporal adverbials: particles (for example: now); noun phrases (today, yesterday); preposition phrases (at noon, during the fight); clauses (when the clock struck noon, before the lunch started). Frequency adverbials: particles (often); noun phrases (every day); preposition phrases (at every party); clauses (every time that John comes for a visit). BK TP.HCM CSE Linguistic Background An Outline of English Syntax 2023.01 16/17" }, { "page_index": 95, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_2/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_2/slide_019.png", "page_index": 95, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:17:09+07:00" }, "raw_text": "Adverbial Phrases (cont.) Duration adverbials: most appear commonly as prepositional phrases (for three hours, about 20 feet) or clauses (until the moon turns blue). Manner adverbials: occur in a wide range of forms.
They include particles (slowly), noun phrases (this way), preposition phrases (in great haste), and clauses (by holding the embers at the end of a stick). BK TP.HCM CSE Linguistic Background An Outline of English Syntax 2023.01 17/17" }, { "page_index": 96, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_2/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_2/slide_020.png", "page_index": 96, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:17:10+07:00" }, "raw_text": "Thank you!" }, { "page_index": 97, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_001.png", "page_index": 97, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:17:11+07:00" }, "raw_text": "BK Grammar and Parsing CSE Ho Chi Minh City University of Technology 2023.01" }, { "page_index": 98, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_002.png", "page_index": 98, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:17:13+07:00" }, "raw_text": "Outline 1 Context-Free Grammars (CFG) Grammar and Sentences Structure What makes a Good Grammar Top-down Parser A bottom-Up Chart Parser Top-Down Chart Parsing Probabilistic Context-Free Grammars (PCFG) Dependency Parsing Dependency relations Dependency formalisms Transition-Based
Dependency parsing MaltParser BK 4 Relation Extraction with Stanford Dependencies TP.HCM" }, { "page_index": 99, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_003.png", "page_index": 99, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:17:17+07:00" }, "raw_text": "Grammar and Sentences Structure The most common method to study the structure of a sentence is to look at how the sentence is broken into its major subparts, and how those subparts are broken up in turn, as a tree. [Parse tree: S -> NP VP; NP -> NAME (John); VP -> V (ate) NP; NP -> ART (the) N (pizza).] Figure (3.1): Syntactic structure of the sentence \"John ate the pizza\". BK TP.HCM CSE Grammar and Parsing 2023.01 1/53" }, { "page_index": 100, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_004.png", "page_index": 100, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:17:20+07:00" }, "raw_text": "Grammar and Sentences Structure S consists of an initial noun phrase NP and a verb phrase VP. The initial NP is made of the NAME John.
The initial VP is composed of the verb ate and an NP, which consists of ART the and N pizza. The structure of a sentence may be represented in another way. BK TP.HCM CSE Grammar and Parsing 2023.01 2/53" }, { "page_index": 101, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_005.png", "page_index": 101, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:17:24+07:00" }, "raw_text": "Context Free grammar (CFG) G = (S, P, N, T). S: start symbol; P: production rules, which have the form A -> a; N: set of non-terminal symbols; T: set of terminal symbols (the lexical symbols, i.e. word categories).
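The G = (S, P, N, T) definition above can be written down directly as plain data. Below is a minimal sketch; the toy grammar (for "John ate the pizza") and the helper name `expand_leftmost` are illustrative assumptions of this sketch, not part of the course material:

```python
# A CFG G = (S, P, N, T) as plain Python data (illustrative toy grammar).
grammar = {
    "start": "S",
    "rules": {                      # P: production rules A -> alpha
        "S":  [["NP", "VP"]],
        "NP": [["ART", "N"], ["NAME"]],
        "VP": [["V", "NP"]],
    },
    "lexicon": {                    # T: terminal categories and their words
        "ART":  {"the"},
        "N":    {"pizza"},
        "NAME": {"John"},
        "V":    {"ate"},
    },
}

nonterminals = set(grammar["rules"])   # N
terminals = set(grammar["lexicon"])    # T

def expand_leftmost(symbols, rules):
    """Rewrite the leftmost non-terminal once, returning all successor forms."""
    for i, sym in enumerate(symbols):
        if sym in rules:
            return [symbols[:i] + rhs + symbols[i + 1:] for rhs in rules[sym]]
    return []  # every symbol is a terminal category: the derivation is finished

# One leftmost derivation step from the start symbol:
print(expand_leftmost(["S"], grammar["rules"]))   # [['NP', 'VP']]
```

Repeating `expand_leftmost` until no non-terminal remains is exactly the derivation process that both sentence generation and sentence parsing are built on.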
There are two important processes based on derivations: sentence generation and sentence parsing. There are two methods of searching for the structure of a sentence: top-down and bottom-up. BK TP.HCM CSE Grammar and Parsing 2023.01 3/53" }, { "page_index": 102, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_006.png", "page_index": 102, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:17:28+07:00" }, "raw_text": "What makes a Good Grammar To construct a grammar for a language, we are interested in generality, the range of sentences the grammar analyzes correctly; selectivity, the range of non-sentences it identifies as problematic; and understandability, the simplicity of the grammar itself. Beginning with a small grammar, such as one that describes only a few types of sentences, one structural analysis of a sentence may appear as understandable as another. When we then attempt to extend the grammar to cover a wide range of sentences, however, we often find that one analysis is easily extendable, while the other requires complex modification.
Extraction with Stanford Depe BK TP.HCM CSE Grammar and Parsing 2023.01 4/53" }, { "page_index": 103, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_007.png", "page_index": 103, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:17:32+07:00" }, "raw_text": "What makes a Good Grammar (cont.) The analysis that retains its simplicity and generality as it is extended is the more desirable one. Pay close attention to the way the sentence is divided into its subparts, called constituents. By using our intuition we can apply specific tests, as follows. To decide that a group of words forms a particular constituent, try to construct a new sentence that involves that group of words in conjunction with another group of words classified as the same type of constituent.
Example: I ate a hamburger and a hot dog (NP-NP); I will eat the hamburger and throw away the hot dog (VP-VP). BK TP.HCM CSE Grammar and Parsing 2023.01 5/53" }, { "page_index": 104, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_008.png", "page_index": 104, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:17:37+07:00" }, "raw_text": "Top-down Parser A parsing algorithm is a procedure that searches through various ways of applying the grammar rules to find a combination that generates a tree that could be the structure of the input sentence. A top-down parser starts with the S symbol and attempts to rewrite it into a sequence of terminal symbols that matches the classes (categories) of the words in the input sentence.
Transition-Based Dependency parsing MaltParser Relation Extraction with Stanford Depe BK TP.HCM CSE Grammar and Parsing 2023.01 6/53" }, { "page_index": 105, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_009.png", "page_index": 105, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:17:41+07:00" }, "raw_text": "A Simple Top-Down Parsing Algorithm The algorithm starts with the initial state ((S) 1) and no backup. 1. Select the current state: take the first state off the possibilities list and call it C. If the possibilities list is empty, then the algorithm fails (no successful parse is possible). 2. If C consists of an empty symbol list and the word position is at the end of the sentence, then the algorithm succeeds. 3. Otherwise, generate the next possible states. 3.1 If the first symbol on the symbol list of C is a terminal (lexical symbol), and the next word in the sentence can be in that class, then create a new state by removing the first symbol from the symbol list, updating the word position, and add it to the possibilities list. 3.2 If the first symbol of C is a non-terminal, generate a new state for each rule in the grammar that can rewrite that non-terminal symbol and add them all to the possibilities list.
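The steps above can be sketched as a small runnable program. The grammar and lexicon below follow the "the dogs cried" / "the old man cried" examples used on the neighbouring slides; the function name `top_down_parse` and the choice of a Python list as the possibilities list are assumptions of this sketch, not the course's code:

```python
# Runnable sketch of the simple top-down parser.  A state is a
# (symbol_list, word_position) pair; new states are pushed onto the front
# of the possibilities list, giving the depth-first behaviour of the slides.

GRAMMAR = {                      # rules 1-5 from the example slide
    "S":  [["NP", "VP"]],
    "NP": [["ART", "N"], ["ART", "ADJ", "N"]],
    "VP": [["V"], ["V", "NP"]],
}
LEXICON = {"the": "ART", "old": "ADJ", "dogs": "N", "man": "N", "cried": "V"}

def top_down_parse(words):
    possibilities = [(("S",), 0)]             # initial state ((S) 1)
    while possibilities:
        symbols, pos = possibilities.pop(0)   # 1. select the current state
        if not symbols:
            if pos == len(words):             # 2. empty list at end of input
                return True
            continue
        first, rest = symbols[0], symbols[1:]
        if first in GRAMMAR:                  # 3.2 rewrite a non-terminal
            new = [(tuple(rhs) + rest, pos) for rhs in GRAMMAR[first]]
            possibilities = new + possibilities
        elif pos < len(words) and LEXICON.get(words[pos]) == first:
            possibilities.insert(0, (rest, pos + 1))   # 3.1 match a word
    return False                              # possibilities list empty: fail

print(top_down_parse("the dogs cried".split()))      # True
print(top_down_parse("the old man cried".split()))   # True
```

Because failed states simply produce no successors, the backup states saved in step 3.2 are exactly what the loop falls back on, mirroring the backtracking in the trace tables that follow.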
CSE Grammar and Parsing 2023.01 7/53" }, { "page_index": 106, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_010.png", "page_index": 106, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:17:45+07:00" }, "raw_text": "A Simple Top-Down Parsing Algorithm Example: Parse the sentence: 1 the 2 dogs 3 cried 4. Grammar: 1. S -> NP VP; 2. NP -> ART N; 3. NP -> ART ADJ N; 4. VP -> V; 5. VP -> V NP. BK TP.HCM CSE Grammar and Parsing 2023.01 8/53" }, { "page_index": 107, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_011.png", "page_index": 107, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:17:50+07:00" }, "raw_text": "A Simple Top-Down Parsing Algorithm (continued) Step / current state / backup state / note: 1 ((S) 1) - initial position; 2 ((NP VP) 1) - rewriting S by rule 1; 3 ((ART N VP) 1), backup ((ART ADJ N VP) 1) - rewriting NP by rules 2 & 3; 4 ((N VP) 2), backup ((ART ADJ N VP) 1) - matching ART with the; 5 ((VP) 3), backup ((ART ADJ N VP) 1) - matching N with dogs
; 6 ((V) 3), backup ((V NP) 3) - rewriting VP by rules 4 & 5; 7 (() 4) - matching V with cried, leaving an empty grammatical symbol list and an empty input sentence. Figure: Top-down depth-first parse of \"The dogs cried\". BK TP.HCM CSE Grammar and Parsing 2023.01 9/53" }, { "page_index": 108, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_012.png", "page_index": 108, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:17:54+07:00" }, "raw_text": "Parsing as a Search Procedure The top-down parser is described as a search procedure that operates as follows. The possibilities list is initially set to the start state S of the parser. 1. Select the first state from the possibilities list and remove it from the list. 2. Generate new states from the current state by trying every possible option from the selected state (there may be none if we are on a bad path). 3. Add the states generated in step 2 to the possibilities list and repeat step 1.
Stanford Depe BK TP.HCM CSE Grammar and Parsing 2023.01 10/53" }, { "page_index": 109, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_013.png", "page_index": 109, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:18:02+07:00" }, "raw_text": "Parsing as a Search Procedure Step / Current state / Backup state / comment: 01 ((S) 1); 02 ((NP VP) 1) - S rewritten to NP VP; 03 ((ART N VP) 1), backup ((ART ADJ N VP) 1) - NP rewritten, producing two new states; 04 ((N VP) 2), backup ((ART ADJ N VP) 1); 05 ((VP) 3), backup ((ART ADJ N VP) 1) - the backup state remains; 06 ((V) 3), backups ((V NP) 3), ((ART ADJ N VP) 1); 07 (() 4), backups ((V NP) 3), ((ART ADJ N VP) 1); 08 ((V NP) 3), backup ((ART ADJ N VP) 1) - the first backup is chosen; 09 ((NP) 4), backup ((ART ADJ N VP) 1); 10 ((ART N) 4), backups ((ART ADJ N) 4), ((ART ADJ N VP) 1) - looking for ART at 4 fails; 11 ((ART ADJ N) 4), backup ((ART ADJ N VP) 1) - fails again; 12 ((ART ADJ N VP) 1) - now exploring the backup state saved in step 3; 13 ((ADJ N VP) 2); 14 ((N VP) 3); 15 ((VP) 4); 16 ((V) 4), backup ((V NP) 4); 17 (() 5) - Success. Figure: A top-down parse of \"The old man cried\". BK TP.HCM CSE Grammar and Parsing 2023.01 11/53" }, { "page_index": 110, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_014.png",
"page_index": 110, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:18:07+07:00" }, "raw_text": "Parsing as a Search Procedure Context-Free Grammars (CFG) Grammar and Sentences 1 ((S) 1 Structure What makes a Good NPVP1 Grammar Top-down Parser 3ART N VP13 12(ARTADJ NVP1)4 A bottom-Up Chart Parser 4(N VP25 Top-Down Chart Parsing 13(ADJNVP2 6 5 VP37 Probabilistic 14N YP38 Context-Free 15(VP)411 Grammars (PCFG) 5V39 Dependency 8(VNP310 16V414 ((V NP)4)15 Parsing 7412 9NP413 Dependency relations Dependency formalisms 17((5)18 10ARTN416 Transition-Based 11ARTADJN417 Dependency parsing Auccess ! MaitParser Relation Extraction with Hinh: Depth-first strategy and breadth-first strategy Stanford Depe BK TP.HCM CSE Grammar and Parsing 2023.01 12/53" }, { "page_index": 111, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_015.png", "page_index": 111, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:18:11+07:00" }, "raw_text": "A bottom-Up Chart Parser Context-Free Grammars (CFG) Grammar and Sentence The main difference between top-down and bottom-up Structure What makes a Good Gramma parsers is the way the grammar rules are used. Topdown Parser A bottom-Up Chart Parse The extension algorithm Add a constituent C from positior Top-Down Char P1 to P2: Probabilistic Context-Free Insert C into chart from P1 to P2; Grammars (PCFG) For any active arc of the form: X - X1....C...Xn from Po Dependency Parsing to P1, add a new active arc X -? X1...C....X, from Po to P2 Dependency relations 3 For any active arc of the form: X -> X1.... 
C from Po to P1, Transition-Based Dependency parsing add a new constituent of type X from po to p2 to the MaitParser agenda. Relation Extraction with Stanford Depe BK TP.HCM CSE Grammar and Parsing 2023.01 13/53" }, { "page_index": 112, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_016.png", "page_index": 112, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:18:15+07:00" }, "raw_text": "A Bottom-up Chart Parsing Algorithm Context-Free Grammars (CFG) Do until there is no input left Grammar and Sentence Structure 1 If the agenda is empty, then look up the interpretations What makes a Good Gramma (categories) of the new word in the input and add them to Topdown Parser A bottom-Up Chart Parse the agenda. Probabilistic 2 l Select a constituent from the agenda (let's call it Context-Free Grammars constituent C from P1 to P2). (PCFG) Dependency 3 For any grammar rule X -> CX1...X,, add active arc of the Parsing form X ->.CX1...Xn from P1 to P2 Dependency relations 4 Add C to the chart by the extension algorithm. 
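The agenda loop and the arc extension step can be sketched together. This is an illustrative toy, not the course's code: the grammar and lexicon are assumptions chosen for "The large can can hold the water", and an active arc is a (lhs, remaining symbols, start, end) tuple, with the dot position implicit in the remaining list.

```python
# Sketch of a bottom-up chart parser: an agenda of complete constituents
# plus a chart of completed constituents and a list of active arcs.
# GRAMMAR and LEXICON are illustrative assumptions.

GRAMMAR = [
    ("S", ["NP", "VP"]),
    ("NP", ["ART", "N"]),
    ("NP", ["ART", "ADJ", "N"]),
    ("VP", ["AUX", "VP"]),
    ("VP", ["V", "NP"]),
]
LEXICON = {"the": ["ART"], "large": ["ADJ"], "can": ["N", "AUX", "V"],
           "hold": ["V"], "water": ["N"]}

def bottom_up_chart_parse(words):
    chart = []   # completed constituents: (category, start, end)
    arcs = []    # active arcs: (lhs, remaining symbols, start, end)
    agenda = []
    pos = 0
    while True:
        # 1. If the agenda is empty, add the categories of the next word.
        if not agenda:
            if pos == len(words):
                break
            agenda = [(cat, pos, pos + 1) for cat in LEXICON[words[pos]]]
            pos += 1
        # 2. Select a constituent C from p1 to p2 from the agenda.
        cat, p1, p2 = agenda.pop()
        # 3. For every rule X -> C X1 ... Xn, add an active arc
        #    X -> C • X1 ... Xn from p1 to p2.
        for lhs, rhs in GRAMMAR:
            if rhs[0] == cat:
                arcs.append((lhs, rhs[1:], p1, p2))
        # 4. Arc extension: record C, then advance every arc needing C next.
        chart.append((cat, p1, p2))
        for lhs, remaining, p0, p in list(arcs):
            if p == p1 and remaining and remaining[0] == cat:
                if len(remaining) == 1:
                    agenda.append((lhs, p0, p2))        # arc completed
                else:
                    arcs.append((lhs, remaining[1:], p0, p2))  # advanced
    return ("S", 0, len(words)) in chart

print(bottom_up_chart_parse("the large can can hold the water".split()))  # True
```

Because every completed constituent is recorded in the chart once, the lexically ambiguous "can" tokens do not force the parser to rebuild shared structure.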
Example: parse the sentence \"The large can can hold the water\". BK TP.HCM CSE Grammar and Parsing 2023.01 14/53" }, { "page_index": 113, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_017.png", "page_index": 113, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:18:20+07:00" }, "raw_text": "A Bottom-Up Chart Parser. Figure: The chart after parsing \"the large can\" (words: 1 the 2 large 3 can 4), showing the constituents ART1, ADJ1, N1, AUX1, V1, NP1 (rule 2) and NP2 (rule 4), together with the active arcs for NP, S, and VP. BK TP.HCM CSE Grammar and Parsing 2023.01 15/53" }, { "page_index": 114, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_018.png", "page_index": 114, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:18:24+07:00" }, "raw_text": "A Bottom-Up Chart Parser. Figure: The chart after adding hold (words: 1 the 2 large 3 can 4 can 5 hold 6), omitting the arcs generated for the first NP. BK TP.HCM CSE Grammar and Parsing 2023.01 16/53" }, { "page_index": 115, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_019.png", "page_index": 115, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:18:29+07:00" }, "raw_text": "A Bottom-Up Chart Parser. Figure: The chart after the NPs are found (words: 1 the 2 large 3 can 4 can 5 hold 6 the 7 water 8), omitting all but the crucial active arcs. BK TP.HCM CSE Grammar and Parsing 2023.01 17/53" }, { "page_index": 116, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_020.png", "page_index": 116, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:18:34+07:00" }, "raw_text": "A Bottom-Up Chart Parser. S1 (rule 1 with NP1 and VP2), S2 (rule 1 with NP2 and VP2), VP3 (rule 5: VP3 -> AUX VP2), NP2 (rule 4), VP2 (rule 5), NP1 (rule 2), VP1 (rule 6), NP3 (rule 3). Words: 1 The 2 large 3 can 4 can 5 hold 6 the 7 water 8. Figure: The final chart. BK TP.HCM CSE Grammar and Parsing 2023.01 18/53" }, { "page_index": 117, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_021.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_021.png", "page_index": 117, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:18:39+07:00" }, "raw_text": "Efficiency Considerations. A chart-based parser can be considerably more efficient than parsers that rely only on search, because the same constituent is never constructed more than once. A pure top-down or bottom-up parser could require up to C^n operations to parse a sentence of length n, where C is a constant that depends on the specific algorithm used. The complexity of the chart-based parser is K x n^3.
K is a constant that depends on the algorithm, and n is the sentence length; the chart-based parser can therefore be many times faster than a pure search-based parser. BK TP.HCM CSE Grammar and Parsing 2023.01 19/53" }, { "page_index": 118, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_022.png", "page_index": 118, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:18:42+07:00" }, "raw_text": "Top-Down Chart Parsing. So far we have seen a simple top-down method and a bottom-up chart-based method for parsing context-free grammars. A new method, the top-down chart parser, captures the advantages of both. Top-down arc introduction algorithm: to add an arc S -> C1 ... • Ci ... Cn ending at position j, do the following: for every rule in the grammar of the form Ci -> X1 ... Xk, recursively add a new arc Ci -> • X1 ... Xk from position j to j.
BK TP.HCM CSE Grammar and Parsing 2023.01 20/53" }, { "page_index": 119, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_023.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_023.png", "page_index": 119, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:18:47+07:00" }, "raw_text": "Top-Down Chart Parsing Algorithm. Initialization: for every rule in the grammar of the form S -> X1 ... Xk, add an arc labeled S -> • X1 ... Xk using the arc introduction algorithm. Parsing: do until there is no input left: 1. If the agenda is empty, look up the interpretations of the next word and add them to the agenda. 2. Select a constituent from the agenda (call it constituent C). 3. Using the arc extension algorithm, combine C with every active arc on the chart; any new constituents are added to the agenda. 4. For any active arcs created in step 3, add them to the chart using the top-down arc introduction algorithm.
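The top-down arc introduction step can be sketched as a recursive prediction function. This is an illustrative sketch only: the grammar and the (lhs, remaining symbols, start, end) arc representation are assumptions, not the course's code.

```python
# Sketch of top-down arc introduction: when an arc needs category C next
# at position j, predict an empty arc C -> • X1...Xk from j to j for
# every rule expanding C, recursively.  GRAMMAR is illustrative.

GRAMMAR = {
    "S": [["NP", "VP"]],
    "NP": [["ART", "N"], ["ART", "ADJ", "N"]],
    "VP": [["AUX", "VP"], ["V", "NP"]],
}

def introduce_arc(lhs, remaining, start, j, chart):
    """Add the arc lhs -> ... • remaining from start to j, then recursively
    predict empty arcs for the category needed next."""
    arc = (lhs, tuple(remaining), start, j)
    if arc in chart:          # already predicted: avoid duplicates/loops
        return
    chart.add(arc)
    if remaining:
        needed = remaining[0]
        for rhs in GRAMMAR.get(needed, []):   # nothing for lexical categories
            introduce_arc(needed, rhs, j, j, chart)

# Initialization: add the S -> • X1...Xk arcs at position 0.
chart = set()
for rhs in GRAMMAR["S"]:
    introduce_arc("S", rhs, 0, 0, chart)
print(sorted(chart))
```

After initialization the chart holds the S arc plus the two predicted NP arcs; VP arcs are only introduced later, once an arc's dot reaches VP. This top-down filtering is what lets the chart parser skip bottom-up constituents that no prediction asks for.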
BK TP.HCM CSE Grammar and Parsing 2023.01 21/53" }, { "page_index": 120, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_024.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_024.png", "page_index": 120, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:18:52+07:00" }, "raw_text": "Top-Down Chart Parsing. Example: 1 the 2 large 3 can 4 can 5 hold 6 the 7 water 8. Figure: The chart after adding hold, omitting the arcs generated for the first NP. BK TP.HCM CSE Grammar and Parsing 2023.01 22/53" }, { "page_index": 121, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_025.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_025.png", "page_index": 121, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:18:57+07:00" }, "raw_text": "Top-Down Chart Parsing. Example: 1 the 2 large 3 can 4 can 5 hold 6 the 7 water 8. S1 (rule 1 with NP1 and VP2), VP2 (rule 5 with AUX2 and VP1), VP1 (rule 6 with V3 and NP2), NP2 (rule 3), NP1 (rule 2). Figure: The final chart for the top-down filtering algorithm. BK TP.HCM CSE Grammar and Parsing 2023.01 23/53" }, { "page_index": 122, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_026.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_026.png", "page_index": 122, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:19:01+07:00" }, "raw_text": "Probabilistic - or stochastic - context-free grammars (PCFGs). G = (T, N, S, R, P): T is a set of terminal symbols; N is a set of nonterminal symbols; S is the start symbol (S ∈ N); R is a set of rules/productions of the form X -> γ; P is a probability function P: R -> [0, 1]. A grammar G generates a language model L.
The probabilities of the strings it generates sum to one: Σ_{γ ∈ T*} P(γ) = 1. BK TP.HCM CSE Grammar and Parsing 2023.01 24/53" }, { "page_index": 123, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_027.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_027.png", "page_index": 123, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:19:05+07:00" }, "raw_text": "PCFGs. S -> NP VP 1.0; VP -> V NP 0.6; VP -> V NP PP 0.4; NP -> NP NP 0.1; NP -> NP PP 0.2; NP -> N 0.7; PP -> P NP 1.0; N -> people 0.5; N -> fish 0.2; N -> tanks 0.2; N -> rods 0.1; V -> people 0.1; V -> fish 0.6; V -> tanks 0.3; P -> with 1.0. BK TP.HCM CSE Grammar and Parsing 2023.01 25/53" }, { "page_index": 124, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_028.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_028.png", "page_index": 124, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:19:09+07:00" }, "raw_text": "The probability of trees and strings. P(t), the probability of a tree t, is the product of the probabilities of the rules used to generate it.
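With this definition, computing P(t) is just a product over the rules appearing in the tree. A small sketch using the rule probabilities of this PCFG for the two parses of "people fish tanks with rods"; the factor lists below are read off the two trees in the worked example for this sentence.

```python
# Tree probability under a PCFG: the product of the probabilities of the
# rules used.  The factor lists mirror the slides' two parses of
# "people fish tanks with rods" (t1 = verb attachment, t2 = noun attachment).
from math import prod

# t1 uses VP -> V NP PP (0.4): the PP attaches to the verb.
t1_rules = [1.0,   # S -> NP VP
            0.7, 0.5,            # NP -> N, N -> people
            0.4, 0.6,            # VP -> V NP PP, V -> fish
            0.7, 0.2,            # NP -> N, N -> tanks
            1.0, 1.0, 0.7, 0.1]  # PP -> P NP, P -> with, NP -> N, N -> rods
# t2 uses VP -> V NP (0.6) and NP -> NP PP (0.2): the PP attaches to the noun.
t2_rules = [1.0, 0.7, 0.5, 0.6, 0.6, 0.2, 0.7, 0.2, 1.0, 1.0, 0.7, 0.1]

p_t1 = prod(t1_rules)   # matches the slide: 0.0008232
p_t2 = prod(t2_rules)   # matches the slide: 0.00024696
# The string probability sums over the parses that yield the string:
p_s = p_t1 + p_t2       # matches the slide: 0.00107016
print(p_t1, p_t2, p_s)
```

The higher P(t1) means a PCFG parser would prefer the verb-attachment reading for this sentence.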
P(s), the probability of a string s, is the sum of the probabilities of the trees which have that string as their yield: P(s) = Σ_j P(s, t_j) = Σ_j P(t_j), where the t_j are the parses of s. BK TP.HCM CSE Grammar and Parsing 2023.01 26/53" }, { "page_index": 125, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_029.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_029.png", "page_index": 125, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:19:14+07:00" }, "raw_text": "Figure: The two parse trees t1 (verb attachment, VP -> V NP PP) and t2 (noun attachment, NP -> NP PP) for \"people fish tanks with rods\", annotated with rule probabilities. BK TP.HCM CSE Grammar and Parsing 2023.01 27/53" }, { "page_index": 126, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_030.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_030.png", "page_index": 126, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:19:19+07:00" }, "raw_text": "The probability of trees and strings
. For the sentence \"people fish tanks with rods\": P(t1) = 1.0 x 0.7 x 0.4 x 0.5 x 0.6 x 0.7 x 1.0 x 0.2 x 1.0 x 0.7 x 0.1 = 0.0008232 (verb attachment). P(t2) = 1.0 x 0.7 x 0.6 x 0.5 x 0.6 x 0.2 x 0.7 x 1.0 x 0.2 x 1.0 x 0.7 x 0.1 = 0.00024696 (noun attachment). P(s) = P(t1) + P(t2) = 0.0008232 + 0.00024696 = 0.00107016. BK TP.HCM CSE Grammar and Parsing 2023.01 28/53" }, { "page_index": 127, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_031.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_031.png", "page_index": 127, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:19:22+07:00" }, "raw_text": "The probability of trees and strings. Figure: Parse tree t1 (verb attachment) annotated with rule probabilities. BK TP.HCM CSE Grammar and Parsing 2023.01 29/53" }, { "page_index": 128, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_032.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_032.png", "page_index": 128, "language": "en", "ocr_engine": "PaddleOCR 3.2",
"extractor_version": "1.0.0", "timestamp": "2025-11-01T10:19:27+07:00" }, "raw_text": "The probability of trees and strings Context-Free Grammars (CFG) t2: S1.0 Grammar and Sentences Structure What makes a Good Crammar NP0.7 VP0.6 Topdown Parser 1 A bottom-Up Chart Parse Top-Down Chart Parsing No.5 V0.6 NPo.2 Probabilistic Context-Free Grammars (PCFG) people fish NPo.7 PP 1.0 Dependency 1 Parsing No.2 P1.0 NPo.7 Dependency relations Dependency formalisms Transition-Based Dependency parsing tanks with No.1 MaitParser 1 Relation Extraction with rods Stanford Depe BK TP.HCM CSE Grammar and Parsing 2023.01 30/53" }, { "page_index": 129, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_033.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_033.png", "page_index": 129, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:19:32+07:00" }, "raw_text": "Dependency relation Dependency syntax postulates that syntactic structure Context-Free Grammars (CFG) consists of lexical items linked by binary asymmetric relations Grammar and Sentences Structure (\"arrows\") called dependencies What makes a Good Grammar Topdown Parser A bottom-Up Chart Parse submitted Top-Down Chart Parsing nsubjpass auxpass prep Probabilistic Bills were by Context-Free prep [pobj Grammars (PCFG) on pobj! Brownback nn/ appos Dependency ports: Parsing Senator Republican CC. conj prep Dependency relations and immigration of pobj.l Transition-Based Dependency parsing Kansas MaitParser Relation Extraction with The arrows are commonly typed with the name of Stanford Depe grammatical relations (subject, prepositional object, BK TP.HCM apposition, etc.) 
CSE Grammar and Parsing 2023.01 31/53" }, { "page_index": 130, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_034.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_034.png", "page_index": 130, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:19:37+07:00" }, "raw_text": "Dependency relation Dependency syntax postulates that syntactic structure Context-Free consists of lexical items linked by binary asymmetric relations Grammars (CFG) Grammar and Sentences \"arrows\") called dependencies Structure What makes a Good Gramma Topdown Parser submitted A bottom-Up Chart Parse nsubjpass auxpass prep Top-Down Chart Parsing Bills were by Probabilistic prep! Context-Free [pobj Grammars on Brownback (PCFG) pobj] nn appos Dependency ports Senator Republican Parsing cC conj prep and immigration Dependency relations of pobj.l Transition-Based Kansas Dependency parsing MaitParser Relation The arrow connects a head (governor, superior, regent) with Extraction with Stanford a dependent (modifier, inferior, subordinate) Depe Usually, dependencies form a tree (connected, acyclic, BK TP.HCM single-head) CSE Grammar and Parsing 2023.01 31/53" }, { "page_index": 131, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_035.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_035.png", "page_index": 131, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:19:42+07:00" }, "raw_text": "Dependency relation Clausal Argument relations Description Context-Free Grammars (CFG) NSUBJ Nominal subject Grammar and Sentences Structure DOBJ Direct object What makes 
a Good Grammar IOBJ Topdown Parser Indirect object A bottom-Up Chart Parse Top-Down Chart Parsin CCOMP Clausal Complement Probabilistic XCOMP Open clausal complement Context-Free Grammars Nominal Modifier Relations Description (PCFG) Dependency NMOD Nominal modifier Parsing Dependency relations AMOD Adjectival modifier Dependency formalisms TransitionBased NUMMOD Numeric modifier Dependency parsing MaitParser Other Notable Relations Description Relation CONJ Conjunct Extraction with Stanford Depe Selected dependency relations from the Universal BK Dependency set [4] TP.HCM CSE Grammar and Parsing 2023.01 32/53" }, { "page_index": 132, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_036.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_036.png", "page_index": 132, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:19:45+07:00" }, "raw_text": "Dependency Grammar and Dependency Structure Context-Free Grammars (CFG) Example of dependency structure with universal dependency Grammar and Sentences Structure relations: What makes a Good Gramma Topdown Parser root A bottom-Up Chart Parse dobj} Top-Down Chart Parsing Probabilistic det Context-Free Grammars nsubj nmod case (PCFG) Dependency Parsing United canceled the morning flights to Houston Dependency relations Depend Transition-Based Clausal relations NSUBJ and DOBJ identify the subject and Dependency parsing MaitParser direct obiect of the predicate cancel, while NMOD, DET, Relation Extraction with and CASE relations denote modifiers of the nouns flights and Stanford Depe Houston BK TP.HCM CSE Grammar and Parsing 2023.01 33/53" }, { "page_index": 133, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_037.png", "metadata": { "doc_type": 
"slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_037.png", "page_index": 133, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:19:50+07:00" }, "raw_text": "Dependency formalisms Dependency structure is directed graph: Context-Free G = (V, A) V: set of vertices A: set of ordered pairs of Grammars (CFG) Grammar and Sentence vertices, We will refer A as arcs. Structure What makes a Good V corresponds exactly to the set of words in the given Gramma Topdown Parser A bottom-Up Chart Parse sentence. A captures the head-dependent and grammatical function Probabilistic Context-Free relationship between the elements in V Grammars (PCFG) Dependency Tree Dependency Dependency tree is directed graph that satisfies the following Parsing relations constraints: Dependency formalisms There is single designated root node that has no incoming Transition-Base Dependency parsing MaitParser arcs. Relation 2 With the exception of root node, each vertex has exactly Extraction with Stanford one incoming arc. Depe BK 3 There is a unique path from the root node to each vertex in TP.HCM V. 
CSE Grammar and Parsing 2023.01 34/53" }, { "page_index": 134, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_038.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_038.png", "page_index": 134, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:19:55+07:00" }, "raw_text": "Projectivity Context-Free An arc from a head to a dependent is said to be projective if Grammars (CFG) there is a path from the head to every word that lies Grammar and Sentence Structure between the head and the dependent in the sentence What makes a Good aramma Topdown Parser A dependency tree is then said to be projective if all the A bottom-Up Chart Pars Top-Down Chart Parsi arcs that make it up are projective. Probabilistic Context-Free Consider the following example: Grammars (PCFG) Dependency root mod Parsing Dep relations dobj Dependency formalisms Transition-Basec Dependency parsing MaitParser JetBlue canceled our flight this morning which was already late Relation Extraction with Stanford Depe Arc from f/ight to its modifier was is non-projective, since there is BK no path from flight to intervening words this and morning. 
TP.HCM CSE Grammar and Parsing 2023.01 35/53" }, { "page_index": 135, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_039.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_039.png", "page_index": 135, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:19:59+07:00" }, "raw_text": "Projectivity Dependencies from a CFG tree using heads must be Context-Free Grammars (CFG) projective Grammar and Sentences Structure There must not be any crossing dependency arcs when the What makes a Good Gramma words are laid out in their linear order, with all arcs above Topdown Parser A bottom-Up Chart Pars the words. Top-Down Chart Pars Probabilistic But dependency theory normally does allow non-projective Context-Free Grammars structures to account for displaced constituents (PCFG) You can't easily get the semantics of certain constructions Dependency Parsing right without these non-projective dependencies Dep yrelations Dependency formalisms Transition-Based Dependency parsing MaitParser Relation Extraction with Stanford Depe Who did Bill buy the coffee from yesterday BK TP.HCM CSE Grammar and Parsing 2023.01 36/53" }, { "page_index": 136, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_040.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_040.png", "page_index": 136, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:20:03+07:00" }, "raw_text": "Relation between phrase structure and dependency structure Context-Free A dependency grammar has a notion of a head. Officially Grammars (CFG) Grammar and Sentence CFGs don't. 
Structure What makes a Good Gramma But modern linguistic theory and all modern statistical Topdown Parser A bottom-Up Chart Parse parsers (Charniak, Collins, Stanford, ... ) do, via hand-written phrasal \"head rules\" : Probabilistic Context-Free The head of a Noun Phrase is a noun/number/adj/.. Grammars (PCFG) The head of a Verb Phrase is a verb/modal/... Dependency The head rules can be used to extract a dependency parse Parsing Dep relation from a CFG parse Dependency formalisms Transition-Base The closure of dependencies gives constituency from a Dependency parsing MaitParser dependency tree Relation Extraction with But the dependents of a word must be at the same level Stanford Depe (i.e., \"flat\") - there can be no VP! BK TP.HCM CSE Grammar and Parsing 2023.01 37/53" }, { "page_index": 137, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_041.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_041.png", "page_index": 137, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:20:08+07:00" }, "raw_text": "Relation between phrase structure and dependency structure (cont.) 
[Figure: the CFG parse of \"Sue walked into the store\" with heads annotated at each node: S(walked) -> NP(Sue) VP(walked); NP(Sue) -> NNP(Sue); VP(walked) -> VBD(walked) PP(into); PP(into) -> P(into) NP(store); NP(store) -> DT(the) NN(store)] BK TP.HCM CSE Grammar and Parsing 2023.01 38/53" }, { "page_index": 138, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_042.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_042.png", "page_index": 138, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:20:13+07:00" }, "raw_text": "Relation between phrase structure and dependency structure (cont.) a) Example of a phrase structure for \"Vinken will join the board as a nonexecutive director Nov 29\" [Figure: S -> NP-SBJ(NNP Vinken) VP; VP -> MD(will) VP; VP -> VB(join) NP(DT the, NN board) PP-CLR(IN as, NP(DT a, JJ nonexecutive, NN director)) NP-TMP(NN Nov, CD 29)] BK TP.HCM CSE Grammar and Parsing 2023.01 39/53" }, { "page_index": 139, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_043.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_043.png", "page_index": 139, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:20:18+07:00" }, "raw_text": "Relation between phrase structure and dependency structure (cont.) b) Translating the structure in a) to a dependency structure: annotate each phrasal node with its lexical head. [Figure: the same tree with heads: NP-SBJ(Vinken); VP(join); NP(board); PP-CLR(director); NP(director); NP-TMP(29)] BK TP.HCM CSE Grammar and Parsing 2023.01 40/53" }, { "page_index": 140, "chapter_num": 3, "source_file":
"/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_044.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_044.png", "page_index": 140, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:20:21+07:00" }, "raw_text": "Relation between phrase structure and dependency structure. c) Dependency structure of the sentence. [Figure: join is the root, with dependents Vinken, will, board (determiner the), director (as, a, nonexecutive), and 29 (Nov)] BK TP.HCM CSE Grammar and Parsing 2023.01 41/53" }, { "page_index": 141, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_045.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_045.png", "page_index": 141, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:20:25+07:00" }, "raw_text": "Transition-Based Dependency parsing. Dependency parsing is motivated by a stack-based approach called shift-reduce parsing. A configuration consists of a stack, an input buffer of words or tokens, and a set of relations representing a dependency tree.
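Such a configuration can be sketched directly as a (stack, buffer, arcs) triple; the three transition functions below follow the Shift/Left-Arc/Right-Arc scheme presented later in the chapter. The function names, the triple encoding, and the toy three-word sentence are illustrative, not from the slides:

```python
# Configuration: (stack, buffer, arcs). ROOT starts on the stack; the
# parse is finished when the buffer is empty.
def shift(stack, buffer, arcs):
    # move the front of the buffer onto the stack
    return stack + [buffer[0]], buffer[1:], arcs

def left_arc(stack, buffer, arcs, rel):
    # stack top becomes a dependent of the buffer front: add rel(wj, wi)
    wi, wj = stack[-1], buffer[0]
    return stack[:-1], buffer, arcs | {(rel, wj, wi)}

def right_arc(stack, buffer, arcs, rel):
    # buffer front becomes a dependent of the stack top: add rel(wi, wj),
    # and wi goes back to the front of the buffer
    wi, wj = stack[-1], buffer[0]
    return stack[:-1], [wi] + buffer[1:], arcs | {(rel, wi, wj)}

# One oracle-chosen transition sequence for the toy input "She saw lecture":
s, b, a = ["ROOT"], ["She", "saw", "lecture"], set()
s, b, a = shift(s, b, a)
s, b, a = left_arc(s, b, a, "nsubj")
s, b, a = shift(s, b, a)
s, b, a = right_arc(s, b, a, "dobj")
s, b, a = right_arc(s, b, a, "root")
```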
The parsing process consists of a sequence of transitions through the space of possible configurations. BK TP.HCM CSE Grammar and Parsing 2023.01 42/53" }, { "page_index": 142, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_046.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_046.png", "page_index": 142, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:20:29+07:00" }, "raw_text": "Basic transition-based dependency parser. [Figure: an input buffer of words w1..wn, a stack s1..sn, an oracle that chooses the next transition, and the set of dependency relations the parser produces] BK TP.HCM CSE Grammar and Parsing 2023.01 43/53" }, { "page_index": 143, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_047.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_047.png", "page_index": 143, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:20:33+07:00" }, "raw_text": "MaltParser [Nivre et al.
2008]. A simple form of greedy discriminative dependency parser. The parser performs a sequence of bottom-up actions, roughly like \"shift\" or \"reduce\" in a shift-reduce parser, but the \"reduce\" actions are specialized to create dependencies with the head on the left or on the right. The parser has: a stack σ, written with its top to the right, which starts with the ROOT symbol; a buffer β, written with its top to the left, which starts with the input sentence; a set of dependency arcs A, which starts off empty; and a set of actions. BK TP.HCM CSE Grammar and Parsing 2023.01 44/53" }, { "page_index": 144, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_048.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_048.png", "page_index": 144, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:20:37+07:00" }, "raw_text": "Basic transition-based dependency parser. Start: σ = [ROOT], β = w1, ..., wn, A = ∅. 1. Shift: (σ, wi|β, A) ⇒ (σ|wi, β, A). 2. Left-Arc(r): (σ|wi, wj|β, A) ⇒ (σ, wj|β, A ∪ {r(wj, wi)}). 3.
Right-Arc(r): (σ|wi, wj|β, A) ⇒ (σ, wi|β, A ∪ {r(wi, wj)}). Finish: β = ∅. Note: unlike the regular presentation of the CFG reduction step, dependencies combine one thing from the stack with one thing from the buffer. BK TP.HCM CSE Grammar and Parsing 2023.01 45/53" }, { "page_index": 145, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_049.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_049.png", "page_index": 145, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:20:41+07:00" }, "raw_text": "MaltParser (cont.) Actions (\"arc-eager\" dependency parser). Start: σ = [ROOT], β = w1, ..., wn, A = ∅. 1. Left-Arc(r): (σ|wi, wj|β, A) ⇒ (σ, wj|β, A ∪ {r(wj, wi)}). Precondition: there is no r'(wk, wi) in A, and wi ≠ ROOT. 2. Right-Arc(r): (σ|wi, wj|β, A) ⇒ (σ|wi|wj, β, A ∪ {r(wi, wj)}). 3. Reduce: (σ|wi, β, A) ⇒ (σ, β, A). Precondition: there is some r'(wk, wi) in A. 4.
Shift: (σ, wi|β, A) ⇒ (σ|wi, β, A). Finish: β = ∅. This is the common \"arc-eager\" variant: a head can immediately take a right dependent before its own dependents are found. BK TP.HCM CSE Grammar and Parsing 2023.01 46/53" }, { "page_index": 146, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_050.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_050.png", "page_index": 146, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:20:45+07:00" }, "raw_text": "MaltParser (cont.) Start: [ROOT] [Happy, children, ...] ∅. Shift: [ROOT, Happy] [children, like, ...] ∅. LA(amod): [ROOT] [children, like, ...] {amod(children, Happy)} = A1 (children -> Happy). Shift: [ROOT, children] [like, to, ...] A1. LA(nsubj): [ROOT] [like, to, ...] A1 ∪ {nsubj(like, children)} = A2 (like -> children). RA(root): [ROOT, like] [to, play, ...] A2 ∪ {root(ROOT, like)} = A3. Shift: [ROOT, like, to] [play, with, ...] A3. LA(aux): [ROOT, like] [play, with, ...] A3 ∪ {aux(play, to)} = A4 (play -> to). RA(xcomp): [ROOT, like, play] [with, their, ...]
A4 ∪ {xcomp(like, play)} = A5 (like -> play). BK TP.HCM CSE Grammar and Parsing 2023.01 47/53" }, { "page_index": 147, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_051.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_051.png", "page_index": 147, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:20:51+07:00" }, "raw_text": "Example: Happy children like to play with their friends. RA(xcomp): [ROOT, like, play] [with, their, ...] A4 ∪ {xcomp(like, play)} = A5. RA(prep): [ROOT, like, play, with] [their, friends, ...] A5 ∪ {prep(play, with)} = A6. Shift: [ROOT, like, play, with, their] [friends, .] A6. LA(poss): [ROOT, like, play, with] [friends, .] A6 ∪ {poss(friends, their)} = A7. RA(pobj): [ROOT, like, play, with, friends] [.] A7 ∪ {pobj(with, friends)} = A8. Reduce: [ROOT, like, play, with] [.] A8. Reduce: [ROOT, like, play] [.] A8. Reduce: [ROOT, like] [.] A8. RA(punc): [ROOT, like, .] [] A8 ∪ {punc(like, .)} = A9. You terminate as soon as the buffer is empty.
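The arc-eager derivation can be replayed with a small transition function; the preconditions (a word takes at most one head; only headed words may be reduced) are enforced with assertions. A sketch with illustrative names, run here on the first few transitions for "Happy children like ...":

```python
def has_head(word, arcs):
    # arcs are (relation, head, dependent) triples
    return any(dep == word for _, _, dep in arcs)

def step(action, rel, stack, buf, arcs):
    """Apply one arc-eager transition to the configuration (stack, buf, arcs)."""
    if action == "SHIFT":
        return stack + [buf[0]], buf[1:], arcs
    if action == "LA":        # stack top becomes dependent of buffer front
        assert stack[-1] != "ROOT" and not has_head(stack[-1], arcs)
        return stack[:-1], buf, arcs | {(rel, buf[0], stack[-1])}
    if action == "RA":        # buffer front becomes dependent of stack top, then is pushed
        return stack + [buf[0]], buf[1:], arcs | {(rel, stack[-1], buf[0])}
    if action == "REDUCE":    # pop only words that already have a head
        assert has_head(stack[-1], arcs)
        return stack[:-1], buf, arcs
    raise ValueError(action)

s, b, a = ["ROOT"], ["Happy", "children", "like"], set()
for action, rel in [("SHIFT", None), ("LA", "amod"), ("SHIFT", None),
                    ("LA", "nsubj"), ("RA", "root")]:
    s, b, a = step(action, rel, s, b, a)
```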
Dependencies = A9. BK TP.HCM CSE Grammar and Parsing 2023.01 48/53" }, { "page_index": 148, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_052.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_052.png", "page_index": 148, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:20:55+07:00" }, "raw_text": "Example: Happy children like to play with their friends. [Figure: the resulting dependency graph - root(ROOT, like), amod(children, Happy), nsubj(like, children), xcomp(like, play), aux(play, to), prep(play, with), pobj(with, friends), poss(friends, their)] BK TP.HCM CSE Grammar and Parsing 2023.01 49/53" }, { "page_index": 149, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_053.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_053.png", "page_index": 149, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:21:01+07:00" }, "raw_text": "Evaluation of Dependency Parsing. Acc = (# of correct deps) / (# of deps). Sentence: ROOT(0) She(1) saw(2) the(3) video(4) lecture(5). Gold: 1 2 She nsubj; 2 0 saw root; 3 5 the det; 4 5 video nn; 5 2 lecture dobj. Parsed: 1 2 She nsubj; 2 0 saw root; 3 4 the det; 4 5 video nsubj; 5 2 lecture ccomp. UAS = 4/5 = 80%; LAS = 2/5 = 40%. BK TP.HCM CSE Grammar and Parsing 2023.01 50/53" }, { "page_index": 150, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_054.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_054.png", "page_index": 150, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:21:07+07:00" }, "raw_text": "Stanford Dependencies [de Marneffe et al. LREC 2006]. The basic dependency representation is projective. It can be generated by postprocessing headed phrase structure parses (Penn Treebank syntax). It can also be generated directly by dependency parsers, such as MaltParser or the Easy-First Parser. [Figure: dependency graph for \"the little boy jumped over the fence\" - nsubj(jumped, boy), det(boy, the), amod(boy, little), prep(jumped, over), pobj(over, fence), det(fence, the)] BK TP.HCM CSE Grammar and Parsing 2023.01 51/53" }, { "page_index": 151, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_055.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_055.png", "page_index": 151, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:21:12+07:00" }, "raw_text": "Graph modification to facilitate semantic analysis.
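UAS and LAS can be computed from gold and predicted (head, label) pairs; for the "She saw the video lecture" example they come out to 80% and 40%, as on the slide. A minimal sketch (the dict encoding is illustrative):

```python
def uas_las(gold, pred):
    """gold/pred map each dependent word to (head_index, label).
    UAS = fraction of correct heads; LAS = fraction of correct
    head+label pairs."""
    n = len(gold)
    uas = sum(pred[d][0] == h for d, (h, _) in gold.items()) / n
    las = sum(pred[d] == (h, lab) for d, (h, lab) in gold.items()) / n
    return uas, las

# Gold vs. parsed analyses of "She saw the video lecture" (0 = ROOT):
gold = {"She": (2, "nsubj"), "saw": (0, "root"), "the": (5, "det"),
        "video": (5, "nn"), "lecture": (2, "dobj")}
pred = {"She": (2, "nsubj"), "saw": (0, "root"), "the": (4, "det"),
        "video": (5, "nsubj"), "lecture": (2, "ccomp")}
```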
Bell, based in LA, makes and distributes electronic and computer products. [Figure: basic dependency graph - nsubj(makes, Bell), partmod(Bell, based), prep(based, in), pobj(in, LA), cc(makes, and), conj(makes, distributes), dobj(makes, products), amod(products, electronic), cc(electronic, and), conj(electronic, computer)] BK TP.HCM CSE Grammar and Parsing 2023.01 52/53" }, { "page_index": 152, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_056.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_056.png", "page_index": 152, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:21:17+07:00" }, "raw_text": "Graph modification to facilitate semantic analysis. Bell, based in LA, makes and distributes electronic and computer products. [Figure: collapsed dependency graph - nsubj(makes, Bell), nsubj(distributes, Bell), conj_and(makes, distributes), dobj(makes, products), partmod(Bell, based), prep_in(based, LA), amod(products, electronic), conj_and(electronic, computer), amod(products, computer)] BK TP.HCM CSE Grammar and Parsing 2023.01 52/53" }, { "page_index": 153, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_057.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_057.png", "page_index": 153, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:21:20+07:00" }, "raw_text": "REFERENCE OF CHAPTER 3. 1. http://www.sfs.uni-tuebingen.de/dm/10/ss/dep/dg-slides-2x2.pdf 2. https://web.stanford.edu/jurafsky/NLPCourseraSlides.html 3. Speech and Language Processing. Daniel Jurafsky & James H. Martin. Copyright 2018. 4. https://universaldependencies.org/u/dep. Accessed: Jan 2023. BK TP.HCM CSE Grammar and Parsing 2023.01 53/53" }, { "page_index": 154, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_058.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_3/slide_058.png", "page_index": 154, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:21:21+07:00" }, "raw_text": "Thank you!"
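The collapsed representation in the graph-modification slides merges each prep relation with its pobj into a single prep_X arc. A sketch of that rewrite; words stand in for token indices here, which only works while every word in the sentence is unique:

```python
def collapse_preps(arcs):
    """Merge each prep(h, p) + pobj(p, o) pair into prep_p(h, o).
    arcs are (relation, head, dependent) triples."""
    out = set(arcs)
    for rel, head, dep in arcs:
        if rel == "prep":
            for rel2, head2, obj in arcs:
                if rel2 == "pobj" and head2 == dep:
                    out.discard((rel, head, dep))
                    out.discard((rel2, head2, obj))
                    out.add(("prep_" + dep, head, obj))
    return out

# The "based in LA" fragment from the Bell example:
basic = {("partmod", "Bell", "based"), ("prep", "based", "in"),
         ("pobj", "in", "LA")}
```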
}, { "page_index": 155, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_4/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_4/slide_001.png", "page_index": 155, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:21:22+07:00" }, "raw_text": "Features and Augmented Grammar CSE Ho Chi Minh City University of Technology 2023.01" }, { "page_index": 156, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_4/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_4/slide_002.png", "page_index": 156, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:21:23+07:00" }, "raw_text": "Outline Feature Systems and Augmented Grammars Basic feature systems for English Morphological Analysis and the lexicon 4 A simple Grammar using Features 5 Parsing with Features 6 Exercises BK TP.HCM" }, { "page_index": 157, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_4/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_4/slide_003.png", "page_index": 157, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:21:27+07:00" }, "raw_text": "Feature Systems and Augmented Grammars Feature Systems Context Free Grammars provide the basis for most the and Augmented Grammars computational parsing mechanisms, but they would be very Basic feature inconvenient for capturing natural languages because they systems for English can not capture the meaning and categories of 
words. This chapter describes an extension to the basic context-free mechanism that defines constituents by a set of features, which capture restrictions such as: number agreement in NP between article and noun (violation: *a men); subject-verb agreement (violation: *the man cry); gender agreement for pronouns; and restrictions between the head of a phrase and the form of its complement. BK TP.HCM CSE Features and Augmented Grammar 2023.01 1/22" }, { "page_index": 158, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_4/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_4/slide_004.png", "page_index": 158, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:21:30+07:00" }, "raw_text": "Feature Systems and Augmented Grammars. Allow constituents to have features. The feature NUMBER may take a value of either s (singular) or p (plural). Example: NP-sing -> ART-sing N-sing; NP-plural -> ART-plural N-plural. A constituent is defined as a feature structure.
Example: ART1 (CAT ART ROOT a NUMBER s), abbreviated as ART1 (ART ROOT a NUMBER s). BK TP.HCM CSE Features and Augmented Grammar 2023.01 2/22" }, { "page_index": 159, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_4/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_4/slide_005.png", "page_index": 159, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:21:33+07:00" }, "raw_text": "Feature Systems and Augmented Grammars. Example 2: NP1: (NP NUMBER s 1 (ART ROOT a NUMBER s) 2 (N ROOT fish NUMBER s)). Variables are allowed as feature values so that the rules can apply to a wide range of situations. Example 3: (NP NUMBER ?n) -> (ART NUMBER ?n) (N NUMBER ?n). BK TP.HCM CSE Features and Augmented Grammar 2023.01 3/22" }, { "page_index": 160, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_4/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_4/slide_006.png", "page_index": 160, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:21:37+07:00" }, "raw_text": "Features and Augmented Grammar (cont.)
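Applying a rule with feature variables like ?n amounts to matching the rule's constraints against each constituent's feature structure while threading the variable bindings. A minimal sketch (the dict encoding and function name are illustrative, not from the text):

```python
def unify(constraints, feats, bindings):
    """Match a rule's feature constraints against a constituent's
    feature structure. Values starting with '?' are variables; the
    bindings dict is threaded from constituent to constituent.
    Returns the updated bindings, or None on failure."""
    bindings = dict(bindings)
    for feat, val in constraints.items():
        actual = feats.get(feat)
        if actual is None:
            return None
        if isinstance(val, str) and val.startswith("?"):
            if bindings.get(val, actual) != actual:
                return None        # variable already bound to a different value
            bindings[val] = actual
        elif val != actual:
            return None
    return bindings

# Rule: (NP NUMBER ?n) -> (ART NUMBER ?n) (N NUMBER ?n)
art_a = {"CAT": "ART", "ROOT": "a", "NUMBER": "s"}
n_fish = {"CAT": "N", "ROOT": "fish", "NUMBER": "s"}
n_men = {"CAT": "N", "ROOT": "man", "NUMBER": "p"}
```

With these entries, "a fish" unifies (?n bound to s for both children) while "a men" fails, mirroring the agreement restriction discussed earlier.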
[Figure 4.1: A feature structure as an extended parse tree - NP dominating ART (root a, number s) and N (root fish, number s)] BK TP.HCM CSE Features and Augmented Grammar 2023.01 4/22" }, { "page_index": 161, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_4/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_4/slide_007.png", "page_index": 161, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:21:40+07:00" }, "raw_text": "Basic feature systems for English: person and number features (1s, 2s, 3s, 1p, 2p, 3p); verb form and verb subcategorization (VFORM, SUBCAT); the prepositional feature (PFORM); binary features; default values of features. Person and number features: in the English number system, a word may describe a single object or multiple objects. Number agreement restrictions occur in subject-verb agreement.
But subjects and verbs must also agree on another dimension, namely with respect to person: BK TP.HCM CSE Features and Augmented Grammar 2023.01 5/22" }, { "page_index": 162, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_4/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_4/slide_008.png", "page_index": 162, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:21:44+07:00" }, "raw_text": "Basic feature systems for English. First person (1): the speaker or a group of people including the speaker: I, we, you and I. Second person (2): the listener, or a group of listeners not including the speaker: you, all of you. Third person (3): one or more objects, not including the speaker or hearer. Since number and person features always co-occur, the two are combined into a single feature AGR with six possible values: 1s, 2s, 3s, 1p, 2p, 3p.
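Because AGR takes a set of values when a word is ambiguous, agreement checking reduces to testing that two AGR value sets intersect. A sketch using AGR values from the chapter's lexicon for he, was, and were:

```python
def agr_values(entry):
    """AGR may be a single value like '3s' or a set like {'1s', '3s'}."""
    agr = entry["AGR"]
    return agr if isinstance(agr, set) else {agr}

def agrees(subject, verb):
    # subject-verb agreement succeeds iff the AGR value sets intersect
    return bool(agr_values(subject) & agr_values(verb))

he = {"ROOT": "HE1", "AGR": "3s"}
was = {"ROOT": "BE1", "AGR": {"1s", "3s"}}
were = {"ROOT": "BE1", "AGR": {"2s", "1p", "2p", "3p"}}
```

So "he was" is accepted (both allow 3s) while "he were" is rejected.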
BK TP.HCM CSE Features and Augmented Grammar 2023.01 6/22" }, { "page_index": 163, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_4/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_4/slide_009.png", "page_index": 163, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:21:48+07:00" }, "raw_text": "Verb form Features - VFORM Feature Systems and Augmented Grammars Basic feature Form Explanation Examples systems for English base base go, be, say, write Morphological Analysis and the pres simple present go, goes, is, says, writes lexicon past simple past went, was, said, wrote A simple Grammar using fin equivalent pres, past Features ing Continuous participle going, being, saying, writing Parsing with Features pastprt past participle Gone, been, said, written Exercises inf infinitive with to go, be, say, write BK TP.HCM CSE Features and Augmented Grammar 2023.01 7/22" }, { "page_index": 164, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_4/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_4/slide_010.png", "page_index": 164, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:21:52+07:00" }, "raw_text": "Verb Subcategorization - SUBCAT Feature Systems and Augmented Grammars Basic feature Feature value verb examples systems for -none laugh English Jack laughed -np take Jack takes a bus Morphological Analysis and the -np-np give Jack gave Mary the book lexicon -vp: inf want Jack wants to run A simple Grammar using -np-vp:inf tell Jack told the man to go Features -vp:ing keep I keep hoping for the best Parsing with Features He caught the 
bus moving on the street (verb: catch, SUBCAT _np_vp:ing); Jack watched Sue look at her dress (verb: watch, SUBCAT _np_vp:base). BK TP.HCM CSE Features and Augmented Grammar 2023.01 8/22" }, { "page_index": 165, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_4/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_4/slide_011.png", "page_index": 165, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:21:57+07:00" }, "raw_text": "Basic feature systems for English. The feature PFORM: value to (prepositions: to) John gave the money to the bank; value Loc (prepositions: to, on, by, inside, on top of) I put it on the desk; value Mot (prepositions: to, from, along) We walked to the beach. Binary features: a binary feature is a part of syntactic structure that a constituent either has or does not have; its value is either + or -. The INV feature is a binary feature that indicates whether or not an S structure has an inverted subject (as in a yes/no question). Example: Jack laughed (-INV); Did Jack laugh?
(+INV) BK TP.HCM CSE Features and Augmented Grammar 2023.01 9/22" }, { "page_index": 166, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_4/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_4/slide_012.png", "page_index": 166, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:22:01+07:00" }, "raw_text": "The default value for features Feature Systems and Augmented Grammars Basic feature systems for English It will be useful to allow a default value of features. Any Morphological time a constituent is constructed that could have a feature, Analysis and the lexicon but a value is not specified, the feature takes a default A simple value. Grammar using Features This is especially useful for binary features but is used for Parsing with Features non-binary features as well. The default value is inserted Exercises when the constituent is first constructed BK TP.HCM CSE Features and Augmented Grammar 2023.01 10/22" }, { "page_index": 167, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_4/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_4/slide_013.png", "page_index": 167, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:22:03+07:00" }, "raw_text": "Morphological Analysis Feature Systems and Augmented Grammars Basic feature systems for English The lexicon Morphological Often a word will have multiple interpretations that use Analysis and the lexicon different entries and different lexical rules. 
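Looking up such a word then means returning every lexicon entry consistent with the features the parser already knows; for example, the three entries for saw discussed in the text can be filtered by CAT or VFORM. A sketch (the dict encoding and function name are illustrative):

```python
# The three lexicon entries for "saw" from the text, as feature dicts.
LEXICON = {
    "saw": [
        {"CAT": "N", "ROOT": "SAW1", "AGR": "3s"},
        {"CAT": "V", "ROOT": "SAW2", "VFORM": "base", "SUBCAT": "_np"},
        {"CAT": "V", "ROOT": "SEE1", "VFORM": "past", "SUBCAT": "_np"},
    ],
}

def lookup(word, **constraints):
    """Return every entry for `word` whose features match the constraints."""
    return [e for e in LEXICON.get(word, [])
            if all(e.get(f) == v for f, v in constraints.items())]
```

An unconstrained lookup returns all three readings; adding CAT="V", VFORM="past" narrows it to the past tense of see.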
Example: the word saw has three entries in the lexicon: saw: (CAT N ROOT SAW1 AGR 3s) saw: (CAT V ROOT SAW2 VFORM base SUBCAT _np) saw: (CAT V ROOT SEE1 VFORM past SUBCAT _np)" }, { "page_index": 168, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_4/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_4/slide_014.png", "page_index": 168, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:22:12+07:00" }, "raw_text": "A Lexicon a (CAT ART ROOT A1 AGR 3s) be (CAT V ROOT BE1 VFORM base IRREG-PRES + IRREG-PAST + SUBCAT {_adjp _np}) cry (CAT V ROOT CRY1 VFORM base SUBCAT _none) dog (CAT N ROOT DOG1 AGR 3s) fish (CAT N ROOT FISH1 AGR {3s 3p} IRREG-PL +) happy (CAT ADJ SUBCAT _vp:inf) he (CAT PRO ROOT HE1 AGR 3s) saw (CAT N ROOT SAW1 AGR 3s) saw (CAT V ROOT SAW2 VFORM base SUBCAT _np) saw (CAT V ROOT SEE1 VFORM past SUBCAT _np) see (CAT V ROOT SEE1 VFORM base SUBCAT _np IRREG-PAST + EN-PASTPRT +) seed (CAT N ROOT SEED1 AGR 3s) to (CAT TO) the (CAT ART ROOT THE1 AGR {3s 3p}) is (CAT V ROOT BE1 VFORM pres AGR 3s SUBCAT {_adjp _np}) jack (CAT NAME AGR 3s) man (CAT N ROOT MAN1 AGR 3s) men (CAT N ROOT MAN1 AGR 3p) want (CAT V ROOT WANT1 VFORM base SUBCAT {_np _vp:inf}) was (CAT V ROOT BE1 VFORM past AGR {1s 3s} SUBCAT {_adjp _np}) were (CAT V ROOT BE1 VFORM past AGR {2s 1p 2p 3p}) Figure 4.2: A Lexicon" }, { "page_index": 169, "chapter_num": 4, "source_file":
"/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_4/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_4/slide_015.png", "page_index": 169, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:22:17+07:00" }, "raw_text": "A simple Grammar using Features Feature Systems Using a lexicon and features to define grammars with and Augmented Grammars features (Augmented Grammars) Basic feature systems for 1. S(-inv)-(NPAGR7a)(VP[prespasi]AGR7a English 2 NP-(ARTAGR(NAGR7G Morphological 3. NP-PRO Analysis and the lexicon 4 VP-M(_none 5 VP=1np]NP A simple Grammar using 6 VP-M_vpinVPlin Features 7. VP-Mp_vpinNPVP(inf] Parsing with 8. VP-M_odjP)ADJP Features 9. VP(in=TOVP[basc] Exercises 10. ADJP-ADJ 11. ADJP-ADJLpiVPlin 1leadfeaturesforS.VPVFORM.AGR Head feaiures forNP.AGR Figure 4.3: A simple grammar in abbreviated form BK TP.HCM CSE Features and Augmented Grammar 2023.01 13/22" }, { "page_index": 170, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_4/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_4/slide_016.png", "page_index": 170, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:22:22+07:00" }, "raw_text": "A simple Grammar using Features Feature Systems and Augmented Grammars 1. SINY-VFORM?v(prcspasi)AGR?a= (NAGR?a)(VP VFORM?prespasI)AGR?a) Basic feature 2 (NPACR?a-(ART AGR?NAGR?o systems for English 3 (NPAGR-(PROAGR?o 4 (VPAGR?aVFORM?v)-(VSUBCAT_oneAGR?oVFORA? Morphological Analysis and the (VPAGR?aYFORM?v-(VSUBCAT_\"PAGR?aVFORAI?NP lexicon 6 (VPAGR?VFORM?v- A simple (VSUBCAT_vPinyAGR7aVFORM?)(VPVFORM1inn Grammar using 7. 
(VP AGR ?a VFORM ?v) -> (V SUBCAT _np_vp:inf AGR ?a VFORM ?v) NP (VP VFORM inf) 8. (VP AGR ?a VFORM ?v) -> (V SUBCAT _adjp AGR ?a VFORM ?v) ADJP 9. (VP VFORM inf) -> TO (VP VFORM base) 10. ADJP -> ADJ 11. ADJP -> (ADJ SUBCAT _vp:inf) (VP VFORM inf) Figure 4.4: The expanded grammar showing all features" }, { "page_index": 171, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_4/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_4/slide_017.png", "page_index": 171, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:22:27+07:00" }, "raw_text": "A simple Grammar using Features Figure 4.5: Two simple parse trees with feature values" }, { "page_index": 172, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_4/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_4/slide_018.png", "page_index": 172, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:22:30+07:00" }, "raw_text": "A simple Grammar using Features Many feature values are unique to a feature.
Example: the value inf can appear only in VFORM, and _np_vp:inf can appear only in SUBCAT. Unique feature values will be listed in square brackets \"[]\"; for example, (VP VFORM inf) will be abbreviated as VP[inf]. The feature value on the mother must be identical to the value on its head constituent; these are called head features. Example: (VP VFORM ?v AGR ?a) -> (V VFORM ?v AGR ?a SUBCAT _np_vp:inf) (NP) (VP VFORM inf) (see the grammar in Figure 4.4)" }, { "page_index": 173, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_4/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_4/slide_019.png", "page_index": 173, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:22:33+07:00" }, "raw_text": "A simple Grammar using Features The head features may be declared separately from the rules. With VFORM and AGR declared as head features, the VP rule can be abbreviated as: VP -> (V SUBCAT _np_vp:inf) NP (VP VFORM inf) The rule could be further simplified to: VP -> V[_np_vp:inf] NP VP[inf] (see the grammar in Figure 4.3)" }, { "page_index": 174, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_4/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_4/slide_020.png",
"page_index": 174, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:22:37+07:00" }, "raw_text": "Parsing with Features Chart parsing algorithms developed in Chapter 3 all used an Feature Systems and Augmented extending active arcs with new constituent. Grammars Basic feature C- C...C;.X...C. systems for English to produce a new arc of the form: Morphological C- C...CX....C. Analysis and the lexicon Each constituent has feature values A simple Grammar using Features Example: Parse the sentence \"a dog' Parsing with 1.(NP AGR? a) ->.(ART AGR? a) ( N AGR? a Features Exercises (NP AGR? a)->(ART AGR ? a) o( N AGR? a 2. Take information of ART from lexicon: (ART root a AGR 3s ) To make arc 1 applicable, the variable ?a must be 3s, BK TP.HCM producing CSE Features and Augmented Grammar 2023.01 18/22" }, { "page_index": 175, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_4/slide_021.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_4/slide_021.png", "page_index": 175, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:22:41+07:00" }, "raw_text": "Parsing with Features (cont.) Feature Systems 3.( NP AGR 3s ) ->: ( ART AGR 3s ) ( N AGR 3s) and Augmented Grammars This arc can now be extended because of every feature in the Basic feature systems for rule in constituent 2. English 4.( NP AGR 3s ) -(ART AGR 3s ) .( N AGR 3s ) Morphological Analysis and the lexicon Consider extending this arc with constituent (N AGR 3s ) for A simple the word dog Grammar using Features 5.(N root DOG1 AGR 3s ) Parsing with Features Then can done because the AGR features agree. 
This completes the arc: 6. (NP AGR 3s) -> (ART AGR 3s) (N AGR 3s) o That means the parser has found a constituent of the form (NP AGR 3s)" }, { "page_index": 176, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_4/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_4/slide_022.png", "page_index": 176, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:22:45+07:00" }, "raw_text": "Parsing with Features The algorithm can be specified more precisely: Given an arc A, the constituent following the dot is called NEXT, and a new constituent X is being used to extend the arc. a. Find an instantiation of the variables such that the features specified in NEXT are found in X. b. Create a new arc A' as a copy of A, except for the instantiations of the variables determined in step a. c. Update A' as usual in a chart parser. Figure 4.6 describes the process of chart parsing for the sentence He wants to cry.
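The matching and arc-extension steps above can be sketched in Python (a toy illustration, not the course's parser; feature structures are represented as dicts and variables as strings starting with '?'):

```python
# A toy sketch of the arc-extension step: an arc is extended over a new
# constituent X only if every feature specified in NEXT can be found in X,
# possibly by instantiating variables.

def match(next_feats, x_feats, bindings=None):
    '''Find an instantiation of the variables such that every feature
    specified in NEXT is found in X; return the bindings, else None.'''
    bindings = dict(bindings or {})
    for feat, want in next_feats.items():
        have = x_feats.get(feat)
        if have is None:
            return None                      # X lacks a required feature
        if want.startswith('?') and want in bindings:
            want = bindings[want]            # dereference a bound variable
        if want.startswith('?'):
            bindings[want] = have            # bind a free variable
        elif want != have:
            return None                      # feature values clash
    return bindings

def extend_arc(arc, x_feats):
    '''An arc is (lhs, rhs, dot, bindings). Move the dot over the new
    constituent X if NEXT matches it, returning the new arc A' or None.'''
    lhs, rhs, dot, bindings = arc
    if dot >= len(rhs):
        return None                          # the arc is already complete
    new_bindings = match(rhs[dot], x_feats, bindings)
    if new_bindings is None:
        return None
    return (lhs, rhs, dot + 1, new_bindings)

# Parsing 'a dog': arc 1 is (NP AGR ?a) -> o (ART AGR ?a) (N AGR ?a)
arc = ({'CAT': 'NP', 'AGR': '?a'},
       [{'CAT': 'ART', 'AGR': '?a'}, {'CAT': 'N', 'AGR': '?a'}],
       0, {})
arc = extend_arc(arc, {'CAT': 'ART', 'ROOT': 'a', 'AGR': '3s'})   # ?a := 3s
arc = extend_arc(arc, {'CAT': 'N', 'ROOT': 'DOG1', 'AGR': '3s'})  # 3s agrees
print(arc[2], arc[3])  # the dot is at the end: (NP AGR 3s) is complete
```

As in the slides, the original arc is kept unchanged and a copy A' with the new bindings is produced, so other readings of the sentence remain available to the chart parser.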
" }, { "page_index": 177, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_4/slide_023.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_4/slide_023.png", "page_index": 177, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:22:52+07:00" }, "raw_text": "Chapter 4: Features and Augmented Grammar Figure 4.6: The chart parsing of the sentence He wants to cry" }, { "page_index": 178, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_4/slide_024.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_4/slide_024.png", "page_index": 178, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:22:55+07:00" }, "raw_text": "EXERCISES FOR CHAPTER 4 1. Extend the lexicon in Figure 4.2 (slide 12) and the grammar in Figure 4.3 (slide 13) so that the following two sentences are accepted: He was sad to see the dog cry. He saw the man saw the wood with the saw.
Parse the two sentences above by Top-Down Chart Parsing. 2. Specify an augmented context-free grammar and lexicon for simple subject-verb-object sentences. The grammar only allows appropriate pronouns in subject and object positions and enforces number agreement between the subject and verb. Thus it should accept \"I hit him\" but not \"me love you\"" }, { "page_index": 179, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_4/slide_025.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_4/slide_025.png", "page_index": 179, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:22:56+07:00" }, "raw_text": "Thank you!" }, { "page_index": 180, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_001.png", "page_index": 180, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:22:59+07:00" }, "raw_text": "Chapter 5: Grammar for Natural Languages 5.1 Auxiliary verbs and verb phrases English sentences typically contain a sequence of auxiliary verbs followed by a main verb Example: I can see the house I will have seen the house I was watching the movie I should have been watching the movie How do the auxiliaries constrain the verb that follows them? + have must be followed by a past participle form (either another auxiliary or a main verb)
+ be must be followed by a present participle form or, in the case of a passive sentence, by the past participle form" }, { "page_index": 181, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_002.png", "page_index": 181, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:23:03+07:00" }, "raw_text": "Chapter 5: Grammar for Natural Languages 5.1 Auxiliary verbs and verb phrases Auxiliaries: have, be, do, can, will, should, must, ... + do usually occurs alone (in the position of a main verb), but can accept a base form following it, example: \"I did eat my pizza\". + can, must must always be followed by a base form The first auxiliary (or verb) in the sequence must agree with the subject in simple declarative sentences and be in a finite form (past or present tense). This section analyses the structure of auxiliary sequences by combining new rules with feature restrictions." }, { "page_index": 182, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_003.png", "page_index": 182, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:23:06+07:00" }, "raw_text": "Chapter 5: Grammar for Natural Languages 5.1 Auxiliary verbs and verb phrases - Auxiliaries have subcategorization features that restrict their verb phrase complements The distinction between auxiliary and main verbs: + an auxiliary can be placed before the adverbial not, but a main verb cannot, example: \"I am not going\", \"I did not try it\".
+ Only an auxiliary verb can precede the subject NP in yes/no questions: \"Did you see the car?\", \"Can I try it?\". If there is no auxiliary, the auxiliary do must be added to form the yes/no question" }, { "page_index": 183, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_004.png", "page_index": 183, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:23:10+07:00" }, "raw_text": "5.1 Auxiliary verbs and verb phrases Primary auxiliaries are based on the root forms be and have. The other auxiliaries are called modals and generally appear in the finite forms (simple present, past). Example: can (could), will (would), shall (should), may (might), must, need, dare Auxiliary | COMPFORM | construction | example: modal | base | modal | can see the house; have | pastprt | perfect | have seen the house; be | ing | progressive | is lifting the box; be | pastprt | passive | was seen by the crowd" }, { "page_index": 184, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_005.png", "page_index": 184, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:23:14+07:00" }, "raw_text": "Chapter 5: Grammar for Natural Languages 5.1 Auxiliary verbs and verb phrases The new rule for a VP with an auxiliary: VP -> (AUX COMPFORM ?s) (VP VFORM ?s) COMPFORM indicates the VFORM of the VP complement A binary head feature MAIN could be introduced that is + for any main verb and - for an auxiliary verb.
Example: VP -> AUX[be] VP[ing, +main] A binary feature for the passive construction (PASS) marks a complement in the passive form and is + only if the VP involves the passive: Example: VP -> AUX[be] VP[ing, +pass] The passive rule: VP[+pass] -> AUX[be] VP[pastprt, +main]" }, { "page_index": 185, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_006.png", "page_index": 185, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:23:18+07:00" }, "raw_text": "Chapter 5: Grammar for Natural Languages 5.1 Auxiliary verbs and verb phrases - The lexicon of some auxiliaries: do (CAT AUX MODAL + VFORM pres AGR {1s 2s 3s 1p 2p 3p} COMPFORM base) can (CAT AUX MODAL + VFORM pres AGR {1s 2s 3s 1p 2p 3p} COMPFORM base) be (CAT AUX ROOT be VFORM base COMPFORM ing) have (CAT AUX ROOT have VFORM base COMPFORM pastprt)" }, { "page_index": 186, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_007.png", "page_index": 186, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:23:22+07:00" }, "raw_text": "5.1 Auxiliary verbs and verb phrases Passive Most verbs that include an NP in their complement allow the passive form.
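The COMPFORM constraints on auxiliary sequences described above can be sketched as a small checker (a toy illustration only, not part of the course materials; the category labels are assumptions):

```python
# A toy checker for the COMPFORM constraints: each auxiliary dictates the
# VFORM of the verb phrase that follows it (modal -> base, have -> pastprt,
# and be -> ing for the progressive or pastprt for the passive).

COMPFORM = {
    'modal': {'base'},
    'have':  {'pastprt'},
    'be':    {'ing', 'pastprt'},
}

def aux_sequence_ok(seq):
    '''seq lists (root, vform) pairs for the auxiliaries followed by the
    main verb; check each auxiliary against the VFORM of the next verb.'''
    for (aux, _), (_, next_vform) in zip(seq, seq[1:]):
        if next_vform not in COMPFORM.get(aux, set()):
            return False
    return True

# 'I should have been watching the movie'
print(aux_sequence_ok([('modal', 'pres'), ('have', 'base'),
                       ('be', 'pastprt'), ('watch', 'ing')]))   # True
# *'I should watching the movie' -- a modal must be followed by a base form
print(aux_sequence_ok([('modal', 'pres'), ('watch', 'ing')]))   # False
```

In the grammar itself this chaining is achieved by the single rule VP -> (AUX COMPFORM ?s) (VP VFORM ?s), with COMPFORM taken from the auxiliary's lexicon entry; the table here just makes the per-auxiliary constraints explicit.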
In a passive sentence, the \"object position\" NP becomes the first NP in the sentence, and the subject NP is either omitted or put into a PP with the preposition \"by\" Example: \"I will hide my hat in the drawer\" \"My hat will be hidden in the drawer\" - Some rules with auxiliaries for the passive forms: 1. S[-inv] -> (NP AGR ?a) (VP[fin] AGR ?a) 2. VP -> (AUX COMPFORM ?v) (VP VFORM ?v) 3. VP -> AUX[be] VP[ing, +main] 4. VP -> AUX[be] VP[ing, +pass]" }, { "page_index": 187, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_008.png", "page_index": 187, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:23:25+07:00" }, "raw_text": "Chapter 5: Grammar for Natural Languages 5.1 Auxiliary verbs and verb phrases - A new binary head feature PASSGAP is defined that is + only if the constituent is missing the object NP. This feature defaults to - if it is not specified in the left-hand side of the rule.
A simple _np SUBCAT in the grammar would be realized as two rules: VP[-passgap] -> V[_np] NP VP[+passgap] -> V[_np]" }, { "page_index": 188, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_009.png", "page_index": 188, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:23:29+07:00" }, "raw_text": "Chapter 5: Grammar for Natural Languages 5.1 Auxiliary verbs and verb phrases Figure 5.1: An active form sentence (parse tree for \"Jack can see the dog\")" }, { "page_index": 189, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_010.png", "page_index": 189, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:23:31+07:00" }, "raw_text": "Chapter 5: Grammar for Natural Languages 5.1 Auxiliary verbs and verb phrases Figure 5.2: A passive form sentence (parse tree for \"the dog was seen\")" }, { "page_index": 190, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_011.png", "page_index": 190, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:23:35+07:00" }, "raw_text": "Chapter 5: Grammar for Natural Languages To explore techniques for handling
question forms in English Consider yes/no questions and how they relate to their assertional counterparts: Jack is giving Sue a back rub -> Is Jack giving Sue a back rub? He will run in the marathon next year -> Will he run in the marathon next year? Yes/no questions appear identical in structure to their assertional counterparts, except that the NP subject and the first auxiliary have swapped positions. If there is no auxiliary in the assertional sentence, then an auxiliary with root do is used in the appropriate form. This rearranging of the subject and auxiliary is called subject-aux inversion" }, { "page_index": 191, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_012.png", "page_index": 191, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:23:39+07:00" }, "raw_text": "Chapter 5: Grammar for Natural Languages 5.2 Movement phenomena in Language Deriving yes/no questions from assertions by moving the constituents in this manner is a local movement. Wh-question: the constituents may be moved arbitrarily far from their original position. This movement is called unbounded movement. Example: \"The fat man will angrily put the book in the corner\" - If we are interested in who did the action, we may ask: a) Which fat man will angrily put the book in the corner? b) Who will angrily put the book in the corner?"
}, { "page_index": 192, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_013.png", "page_index": 192, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:23:43+07:00" }, "raw_text": "5.2 Movement phenomena in Language - If we are interested in how it is done, we may ask: How will the fat man put in the corner ?. C) d) In what way will the fat man put in the corner ? If we are interested in other aspects, we may ask: What will the fat man angrily put in the corner ?. f Where will the fat man angrily put the book ? Each question of wh-form has the same form as the original assertion, except that the part being questioned is removed and replaced by wh-phrase at begging of the sentence. Note: except when the part being questioned is the subject NP the subject and auxiliary are inverted as in yes/no question." }, { "page_index": 193, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_014.png", "page_index": 193, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:23:45+07:00" }, "raw_text": "5.2 Movement phenomena in Language How to control the fact that a constituent is missing from some places later in the sentence. *what will the fat man angrily put in the corner ? *What will the fat man angrily put the book in the corner ?. 
The place of the missing constituent is called the gap, and the moved constituent is called the filler" }, { "page_index": 194, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_015.png", "page_index": 194, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:23:50+07:00" }, "raw_text": "Chapter 5: Grammar for Natural Languages 5.2 Movement phenomena in Language Figure 5.3: The yes/no question is generated from the structure by a subject-aux inversion transformation" }, { "page_index": 195, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_016.png", "page_index": 195, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:23:53+07:00" }, "raw_text": "Chapter 5: Grammar for Natural Languages 5.2 Movement phenomena in Language Different types of movement - Wh-movement: move a wh-term to the front of the sentence to form a wh-question - Topicalization: move a constituent to the beginning of the sentence for emphasis, as in: I never like this picture This picture, I never like.
- Adverb preposing: move an adverb to the beginning of the sentence I will see you tomorrow Tomorrow, I will see you" }, { "page_index": 196, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_017.png", "page_index": 196, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:23:56+07:00" }, "raw_text": "Chapter 5: Grammar for Natural Languages Different types of movement - Extraposition: move certain NP complements to the sentence-final position, as in: A book discussing evolution was written A book was written discussing evolution" }, { "page_index": 197, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_018.png", "page_index": 197, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:23:59+07:00" }, "raw_text": "Chapter 5: Grammar for Natural Languages 5.3 Handling questions in context-free grammar y/n question: S[-inv] -> (NP AGR ?a) (VP [{pres past}] AGR ?a) (Grammar 4.7) S[+inv] -> (AUX AGR ?a SUBCAT ?v) (NP AGR ?a) (VP VFORM ?v) Wh-question: the feature GAP is used to introduce the wh-question form.
The GAP feature is passed from the mother to a subconstituent until the appropriate place for the gap is found in the sentence" }, { "page_index": 198, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_019.png", "page_index": 198, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:24:02+07:00" }, "raw_text": "Chapter 5: Grammar for Natural Languages 5.3 Handling questions in context-free grammar In the appropriate place, the subconstituent is realized by an empty rule (the right-hand side of this rule is empty): (NP GAP ((CAT NP) (AGR ?a)) AGR ?a) -> e There are two ways in which the GAP feature propagates, depending on whether the head constituent is a lexical or a non-lexical category: - if the head is a non-lexical category, GAP is passed from the mother to the head, not to any other subconstituents." }, { "page_index": 199, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_020.png", "page_index": 199, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:24:05+07:00" }, "raw_text": "5.3 Handling questions in context-free grammar Example: S has the GAP feature: (S GAP ?g) -> (NP GAP -) (VP GAP ?g) VP has the GAP feature. For rules with a lexical head, the gap may move to any one of the non-lexical subconstituents. For instance, for verb rules with an _np_pp complement, VP -> V[_np_pp] NP PP, there are two rules involving gaps: (VP GAP ?g) -> V[_np_pp] (NP GAP ?g) (PP GAP -) (VP GAP ?g) -> V[_np_pp] (NP GAP -) (PP GAP ?
g)" }, { "page_index": 200, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_021.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_021.png", "page_index": 200, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:24:09+07:00" }, "raw_text": "Chapter 5: Grammar for Natural Languages 5.3 Handling question in context free grammar The algorithm for adding GAP feature to grammar For each rule Y -> X ... H ... X. with head constituent H 1) If the rule specifies a GAP feature in some constituent already, then skip; 2) If the head GAP is not a lexical category, the add Gap feature ton the head and the mother, and -Gap to other sub-constituents producing a rule of the form: (Y GAP ? g) > (X GAP -)...(H GAP ? g)... (X, GAP - 3) If the head GAP is lexical category, the for each non-lexical constituent Xj, add a rule of the form: (Y GAP?g) >(X GAP-)...(X GAP?g)...(X. GAP -)" }, { "page_index": 201, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_022.png", "page_index": 201, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:24:12+07:00" }, "raw_text": "Chapter 5: Grammar for Natural Languages 5.3 Handling question in context free grammar In the Wh-question, filler NP and PP at the start of the sentence and are identified by new feature Wh that identifies a class of the phrases which introduce the questions: - Whom, Who, What can appear specify simple NPs 1 as pronouns and Example: Who ate the pizza? What did you put the box in ?. 
- What and which can appear as determiners in noun phrases: What book did you put on the desk? Which book did you put on the desk?" }, { "page_index": 202, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_023.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_023.png", "page_index": 202, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:24:15+07:00" }, "raw_text": "Chapter 5: Grammar for Natural Languages 5.3 Handling question in context free grammar - Where, when can appear in prepositional phrases: Example: Where did you put the book? When did you go to the store? - How acts as an adverbial modifier in adjective and adverbial phrases: Example: How quickly did he run? - Whose acts as a possessive pronoun: Example: Whose book did you find?" }, { "page_index": 203, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_024.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_024.png", "page_index": 203, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:24:23+07:00" }, "raw_text": "5.3 Handling question in context free grammar what: (CAT PRO WH {Q R} AGR {3s 3p}) what: (CAT QDET WH {Q R} AGR {3s 3p}) which: (CAT QDET WH {Q R} AGR {3s 3p}) which: (CAT PRO WH {Q R} AGR {3s 3p}) who: (CAT PRO WH {Q R} AGR {3s 3p}) whose: (CAT PRO WH {Q R} POSS + AGR {3s 3p}) when: (CAT PP-WRD WH {Q R} PFORM TIME) where: (CAT PP-WRD WH {Q R} PFORM {LOC MOT}) Figure 5.4: A lexicon for some of the wh-words" }, { "page_index": 204, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_025.png", "metadata": { "doc_type": "slide", "course_id": "CO3085",
"source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_025.png", "page_index": 204, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:24:29+07:00" }, "raw_text": "5.3 Handling question in context free grammar 1. (NP POSS?p WH?w=(PRO POSS?pWII?w) (NPWII?w)-(DETWII?wAGR?a)(CNPAGR?a) 3. CNPN 4 CNPADJM 5. DET-ARI 6. (DET WH?w)-(NP[+POSS]WH?a) 7. (DETWH?w)->(QDET WH?w) 3. (PP WH?w)=>P(NP WH?w) (PPWH?w)->(PP.WRD WH?w) Hcad fealure for NP.DET and CNP.AGR Headfcaturc for PP:PFORM Figure 5.5: A simple NP and PP grammar handling wh-words" }, { "page_index": 205, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_026.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_026.png", "page_index": 205, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:24:36+07:00" }, "raw_text": "Chapter 5: Grammar for Natural Languages 5.3 Handling question in context free grammar 10.(S(-inv]WH1?w- (NP WII ?w AGR 7a) (VPUvi AGR ?a) 11. (S[+inv] WH1 ?w GAP ?g) (AUX COMPFORM ?s AGR ?a) (NP WI ?w AGR ?a GAP -) (VP VFORM ?s GAP ?g) 12 S-(NP[Qgap] AGR ?a)(SI+im] G1P(NP AGR?a)) 13 S (PP[Q.gap] PFORM ?p)(S[+inm] GAP (PP PFORM ?p)) 14 VP (AUX COMIPFORM ?s) VPVFORM ?s) 15. VP M none] 16 VP _\"P]NP 17. VP Mvpiy)VP(in 18. VP -M_\"p_pif1NPVPlinf] 19. VP[inf]- 7O VP[basc] 20. 
VP -> V[_np_pp:loc] NP PP[loc] Head features for S, VP: VFORM, AGR Figure 5.6: The unexpanded S grammar for wh-questions" }, { "page_index": 206, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_027.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_027.png", "page_index": 206, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:24:39+07:00" }, "raw_text": "Chapter 5: Grammar for Natural Languages 5.3 Handling question in context free grammar Parsing with GAPs requires few changes to the parsing algorithm. In particular, rules may have an empty right-hand side, so an NP constituent may be empty anywhere: (NP AGR ?a GAP (NP AGR ?a)) -> e - The arc extension algorithm may be modified to handle the gaps automatically" }, { "page_index": 207, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_028.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_028.png", "page_index": 207, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:24:42+07:00" }, "raw_text": "Chapter 5: Grammar for Natural Languages 5.3 Handling question in context free grammar The algorithm to insert empty constituents: Whenever an arc of the form X -> ... (C F1 ?v1 ... Fn ?vn GAP (C G1 ?vg1 ... Gm ?vgm)) ... is suggested by the parser, and the constituent pattern in the GAP feature, that is (C G1 ?vg1 ... Gm ?vgm), matches the constituent itself, then insert an empty constituent (C F1 ?v1 ... Fn ?vn GAP (C G1 ?vg1 ... Gm ?vgm) EMPTY +) with the variables bound as necessary. Use this constituent to extend the original arc."
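The empty-constituent check just described can be sketched as follows. This is a minimal, hypothetical sketch: feature structures are plain dicts, ?-variables are assumed already bound, and full unification is replaced by simple feature agreement.

```python
# Sketch of the empty-constituent rule from 5.3 (simplified: dicts instead
# of real feature structures, agreement instead of unification).

def gap_matches_pattern(pattern):
    """True if the pattern's own GAP feature matches the pattern itself,
    i.e. the arc is looking for exactly the constituent that is the gap."""
    gap = pattern.get("GAP")
    if not isinstance(gap, dict):
        return False
    # The gap description must agree with the pattern on CAT and any
    # shared features (a stand-in for real unification).
    return all(pattern.get(k) == v for k, v in gap.items())

def empty_constituent(pattern):
    """Build the empty constituent licensed by the gap, marked EMPTY +."""
    filled = dict(pattern)
    filled["EMPTY"] = "+"
    return filled

# Usage: an arc needs (NP AGR 3p GAP (NP AGR 3p)) -- the gap site itself.
needed = {"CAT": "NP", "AGR": "3p", "GAP": {"CAT": "NP", "AGR": "3p"}}
if gap_matches_pattern(needed):
    c = empty_constituent(needed)   # extend the original arc with this
```

A pattern whose GAP disagrees with the pattern itself (say, an NP carrying a PP gap) is rejected, so no spurious empty constituent is inserted there.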
}, { "page_index": 208, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_029.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_029.png", "page_index": 208, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:24:49+07:00" }, "raw_text": "5.3 Handling question in context free grammar Example: Parse the sentence \"Which dogs did he see?\" by bottom up chart parsing (Figure 5.7) NPI WI1 Q AGR 3p 1 DETI CNPI DETI CNPI WH Q NP2 AGR3p AGR 3p AGR 3s 1 N1 1 QDETI 1 PRO1 QDETI NI AUX1 WH Q PROI AGR 3p AGR3s AGR 3r AGR 3s VFORM past SUBCAT base Which dogs did he S -, NPlQ1 (S GAP (NP AGR 3p)) Figure: 5.7: The chart after word the" }, { "page_index": 209, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_030.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_030.png", "page_index": 209, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:24:56+07:00" }, "raw_text": "Chapter 5: Grammar for Natural Languages 5.3 Handling question in context free grammar S2 VIORMlpast I NPI 2.S1 NPI SI WIIQ INV+ AGR3p GAP(NP AGR 3p) IDET1 VFORM past 2 CNPI 1 AUX1 2 NP23VPI DETI CNPI NP2 VPI WIIQ AGR3p AGR3s VFORM inl ACR 3p INI IPROI GAP(NP ACR 3p) IQDETI 1 V1 2 EMPTY-NPI QDETI NI AUXI PROI WIIQ AGR 3p AUK Js AGR3s AGRJV VFORMpasi SUBCAT base Which doys did he sce Figure 5.8: The final chart for \"Which dogs did he see ?.\"" }, { "page_index": 210, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_031.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": 
"/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_031.png", "page_index": 210, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:25:01+07:00" }, "raw_text": "Chapter 5 EXERCISE 1) Using the grammar developed in 5.3, show the analyses of the following questions in chart form, as show in figure 5.8: a) In which town were you born ? b) Where were you born ? c) When did they leave ? d) What town were you born in ? 2) GPSG allows certain rules to have multiple head sub- constituents. For instance VP -> VP and VP a) How does the presence of multiple heads effect the algorithm that produces propagation of the gap feature ?. In order to answer this question consider the following sentences: Who did you see and give the book to ? What man did Mary hate and Sue love ?" }, { "page_index": 211, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_032.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_5/slide_032.png", "page_index": 211, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:25:03+07:00" }, "raw_text": "Chapter 5 EXERCISE Also consider that the following sentences are ill-formed: *who did you see and give the book to John? *what man did Mary hate John and Sue love ? b) Write out the VP rule showing the GAP feature and then draw the chart for the sentence: Who did Mary see and Sue see ? Using grammar augmented on the figure 5.6 with your rule only show the constituents that are used in the final analysis, but be sure to show all the feature values for each constituent." 
}, { "page_index": 212, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_001.png", "page_index": 212, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:25:07+07:00" }, "raw_text": "Chapter 6: Ambiguity Resolutions: Statistical methods 6.1 basic Probability theory This section explores some techniques for solving the problems based on probability theory. A probability function, PROB, assigns a probability to every value of a random variable 1.PROB(e;)>0 for all i (Vi) 2. PROB(e;)<1 for all i (Vi) 1" }, { "page_index": 213, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_002.png", "page_index": 213, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:25:10+07:00" }, "raw_text": "Chapter 6: Ambiguity Resolutions: Statistical methods 6.1 basic Probability theory - Conditional probability is defined by the formula PROB(e e')=PROB(e & e')/PROB(e') where PROB ( e & e') is probability of two events e and ej occurring simultaneously - An important theorem relating conditional probabilities is Bayes' rule. 
This rule relates the conditional probability of an event A given B to the conditional probability of B given A: PROB(A | B) = PROB(B | A) * PROB(A) / PROB(B)" }, { "page_index": 214, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_003.png", "page_index": 214, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:25:14+07:00" }, "raw_text": "Chapter 6: Ambiguity Resolutions: Statistical methods 6.1 basic Probability theory - Two events A and B are independent of each other if and only if PROB(A | B) = PROB(A), which, using the definition of conditional probability, is equivalent to saying PROB(A & B) = PROB(A) * PROB(B) Example: PROB(Win | Rain) = PROB(Win & Rain) / PROB(Rain) = 0.15 / 0.3 = 0.5. If Win and Rain were independent of each other, then PROB(Win & Rain) = PROB(Win) * PROB(Rain) = 0.2 * 0.3 = 0.06, while the observed PROB(Win & Rain) = 0.15. Note! Winning and raining occur together at a rate much greater than random chance" }, { "page_index": 215, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_004.png", "page_index": 215, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:25:19+07:00" }, "raw_text": "Chapter 6: Ambiguity Resolutions: Statistical methods 6.1 basic Probability theory - Consider an application of probability theory related to language, namely part-of-speech identification: given a sentence with ambiguous words, determine the most likely lexical category for each word. Example: the word flies can be either V or N. 
Let C be a random variable that ranges over the parts of speech (V, N) and W one that ranges over all possible words. The problem can be stated as determining which of PROB(C = N | W = flies) and PROB(C = V | W = flies) is greater." }, { "page_index": 216, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_005.png", "page_index": 216, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:25:23+07:00" }, "raw_text": "Chapter 6: Ambiguity Resolutions: Statistical methods 6.1 basic Probability theory The conditional probabilities for the word flies with lexical categories N and V: PROB(N | flies) = PROB(flies & N) / PROB(flies) PROB(V | flies) = PROB(flies & V) / PROB(flies) Hence, the problem reduces to finding which of PROB(flies & N) and PROB(flies & V) is greater, because the denominator PROB(flies) is the same in each formula. Let's say we have a corpus of simple sentences containing 1,273,000 words. Say there are 1000 uses of the word flies, 400 of them in the N sense and 600 in the V sense. 
Then: PROB(flies) = 1000 / 1,273,000 = 0.0008 PROB(flies & N) = 400 / 1,273,000 = 0.0003" }, { "page_index": 217, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_006.png", "page_index": 217, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:25:26+07:00" }, "raw_text": "Chapter 6: Ambiguity Resolutions: Statistical methods 6.1 basic Probability theory PROB(flies & V) = 600 / 1,273,000 = 0.0005 Finally: PROB(V | flies) = PROB(V & flies) / PROB(flies) = 0.0005 / 0.0008 = 0.625" }, { "page_index": 218, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_007.png", "page_index": 218, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:25:30+07:00" }, "raw_text": "Chapter 6: Ambiguity Resolutions: Statistical methods 6.2 Estimating Probabilities - For instance, we use information on Harry's past performance (Harry's winning record over 100 races) to predict how likely he is to win his 101st race. - We are interested in parsing sentences that have never been seen before. Thus we need to use data on previously occurring sentences to predict the next sentences. We will always be working with estimates of probabilities rather than actual probabilities. Maximum likelihood estimate (MLE): If we have seen the word flies 1000 times before, and 600 of them were as a verb, we assume that PROB(V | flies) = 0.6, and use that to guide our guess for the 1001st case. 
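The flies computation and the MLE guess above can be reproduced directly from the counts. A minimal sketch, using the totals quoted on the slides:

```python
# Estimating PROB(V | flies) from corpus counts: 1,273,000 words,
# 1000 occurrences of "flies" (400 as N, 600 as V).
TOTAL_WORDS = 1_273_000
count_flies, count_flies_N, count_flies_V = 1000, 400, 600

p_flies = count_flies / TOTAL_WORDS          # ~0.0008
p_flies_and_N = count_flies_N / TOTAL_WORDS  # ~0.0003
p_flies_and_V = count_flies_V / TOTAL_WORDS  # ~0.0005

# PROB(V | flies) = PROB(flies & V) / PROB(flies); the corpus size cancels,
# so the exact ratio is 600/1000 = 0.6 (the slide's 0.625 comes from the
# rounded intermediate values 0.0005 / 0.0008).
p_V_given_flies = p_flies_and_V / p_flies

# Comparing the two joint probabilities suffices, since the denominator
# PROB(flies) is the same for both categories.
best = "V" if p_flies_and_V > p_flies_and_N else "N"
```

The comparison `p_flies_and_V > p_flies_and_N` is exactly the reduction made on the slide: the shared denominator never has to be computed to pick the winner.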
7" }, { "page_index": 219, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_008.png", "page_index": 219, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:25:34+07:00" }, "raw_text": "Chapter 6: Ambiguity Resolutions: Statistical methods 6.2 Estimating Probabilities Maximum likelihood estimate (MLE) This simple ratio estimate is called Maximum likelihood estimate - MLE. The accuracy of an estimate increases as the amount of data expands. The estimate is accuracy enough if it falls between 0.25 and 0.75. This range will be called margin error. result Estimate of Prob H Acceptable estimate HH 1.0 NO HT 0.5 YES TH 0.5 YES TT 0.0 NO Figure 6.1: Probabilities with two trails 8" }, { "page_index": 220, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_009.png", "page_index": 220, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:25:39+07:00" }, "raw_text": "Chapter 6: Ambiguity Resolutions: Statistical methods 6.2 Estimating Probabilities Maximum likelihood estimate : (MLE) Results Estimate of Prob (H) Acceptable Estimate HHH 1.0 NO HHT 0.66 YES HTH 0.66 YES HTT 0.33 YES THH 0.66 YES THT 0.33 YES TTH 0.33 YES TTT 0.0 NO Fiqure 6.2: Probabilities with three trails 9" }, { "page_index": 221, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": 
"/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_010.png", "page_index": 221, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:25:42+07:00" }, "raw_text": "Chapter 6: Ambiguity Resolutions: Statistical methods 6.2 Estimating Probabilities Sparse data There are a vast number of estimates needed for natural language applications, and large proportion of these events are quite rare. This is the problem of sparse data. For instance, the Brown corpus contains about a million words, but due to duplication there are only 49.000 different words and 40.000 of the words occur five times or less. The worst case occurs, if low-frequency word does not occur at all in one of its possible category. Its probability in this category would then be estimated as 0, then the probability of the overall sentence containing the word would be 0. There are other techniques s attempt to address the problem of estimating probabilities of low-frequency events. 10" }, { "page_index": 222, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_011.png", "page_index": 222, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:25:47+07:00" }, "raw_text": "Chapter 6: Ambiguity Resolutions: Statistical methods 6.2 Estimating Probabilities Sparse data Random variable X, technique start with a set of values Vi computed from the count of the number of times X = xi. 
MLE uses Vi equal to the count itself, that is, Vi is exactly the number of times X = xi, and PROB(X = xi) = Vi / Σj Vj. One technique to solve the zero-probability problem is to make sure that no Vi has the value 0, by setting Vi = count + 0.5; that is, 0.5 is added to every count. This estimation technique is called the expected likelihood estimate (ELE). The difference between MLE and ELE: for instance, consider a word w that does not occur in the corpus, and consider estimating the probability that w occurs in one of 40 word classes L1 ... L40. Compare the estimates given by MLE and ELE." }, { "page_index": 223, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_012.png", "page_index": 223, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:25:52+07:00" }, "raw_text": "Chapter 6: Ambiguity Resolutions: Statistical methods 6.2 Estimating Probabilities Evaluation How well does a new technique perform compared with other algorithms or variants of our algorithm? The general method for testing this is to divide the corpus into two parts, the training set and the test set. The test set typically consists of 10-20% of the total data. - The training set is used to estimate the probabilities, and the algorithm is then run on the test set to see how well it does on new data. A more thorough method of testing is called cross-validation: repeatedly remove different parts of the corpus as the test set, train on the remainder of the corpus, and then evaluate on the new test set. 
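The held-out evaluation scheme just described can be sketched like this. The helper names are hypothetical, and a real setup would shuffle the corpus before splitting:

```python
# Sketch of train/test splitting and k-fold cross-validation: hold out
# 10-20% as a test set; for cross-validation, rotate the held-out part.
def train_test_split(corpus, test_fraction=0.1):
    cut = int(len(corpus) * (1 - test_fraction))
    return corpus[:cut], corpus[cut:]

def cross_validation_folds(corpus, k=10):
    """Yield (training, test) pairs; each fold is held out exactly once."""
    fold = len(corpus) // k
    for i in range(k):
        test = corpus[i * fold:(i + 1) * fold]
        train = corpus[:i * fold] + corpus[(i + 1) * fold:]
        yield train, test

sentences = [f"s{i}" for i in range(100)]   # stand-in tagged corpus
train, test = train_test_split(sentences, test_fraction=0.2)
folds = list(cross_validation_folds(sentences, k=10))
```

Averaging a score over the ten folds uses every sentence for evaluation exactly once, which is why cross-validation is the more thorough test.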
12" }, { "page_index": 224, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_013.png", "page_index": 224, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:25:55+07:00" }, "raw_text": "Chapter 6: Ambiguity Resolutions: Statistical methods 6.3 Part - of- speech Tagging Part-of-speech tagging involves selecting the most likely sequence of the syntactic categories for the words a sentence. - A typical set of tags is used in the Penn Treebank project, is shown in figure 6.3 The general method to improve reliability is to use some of the local context of the sentence in which the word appears. For instance, if the word preceded by the word the , it is much more likely to be N. In the section, we use this technique to exploit such information. Let W1, ..., W, be a sequence of words. We want to find a 13" }, { "page_index": 225, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_014.png", "page_index": 225, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:26:08+07:00" }, "raw_text": "1. CC Coordinating conjunction 19. PPS Possessive pronoun 2 CD Cardinal number 20.RB Adverb DT Deteminer 21.RBR Comparative adverb 4.EX Existcntialther 22.RBS Superlative Adverb 5.FW Foreign word 23.RP Particle 6IN Preposition/subord.conj 24. SYM Symboi (nath or scientific) 7.JJ Adjcctive 25. 1O 8 JIR Comparative adjective 26.UH Interjection 9. JJS Superlative adjective 27. VB Verb,basc fon 10. IS Listitem marker 28. VBD Verb,past tensc 11. 
MD Modal 29. VBG Verb,gerund/pres.participle 12 NN Noun, singular or mass 30. VBN Verb, past participle 13. NNS Noun,plural 31. VBP Verb,non-3s.present 14. NNP Proper nou, singular 32. YBZ Verb.3s.presen IS.NNP'S Proper noun.plural 33.WDT Wh-determinc 16.1 PDI Picdeterminer 34.WP Wh-pronoun 17.POS Possessive ending 35.WPZ Possessive wh-pronoun 18. PRP Pcisonal pronoun 36.WRB Wh-adveib Figure 6.3 The Penn Treebank tagset 14" }, { "page_index": 226, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_015.png", "page_index": 226, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:26:12+07:00" }, "raw_text": "Chapter 6: Ambiguity Resolutions: Statistical methods 6.3 Part - of- speech Tagging 1)PROB(C...,C/W, ...,W) We solve this problem by Bayes' rule, which says that this conditional probability equals 2) PROB(C,..., C)* PROB(W,...,W/C,..., C PROB(W1,...,Wr) Finding C1,. . ., Cn, that gives a maximum value, the common denominator in all these cases will not affect the answer. Thus the problem reduces to finding C1,. . ., Cn, that maximizes the formula: 3 PROB(C,...,Cr)*PROB(W,...,W /C, ..., Cr There are still no effect methods for calculating the probability of these long sequences accurately, as it would require far too much data. 
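That the common denominator PROB(w1, ..., wT) cannot change which sequence wins can be checked numerically. The two candidate tag sequences and their probabilities below are made-up illustration values, not corpus figures:

```python
# Ranking tag sequences by PROB(C) * PROB(W | C) (formula 3) picks the
# same winner as the full conditional (formula 2), because dividing every
# candidate's score by the same PROB(W) preserves the ordering.
candidates = {
    ("N", "V"): (0.4, 0.002),   # (PROB(C1..CT), PROB(w1..wT | C1..CT))
    ("V", "N"): (0.1, 0.001),
}

# PROB(w1..wT) = sum over all candidate sequences of prior * likelihood.
p_words = sum(p * l for p, l in candidates.values())

by_joint = max(candidates,
               key=lambda c: candidates[c][0] * candidates[c][1])
by_conditional = max(candidates,
                     key=lambda c: candidates[c][0] * candidates[c][1] / p_words)
```

Both rankings select the same sequence, which is exactly why the reduction from formula 2 to formula 3 loses nothing.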
15" }, { "page_index": 227, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_016.png", "page_index": 227, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:26:15+07:00" }, "raw_text": "Chapter 6: Ambiguity Resolutions: Statistical methods 6.3 Part - of- speech Tagging But the probabilities can be approximated by marking some independence assumptions. Each of the two expressions in formula 3 will be approximated. The most common assumptions use either one or two previous categories The bigram model looks at pairs of categories (or words) and use the conditional probability that Ci will follow Ci-1, written as PROB(Ci/ Ci-1) The trigram model use conditional probability of one category (or word) given the two preceding categories (or words), that is PROB( Ci / Ci-2 Ci-1) 16" }, { "page_index": 228, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_017.png", "page_index": 228, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:26:19+07:00" }, "raw_text": "Chapter 6: Ambiguity Resolutions: Statistical methods 6.3 Part - of- speech Tagging n-gram model, in which n represents the number of word used in the pattern. While the trigram model will produce better result in practice. 
We use the bigram model here for simplicity: PROB(C1, ..., CT) ≈ Π i=1..T PROB(Ci | Ci-1) To account for the beginning of the sentence, we posit a pseudo-category 0 at position 0 as the value of C0. If ART is at the beginning of a sentence, the first bigram will be PROB(ART | 0). Example: the approximation of the probability of the sequence ART N V N using bigrams would be" }, { "page_index": 229, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_018.png", "page_index": 229, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:26:24+07:00" }, "raw_text": "Chapter 6: Ambiguity Resolutions: Statistical methods 6.3 Part - of- speech Tagging PROB(ART N V N) ≈ PROB(ART | 0) * PROB(N | ART) * PROB(V | N) * PROB(N | V) The second probability in formula 3, PROB(w1, ..., wT | C1, ..., CT) ≈ Π i=1..T PROB(wi | Ci), can be approximated by assuming that a word appears in a category independently of the words in the preceding or succeeding categories. - With these two approximations, the problem has changed into finding the sequence C1, ..., CT that maximizes the value of 3') PROB(C1, ..., CT) * PROB(w1, ..., wT | C1, ..., CT) ≈ Π i=1..T PROB(Ci | Ci-1) * PROB(wi | Ci)" }, { "page_index": 230, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_019.png", "page_index": 230, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:26:29+07:00" }, "raw_text": "Chapter 6: Ambiguity Resolutions: Statistical methods 6.3 Part - of- speech Tagging The advantage of formula 3' is that the probabilities involved can be readily estimated from a corpus of text labeled with parts of speech. In particular, given a database of text, the bigram probabilities can be estimated simply by counting the number of times each pair of categories occurs. Example: the probability that a V follows an N would be estimated as follows: PROB(Ci = V | Ci-1 = N) = count(N at position i-1 and V at i) / count(N at position i-1) To deal with the problem of sparse data, any bigram not listed in Figure 6.4 will be assumed to have a token probability of 0.0001. 
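The estimator just described, with the 0.0001 token probability for unseen bigrams, can be sketched as follows. The counts used here are a small subset chosen to be consistent with the estimates in Figure 6.4:

```python
# Bigram estimation: PROB(Ci = c | Ci-1 = prev) is the pair count divided
# by the count of the conditioning category, with a 0.0001 fallback for
# bigrams that never occur in the corpus (the sparse-data fix).
unigram = {"0": 300, "ART": 558, "N": 833, "V": 300, "P": 307}
bigram = {("0", "ART"): 213, ("ART", "N"): 558, ("N", "V"): 358}

def prob(c, prev):
    """PROB(Ci = c | Ci-1 = prev) with the token-probability fallback."""
    if (prev, c) not in bigram:
        return 0.0001
    return bigram[(prev, c)] / unigram[prev]
```

So `prob("V", "N")` reproduces the 0.43 of Figure 6.4, while an unseen pair such as P following the start-of-sentence marker falls back to 0.0001 instead of 0, keeping whole-sentence probabilities nonzero.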
19" }, { "page_index": 231, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_020.png", "page_index": 231, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:26:35+07:00" }, "raw_text": "Chapter 6: Ambiguity Resolutions: Statistical methods 6.3 Part - of- speech Tagging category Count at i pair Count at i,i+1 bigram estimate 0 300 O,ART 213 PROB(ART/O) 0.71 0 300 O,N 87 PORB(NO) 0.29 ART 558 ART, N 558 PROB(NART) 1.0 N 833 N,V 258 PROB(V/N) 0.43 N 833 N,N 108 PROB(MN) 0.13 N 833 N,P 366 PROB(P/N) 0.44 V 300 V,N 75 PROB(N) 0.35 v 300 V,ART 194 PROB(ART/V) 0.65 P 307 P, ART 226 PROB(ART/P) 0.74 P 307 P,N 81 PROB(NP) 0.26 Figure 6.4 Bigram probabilities from a corpus s [1] 20" }, { "page_index": 232, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_021.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_021.png", "page_index": 232, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:26:43+07:00" }, "raw_text": "Chapter 6: Ambiguity Resolutions: Statistical methods 6.3 Part - of- speech Tagging N V ART H TOTAL flies 21 23 44 fruit 49 5 1 55 lik 10 30 O 21 61 wwwww 1 201 202 a the 1 300 2 303 flower 53 15 68 flowers 42 16 58 birds 64 1 65 others 592 210 56 284 1142 833 300 558 307 1998 TOTAL Figure 6.5 A summary of some of the words counts in the corpus 21" }, { "page_index": 233, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": 
"/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_022.png", "page_index": 233, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:26:49+07:00" }, "raw_text": "Chapter 6: Ambiguity Resolutions: Statistical methods 6.3 Part - of- speech Tagging - Lexical probabilities PROB ( Wi / Ci ) can be estimated simply Example: some lexical probabilities are estimated based on data of figure 6.5, and are shown on figure 6.6 . .54 PROB(a1ART .360 PROBte 1ART PROBUlies 1N .025 PROBa1N .CO! PROB(Ilies1V .076 PROBowerIN 063 PROB(like 1V) .1 PROBlower1V .05 PROB(ikc1P .068 PROB(birds1N) .076 PROB(like1N .012 Figure 6.6 The lexical generation probabilities 22" }, { "page_index": 234, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_023.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_023.png", "page_index": 234, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:26:52+07:00" }, "raw_text": "Chapter 6: Ambiguity Resolutions: Statistical methods 6.3 Part - of- speech Tagging We can find the sequence of categories that has the highest probability of generating a specific sentence (POST) by the independent assumption that were made about the data. - Since we only deal with bigram probabilities, the probability is that the i'th word in category Ci depends only on category of the (i-1)th word, Ci-1. -Thus the process can be modeled by a special form of probabilistic finite state, as shown in Figure 6.7.Each node represents a possible lexical category and the transition probabilities. 
Networks like that in Figure 6.7 are called Markov chains. 23" }, { "page_index": 235, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_024.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_024.png", "page_index": 235, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:26:58+07:00" }, "raw_text": "Chapter 6: Ambiguity Resolutions: Statistical methods 6.3 Part-of-speech Tagging Example: The sequence of categories ART N V N has probability 0.71 * 1.0 * 0.43 * 0.35 = 0.107 (data from Figure 6.4). Figure 6.7 A Markov chain capturing the bigram probabilities 24" }, { "page_index": 236, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_025.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_025.png", "page_index": 236, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:27:02+07:00" }, "raw_text": "Chapter 6: Ambiguity Resolutions: Statistical methods 6.3 Part-of-speech Tagging Example: The probability that the sequence N V ART N generates the output flies like a flower is computed as follows: - The probability of the path N V ART N, given the Markov model in Figure 6.7: PROB(N V ART N) = 0.29 * 0.43 * 0.65 * 1.0 = 0.081 - The probability of the output flies like a flower for this sequence is computed from the probabilities in Figure 6.6: PROB(flies/N) * PROB(like/V) * PROB(a/ART) * PROB(flower/N) = 0.025 * 0.1 * 0.36 * 0.063 ≈ 5.4 * 10-5 Multiplying these together gives the likelihood that the HMM would generate the sentence along this path: PROB(flies like a flower & N V ART N) = 5.4 * 10-5 * 0.081 = 4.37 * 10-6 25" }, { "page_index": 237, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_026.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_026.png", "page_index": 237, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:27:06+07:00" }, "raw_text": "Chapter 6: Ambiguity Resolutions: Statistical methods 6.3 Part-of-speech Tagging The formula for computing the probability of sentence w1...wT given sequence C1...CT is: ∏ i=1,T PROB(Ci/Ci-1) * PROB(wi/Ci) If we keep track of the most likely sequence found so far for each possible ending category, we can ignore all the other, less likely sequences. Example: Finding the most likely categories for the sentence flies like a flower is shown in Figure 6.8. There are 256 different sequences of length four (four categories for each of four words). To find the most likely sequence, we sweep forward through the words one at a time, finding the best sequence for each ending category. 
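The path-times-output product above can be sketched directly; the tables below are toy numbers copied from Figures 6.4 and 6.6, and `O` marks the start of the sentence. Note the result differs slightly from the slide's 4.37 * 10-6 because the slide rounds the output product to 5.4 * 10-5 before multiplying:

```python
# Sketch: likelihood that the bigram HMM generates "flies like a flower"
# along the tag path N V ART N (toy probabilities from the slides).
transition = {("O", "N"): 0.29, ("N", "V"): 0.43,
              ("V", "ART"): 0.65, ("ART", "N"): 1.0}
lexical = {("flies", "N"): 0.025, ("like", "V"): 0.1,
           ("a", "ART"): 0.36, ("flower", "N"): 0.063}

def path_likelihood(words, tags):
    """PROB(tags) * PROB(words / tags) under the bigram HMM assumptions."""
    prob = 1.0
    prev = "O"  # start-of-sentence marker
    for word, tag in zip(words, tags):
        prob *= transition[(prev, tag)] * lexical[(word, tag)]
        prev = tag
    return prob

p = path_likelihood(["flies", "like", "a", "flower"], ["N", "V", "ART", "N"])
print(f"{p:.2e}")  # 4.60e-06
```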
This algorithm is usually called the Viterbi algorithm. 26" }, { "page_index": 238, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_027.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_027.png", "page_index": 238, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:27:13+07:00" }, "raw_text": "Chapter 6: Ambiguity Resolutions: Statistical methods 6.3 Part-of-speech Tagging Trellis nodes: flies/V, flies/N, flies/P, flies/ART; like/V, like/N, like/P, like/ART; a/N, a/P, a/ART; flower/V, flower/N, flower/P, flower/ART. Figure 6.8 Encoding the 256 possible sequences exploiting the Markov assumption 27" }, { "page_index": 239, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_028.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_028.png", "page_index": 239, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:27:15+07:00" }, "raw_text": "Chapter 6: Ambiguity Resolutions: Statistical methods 6.3 Part-of-speech Tagging The Viterbi algorithm Given word sequence w1...wT, lexical categories L1...LN, lexical probabilities PROB(wt/Li) and bigram probabilities PROB(Li/Li-1), find the most likely sequence of lexical categories C1...CT for the word sequence. Initialization step: For i := 1 to N do SEQSCORE(i,1) = PROB(w1/Li) * PROB(Li/O); BACKPTR(i,1) = 0 28" }, { "page_index": 240, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_029.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_029.png", "page_index": 240, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:27:19+07:00" }, "raw_text": "Chapter 6: Ambiguity Resolutions: Statistical methods 6.3 Part-of-speech Tagging The Viterbi algorithm (cont.) Iteration step: For t := 2 to T do for i := 1 to N do SEQSCORE(i,t) = MAX j=1,N (SEQSCORE(j,t-1) * PROB(Li/Lj)) * PROB(wt/Li); BACKPTR(i,t) = index of the j that gave the max above. Sequence identification step: C(T) = the i that maximizes SEQSCORE(i,T); For i := T-1 to 1 do C(i) = BACKPTR(C(i+1), i+1) 29" }, { "page_index": 241, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_030.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_030.png", "page_index": 241, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:27:24+07:00" }, "raw_text": "Chapter 6: Ambiguity Resolutions: Statistical methods 6.3 Part-of-speech Tagging Example: Using the Viterbi algorithm, compute the probability of the sequence W = flies like a flower. 
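The SEQSCORE/BACKPTR pseudocode above can be sketched in Python. The tables are the toy numbers from Figures 6.4 and 6.6, and unseen transitions fall back to the small default 10-4 that the worked example uses; the container names are illustrative:

```python
# Sketch of the Viterbi algorithm (SEQSCORE -> score, BACKPTR -> back)
# with toy probabilities from Figures 6.4 and 6.6.
CATS = ["V", "N", "ART", "P"]
START = {"ART": 0.71, "N": 0.29, "V": 1e-4, "P": 1e-4}  # PROB(Li/O)
TRANS = {("ART", "N"): 1.0, ("N", "V"): 0.43, ("N", "N"): 0.13,
         ("N", "P"): 0.44, ("V", "N"): 0.35, ("V", "ART"): 0.65,
         ("P", "ART"): 0.74, ("P", "N"): 0.26}           # PROB(Li/Lj)
LEX = {("flies", "N"): 0.025, ("flies", "V"): 0.076,
       ("like", "V"): 0.1, ("like", "N"): 0.012, ("like", "P"): 0.068,
       ("a", "ART"): 0.36, ("a", "N"): 0.001,
       ("flower", "N"): 0.063, ("flower", "V"): 0.05}    # PROB(w/Li)

def viterbi(words):
    score = [{c: START[c] * LEX.get((words[0], c), 0.0) for c in CATS}]
    back = [{}]
    for t in range(1, len(words)):
        score.append({}); back.append({})
        for c in CATS:
            best = max(CATS, key=lambda p: score[t-1][p] * TRANS.get((p, c), 1e-4))
            score[t][c] = (score[t-1][best] * TRANS.get((best, c), 1e-4)
                           * LEX.get((words[t], c), 0.0))
            back[t][c] = best
    # Sequence identification: follow back pointers from the best final category.
    last = max(CATS, key=lambda c: score[-1][c])
    tags = [last]
    for t in range(len(words) - 1, 0, -1):
        tags.append(back[t][tags[-1]])
    return list(reversed(tags))

print(viterbi(["flies", "like", "a", "flower"]))  # ['N', 'V', 'ART', 'N']
```

Running it on flies like a flower recovers the tag sequence N V ART N found in the worked example below.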
L = V, N, ART, P. i = 1 to N: SEQ(1,1) = PROB(flies/V)*PROB(V/O) = 7.6*10-2 * 10-4 = 7.6*10-6; SEQ(2,1) = PROB(flies/N)*PROB(N/O) = 0.025*0.29 = 7.25*10-3; SEQ(3,1) = PROB(flies/ART)*PROB(ART/O) = 0; SEQ(4,1) = PROB(flies/P)*PROB(P/O) = 0. t = 2, i = 1 to 4: SEQ(1,2) = max j=1,4 (SEQ(1,1)*PROB(V/V), SEQ(2,1)*PROB(V/N)) * PROB(like/V) = max(7.6*10-10, 7.25*10-3 * 0.43) * 0.1 = 3.1*10-4; SEQ(2,2) = max(SEQ(1,1)*PROB(N/V), SEQ(2,1)*PROB(N/N)) * PROB(like/N) = max(7.6*10-6 * 0.35, 7.25*10-3 * 0.13) * 0.012 = 1.13*10-5; SEQ(3,2) = max(SEQ(1,1)*PROB(ART/V), SEQ(2,1)*PROB(ART/N)) * PROB(like/ART) = 0; SEQ(4,2) = max(SEQ(1,1)*PROB(P/V), SEQ(2,1)*PROB(P/N)) * PROB(like/P) = max(7.6*10-6 * 10-4, 7.25*10-3 * 0.44) * 0.068 = 2.2*10-4 30" }, { "page_index": 242, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_031.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_031.png", "page_index": 242, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:27:31+07:00" }, "raw_text": "t = 3, i = 1 to 4: SEQ(1,3) = max j=1,4 (SEQ(1,2)*PROB(V/V), SEQ(2,2)*PROB(V/N), SEQ(4,2)*PROB(V/P)) * PROB(a/V) = max(3.1*10-4 * 10-4, 1.13*10-5 * 0.43, 2.2*10-4 * 10-4) * 0 = 0; SEQ(2,3) = max(SEQ(1,2)*PROB(N/V), SEQ(2,2)*PROB(N/N), SEQ(4,2)*PROB(N/P)) * PROB(a/N) = max(3.1*10-4 * 0.35, 1.13*10-5 * 0.13, 2.2*10-4 * 0.26) * 0.001 = 1.09*10-7; SEQ(3,3) = max(SEQ(1,2)*PROB(ART/V), SEQ(2,2)*PROB(ART/N), SEQ(4,2)*PROB(ART/P)) * PROB(a/ART) = max(3.1*10-4 * 0.65, 1.13*10-5 * 10-4, 2.2*10-4 * 0.74) * 0.36 = 7.25*10-5; SEQ(4,3) = max(SEQ(1,2)*PROB(P/V), SEQ(2,2)*PROB(P/N), SEQ(4,2)*PROB(P/P)) * PROB(a/P) = max(3.1*10-4 * 10-4, 1.13*10-5 * 0.44, 2.2*10-4 * 10-4) * 0 = 0. t = 4, i = 1 to 4: SEQ(1,4) = max j=1,4 (SEQ(2,3)*PROB(V/N), SEQ(3,3)*PROB(V/ART)) * PROB(flower/V) = max(1.09*10-7 * 0.43, 7.25*10-5 * 10-4) * 0.05 = 2.3*10-9; SEQ(2,4) = max(SEQ(2,3)*PROB(N/N), SEQ(3,3)*PROB(N/ART)) * PROB(flower/N) = max(1.09*10-7 * 0.13, 7.25*10-5 * 1.0) * 0.063 = 4.5*10-6; SEQ(3,4) = max(SEQ(2,3)*PROB(ART/N), SEQ(3,3)*PROB(ART/ART)) * PROB(flower/ART) = 0; SEQ(4,4) = max(SEQ(2,3)*PROB(P/N), SEQ(3,3)*PROB(P/ART)) * PROB(flower/P) = 0 31" }, { "page_index": 243, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_032.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_032.png", "page_index": 243, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:27:34+07:00" }, "raw_text": "Example: flies like a flower. L = V, N, ART, P Sequence identification step: C(T) = the i that maximizes SEQ(i,T) -> SEQ(2,4) = 4.5*10-6, so C(4) = 2 For i = 4-1 to 1 do C(3) = BACKPTR(C(4),4) = BACKPTR(2,4) = 3 C(2) = BACKPTR(C(3),3) = BACKPTR(3,3) = 1 C(1) = BACKPTR(C(2),2) = BACKPTR(1,2) = 2 2-1-3-2 -> N V ART N 32" }, { "page_index": 244, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_033.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_033.png", "page_index": 244, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:27:38+07:00" }, "raw_text": "Chapter 6: Ambiguity Resolutions: Statistical methods 6.4 Obtaining lexical probabilities - The simplest technique for estimating a lexical probability is to count the number of times each word appears in the corpus in each category. - The probability that the word w appears in a lexical category Lj out of possible categories L1..
.Ln could be estimated by the formula: PROB(Lj/w) = count(Lj & w) / Σ i=1,n count(Li & w) - A better estimate would be obtained by computing how likely it is that category Lj occurs at position t over all sequences given 33" }, { "page_index": 245, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_034.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_034.png", "page_index": 245, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:27:44+07:00" }, "raw_text": "Chapter 6: Ambiguity Resolutions: Statistical methods 6.4 Obtaining lexical probabilities the input, rather than only from the sequence that yields the maximum probability for the input. PROB(ART/the) = .99; PROB(N/like) = .16; PROB(N/flies) = .48; PROB(ART/a) = .995; PROB(V/flies) = .52; PROB(N/a) = .005; PROB(V/like) = .49; PROB(N/flower) = .78; PROB(P/like) = .34; PROB(V/flower) = .22 Figure 6.9 Context independent estimates for the lexical categories 34" }, { "page_index": 246, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_035.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_035.png", "page_index": 246, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:27:49+07:00" }, "raw_text": "Chapter 6: Ambiguity Resolutions: Statistical methods 6.4 Obtaining lexical probabilities Example: The probability that flies is an N in the sentence The flies like flowers would be calculated by summing the probability of all sequences that end with flies as N. 
Given the transition probabilities (Figure 6.4) and lexical generation probabilities (Figure 6.6), the sequences would be calculated as follows: The/ART flies/N 9.58*10-3; The/N flies/N 1.13*10-6; The/P flies/N 4.55*10-9. These three nonzero sequences sum to 9.58*10-3. Likewise, three nonzero sequences end with flies as V, yielding a total of 1.13*10-5. The sum over all sequences is the probability of the sequence The flies, namely 9.591*10-3 35" }, { "page_index": 247, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_036.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_036.png", "page_index": 247, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:27:52+07:00" }, "raw_text": "Chapter 6: Ambiguity Resolutions: Statistical methods 6.4 Obtaining lexical probabilities The probability that flies is a noun is then: PROB(flies/N | The flies) = PROB(flies/N & The flies)/PROB(The flies) = 9.58*10-3/9.591*10-3 = 0.9988 Likewise, the probability that flies is a verb would be 0.0012. Instead of the max step of the algorithm (the Viterbi algorithm), we compute the sum of all scores. 
We define the forward probability (Figure 6.10), written as αi(t), which is the probability of producing the words w1...wt and ending in state wt/Li: αi(t) = PROB(wt/Li & w1,...,wt) 36" }, { "page_index": 248, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_037.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_037.png", "page_index": 248, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:27:56+07:00" }, "raw_text": "Chapter 6: Ambiguity Resolutions: Statistical methods 6.4 Obtaining lexical probabilities Initialization step: for i = 1 to N do SEQSUM(i,1) = PROB(w1/Li) * PROB(Li/O) Computing the forward probabilities: for t = 2 to T do for i = 1 to N do SEQSUM(i,t) = (Σ j=1,N PROB(Li/Lj) * SEQSUM(j,t-1)) * PROB(wt/Li) Computing the lexical probabilities: for t = 1 to T do for i = 1 to N do PROB(Ct = Li) = SEQSUM(i,t) / Σ j=1,N SEQSUM(j,t) Figure 6.10 The forward algorithm for computing the lexical probabilities 37" }, { "page_index": 249, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_038.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_038.png", "page_index": 249, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:28:00+07:00" }, "raw_text": "Chapter 6: Ambiguity Resolutions: Statistical methods 6.4 Obtaining lexical probabilities Example: With the sentence The flies like flowers, α2(3) would be the sum of the values computed for all sequences ending in V (the second category) at position 3, given the input The flies like. 
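The forward algorithm of Figure 6.10 replaces Viterbi's max with a sum. A sketch with the same toy tables follows; the flowers entries are relative frequencies taken from the counts of Figure 6.5 (an assumption here, since the later worked example uses a slightly different value), and the container names are illustrative:

```python
# Sketch of the forward algorithm (SEQSUM) of Figure 6.10, used to get
# context-dependent lexical probabilities; toy tables from the slides.
CATS = ["ART", "N", "V", "P"]
START = {"ART": 0.71, "N": 0.29, "V": 1e-4, "P": 1e-4}
TRANS = {("ART", "N"): 1.0, ("N", "V"): 0.43, ("N", "N"): 0.13,
         ("N", "P"): 0.44, ("V", "N"): 0.35, ("V", "ART"): 0.65,
         ("P", "ART"): 0.74, ("P", "N"): 0.26}
LEX = {("the", "ART"): 0.54, ("the", "N"): 0.0012, ("the", "P"): 0.0065,
       ("flies", "N"): 0.025, ("flies", "V"): 0.076,
       ("like", "V"): 0.1, ("like", "N"): 0.012, ("like", "P"): 0.068,
       ("flowers", "N"): 0.0504, ("flowers", "V"): 0.0533}

def forward(words):
    """SEQSUM(i,t): sum over all category sequences ending in CATS[i] at t."""
    seqsum = [{c: START[c] * LEX.get((words[0], c), 0.0) for c in CATS}]
    for t in range(1, len(words)):
        seqsum.append({c: sum(seqsum[t-1][p] * TRANS.get((p, c), 1e-4)
                              for p in CATS) * LEX.get((words[t], c), 0.0)
                       for c in CATS})
    return seqsum

def lexical_posterior(words, t, cat):
    """PROB(Ct = cat) = SEQSUM(cat,t) / sum_j SEQSUM(j,t)."""
    seqsum = forward(words)
    return seqsum[t][cat] / sum(seqsum[t].values())

p = lexical_posterior(["the", "flies", "like", "flowers"], 1, "N")
print(round(p, 4))  # 0.9985: "flies" is almost certainly a noun here
```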
Using conditional probability, we derive the probability that word wt is an instance of the lexical category Li: PROB(wt/Li | w1...wt) = PROB(wt/Li & w1...wt) / PROB(w1...wt) (1) Since PROB(wt/Li & w1...wt) = αi(t) and PROB(w1...wt) = Σ j=1,N αj(t) (2), from (1) and (2): PROB(wt/Li | w1...wt) = αi(t) / Σ j=1,N αj(t) 38" }, { "page_index": 250, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_039.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_039.png", "page_index": 250, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:28:08+07:00" }, "raw_text": "Initialization step: For i = 1 to N do /* N = 4: ART, N, V, P */ SEQSUM(1,1) = PROB(the/ART)*PROB(ART/O) = 0.54*0.71 = 0.3834 SEQSUM(2,1) = PROB(the/N)*PROB(N/O) = 1/833*0.29 = 0.0012*0.29 = 0.0003481 SEQSUM(3,1) = PROB(the/V)*PROB(V/O) = 0.0*10-4 = 0.0 SEQSUM(4,1) = PROB(the/P)*PROB(P/O) = 2/307*10-4 = 0.0065147*10-4 Iteration step: t = 2 (flies) i = 1: SEQSUM(1,2) = (PROB(ART/ART)*SEQSUM(1,1) + PROB(ART/N)*SEQSUM(2,1) + PROB(ART/V)*SEQSUM(3,1) + PROB(ART/P)*SEQSUM(4,1)) * PROB(flies/ART) = (10-4*0.3834 + 10-4*0.0003481 + 0 + 0.74*0.0065147*10-4) * 0 = 0.0 i = 2: SEQSUM(2,2) = (PROB(N/ART)*SEQSUM(1,1) + PROB(N/N)*SEQSUM(2,1) + PROB(N/V)*SEQSUM(3,1) + PROB(N/P)*SEQSUM(4,1)) * PROB(flies/N) = (1.0*0.3834 + 0.13*0.0003481 + 0.0 + 0.26*0.0065147*10-4) * 0.025 = 9.585*10-3 i = 3: SEQSUM(3,2) = (PROB(V/ART)*SEQSUM(1,1) + PROB(V/N)*SEQSUM(2,1) + PROB(V/V)*SEQSUM(3,1) + PROB(V/P)*SEQSUM(4,1)) * PROB(flies/V) = (10-4*0.3834 + 0.43*0.0003481 + 0.0 + 10-4*0.0065147*10-4) * 0.076 = 1.4*10-5 i = 4: SEQSUM(4,2) = (PROB(P/ART)*SEQSUM(1,1) + PROB(P/N)*SEQSUM(2,1) + PROB(P/V)*SEQSUM(3,1) + PROB(P/P)*SEQSUM(4,1)) * PROB(flies/P) = (...) * 0.0 = 0.0 t = 3 (like) i = 1: SEQSUM(1,3) = (PROB(ART/ART)*SEQSUM(1,2) + PROB(ART/N)*SEQSUM(2,2) + PROB(ART/V)*SEQSUM(3,2) + PROB(ART/P)*SEQSUM(4,2)) * PROB(like/ART) = (...) * 0.0 = 0.0 i = 2: SEQSUM(2,3) = (PROB(N/ART)*SEQSUM(1,2) + PROB(N/N)*SEQSUM(2,2) + PROB(N/V)*SEQSUM(3,2) + PROB(N/P)*SEQSUM(4,2)) * PROB(like/N) = (1.0*0.0 + 0.13*9.585*10-3 + 0.35*1.4*10-5 + 0.44*0.0) * 0.012 = 1.5*10-5 i = 3: SEQSUM(3,3) = (PROB(V/ART)*SEQSUM(1,2) + PROB(V/N)*SEQSUM(2,2) + PROB(V/V)*SEQSUM(3,2) + PROB(V/P)*SEQSUM(4,2)) * PROB(like/V) = (10-4*0.0 + 0.43*9.585*10-3 + 10-4*1.4*10-5 + 10-4*0.0) * 0.1 = 0.412*10-3 i = 4: SEQSUM(4,3) = (PROB(P/ART)*SEQSUM(1,2) + PROB(P/N)*SEQSUM(2,2) + PROB(P/V)*SEQSUM(3,2) + PROB(P/P)*SEQSUM(4,2)) * PROB(like/P) = 0.2867*10-3 39" }, { "page_index": 251, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_040.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_040.png", "page_index": 251, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:28:15+07:00" }, "raw_text": "t = 4 (flowers) i = 1: SEQSUM(1,4) = (PROB(ART/ART)*SEQSUM(1,3) + PROB(ART/N)*SEQSUM(2,3) + PROB(ART/V)*SEQSUM(3,3) + PROB(ART/P)*SEQSUM(4,3)) * PROB(flowers/ART) = (...) * 0.0 = 0.0 i = 2: SEQSUM(2,4) = (PROB(N/ART)*SEQSUM(1,3) + PROB(N/N)*SEQSUM(2,3) + PROB(N/V)*SEQSUM(3,3) + PROB(N/P)*SEQSUM(4,3)) * PROB(flowers/N) = (1.0*0.0 + 0.13*1.5*10-5 + 0.35*0.412*10-3 + 0.26*0.2867*10-3) * 2.4*10-3 = 0.52*10-6 i = 3: SEQSUM(3,4) = (PROB(V/ART)*SEQSUM(1,3) + PROB(V/N)*SEQSUM(2,3) + PROB(V/V)*SEQSUM(3,3) + PROB(V/P)*SEQSUM(4,3)) * PROB(flowers/V) = (10-4*0.0 + 0.43*1.5*10-5 + 10-4*0.412*10-3 + 10-4*0.2867*10-3) * 0.0533 = 3.475*10-7 i = 4: SEQSUM(4,4) = (PROB(P/ART)*SEQSUM(1,3) + PROB(P/N)*SEQSUM(2,3) + PROB(P/V)*SEQSUM(3,3) + PROB(P/P)*SEQSUM(4,3)) * PROB(flowers/P) = 0.0 Computing the lexical probabilities: 1. 
PROB(the/ART | the) = SEQSUM(1,1)/(SEQSUM(1,1)+SEQSUM(2,1)+SEQSUM(3,1)+SEQSUM(4,1)) = 0.3834/(0.3834 + 0.0003481 + 0.0 + 0.0065147*10-4) = 0.3834/0.3837 = 0.9992 PROB(the/N | the) = SEQSUM(2,1)/0.3837 = 0.0003481/0.3837 = 9.07*10-4 PROB(the/V | the) = SEQSUM(3,1)/0.3837 = 0.0/0.3837 = 0.0 PROB(the/P | the) = SEQSUM(4,1)/0.3837 = 0.0065147*10-4/0.3837 = 1.696*10-6 2. PROB(flies/ART | the flies) = SEQSUM(1,2)/(SEQSUM(1,2)+SEQSUM(2,2)+SEQSUM(3,2)+SEQSUM(4,2)) = 0.0 PROB(flies/N | the flies) = SEQSUM(2,2)/9.599*10-3 = 9.585*10-3/9.599*10-3 = 0.9985 PROB(flies/V | the flies) = SEQSUM(3,2)/9.599*10-3 = 1.4*10-5/9.599*10-3 = 1.45*10-3 PROB(flies/P | the flies) = SEQSUM(4,2)/9.599*10-3 = 0.0 3. PROB(like/ART | the flies like) = SEQSUM(1,3)/(SEQSUM(1,3)+SEQSUM(2,3)+SEQSUM(3,3)+SEQSUM(4,3)) = 0.0/0.71*10-3 = 0.0 PROB(like/N | the flies like) = SEQSUM(2,3)/0.71*10-3 = 1.5*10-5/0.71*10-3 = 0.0211 PROB(like/V | the flies like) = SEQSUM(3,3)/0.71*10-3 = 0.412*10-3/0.71*10-3 = 0.58 PROB(like/P | the flies like) = SEQSUM(4,3)/0.71*10-3 = 0.2867*10-3/0.71*10-3 = 0.4038 4. PROB(flowers/ART | the flies like flowers) = SEQSUM(1,4)/(SEQSUM(1,4)+SEQSUM(2,4)+SEQSUM(3,4)+SEQSUM(4,4)) = 0.0 PROB(flowers/N | the flies like flowers) = SEQSUM(2,4)/0.86*10-6 = 0.52*10-6/0.86*10-6 = 0.604 PROB(flowers/V | the flies like flowers) = SEQSUM(3,4)/0.86*10-6 = 0.3475*10-6/0.86*10-6 = 0.404 PROB(flowers/P | the flies like flowers) = SEQSUM(4,4)/0.86*10-6 = 0.0 40" }, { "page_index": 252, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_041.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_041.png", "page_index": 252, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:28:22+07:00" }, "raw_text": "Trellis of forward probabilities for the/ART, flies/N, flies/V, like/N, like/V, like/P, flowers/N, flowers/V across the columns Initialization, Iteration 1, Iteration 2, Iteration 3. Figure 6.11 Computing the sums of the probabilities of the sequences 41" }, { "page_index": 253, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_042.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_042.png", "page_index": 253, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:28:28+07:00" }, "raw_text": "Figure 6.12 Context dependent estimates for the lexical categories in the sentence: The flies like flowers 42" }, { "page_index": 254, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_043.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_043.png", "page_index": 254, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:28:31+07:00" }, "raw_text": "Chapter 6: Ambiguity Resolutions: Statistical methods 6.4 Obtaining lexical probabilities - We could also consider the backward probability βi(t), the probability of producing the sequence wt,...,wT beginning from the state wt/Li. 
A better method of estimating the lexical probability for word wt combines the two: PROB(wt/Li) = αi(t)*βi(t) / Σ j=1,N αj(t)*βj(t) 43" }, { "page_index": 255, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_044.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_044.png", "page_index": 255, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:28:34+07:00" }, "raw_text": "Chapter 6: Ambiguity Resolutions: Statistical methods 6.5 Language Modeling : Introduction Applications. Today's goal: assign a probability to a sentence. Machine Translation: P(high winds tonite) > P(large winds tonite) Spell Correction: The office is about fifteen minuets from my house; P(about fifteen minutes from) > P(about fifteen minuets from) Speech Recognition: P(I saw a van) >> P(eyes awe of an) 44" }, { "page_index": 256, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_045.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_045.png", "page_index": 256, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:28:37+07:00" }, "raw_text": "6.5 Language Modeling : Introduction Goal: compute the probability of a sentence or sequence of words: P(W) = P(w1,w2,w3,w4,w5...wn) Related task: probability of an upcoming word: P(w5 | w1,w2,w3,w4) A model that computes either of these, P(W) or P(wn | w1,w2...wn-1), is called a language model. A better name would be the grammar, but language model, or LM, is standard 45" }, { "page_index": 257, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_046.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": 
"/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_046.png", "page_index": 257, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:28:41+07:00" }, "raw_text": "6.5 Language Modeling : Introduction The Chain Rule in general: P(x1,x2,x3,...,xn) = P(x1) P(x2|x1) P(x3|x1,x2) ... P(xn|x1,...,xn-1) Example: P(\"its water is so transparent\") = P(its) x P(water|its) x P(is|its water) x P(so|its water is) x P(transparent|its water is so) Markov Assumption - Bigram model: P(wi|w1...wi-1) is approximated by P(wi|wi-1) 46" }, { "page_index": 258, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_047.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_047.png", "page_index": 258, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:28:44+07:00" }, "raw_text": "6.5 Language Modeling : Introduction Zeros Training set: ... denied the allegations ... denied the reports ... denied the claims ... denied the request Test set: ... denied the offer ... denied the loan P(\"offer\" | denied the) = 0 47" }, { "page_index": 259, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_048.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_048.png", "page_index": 259, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:28:49+07:00" }, "raw_text": "6.6 Applications for spelling correction Word processing, phones, web search. 
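A minimal bigram language model over the toy denied-the corpus makes both the Markov assumption and the zero problem concrete; the corpus below is just the training lines from the slide, and the function names are illustrative:

```python
# Sketch: maximum-likelihood bigram language model and the zero problem.
from collections import Counter

training = [["denied", "the", "allegations"], ["denied", "the", "reports"],
            ["denied", "the", "claims"], ["denied", "the", "request"]]
unigrams = Counter(w for s in training for w in s)
bigrams = Counter((s[i], s[i+1]) for s in training for i in range(len(s) - 1))

def p_bigram(w, prev):
    """P(w | prev) estimated as count(prev, w) / count(prev)."""
    return bigrams[(prev, w)] / unigrams[prev] if unigrams[prev] else 0.0

def p_sentence(words):
    """Markov assumption: P(w1..wn) ~ product over i of P(wi | wi-1)."""
    p = 1.0
    for prev, w in zip(words, words[1:]):
        p *= p_bigram(w, prev)
    return p

print(p_bigram("allegations", "the"))  # 0.25
print(p_bigram("offer", "the"))        # 0.0 -- unseen in training: a zero
```

Any sentence containing an unseen bigram gets probability zero, which is why smoothing (such as the add-1 smoothing mentioned below) is needed.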
Screenshots: a word processor flags the misspelling componant and suggests component; a phone keyboard autocorrects as the user types; a web search for natural langage processing shows results for natural language processing instead. 48" }, { "page_index": 260, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_049.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_049.png", "page_index": 260, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:28:51+07:00" }, "raw_text": "Spelling Tasks Spelling Error Detection Spelling Error Correction: - Autocorrect (hte -> the) - Suggest a correction - Suggestion lists 49" }, { "page_index": 261, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_050.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_050.png", "page_index": 261, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:28:54+07:00" }, "raw_text": "Non-word spelling errors Non-word spelling error detection: any word not in a dictionary is an error; the larger the dictionary the better. Non-word spelling error correction: generate candidates (real words that are similar to the error) and choose the one which is best: shortest weighted edit distance, or highest noisy channel probability. Non-word spelling error example: acress 50" }, { "page_index": 262, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_051.png", 
"metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_051.png", "page_index": 262, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:28:58+07:00" }, "raw_text": "Non-word spelling errors Using a bigram language model: ...a stellar and versatile acress whose combination of sass and glamour... Counts from the Corpus of Contemporary American English with add-1 smoothing: P(actress|versatile) = .000021 P(whose|actress) = .0010 P(across|versatile) = .000021 P(whose|across) = .000006 P(\"versatile actress whose\") = .000021*.0010 = 210 x10-10 P(\"versatile across whose\") = .000021*.000006 = 1 x10-10 51" }, { "page_index": 263, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_052.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_052.png", "page_index": 263, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:29:01+07:00" }, "raw_text": "Chapter 6: Ambiguity Resolutions: Statistical methods 6.7 Probabilistic Context-Free Grammars A context-free grammar can be generalized to the probabilistic case by counting the number of times each rule is used in a corpus of parsed sentences and using these counts to estimate the probability of each rule being used. For instance, suppose there are m rules R1,..,Rm with left-hand side C. 
Estimating the probability of using rule Rj to derive C: PROB(Rj/C) = (# times Rj used) / Σ i=1,m (# times Ri used) 52" }, { "page_index": 264, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_053.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_053.png", "page_index": 264, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:29:10+07:00" }, "raw_text": "Chapter 6: Ambiguity Resolutions: Statistical methods 6.7 Probabilistic Context-Free Grammars Rule; count for LHS; count for rule; probability: 1. S -> NP VP, 300, 300, 1; 2. VP -> V, 300, 116, .386; 3. VP -> V NP, 300, 118, .393; 4. VP -> V NP PP, 300, 66, .22; 5. NP -> NP PP, 1023, 241, .24; 6. NP -> N N, 1023, 92, .09; 7. NP -> N, 1023, 141, .14; 8. NP -> ART N, 1023, 558, .55; 9. PP -> P NP, 307, 307, 1. Figure 6.13 A simple probabilistic grammar. The grammar in Figure 6.13 shows probabilities for a CFG 53" }, { "page_index": 265, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_054.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_054.png", "page_index": 265, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:29:13+07:00" }, "raw_text": "Chapter 6: Ambiguity Resolutions: Statistical methods 6.7 Probabilistic Context-Free Grammars We must assume that the probability of a constituent being derived by a rule Rj is independent of how the constituent is used as a sub-constituent. Inside probability PROB(wij/C): the probability that a constituent C generates the sequence of words wi,...,wj, written wij. This type of probability is called an inside probability because it assigns a probability to the word sequence inside the constituent. 
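Rule probabilities are again relative frequencies. A sketch over the counts of Figure 6.13 follows; note that the NP rule counts printed on the slide sum to 1032, slightly above the printed LHS total of 1023, so the last digit can differ from the slide's .55:

```python
# Sketch: PCFG rule probabilities PROB(Rj/C) = count(Rj) / sum of counts
# of all rules with left-hand side C, using the counts of Figure 6.13.
from collections import defaultdict

rule_counts = {("S", ("NP", "VP")): 300, ("VP", ("V",)): 116,
               ("VP", ("V", "NP")): 118, ("VP", ("V", "NP", "PP")): 66,
               ("NP", ("NP", "PP")): 241, ("NP", ("N", "N")): 92,
               ("NP", ("N",)): 141, ("NP", ("ART", "N")): 558,
               ("PP", ("P", "NP")): 307}

lhs_totals = defaultdict(int)
for (lhs, _), n in rule_counts.items():
    lhs_totals[lhs] += n

def rule_prob(lhs, rhs):
    """PROB(Rj / C): relative frequency of the rule among all C-rules."""
    return rule_counts[(lhs, rhs)] / lhs_totals[lhs]

print(rule_prob("S", ("NP", "VP")))             # 1.0
print(round(rule_prob("NP", ("ART", "N")), 2))  # ~0.54 with these counts
```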
Consider how to derive inside probabilities. - For lexical categories these are exactly the lexical generation probabilities in Figure 6.6; for instance, PROB(flower/N) is an inside probability 54" }, { "page_index": 266, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_055.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_055.png", "page_index": 266, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:29:18+07:00" }, "raw_text": "Chapter 6: Ambiguity Resolutions: Statistical methods 6.7 Probabilistic Context-Free Grammars For non-lexical categories: using such lexical generation probabilities, we can derive the probability of a constituent. Example: deriving the probability that the constituent NP generates the sequence a flower. In the grammar of Figure 6.13 there are two rules by which NP can generate two words, so the probability of NP generating a flower is: PROB(a flower/NP) = PROB(Rule 8/NP)*PROB(a/ART)*PROB(flower/N) + PROB(Rule 6/NP)*PROB(a/N)*PROB(flower/N) = 0.55*0.36*0.063 + 0.09*0.001*0.063 = 0.012 55" }, { "page_index": 267, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_056.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_056.png", "page_index": 267, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:29:21+07:00" }, "raw_text": "Chapter 6: Ambiguity Resolutions: Statistical methods 6.7 Probabilistic Context-Free Grammars This probability can then be used to compute the probability of larger constituents. 
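The two-word inside probability is a sum over the binary NP rules; this sketch uses the rule probabilities of Figure 6.13 and the lexical probabilities of Figure 6.6 (container names are illustrative):

```python
# Sketch: inside probability PROB(w1 w2 / NP), summing over the two
# binary NP rules (NP -> ART N with prob .55, NP -> N N with prob .09).
RULES = {("NP", ("ART", "N")): 0.55, ("NP", ("N", "N")): 0.09}
LEX = {("a", "ART"): 0.36, ("a", "N"): 0.001, ("flower", "N"): 0.063}

def inside_np(w1, w2):
    """Sum over binary NP rules of rule prob times lexical probs."""
    total = 0.0
    for (_, (c1, c2)), rule_p in RULES.items():
        total += rule_p * LEX.get((w1, c1), 0.0) * LEX.get((w2, c2), 0.0)
    return total

print(round(inside_np("a", "flower"), 3))  # 0.012
```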
For instance, the probability of generating the words a flower wilted from S could be computed by summing the probabilities of each of the possible trees shown in Figure 6.14. Note that in Figure 6.14 there are three trees, and the first two differ only in the derivation of a flower as an NP. Thus the probability of a flower wilted is: PROB(a flower wilted | S) = PROB(Rule 1 | S) * PROB(a flower | NP) * PROB(wilted | VP) + PROB(Rule 1 | S) * PROB(a | NP) * PROB(flower wilted | VP)." }, { "page_index": 268, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_057.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_057.png", "page_index": 268, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:29:28+07:00" }, "raw_text": "Chapter 6: Ambiguity Resolutions: Statistical methods 6.7 Probabilistic Context - Free grammars [Figure: three parse trees for a flower wilted, differing in how the words are split between the NP and the VP] Figure 6.14 The three possible ways to generate a flower wilted as an S"
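The S-level sum above can be sketched as follows. PROB(a flower | NP) = 0.012 is the value from the previous slide, and PROB(a | NP) uses rule NP -> N (0.14) with PROB(a | N) = 0.0001; the two VP inside probabilities are hypothetical placeholders, since the slides do not compute them.

```python
# Combine inside probabilities of sub-constituents into the S probability,
# summing over the two distinct NP/VP splits of "a flower wilted".
p_s_rule1 = 1.0                     # S -> NP VP has probability 1 in Figure 6.13
inside = {
    ("NP", "a flower"): 0.012,      # computed on the previous slide
    ("NP", "a"): 0.14 * 0.0001,     # NP -> N times PROB(a | N)
    ("VP", "wilted"): 1e-3,         # hypothetical placeholder
    ("VP", "flower wilted"): 1e-5,  # hypothetical placeholder
}
p_sentence = (p_s_rule1 * inside[("NP", "a flower")] * inside[("VP", "wilted")]
              + p_s_rule1 * inside[("NP", "a")] * inside[("VP", "flower wilted")])
print(p_sentence)  # small but non-zero sentence probability
```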
}, { "page_index": 269, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_058.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_058.png", "page_index": 269, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:29:32+07:00" }, "raw_text": "Chapter 6: Ambiguity Resolutions: Statistical methods 6.7 Probabilistic Context - Free grammars Using this method, the probability that a given sentence will be generated by the grammar can be computed efficiently The goal of probability parsing method is to find the most likely parse rather than overall probability of the given sentence. The probability of each constituent is computed from the probability of its sub-constituents and the probability of the rule used. For instance, entry E of category C using a rule i with n sub- constituents corresponding to entries E, ..., E.. then: 58" }, { "page_index": 270, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_059.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_059.png", "page_index": 270, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:29:36+07:00" }, "raw_text": "NP 425 1 N 422 0.14 NP 424 1 N 417 2 N 422 0.00011 NP 423 1 ART 416 2 N 422 0.54 S 421 1 NP 418 2 VP 420 3.2 x 10-8 NP 418 1 N 417 0.0018 VP 420 0.0018 N 417 0.001 N 422 0.00011 ART 416 0.99 V 410 0.00047 59 Figure. 
6.15 The full chart for a flower that is as NP" }, { "page_index": 271, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_060.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_060.png", "page_index": 271, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:29:40+07:00" }, "raw_text": "6.7 Probabilistic Context - Free grammars Machine translation The idea behind statistical machine translation comes from information theory. -A document is translated according to the probability distribution p(ef), that a string e in the target language 1 (for example, English) is the translation of a string g f in the source language (for example, French). - The problem of modeling the probability distribution p(ef) implementation is to apply Bayes Theorem, that is p(ef) a p(fe)p(e), where the translation model p(fe) is the probability that the source string is the translation 60" }, { "page_index": 272, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_061.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_061.png", "page_index": 272, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:29:43+07:00" }, "raw_text": "6.7 Probabilistic Context - Free grammars Machine translation of the target string, and the language model p(e) is the probability of seeing that target language string. This decomposition is attractive as it splits the problem into two subproblems. 
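The noisy-channel decomposition just described (pick the target string e maximizing p(f|e)p(e)) can be sketched as below. All candidate strings and probabilities are invented toy values, not output of a real translation or language model.

```python
# Noisy-channel translation choice: score each candidate target string e by
# translation model p(f|e) times language model p(e), and take the argmax.
def best_translation(f, candidates, p_f_given_e, p_e):
    return max(candidates, key=lambda e: p_f_given_e.get((f, e), 0.0) * p_e.get(e, 0.0))

p_f_given_e = {("la maison", "the house"): 0.6, ("la maison", "house the"): 0.7}
p_e = {"the house": 0.01, "house the": 0.0001}
print(best_translation("la maison", ["the house", "house the"], p_f_given_e, p_e))
# prints "the house": the language model p(e) outweighs the slightly better
# channel score of the ungrammatical candidate
```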
Finding the best translation ê is done by picking the one that gives the highest probability: ê = argmax(e in e*) p(e|f) = argmax(e in e*) p(f|e)p(e)" }, { "page_index": 273, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_062.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_062.png", "page_index": 273, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:29:45+07:00" }, "raw_text": "Chapter 6: Ambiguity Resolutions: Statistical methods 6.8 Best-First parsing (self-study)" }, { "page_index": 274, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_063.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_063.png", "page_index": 274, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:29:49+07:00" }, "raw_text": "6.9 Word Similarity Synonymy: a binary relation. Two words are either synonymous or not. Similarity (or distance): a looser metric. Two words are more similar if they share more features of meaning. Similarity is properly a relation between senses. The word \"bank\" is not similar to the word \"slope\". Bank1 is similar to fund3.
Bank2 is similar to slope5. But we'll compute similarity over both words and senses" }, { "page_index": 275, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_064.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_064.png", "page_index": 275, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:29:51+07:00" }, "raw_text": "Why word similarity Information retrieval Question answering Machine translation Natural language generation Language modeling Automatic essay grading Plagiarism detection Document clustering" }, { "page_index": 276, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_065.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_065.png", "page_index": 276, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:29:54+07:00" }, "raw_text": "Word similarity and word relatedness We often distinguish word similarity from word relatedness Similar words: near-synonyms Related words: can be related any way car, bicycle: similar car, gasoline: related, not similar" }, { "page_index": 277, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_066.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_066.png", "page_index": 277, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:29:57+07:00" }, "raw_text": "Two classes of similarity algorithms Thesaurus-based algorithms Are words \"nearby\" in hypernym hierarchy?
Do words have similar glosses (definitions)? Distributional algorithms Do words have similar distributional contexts?" }, { "page_index": 278, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_067.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_067.png", "page_index": 278, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:30:01+07:00" }, "raw_text": "Path based similarity [Figure: fragment of a thesaurus hierarchy containing standard, medium of exchange, scale, currency, money, Richter scale, coinage, fund, coin, budget, nickel, dime] Two concepts (senses/synsets) are similar if they are near each other in the thesaurus hierarchy = have a short path between them; concepts have a path of 1 to themselves" }, { "page_index": 279, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_068.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_068.png", "page_index": 279, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:30:04+07:00" }, "raw_text": "Refinements to path-based similarity pathlen(c1,c2) = 1 + number of edges in the shortest path in the hypernym graph between sense nodes c1 and c2. simpath(c1,c2) = 1 / pathlen(c1,c2), ranges from 0 to 1 (identity). wordsim(w1,w2) = max over c1 in senses(w1), c2 in senses(w2) of sim(c1,c2)" }, { "page_index": 280, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_069.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_069.png", "page_index": 280, "language": "en", "ocr_engine": "PaddleOCR 3.2",
"extractor_version": "1.0.0", "timestamp": "2025-11-01T10:30:08+07:00" }, "raw_text": "Summary: thesaurus-based similarity sim \"pata (G, ,)= pathlen(c, c,) 21og P(LCS(c,c,)) (c,c,)=-logP(LCS(c,c) simun(c,c,) = 1og P(q) + 1og P(c,) sim (G,c,)= jiang conrath logP(q)+1og P(c,)-21og P(LCS(c,c)) simeLeok(G,C,)= overlap(gloss(r(c)),gloss(q(c,)) r,qe RELS 69" }, { "page_index": 281, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_070.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_070.png", "page_index": 281, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:30:12+07:00" }, "raw_text": "Example: path-based similarity simpath(c1,c2) = 1/pathlen(c1,c2) standard medium of exchange scale 8 currency money 6 Richter scale coinage fund coin budget 3 2 nickel dime simpath(nickel,coin) = 1/2 = .5 simpath(fund,budget) = 1/2 = .5 simpath(nickel,currency) = 1/4 = .25 simpath(nickel,money) = 1/6 = .17 70 simpath(coinage,Richter scale) = 1/6 = .17" }, { "page_index": 282, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_071.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_071.png", "page_index": 282, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:30:15+07:00" }, "raw_text": "Problems with thesaurus-based meaning We don't have a thesaurus for every language Even if we do, they have problems with recall Many words are missing Most (if not all) phrases are missing Some connections between senses are missing Thesauri work less well for verbs, adjectives Adjectives and verbs have less structured hyponymy relations 71" }, { "page_index": 
283, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_072.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_072.png", "page_index": 283, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:30:18+07:00" }, "raw_text": "Distributional models of meaning - Offer much higher recall than hand-built thesauri. - Although they tend to have lower precision Zellig Harris (1954): \"oculist and eye-doctor .. occur in almost the same environments... If A and B have almost identical environments we say that they are synonyms Firth (1957): \"You shall know a word by the company it keeps!\" 72" }, { "page_index": 284, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_073.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_073.png", "page_index": 284, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:30:22+07:00" }, "raw_text": "Reminder: Term-document matrix Each document is a count vector in Nv: a column below As You Like lt Twelfth Night Julius Caesar Henry V battle 1 8 15 1 soldier 2 12 36 2 fool 37 58 1 5 6 clown 117 0 0 73" }, { "page_index": 285, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_074.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_074.png", "page_index": 285, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:30:25+07:00" }, "raw_text": "Reminder: Term-document matrix Two documents are similar if 
their vectors are similar. As You Like It | Twelfth Night | Julius Caesar | Henry V: battle 1 1 8 15; soldier 2 2 12 36; fool 37 58 1 5; clown 6 117 0 0" }, { "page_index": 286, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_075.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_075.png", "page_index": 286, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:30:30+07:00" }, "raw_text": "The words in a term-document matrix Each word is a count vector: a row below. As You Like It | Twelfth Night | Julius Caesar | Henry V: battle 1 1 8 15; soldier 2 2 12 36; fool 37 58 1 5; clown 6 117 0 0" }, { "page_index": 287, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_076.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_076.png", "page_index": 287, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:30:33+07:00" }, "raw_text": "The words in a term-document matrix Two words are similar if their vectors are similar. As You Like It | Twelfth Night | Julius Caesar | Henry V: battle 1 1 8 15; soldier 2 2 12 36; fool 37 58 1 5; clown 6 117 0 0" }, { "page_index": 288, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_077.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_077.png", "page_index": 288, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:30:35+07:00" }, "raw_text": "The Term-Context matrix Instead of using entire documents,
use smaller contexts: a paragraph, or a window of 10 words. A word is now defined by a vector over counts of context words" }, { "page_index": 289, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_078.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_078.png", "page_index": 289, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:30:39+07:00" }, "raw_text": "Sample contexts: 20 words (Brown corpus) equal amount of sugar, a sliced lemon, a tablespoonful of apricot on board for their enjoyment. Cautiously she sampled her first pineapple and another fruit whose taste she likened to that of of a recursive type well suited to programming on the digital computer. In finding the optimal R-stage policy from that of substantially affect commerce, for the purpose of gathering data and information necessary for the study authorized in the first section of this" }, { "page_index": 290, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_079.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_079.png", "page_index": 290, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:30:43+07:00" }, "raw_text": "Term-context matrix for word similarity Two words are similar in meaning if their context vectors are similar. aardvark computer data pinch result sugar: apricot 0 0 0 1 0 1; pineapple 0 0 0 1 0 1; digital 0 2 1 0 1 0; information 0 1 6 0 4 0" }, { "page_index": 291, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_080.png", "metadata": { "doc_type": "slide", "course_id": "CO3085",
"source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_080.png", "page_index": 291, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:30:46+07:00" }, "raw_text": "Should we use raw counts? For the term-document matrix We used tf-idf instead of raw term counts For the term-context matrix Positive Pointwise Mutual Information (PPMI) is common 80" }, { "page_index": 292, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_081.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_081.png", "page_index": 292, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:30:49+07:00" }, "raw_text": "Pointwise Mutual Information Pointwise mutual information: Do events x and y co-occur more than if they were independent? P(x,y) PMI between two words: (Church & Hanks 1989) Do words x and y co-occur more than if they were independent? 
PMI(word1,word2) = log2 [ P(word1,word2) / (P(word1) P(word2)) ]. Positive PMI between two words (Niwa & Nitta 1994): replace all PMI values less than 0 with zero" }, { "page_index": 293, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_082.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_082.png", "page_index": 293, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:30:55+07:00" }, "raw_text": "Computing PPMI on a term-context matrix Matrix F with W rows (words) and C columns (contexts), f_ij = count of word i in context j: aardvark computer data pinch result sugar: apricot 0 0 0 1 0 1; pineapple 0 0 0 1 0 1; digital 0 2 1 0 1 0; information 0 1 6 0 4 0. p_ij = f_ij / SUM(i=1..W) SUM(j=1..C) f_ij; p_i* = SUM(j=1..C) f_ij / SUM(i,j) f_ij; p_*j = SUM(i=1..W) f_ij / SUM(i,j) f_ij; pmi_ij = log2 [ p_ij / (p_i* p_*j) ]; ppmi_ij = pmi_ij if pmi_ij > 0, 0 otherwise" }, { "page_index": 294, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_083.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_083.png", "page_index": 294, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:31:04+07:00" }, "raw_text": "Count(w,context) f_ij: computer data pinch result sugar: apricot 0 0 1 0 1; pineapple 0 0 1 0 1; digital 2 1 0 1 0; information 1 6 0 4 0. p(w=information,c=data) = 6/19 = .32; p(w=information) = 11/19 = .58; p(c=data) = 7/19 = .37. p(w,context) and p(w): computer data pinch result sugar p(w): apricot 0.00 0.00 0.05 0.00 0.05 0.11; pineapple 0.00 0.00 0.05 0.00 0.05 0.11; digital 0.11 0.05 0.00 0.05 0.00 0.21; information 0.05 0.32 0.00 0.21 0.00 0.58; p(context) 0.16 0.37 0.11 0.26 0.11" }, { "page_index": 295, "chapter_num": 6, "source_file":
"/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_084.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_084.png", "page_index": 295, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:31:11+07:00" }, "raw_text": "p(w,context) p(w) computer data pinch result sugar pmi., =log2 apricot 0.00 0.00 0.05 0.00 0.05 0.11 P*P*j pineapple 0.00 0.00 0.05 0.00 0.05 0.11 digital 0.11 0.05 0.00 0.05 0.00 0.21 information 0.05 0.32 0.00 0.21 0.00 0.58 p(context) 0.16 0.37 0.11 0.26 0.11 pmi(information,data) = log, ( .32/ (.37*.58) ) =.58 (.57 using full precision PPMI(w,context) data computer pinch result sugar apricot 2.25 2.25 pineapple 2.25 2.25 digital 1.66 0.00 0.00 information 0.00 0.57 0.47 84" }, { "page_index": 296, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_085.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_085.png", "page_index": 296, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:31:14+07:00" }, "raw_text": "Reminder: cosine for computing similarity Dot product Unit vectors N vIw V;Wj V W cos(v,w) = i=1 vw W N N 2 W i= i=1 v; is the PPMI value for word v in context i w; is the PPMI value for word w in context i. 
Cos(v,w) is the cosine similarity of v and w" }, { "page_index": 297, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_086.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_086.png", "page_index": 297, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:31:18+07:00" }, "raw_text": "Cosine as a similarity metric [Figure: plot of the cosine function] -1: vectors point in opposite directions; +1: vectors point in the same direction; 0: vectors are orthogonal. Raw frequency or PPMI values are non-negative, so cosine ranges 0-1" }, { "page_index": 298, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_087.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_087.png", "page_index": 298, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:31:23+07:00" }, "raw_text": "Counts for contexts large, data, computer: apricot 1 0 0; digital 0 1 2; information 1 6 1. cos(v,w) = SUM(i=1..N) v_i w_i / ( sqrt(SUM v_i^2) * sqrt(SUM w_i^2) ). Which pair of words is more similar?
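The question above can be answered mechanically; the sketch below computes the three cosines for the toy large/data/computer vectors on this slide.

```python
# Cosine similarity: dot product divided by the product of vector lengths.
import math

v = {"apricot": [1, 0, 0], "digital": [0, 1, 2], "information": [1, 6, 1]}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

print(round(cosine(v["apricot"], v["information"]), 2))  # 1/sqrt(38) ≈ 0.16
print(round(cosine(v["digital"], v["information"]), 2))  # 8/sqrt(190) ≈ 0.58
print(round(cosine(v["apricot"], v["digital"]), 2))      # 0.0
```

So digital and information are the most similar pair, matching the worked numbers that follow.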
cosine(apricot,information) = (1*1 + 0*6 + 0*1) / ( sqrt(1+0+0) * sqrt(1+36+1) ) = 1/sqrt(38) = .16; cosine(digital,information) = (0*1 + 1*6 + 2*1) / ( sqrt(0+1+4) * sqrt(1+36+1) ) = 8/sqrt(190) = .58; cosine(apricot,digital) = (1*0 + 0*1 + 0*2) / ( sqrt(1+0+0) * sqrt(0+1+4) ) = 0" }, { "page_index": 299, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_088.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_088.png", "page_index": 299, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:31:26+07:00" }, "raw_text": "Other possible similarity measures sim_Jaccard(v,w) = SUM(i=1..N) min(v_i,w_i) / SUM(i=1..N) max(v_i,w_i); sim_Dice(v,w) = 2 * SUM(i=1..N) min(v_i,w_i) / SUM(i=1..N) (v_i + w_i); sim_JS(v||w): based on the Jensen-Shannon divergence between v and w" }, { "page_index": 300, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_089.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_089.png", "page_index": 300, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:31:29+07:00" }, "raw_text": "Evaluating similarity (the same as for thesaurus-based) Intrinsic Evaluation: Correlation between algorithm and human word similarity ratings Extrinsic (task-based, end-to-end) Evaluation: Spelling error detection, WSD, essay grading Taking TOEFL multiple-choice vocabulary tests Levied is closest in meaning to which of these: imposed, believed, requested, correlated" }, { "page_index": 301, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_090.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_090.png", "page_index": 301, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0",
"timestamp": "2025-11-01T10:31:35+07:00" }, "raw_text": "EXERCISE OF CHAPTER 6 1.. Hand C simulate 1 the Viterbi algorithm using the data and probability Figures7.4- 7.6 the estimates in on sentence Flower flowers like flowers. Draw transition network as in Figure 7.10-7.12 for the problem, and identify what part of speech the algorithm identifies for each word. Using the bigram and lexical generation probabilities g given in this chapter, calculate the word probabilities using the forward algorithm for the sentence The a flies like flower (involving a very rare use of the word a as a noun, as in the a flies, the b flies, and so on). Remember to use 0.0001 as a probability for any bigram not in the table. Are the results you get reasonable ?. If not, what is the problem and how might it be fixed ? 90" }, { "page_index": 302, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_091.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_091.png", "page_index": 302, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:31:41+07:00" }, "raw_text": "EXERCISE OF CHAPTER 6 3. Consider an extended version of Grammar 7.17 with the additional rule: 10.VP -VPP The revised rule probabilities are shown here (Any not mentioned are the same as in Grammar 7.17): 0.32 VP - I V NP PP 0.20 VP -> V NP 0.33 VP - VPP 0.15 In addition, the following bigram probabilities differ from those in Figure 7.4: PROB(N/V) = 0.53 PROB(ART/V) = 0.32 PROB(P/V) = 0.15 Hand simulate (or implement) the forward algorithm on Fruit a) flies like birds to produce the lexical probabilities. b) Draw out the full chart for Fruit.flies like birds, showing the probabilities of each constituent. 
91" }, { "page_index": 303, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_092.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_6/slide_092.png", "page_index": 303, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:31:46+07:00" }, "raw_text": "EXERCISE OF CHAPTER 6 Specify PMI between two words, Positive PMI between two words in the below table aardvark computer data pinch result sugar 0 0 0 1 0 Apricot 0 0 0 0 1 0 Pineapple 0 2 1 1 0 Digital 0 1 6 0 4 0 Information 92" }, { "page_index": 304, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_001.png", "page_index": 304, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:31:49+07:00" }, "raw_text": "Chapter 7: Semantics and Logical Form 7.1 Semantics and Logical Form Precisely defining the notions of semantics and meaning is surprisingly difficult. Example: - \"The fire means someone camped here last night. The verb \"mean \" has not thing to do with language A word \"Amble e\" means to walk slowly Consider the sentences: I was at an airport recently and while I was walking toward my departure gate, a guard at the entrance asked : \" Do you know what gate you are going to ? \". I answer him \"yes, I know\". But the guard the asked \"Which gate is it ?. 
The meaning we want is closer to the usage when defining a word" }, { "page_index": 305, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_002.png", "page_index": 305, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:31:53+07:00" }, "raw_text": "Chapter 7: Semantics and Logical Form 7.1 Semantics and Logical Form From now on, we use the term meaning in this context-independent sense, and we will use the term usage for the context-dependent aspect. Logical form - the representation of the context-independent meaning. Semantic interpretation - the process of mapping a sentence to its logical form. Contextual interpretation - the process of mapping the logical form to the final knowledge representation (KR) language. The knowledge representation language is First Order Predicate Calculus (FOPC). The logical language is the same as the final knowledge representation (FOPC)" }, { "page_index": 306, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_003.png", "page_index": 306, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:31:57+07:00" }, "raw_text": "[Figure: syntactic analysis of the ball is red, followed by semantic interpretation to the logical form (RED1 <THE b BALL>) and contextual interpretation to the final representation RED(BO73)] Figure 7.1 Logical form as an intermediate representation" }, { "page_index": 307, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_004.png", "metadata": { "doc_type":
"slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_004.png", "page_index": 307, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:32:01+07:00" }, "raw_text": "Chapter 7: Semantics and Logical Form 7.1 Semantics and Logical Form Logical form must allow to indexical terms, that is, the term are defined by context, such as pronouns, tense, quantifies. For instance: The red ball are indexical (RED 1 < THE b, BALL)) Many aspects of language depend on the context as well As the object referred to can only be identified with respect to a context and thus can not be uniquely determined at the logical form level. All of this could be treated as ambiguity at the logical form level. What is logical form 's formal status ? Uses the notion of a situation, which is a particular set of circumstances in the world" }, { "page_index": 308, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_005.png", "page_index": 308, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:32:05+07:00" }, "raw_text": "Chapter 7: Semantics and Logical Form 7.1 Semantics and Logical Form A situation can be a set of objects and their relations. For instance: very simple situation might consist of two objects. a ball B0005 and a person P86 and include the relationship that the person owns the ball. To encode that situation as the set: { (BALL B0005), (PERSON P86),(OWNS P86 B0005)} Logical form is a function which maps the discourse situation in which the utterance was made to a new discourse that result from the occurrence of the utterance." 
}, { "page_index": 309, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_006.png", "page_index": 309, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:32:09+07:00" }, "raw_text": "Chapter 7: Semantics and Logical Form 7.1 Semantics and Logical Form Example: old discourse situation: { (BALL B0005), (PERSON P86), (OWNS P86 B0005) } + new fact (B0005 has the property RED) -> new discourse situation: { (BALL B0005), (PERSON P86), (OWNS P86 B0005), (RED B0005) }. Figure 7.2 shows this view of the interpretation process, where the logical form acts as a function between situations" }, { "page_index": 310, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_007.png", "page_index": 310, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:32:13+07:00" }, "raw_text": "[Figure 7.2: Logical form as a function - syntactic analysis of \"the ball is red\" yields the logical form (ASSERT (RED1 ...)); contextual interpretation maps the initial discourse situation { (BALL B0005), (PERSON P86), (OWNS P86 B0005) } to the updated discourse situation { (BALL B0005), (PERSON P86), (OWNS P86 B0005), (RED B0005) }]" }, { "page_index": 311, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_008.png", "page_index": 311, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:32:17+07:00" }, "raw_text": "Chapter 7: Semantics and Logical Form 7.2 Word Senses and Ambiguity To develop a theory of semantics and semantic interpretation, we need to develop a structural model, just as we did for syntax. The basic semantic unit could be the word or the morpheme, but every word has one or more senses. Different senses can be organized into a set of broad classes of objects by which we classify the world. The different classes are: substance (physical object), quantity (such as numbers), quality, relation, place, time, position, state, action, affection, event, idea, concept, and plan. Two of the most influential classes are actions and events. Example: We lifted the box; it was hard work." }, { "page_index": 312, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_009.png", "page_index": 312, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:32:21+07:00" }, "raw_text": "Chapter 7: Semantics and Logical Form 7.2 Word Senses and Ambiguity A situation may act like an abstraction of the world over some location and time. For example: We laughed and sang at the football game -> the actions are laugh and sing at a particular time and location. A word can be semantically ambiguous if it maps to more than one sense. For instance, the word kid seems to be ambiguous between a baby goat and a human child; horses may be subdivided: mares, colts, trotters. 
Example: I have two kids and George has three" }, { "page_index": 313, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_010.png", "page_index": 313, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:32:24+07:00" }, "raw_text": "Chapter 7: Semantics and Logical Form 7.2 Word Senses and Ambiguity The ambiguity of verbs: Example 1: I ran last year and George did too. To distinguish between the two senses we can index the verb run: RUN1 - the exercise sense; RUN2 - the political sense. Example 2: I kissed Sue and George did too. The verb kiss is vague in that it does not specify where one is kissed. This is lexical ambiguity. Syntactic ambiguity: Example 3: Happy cats and dogs live on the farm (are the dogs happy or not?)" }, { "page_index": 314, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_011.png", "page_index": 314, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:32:29+07:00" }, "raw_text": "Chapter 7: Semantics and Logical Form 7.3 The basic logical form language - A primitive unit of meaning is called a word sense. - The language of meanings for more complex expressions resembles FOPC. - Word senses serve as the atoms or constants of the representation. Constants that describe objects in the world, including abstract objects such as events and situations, are called terms. Constants that describe relations and properties are called predicates. A proposition is formed from a predicate followed by an appropriate number of terms to serve as its arguments. Example: Fido is a dog: FIDO1 is a term, DOG1 is a predicate; the proposition is (DOG1 FIDO1)" }, { "page_index": 315, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_012.png", "page_index": 315, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:32:32+07:00" }, "raw_text": "Chapter 7: Semantics and Logical Form 7.3 The basic logical form language Predicates may be unary (properties), binary, or n-ary. Example: Sue loves Jack involves the binary predicate LOVES1 and is written as: (LOVES1 SUE1 JACK1) Different word classes in English correspond to different types of constants in logical form: + proper names (Jack, Mary) have word senses that are terms; + common nouns (dog, horse) have word senses that are unary predicates; + verbs (run, love, put) have word senses that are n-ary predicates, where n depends on how many terms the verb subcategorizes for." 
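The term/predicate/proposition distinction above can be made concrete with a small Python sketch. This is our own illustration, not course code: word senses are string constants, and a proposition is just a predicate applied to the right number of term arguments.

```python
# Sketch: propositions in the basic logical form language as tuples,
# with a predicate constant followed by its term arguments.

def proposition(predicate, *terms):
    """Build a proposition: a predicate followed by its argument terms."""
    return (predicate, *terms)

# Unary predicate: "Fido is a dog" -> (DOG1 FIDO1)
p1 = proposition("DOG1", "FIDO1")

# Binary predicate: "Sue loves Jack" -> (LOVES1 SUE1 JACK1)
p2 = proposition("LOVES1", "SUE1", "JACK1")

assert p1 == ("DOG1", "FIDO1")
assert p2 == ("LOVES1", "SUE1", "JACK1")
```

The tuple encoding matches the slides' prefix notation directly, so the examples on the slides can be transcribed one-for-one.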
}, { "page_index": 316, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_013.png", "page_index": 316, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:32:36+07:00" }, "raw_text": "[Figure 7.3: Two possible network representations of Sue loves Jack - one links the pred LOVES1 to its arguments by position (1: SUE1, 2: JACK1), the other by the roles agent (SUE1) and theme (JACK1)]" }, { "page_index": 317, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_014.png", "page_index": 317, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:32:41+07:00" }, "raw_text": "Chapter 7: Semantics and Logical Form 7.3 The basic logical form language A new class of constants called logical operators: NOT, AND (&), OR (v). Example: Sue does not love Jack -> (NOT (LOVES1 SUE1 JACK1)) Logical operators combine two or more propositions to form a complex proposition. FOPC contains operators such as v (OR - disjunction), & (AND - conjunction), and implication. In all there are 16 possible truth-functional binary operators in FOPC. When these operators are applied to natural language, the situations are more complex than in FOPC. Example: I went home and had a drink" }, { "page_index": 318, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_015.png", "page_index": 318, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:32:45+07:00" }, "raw_text": "Chapter 7: Semantics and Logical Form 7.3 The basic logical form language The general logical form: (connective proposition proposition) Example: Jack loves Sue or Jack loves Mary -> (OR (LOVES1 JACK1 SUE1) (LOVES1 JACK1 MARY1)) > One important construct is the quantifier. > In FOPC there are only two quantifiers, the universal and the existential. English contains a much larger range of quantifiers: all, some, most, many, a few, the, a, ... Example: \"All dogs bark\"; \"Most people laugh\". These quantifiers are called generalized quantifiers. The scope of quantifiers in natural language is more complex than in FOPC. Example: \"A man entered the room. 
He walked over the table\"" }, { "page_index": 319, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_016.png", "page_index": 319, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:32:49+07:00" }, "raw_text": "Chapter 7: Semantics and Logical Form 7.3 The basic logical form language Quantifiers are used in logical forms of the general form (quantifier variable : restriction-proposition body-proposition) Example: \"Most dogs bark\" -> (MOST1 d1 : (DOG1 d1) (BARKS1 d1)) This means that most of the objects d1 that satisfy (DOG1 d1) also satisfy (BARKS1 d1). Note that this has a very different meaning from the formula (MOST1 d2 : (BARKS1 d2) (DOG1 d2)). Other important quantifiers are the and a: Example: \"The dog barks\" -> (THE x : (DOG1 x) (BARKS1 x))" }, { "page_index": 320, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_017.png", "page_index": 320, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:32:53+07:00" }, "raw_text": "Chapter 7: Semantics and Logical Form 7.3 The basic logical form language Quantifier - Use - Example: THE - definite reference - the dog; A - indefinite reference - a dog; BARE - bare singular NP (mass term) - water, food; BARE - bare plural NP (generic) - dogs. Figure 7.4: Some common quantifiers The predicate operator PLUR: Example: dogs -> (PLUR DOG1), so the predicate DOG1 becomes an argument of the new predicate PLUR" }, { "page_index": 321, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_018.png", "page_index": 321, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:32:56+07:00" }, "raw_text": "Chapter 7: Semantics and Logical Form 7.3 The basic logical form language Example: the dogs bark -> (THE x : ((PLUR DOG1) x) (BARK1 x)) Plural noun phrases introduce the possibility of a new form of ambiguity, between a distributive reading and a collective reading. Example: (1) The dogs bark There is a specific set of dogs and each one of them barks - distributive reading (2) The dogs met at the corner It makes no sense to say that each individual dog met; rather, the meeting is true of the entire set of dogs - collective reading" }, { "page_index": 322, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_019.png", "page_index": 322, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:33:00+07:00" }, "raw_text": "Chapter 7: Semantics and Logical Form 7.3 The basic logical form language Example: (3) Two men bought a stereo There are two meanings: the sentence can mean that two men each bought a stereo (distributive reading) or that two men bought a stereo together (collective reading) Modal operators Modal operators look similar to logical operators but have some important differences, specifically for terms within their scope. 
Assume that Jack is also known as John to some people; then JACK1 = JOHN2" }, { "page_index": 323, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_020.png", "page_index": 323, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:33:04+07:00" }, "raw_text": "Chapter 7: Semantics and Logical Form 7.3 The basic logical form language If (HAPPY JOHN2) is true, then (HAPPY JACK1) is true. Likewise, if (OR (HAPPY JOHN2) (SAD JOHN2)) is true, then (OR (HAPPY JACK1) (SAD JACK1)) is true too. The same propositions within the scope of a modal operator such as BELIEVE1, however, are not interchangeable. Example: if Sue believes that Jack is happy, that is: (BELIEVE SUE1 (HAPPY JACK1)) then it does not necessarily follow that Sue believes John is happy, that is: (BELIEVE SUE1 (HAPPY JOHN2)) because Sue might not know that JACK1 and JOHN2 are the same person" }, { "page_index": 324, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_021.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_021.png", "page_index": 324, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:33:09+07:00" }, "raw_text": "Chapter 7: Semantics and Logical Form 7.3 The basic logical form language Tense - modal operators: PAST, PRES, FUT Example: (2) John saw Fido -> (PAST (SEES1 JOHN1 FIDO1)) (3) John will see Fido -> (FUT (SEES1 JOHN1 FIDO1)) Terms within the scope of tense operators behave the same way. Suppose John was the president but is not now; then from the fact that John sees Fido now we cannot conclude that the president sees Fido now." 
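The failure of substitution inside modal scopes can be demonstrated with a short Python sketch. This is our own illustration (the helper `substitute` and the tuple encoding are assumptions, not course code): co-referring terms may be swapped freely in truth-functional contexts, but the sketch deliberately blocks substitution inside a modal operator's scope.

```python
# Sketch: referential opacity of modal operators. Terms like JACK1 and
# JOHN2 that denote the same person are interchangeable in ordinary
# propositions, but not inside BELIEVE1 (a simplification: we leave the
# entire modal proposition untouched).

MODAL_OPERATORS = {"BELIEVE1"}

def substitute(formula, old, new):
    """Replace term old with new everywhere except inside modal scopes."""
    if not isinstance(formula, tuple):
        return new if formula == old else formula
    if formula[0] in MODAL_OPERATORS:
        return formula  # opaque context: no substitution inside
    return tuple(substitute(part, old, new) for part in formula)

# Truth-functional context: substitution goes through.
assert substitute(("HAPPY", "JACK1"), "JACK1", "JOHN2") == ("HAPPY", "JOHN2")

# Modal context: (BELIEVE1 SUE1 (HAPPY JACK1)) is left unchanged,
# since Sue might not know that JACK1 and JOHN2 are the same person.
belief = ("BELIEVE1", "SUE1", ("HAPPY", "JACK1"))
assert substitute(belief, "JACK1", "JOHN2") == belief
```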
}, { "page_index": 325, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_022.png", "page_index": 325, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:33:12+07:00" }, "raw_text": "Chapter 7: Semantics and Logical Form 7.3 The basic logical form language Example: If (PAST (SEES1 JOHN1 FIDO1)) is true, then (PAST (SEES1 PRESIDENT1 FIDO1)) is true. If (PRES (SEES1 JOHN1 FIDO1)) is true, then (PRES (SEES1 PRESIDENT1 FIDO1)) is not necessarily true" }, { "page_index": 326, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_023.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_023.png", "page_index": 326, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:33:16+07:00" }, "raw_text": "Chapter 7: Semantics and Logical Form 7.4 Encoding Ambiguity in the Logical Form Ambiguity is encoded at a separate level of representation from the logical form, often referred to as the quasi-logical form. Often words have different senses that satisfy identical structural constraints. The only way to encode these directly would be to build a separate logical form for each possible combination of senses for the words in the sentence. Example: \"Sue watched the ball\" The noun ball has at least two senses: BALL1 and BALL2. BALL1 is the object used in games; BALL2 is the social event involving dancing. 
So the quasi-logical form of \"Sue watched the ball\" is:" }, { "page_index": 327, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_024.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_024.png", "page_index": 327, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:33:20+07:00" }, "raw_text": "Chapter 7: Semantics and Logical Form 7.4 Encoding Ambiguity in the Logical Form (THE b1 : ({BALL1 BALL2} b1) (PAST (WATCH1 SUE1 b1))) This abbreviates two possible logical forms: 1) (THE b1 : (BALL1 b1) (PAST (WATCH1 SUE1 b1))) 2) (THE b1 : (BALL2 b1) (PAST (WATCH1 SUE1 b1))) The most complex forms of ambiguity in logical forms arise from the relative scoping of quantifiers and operators. Example: Every boy loves a dog -> (LOVES1 <EVERY b1 (BOY1 b1)> <A d1 (DOG1 d1)>) \"<\" and \">\" indicate the scoping abbreviation" }, { "page_index": 328, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_025.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_025.png", "page_index": 328, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:33:24+07:00" }, "raw_text": "Chapter 7: Semantics and Logical Form 7.4 Encoding Ambiguity in the Logical Form This abbreviates an ambiguity between the logical forms: (EVERY b1 : (BOY1 b1) (A d1 : (DOG1 d1) (LOVES1 b1 d1))) (A d1 : (DOG1 d1) (EVERY b1 : (BOY1 b1) (LOVES1 b1 d1))) Example: \"Every boy didn't run\" is ambiguous between the reading in which some boys didn't run and some did, that is, (NOT (EVERY b1 : (BOY1 b1) (RUN1 b1))) and the reading where no boys ran, that is, (EVERY b1 : (BOY1 b1) (NOT (RUN1 b1))) These two readings are captured by a single quasi-logical form ( )" }, { "page_index": 329, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_026.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_026.png", "page_index": 329, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:33:28+07:00" }, "raw_text": "Chapter 7: Semantics and Logical Form 7.4 Encoding Ambiguity in the Logical Form Two important constructs of meaning are names (NAME) and pronouns (PRO): (NAME <variable> <name>) (PRO <variable> <proposition>) Example: (1) \"John ran\" -> (<PAST RUN1> (NAME J1 \"John\")) (2) \"Every man liked him\" -> (<PAST LIKE1> <EVERY m1 MAN1> (PRO m2 (HE2 m2)))" }, { "page_index": 330, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_027.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_027.png", "page_index": 330, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:33:31+07:00" }, "raw_text": "Chapter 7: Semantics and Logical Form 7.5 Verbs and states in Logical Form So far, verbs have been mapped to senses acting as predicates; this can handle all the different forms but loses some generality. Example: (i) John broke the window with the hammer (ii) The hammer broke the window (iii) The window broke All these sentences describe the same type of event but in varying detail. 
We would need three different senses of break: BREAK1, BREAK2, and BREAK3" }, { "page_index": 331, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_028.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_028.png", "page_index": 331, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:33:35+07:00" }, "raw_text": "Chapter 7: Semantics and Logical Form 7.5 Verbs and states in Logical Form Thus, these sentences have three logical forms: (i') (<PAST BREAK1> (NAME J1 \"John\") <THE w1 WINDOW1> <THE h1 HAMMER1>) (ii') (<PAST BREAK2> <THE h1 HAMMER1> <THE w1 WINDOW1>) (iii') (<PAST BREAK3> <THE w1 WINDOW1>) Instead, events are introduced into the ontology, treating the meaning of \"John broke it\" along the following lines: (EXISTS e1 : (BREAK e1 (NAME J1 \"John\") (PRO i1 IT1)))" }, { "page_index": 332, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_029.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_029.png", "page_index": 332, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:33:39+07:00" }, "raw_text": "Chapter 7: Semantics and Logical Form 7.5 Verbs and states in Logical Form e1 is an event in which John broke the window. The meaning of the sentence John broke it with the hammer would then be: (EXISTS e1 : (& (BREAK e1 (NAME J1 \"John\") (PRO i1 IT1)) (INSTR e1 <THE h1 HAMMER1>))) The advantage is that additional modifiers, such as with the hammer or on Saturday, can be incrementally added to the basic representation by adding more predications involving the event. 
There are abstract semantic relationships that can hold between a verb and its arguments; these are often called thematic roles or case roles. The intuition is that John, the hammer, and the window play the same semantic roles in each of these sentences." }, { "page_index": 333, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_030.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_030.png", "page_index": 333, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:33:43+07:00" }, "raw_text": "Chapter 7: Semantics and Logical Form 7.5 Verbs and states in Logical Form Let's go back to the sentences: John broke the window with the hammer The hammer broke the window John is the actor (agent role), the window is the object (theme role), and the hammer is the instrument (instrument role) used to perform the breaking. The new notation for logical form: (1) (EXISTS e : (& (Event-p e) (Relation1 e obj1) ... (Relationn e objn))) The abbreviated form (2) for an assertion of (1): (2) (Event-p e [Relation1 obj1] ... [Relationn objn])" }, { "page_index": 334, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_031.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_031.png", "page_index": 334, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:33:47+07:00" }, "raw_text": "Chapter 7: Semantics and Logical Form 7.5 Verbs and states in Logical Form Using the three thematic roles of (2) - the agent, theme, and instrument roles - the meaning of \"John broke the window with the hammer\" would be (EXISTS e (& (BREAK e) (AGENT e (NAME J1 \"John\")) (THEME e <THE w1 WINDOW1>) (INSTR e <THE h1 HAMMER1>))) Given the sentence Mary was unhappy, its meaning would be: If we extend this sentence such as:" }, { "page_index": 335, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_032.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_032.png", "page_index": 335, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:33:51+07:00" }, "raw_text": "Chapter 7: Semantics and Logical Form 7.5 Verbs and states in Logical Form Mary was unhappy in the meeting, then its meaning would be: ( <[THEME (NAME J1 \"MARY\")] [IN-LOC ] Thus, we can develop a logical form appropriate for an extended sentence" }, { "page_index": 336, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_033.png", 
"page_index": 336, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:33:55+07:00" }, "raw_text": "Chapter 7: Semantics and Logical Form 7.6 Thematic roles We introduced the roles AGENT, THEME, and INSTR; now we define these relations more precisely and introduce other thematic roles. A range of roles has to do with locations, or abstract locations. Example - Role: I threw the ball to John / It fell to the ground - TO-LOC; I gave a book to John - TO-POSS; I caught the ball from John / I walked from here to school yesterday - FROM-LOC; I borrowed a book from John - FROM-POSS; The box contains a ball - AT-LOC; John owns a book - AT-POSS; The bird flew from the lake along the river gorge - PATH-LOC" }, { "page_index": 337, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_034.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_034.png", "page_index": 337, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:33:59+07:00" }, "raw_text": "Chapter 7: Semantics and Logical Form 7.6 Thematic roles A range of roles has to do with time: AT-TIME, TO-TIME, FROM-TIME. Example - Role: I saw the car at 3 o'clock - AT-TIME; I worked from one until three - FROM-TIME and TO-TIME. The roles apply to general state changes as well, such as temperature: AT-VALUE, FROM-VALUE. Example - Role: The temperature remained at zero - AT-VALUE; The temperature rose from zero - FROM-VALUE" }, { "page_index": 338, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_035.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_035.png", "page_index": 338, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:34:02+07:00" }, "raw_text": "Chapter 7: Semantics and Logical Form 7.6 Thematic roles The BENEFICIARY role is filled by the animate person for whom a certain event is performed, as in: I rolled on the floor for Lucy Find me the paper I gave the book to Jack for Susan Note: the last example demonstrates the need to distinguish the TO-POSS role (to Jack) from the BENEFICIARY role. The INSTR role describes a tool, material, or force used to perform some event, as in: Harry broke the glass with the telescope The telescope broke the glass I used some flour to make a cake I made a cake with some flour" }, { "page_index": 339, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_036.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_036.png", "page_index": 339, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:34:06+07:00" }, "raw_text": "Chapter 7: Semantics and Logical Form 7.6 Thematic roles Example: The sun dried the apples Jack used the sun to dry the apples The AGENT and INSTR roles can be combined into a more general role named CAUSAL-AGENT. The CO-AGENT role applies when two actors perform an action together: Henry lifted the piano with Jack The CO-THEME role applies when two objects are affected by an action. 
(i) Jack paid $1 to the man for the book (ii) Jack bought the book from the man for $1" }, { "page_index": 340, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_037.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_037.png", "page_index": 340, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:34:10+07:00" }, "raw_text": "Chapter 7: Semantics and Logical Form 7.6 Thematic roles To handle this, we must add the role CO-THEME for the second object in an exchange, and distinguish a PRIMARY event and a SECONDARY event. Example: Jack paid $1 to the man for the book. Jack: AGENT of both the PRIMARY and SECONDARY events; $1: THEME of the PRIMARY event; The man: TO-POSS of the PRIMARY event and FROM-POSS of the SECONDARY event; The book: THEME of the SECONDARY event. A table of some possible roles is given in Figure 7.5" }, { "page_index": 341, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_038.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_038.png", "page_index": 341, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:34:19+07:00" }, "raw_text": "Role and Subroles - Other Common Names - Definition: CAUSAL-AGENT - the object that caused the event; AGENT - intentional causation; INSTR - force/tool used in causing the event; THEME (PATIENT) - the thing affected by the event; EXPERIENCER - the person involved in perception or a physical/psychological state; BENEFICIARY - the person for whom an act is done; AT - the state/value on some dimension; AT-LOC (LOCATION) - current location; AT-POSS (POSSESSOR) - current possessor; AT-VALUE - current value; AT-TIME - current time; TO - final value in a state change; TO-LOC (DESTINATION) - final location; TO-POSS (RECIPIENT) - final possessor; TO-VALUE - final value; FROM - original value in a state change; FROM-LOC (SOURCE) - original location; FROM-POSS - original possessor; FROM-VALUE - original value; PATH - path over which something travels; CO-AGENT - secondary agent in an action; CO-THEME - secondary theme in an exchange. Figure 7.5: Some possible semantic roles" }, { "page_index": 342, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_039.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_039.png", "page_index": 342, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:34:22+07:00" }, "raw_text": "Chapter 7: Semantics and Logical Form 7.6 Thematic roles Inner roles of verbs Example: (i) John and I ran to the store (ii) I ran to the store and to the bank Thus, AGENT and TO-LOC are inner roles of run. 
THEME is usually an obligatory inner role of a verb, while AGENT may not be obligatory (for example, in passive sentences)" }, { "page_index": 343, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_040.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_040.png", "page_index": 343, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:34:25+07:00" }, "raw_text": "Chapter 7: Semantics and Logical Form 7.6 Thematic roles The table of inner roles. Jack ran: AGENT. Jack ran with a crutch: AGENT + INSTR. Jack ran with a crutch for Susan: AGENT + INSTR + BENEFICIARY. Jack destroyed the car: AGENT + THEME. Jack put the car through the wall: AGENT + THEME + PATH. Jack sold Henry the car: AGENT + TO-POSS + THEME" }, { "page_index": 344, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_041.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_041.png", "page_index": 344, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:34:33+07:00" }, "raw_text": "AGENT only: Jack ran. AGENT + INSTR: Jack ran with a crutch. AGENT + INSTR + BENEFICIARY: Jack ran with a crutch for Susan. AGENT + THEME: Jack destroyed the car. AGENT + THEME + PATH: Jack put the car through the wall. AGENT + TO-POSS + THEME: Jack sold Henry the car. AGENT + THEME + FROM-LOC + TO-LOC: Henry pushed the car from Jack's house to the junkyard. THEME: Jack is tall. EXPERIENCER + THEME: Henry believes that Jack is tall. AT-POSS + THEME: Susan owns a car. THEME + AT-LOC: I am in the closet. THEME: The ice melted. EXPERIENCER + THEME: Jack enjoyed the play. THEME + PATH + TO-LOC: The ball rolled down the hill to the water."
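The verb-to-inner-roles tables on these slides lend themselves to a small dictionary-based sketch. This is illustrative code, not from the course materials: the `INNER_ROLES` map and the `assign_roles` helper are assumed names, and the entries are copied from the slide examples (Jack ran, Jack sold Henry the car, etc.).

```python
# A minimal sketch of a lexicon recording the inner (obligatory) thematic
# roles of verbs, following the role inventory on the slides.
# All function and variable names here are illustrative assumptions.

INNER_ROLES = {
    "run": ["AGENT"],                       # Jack ran
    "destroy": ["AGENT", "THEME"],          # Jack destroyed the car
    "put": ["AGENT", "THEME", "PATH"],      # Jack put the car through the wall
    "sell": ["AGENT", "TO-POSS", "THEME"],  # Jack sold Henry the car
}

def assign_roles(verb, arguments):
    """Pair a verb's inner roles with its arguments, in order."""
    roles = INNER_ROLES[verb]
    if len(arguments) != len(roles):
        raise ValueError(f"{verb} expects {len(roles)} arguments")
    return dict(zip(roles, arguments))

print(assign_roles("sell", ["Jack", "Henry", "the car"]))
# {'AGENT': 'Jack', 'TO-POSS': 'Henry', 'THEME': 'the car'}
```

A real system would of course also consult the realization rules of Figure 7.6 (subject position, prepositions, passives) rather than relying on argument order alone.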
}, { "page_index": 345, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_042.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_042.png", "page_index": 345, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:34:39+07:00" }, "raw_text": "Role: realization. AGENT: as subject in active sentences; preposition by in passive sentences. THEME: as object of transitive verbs; as subject of nonaction verbs. INSTR: as subject in active sentences with no agent; preposition with. EXPERIENCER: as animate subject in active sentences with no agent. BENEFICIARY: as indirect object with transitive verbs; preposition for. AT-LOC: prepositions in, on, beyond, etc. AT-POSS: possessive NP; as subject of sentence if no agent. TO-LOC: prepositions to, into. TO-POSS: preposition to; indirect object with certain verbs. FROM-LOC: prepositions from, out of, etc. FROM-POSS: preposition from. Figure 7.6: Common realizations of the major roles" }, { "page_index": 346, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_043.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_043.png", "page_index": 346, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:34:43+07:00" }, "raw_text": "Chapter 7: Semantics and Logical Form 7.7 Speech Acts and Embedded Sentences The logical form language is extended to capture the semantic distinctions of utterances.
Each major sentence type has a corresponding operator that takes the sentence interpretation as an argument and produces what is called a surface speech act. They are indicated by new operators as follows: ASSERT - the proposition is being asserted; Y/N-QUERY - the proposition is being queried; COMMAND - the proposition describes an action to perform; WH-QUERY - the proposition describes an object to be identified" }, { "page_index": 347, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_044.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_044.png", "page_index": 347, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:34:47+07:00" }, "raw_text": "Chapter 7: Semantics and Logical Form 7.7 Speech Acts and Embedded Sentences Example: (1) The man ate a peach -> (ASSERT (<PAST EAT> e1 [AGENT <THE m1 MAN1>] [THEME <A p1 PEACH1>])) (2) Did the man eat a peach? -> (Y/N-QUERY (<PAST EAT> e1 [AGENT <THE m1 MAN1>] [THEME <A p1 PEACH1>])) (3) Eat the peach! -> (COMMAND (EAT e1 [THEME <THE p1 PEACH1>])) Wh-phrases introduce WH terms: which man -> <WH m1 MAN1>; who -> <WH p1 PERSON>; how many -> HOW-MANY; how much -> HOW-MUCH. Example: Who is the leader of every group? is ambiguous; on one reading it asks for the leader of each of the groups." }, { "page_index": 349, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_046.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_046.png", "page_index": 349, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:34:54+07:00" }, "raw_text": "Chapter 7: Semantics and Logical Form 7.7 Speech Acts and Embedded Sentences Example: the logical form for a WH-question: (i) What did the man eat?
-> (WH-QUERY (<PAST EAT> e1 [AGENT <THE m1 MAN1>] [THEME <WH w1 PHYSOBJ>])) The logical form for the complex assertive sentence: (ii) The man who ate a peach left -> (ASSERT (<PAST LEAVE> l1 [AGENT <THE m1 (& (MAN1 m1) (<PAST EAT> e2 [AGENT m1] [THEME <A p1 PEACH1>]))>]))" }, { "page_index": 350, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_047.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_047.png", "page_index": 350, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:35:02+07:00" }, "raw_text": "UTTERANCE -> (ASSERT PROPOSITION) | (Y/N-QUERY PROPOSITION) | (COMMAND PROPOSITION) | (WH-QUERY PROPOSITION); PROPOSITION -> (n-ARY-OPERATOR PROPOSITION ... PROPOSITION) | (QUANTIFIER VARIABLE : PROPOSITION PROPOSITION) | (n-ARY-PREDICATE TERM ... TERM) | (EVENT-STATE-PRED VARIABLE [ROLE-NAME TERM] ... [ROLE-NAME TERM]); TERM -> VARIABLE | (NAME VARIABLE NAME-STRING) | (PRO VARIABLE PROPOSITION); 1-ARY-OPERATOR -> NOT | PAST | PERF | PROG; 2-ARY-OPERATOR -> AND | BUT | IF-THEN; QUANTIFIER -> THE | SOME | WH | ...; VARIABLE -> b1 | man3 | ...; 1-ARY-PREDICATE -> TYPE-PREDICATE | HAPPY1 | ...; TYPE-PREDICATE -> EVENT-STATE-PRED | (PLUR TYPE-PREDICATE) | MAN1 | ...; EVENT-STATE-PRED -> RUN1 | LOVE3 | GIVE1 | HAPPY1 | ...; 2-ARY-PREDICATE -> ROLE-NAME | ABOVE1 | ...; ROLE-NAME -> AGENT | THEME | AT-LOC | INSTR | ...; NAME-STRING -> \"John\" | \"The New York Times\" | ... Figure 7.7: A formal definition of the syntax of the logical form language" }, { "page_index": 351, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_048.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_048.png", "page_index": 351, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:35:08+07:00" }, "raw_text": "TERM -> <QUANTIFIER VARIABLE PROPOSITION>; TERM -> {TERM ... TERM}; 1-ARY-OPERATOR -> {1-ARY-OPERATOR ... 1-ARY-OPERATOR}; QUANTIFIER -> {QUANTIFIER ... QUANTIFIER}; n-ARY-PREDICATE -> {n-ARY-PREDICATE ... n-ARY-PREDICATE}; TYPE-PREDICATE -> {TYPE-PREDICATE ... TYPE-PREDICATE}; EVENT-STATE-PRED -> {EVENT-STATE-PRED ... EVENT-STATE-PRED}; ROLE-NAME -> {ROLE-NAME ... ROLE-NAME} Figure 7.8: Additional rules defining the quasi-logical form" }, { "page_index": 352, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_049.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_7/slide_049.png", "page_index": 352, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:35:11+07:00" }, "raw_text": "EXERCISES of CHAPTER 7 1. State whether each of the following sentences is ambiguous or not. Specifically, state whether they are ambiguous because of their possible syntactic structures, their word senses, their semantic structures, or a combination of these factors. Give a paraphrase of each reading. a) A man stopped at every truck stop. b) Several people ate the pizza. c) We saw her duck. 2. Specify a quasi-logical form for the following sentences.
If the sentence is ambiguous, make sure you represent all the possibilities, either using ambiguous logical forms or listing several logical forms. George ate a pizza at every road stop. Several employees from every company bought a pizza. We saw John in the park by the beach" }, { "page_index": 353, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_001.png", "page_index": 353, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:35:14+07:00" }, "raw_text": "Chapter 8: Linking Syntax and Semantics 8.1 Semantic Interpretation and Compositionality Example: Jill loves every dog. Syntactic structure: ((Jill) (loves (every dog))) The unambiguous logical form: (EVERY d : (DOG1 d) (LOVES1 l1 (NAME j1 \"Jill\") d)) There seems to be no simple one-to-one correspondence between the parts of the logical form and the constituents of the syntactic analysis. For instance, the VP is loves every dog, so every dog is a part of the VP; but in the logical form (EVERY d : (DOG1 d) (LOVES1 ...)) the VP's interpretation is a part of the quantified expression (EVERY d : (DOG1 d) ...)" }, { "page_index": 354, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_002.png", "page_index": 354, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:35:18+07:00" }, "raw_text": "Linking Syntax and Semantics 8.1 Semantic Interpretation and Compositionality The unscoped version of the sentence would be:
(i) (EVERY d : (DOG1 d) (LOVES1 l1 (NAME j1 \"Jill\") d)) (ii) (LOVES1 l1 (NAME j1 \"Jill\") <EVERY d DOG1>) which is much closer in structure to the syntactic form. Note the idiom problem. Example: Jack kicked the bucket means that Jack died. Example: (i) Jack laughed. One suggestion is that the VP laughed is a unary predicate" }, { "page_index": 355, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_003.png", "page_index": 355, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:35:21+07:00" }, "raw_text": "Linking Syntax and Semantics 8.1 Semantic Interpretation and Compositionality Jack kissed Sue -> (KISS1 k1 (NAME j1 \"Jack\") (NAME s1 \"Sue\")) What is the meaning of the VP kissed Sue? Again, it could be a unary predicate. But so far we have no way to express such a complex unary predicate. The lambda calculus provides a formalism for this.
In particular, from the expression (KISS1 k1 (NAME j1 \"Jack\") (NAME s1 \"Sue\")) we can form (λ x (KISS1 k1 x (NAME s1 \"Sue\"))). This form is a unary predicate" }, { "page_index": 356, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_004.png", "page_index": 356, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:35:25+07:00" }, "raw_text": "Linking Syntax and Semantics 8.1 Semantic Interpretation and Compositionality The proposition ((λ x (KISS1 k1 x (NAME s1 \"Sue\"))) (NAME j1 \"Jack\")) is true if and only if (NAME j1 \"Jack\") satisfies the predicate (λ x (KISS1 k1 x (NAME s1 \"Sue\"))), which by definition is true if and only if (KISS1 k1 (NAME j1 \"Jack\") (NAME s1 \"Sue\")) is true. The expression (KISS1 k1 (NAME j1 \"Jack\") (NAME s1 \"Sue\")) was obtained by applying the lambda expression (λ x (KISS1 k1 x (NAME s1 \"Sue\"))) to the argument (NAME j1 \"Jack\"). This operation is called lambda reduction" }, { "page_index": 357, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_005.png", "page_index": 357, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:35:29+07:00" }, "raw_text": "Linking Syntax and Semantics 8.1 Semantic Interpretation and Compositionality Example: Sue laughs and opens the door -> (λ a (LAUGHS1 l2 a)) /* VP: laughs */ and (λ a (OPENS1 o1 a <THE d1 DOOR1>)) /* opens the door */ These lambda expressions can be combined to form a complex unary predicate (λ a (& (LAUGHS1 l2 a) (OPENS1 o1 a <THE d1 DOOR1>))) For instance,
it can be applied to a subject NP with logical form (NAME s1 \"Sue\") to form the meaning of the original sentence: (& (LAUGHS1 l2 (NAME s1 \"Sue\")) (OPENS1 o1 (NAME s1 \"Sue\") <THE d1 DOOR1>))" }, { "page_index": 358, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_006.png", "page_index": 358, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:35:33+07:00" }, "raw_text": "Linking Syntax and Semantics 8.1 Semantic Interpretation and Compositionality A prepositional phrase modifying a noun phrase. Example: the man in the store. The noun phrase the man in the store contains the prepositional phrase in the store. The prepositional phrase may or may not have an independent meaning in the noun phrase. If it has an independent meaning, it is the unary predicate (λ o (IN-LOC1 o <THE s1 STORE1>)) Then, the logical form of the noun phrase the man in the store would be: <THE m1 (& (MAN1 m1) (IN-LOC1 m1 <THE s1 STORE1>))>" }, { "page_index": 359, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_007.png", "page_index": 359, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:35:36+07:00" }, "raw_text": "Linking Syntax and Semantics 8.1 Semantic Interpretation and Compositionality While the logical form of the sentence The man is in the store would be: (IN-LOC1 <THE m1 MAN1> <THE s1 STORE1>) It is difficult to build a unary predicate for a noun phrase when the noun phrase contains a prepositional phrase" },
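The lambda-calculus treatment on these slides can be mimicked with Python's first-class functions. This is a hedged illustration, not the book's implementation: tuples stand in for logical-form terms like (NAME j1 "Jack"), and all names are assumptions.

```python
# Sketch of lambda reduction over logical forms: a VP meaning is a Python
# function (a unary predicate) and lambda reduction is function application.

jack = ("NAME", "j1", "Jack")
sue = ("NAME", "s1", "Sue")

# The unary predicate (lambda x (KISS1 k1 x (NAME s1 "Sue")))
kissed_sue = lambda x: ("KISS1", "k1", x, sue)

# Lambda reduction: applying the predicate to the subject term yields the
# full proposition (KISS1 k1 (NAME j1 "Jack") (NAME s1 "Sue")).
proposition = kissed_sue(jack)
print(proposition)
# ('KISS1', 'k1', ('NAME', 'j1', 'Jack'), ('NAME', 's1', 'Sue'))

# Conjoined VPs like "laughs and opens the door" compose pointwise into a
# complex unary predicate, as in the Sue example above.
laughs = lambda a: ("LAUGHS1", "l2", a)
opens_door = lambda a: ("OPENS1", "o1", a, ("THE", "d1", "DOOR1"))
laughs_and_opens = lambda a: ("&", laughs(a), opens_door(a))
print(laughs_and_opens(sue))
```

Applying `laughs_and_opens` to the subject reproduces the conjunction (& (LAUGHS1 l2 ...) (OPENS1 o1 ... <THE d1 DOOR1>)) built on the slide.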
{ "page_index": 360, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_008.png", "page_index": 360, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:35:40+07:00" }, "raw_text": "Linking Syntax and Semantics 8.2 A simple Grammar and Lexicon with Semantic Interpretation The main extension needed is to add a SEM feature to each lexical entry and grammatical rule. Example: (S SEM (?semvp ?semnp)) -> (NP SEM ?semnp) (VP SEM ?semvp) Suppose the rule applies to an NP subconstituent with SEM (NAME m1 \"Mary\") and a VP subconstituent with SEM (λ a (SEES1 e8 a (NAME j1 \"Jack\"))). Thus the SEM of the new S constituent is the expression ((λ a (SEES1 e8 a (NAME j1 \"Jack\"))) (NAME m1 \"Mary\"))" }, { "page_index": 361, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_009.png", "page_index": 361, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:35:45+07:00" }, "raw_text": "S SEM (SEES1 e8 (NAME m1 \"Mary\") (NAME j1 \"Jack\")); NP SEM (NAME m1 \"Mary\"); VP SEM (λ a (SEES1 e8 a (NAME j1 \"Jack\"))); V SEM SEES1; NP SEM (NAME j1 \"Jack\") Figure 8.1: A parse tree showing the SEM features" }, { "page_index": 362, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_010.png", "page_index": 362, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0",
"timestamp": "2025-11-01T10:35:49+07:00" }, "raw_text": "Linking Syntax and Semantics 8.2 A simple Grammar and Lexicon with Semantic Interpretation After lambda reduction this expression would be the formula: (SEES1 e8 (NAME m1 \"Mary\") (NAME j1 \"Jack\")) In the lexicon, a word will have a different word sense for every possible subcategorization it has, since the predicates will have different arities. Example: the verb decide has two entries: (i) SUBCAT _none (ii) SUBCAT _pp:on; in the second case the verb takes an additional object. The word fish has two entries in the lexicon, because its SEM depends on whether it is singular or plural" }, { "page_index": 363, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_011.png", "page_index": 363, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:35:58+07:00" }, "raw_text": "a (art AGR 3s SEM INDEF1); can (aux SUBCAT base SEM CAN1); car (n SEM CAR1 AGR 3s); cry (v SEM CRY1 VFORM base SUBCAT _none); decide (v SEM DECIDES1 VFORM base SUBCAT _none); decide (v SEM DECIDES-ON1 VFORM base SUBCAT _pp:on); dog (n SEM DOG1 AGR 3s); fish (n SEM FISH1 AGR 3s); fish (n SEM (PLUR FISH1) AGR 3p); house (n SEM HOUSE1 AGR 3s); has (aux VFORM pres AGR 3s SUBCAT pastprt SEM PERF); he (pro SEM HE1 AGR 3s); in (p PFORM {LOC, MOT} SEM IN-LOC1); Jill (name AGR 3s SEM \"Jill\"); man (n SEM MAN1 AGR 3s); men (n SEM (PLUR MAN1) AGR 3p); on (p PFORM LOC SEM ON-LOC1); saw (v SEM SEES1 VFORM past SUBCAT _np AGR ?a); see (v SEM SEES1 VFORM base SUBCAT _np IRREG-PAST + EN-PASTPRT +); she (pro AGR 3s SEM SHE1); the (art SEM THE AGR {3s, 3p}); to (to AGR - VFORM inf) Figure 8.2" }, { "page_index": 364, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_012.png",
"metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_012.png", "page_index": 364, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:36:04+07:00" }, "raw_text": "1. (S SEM (?semvp ?semnp)) -> (NP SEM ?semnp) (VP SEM ?semvp) 2. (VP VAR ?v SEM (λ a2 (?semv ?v a2))) -> (V[_none] SEM ?semv) 3. (VP VAR ?v SEM (λ a3 (?semv ?v a3 ?semnp))) -> (V[_np] SEM ?semv) (NP SEM ?semnp) 4. (NP WH - VAR ?v SEM (PRO ?v ?sempro)) -> (PRO SEM ?sempro) 5. (NP VAR ?v SEM (NAME ?v ?semname)) -> (NAME SEM ?semname) 6. (NP VAR ?v SEM <?semart ?v ?semcnp>) -> (ART SEM ?semart) (CNP SEM ?semcnp) 7. (CNP SEM ?semn) -> (N SEM ?semn) Head features for S, VP, NP, CNP: VAR Grammar 8.3: A simple grammar with SEM features" }, { "page_index": 365, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_013.png", "page_index": 365, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:36:08+07:00" }, "raw_text": "Linking Syntax and Semantics 8.3 Prepositional phrases and Verb phrase The rule for handling auxiliary verbs: (VP SEM (λ a1 (?semaux (?semvp a1)))) -> (AUX SUBCAT ?v SEM ?semaux) (VP VFORM ?v SEM ?semvp) This rule inserts a modal operator in the appropriate place for the new VP.
For instance, if ?semaux is a modal operator such as CAN1 and ?semvp is a lambda expression such as (λ x (LAUGHS1 l3 x)), then according to the auxiliary rule, the SEM of the VP can laugh will be: (λ a1 (CAN1 ((λ x (LAUGHS1 l3 x)) a1))) This can be simplified to: (λ a1 (CAN1 (LAUGHS1 l3 a1)))" }, { "page_index": 366, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_014.png", "page_index": 366, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:36:11+07:00" }, "raw_text": "Linking Syntax and Semantics 8.3 Prepositional phrases and Verb phrase Prepositional phrases (PP) A prepositional phrase plays two different semantic roles in a sentence. (i) The PP is a modifier to a noun phrase or verb phrase. (ii) The PP is subcategorized for by a head word, and the preposition acts more as a flag for an argument position than as an independent predicate. For case (i), the PP has as SEM a unary predicate: (PP SEM (λ y (?semp y ?semnp))) -> (P SEM ?semp) (NP SEM ?semnp)" }, { "page_index": 367, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_015.png", "page_index": 367, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:36:14+07:00" }, "raw_text": "Linking Syntax and Semantics 8.3 Prepositional phrases and Verb phrase Example: the PP in the corner. If the SEM of P is IN-LOC1 and the SEM of the NP is <THE c1 (CORNER1 c1)>, then the SEM of the PP is the unary predicate (λ y (IN-LOC1 y <THE c1 CORNER1>)) A PP modifying a noun phrase. Example: the man in the corner: (CNP SEM (λ n1 (& (?semcnp n1) (?sempp n1)))) -> (CNP SEM ?semcnp) (PP SEM ?sempp)" }, { "page_index": 368, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_016.png", "page_index": 368, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:36:18+07:00" }, "raw_text": "Linking Syntax and Semantics 8.3 Prepositional phrases and Verb phrase Given that the SEM of the CNP man is the unary predicate MAN1 and the SEM of the PP in the corner is (λ y (IN-LOC1 y <THE c1 CORNER1>)), the new SEM of the CNP is: (λ n1 (& (MAN1 n1) ((λ y (IN-LOC1 y <THE c1 CORNER1>)) n1))) This can be simplified to: (λ n1 (& (MAN1 n1) (IN-LOC1 n1 <THE c1 CORNER1>))) Combining this unary predicate with a quantifier such as the using rule 6 forms the SEM of the NP the man in the corner, such as" }, { "page_index": 369, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_017.png", "page_index": 369, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:36:22+07:00" }, "raw_text": "Linking Syntax and Semantics 8.3 Prepositional phrases and Verb phrase <THE m2 ((λ n1 (& (MAN1 n1) (IN-LOC1 n1 <THE c1 CORNER1>))) m2)>,
which can be simplified to: <THE m2 (& (MAN1 m2) (IN-LOC1 m2 <THE c1 CORNER1>))> A PP modifying a verb phrase. Example: the VP cry in the corner. The rule that introduces the PP modifier would be: VP -> VP PP The V cry has the logical form (λ x (CRIES1 e1 x)) The VP cries in the corner has the logical form (λ x (& (CRIES1 e1 x) (IN-LOC1 e1 <THE c1 CORNER1>)))" }, { "page_index": 370, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_018.png", "page_index": 370, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:36:25+07:00" }, "raw_text": "Linking Syntax and Semantics 8.3 Prepositional phrases and Verb phrase Since the PP modifies the VP, we add the rule: (VP VAR ?v SEM (λ x (& (?semvp x) (?sempp ?v)))) -> (VP VAR ?v SEM ?semvp) (PP SEM ?sempp) PPs also appear as subcategorized constituents in verb phrases; in this case they must be treated differently. Example: the PP on a couch. This PP indicates a location of some object or event. With the verb decide, it can indicate the object that is being decided about. Example: the sentence Jill decided on a couch."
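The PP-modifier rule for VPs (conjoin the VP predicate applied to the subject with the PP predicate applied to the event variable, the VAR feature) can be sketched in code. The `modify_vp` helper and the tuple encoding of logical forms are illustrative assumptions, not the book's notation.

```python
# Sketch of the VP + PP modifier rule:
# (VP VAR ?v SEM (lambda x (& (?semvp x) (?sempp ?v)))) -> (VP ...) (PP ...)

corner = ("THE", "c1", "CORNER1")

cries = lambda x: ("CRIES1", "e1", x)      # SEM of the V "cry"
in_pp = lambda y: ("IN-LOC1", y, corner)   # SEM of the PP "in the corner"

def modify_vp(semvp, sempp, event_var):
    """Build the modified VP predicate: the PP applies to the event variable,
    not to the subject argument."""
    return lambda x: ("&", semvp(x), sempp(event_var))

cries_in_corner = modify_vp(cries, in_pp, "e1")
print(cries_in_corner("jack1"))
# ('&', ('CRIES1', 'e1', 'jack1'), ('IN-LOC1', 'e1', ('THE', 'c1', 'CORNER1')))
```

The key design point the rule encodes is that the event variable ?v is shared: the PP predicate attaches to the same event the verb introduces.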
}, { "page_index": 371, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_019.png", "page_index": 371, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:36:31+07:00" }, "raw_text": "V SEM (λ x (CRIES1 e1 x)) VAR e1; PP SEM (λ y (IN-LOC1 y <THE c1 CORNER1>)); VP SEM (λ x (& (CRIES1 e1 x) (IN-LOC1 e1 <THE c1 CORNER1>))) VAR e1 Figure 8.4: Using the VAR feature for PP modifiers of VPs" }, { "page_index": 372, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_020.png", "page_index": 372, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:36:34+07:00" }, "raw_text": "Linking Syntax and Semantics 8.3 Prepositional phrases and Verb phrase Jill decided on a couch is ambiguous between two readings: (i) Jill made a decision while she was on a couch; (ii) Jill made a decision about a couch. For (i), on a couch acts as an adverbial PP.
For (ii), the appropriate rule is: VP -> V[_pp:on] PP[on] and the logical form of the final VP is: (λ s (DECIDES-ON1 d1 s <A c1 COUCH1>))" }, { "page_index": 373, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_021.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_021.png", "page_index": 373, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:36:39+07:00" }, "raw_text": "Linking Syntax and Semantics 8.3 Prepositional phrases and Verb phrase In Grammar 8.5, rule 8 gives the PP the value +PRED, and the PP acts as a modifier. Rule 9 gives -PRED; in that case the SEM of the PP is simply the SEM of its object NP. Figure 8.6 shows the two readings of the VP decide on a couch. The case where the decision is about a couch (with the -PRED value) is shown in the upper half of the figure, and its SEM is <A c1 COUCH1>. The case in which a decision is made on a couch is shown in the lower half of the figure. The PP has +PRED, and its SEM is (λ x (ON-LOC1 x <A c1 COUCH1>))" }, { "page_index": 374, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_022.png", "page_index": 374, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:36:46+07:00" }, "raw_text": "8. (PP PRED + SEM (λ x (?semp x ?semnp))) -> (P SEM ?semp) (NP SEM ?semnp) 9. (PP PRED - PFORM ?pf SEM ?semnp) -> (P ROOT ?pf) (NP SEM ?semnp) 10. (VP VAR ?v SEM (λ ag1 (& (?semvp ag1) (?sempp ?v)))) -> (VP SEM ?semvp) (PP PRED + SEM ?sempp) 11. (VP VAR ?v SEM (λ ag2 (?semv ?v ag2 ?sempp))) -> (V[_pp:on] SEM ?semv) (PP PRED - PFORM on SEM ?sempp) 12.
(VP SEM (λ a1 (?semaux (?semvp a1)))) -> (AUX SUBCAT ?v SEM ?semaux) (VP VFORM ?v SEM ?semvp) 13. (CNP SEM (λ n1 (& (?semcnp n1) (?sempp n1)))) -> (CNP SEM ?semcnp) (PP PRED + SEM ?sempp) Head features for PP: PFORM. Head features for VP, CNP: VAR Grammar 8.5: Rules to handle PPs in verb phrases" }, { "page_index": 375, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_023.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_023.png", "page_index": 375, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:36:53+07:00" }, "raw_text": "VP SEM (λ y (DECIDES-ON1 d1 y <A c1 COUCH1>)) with PP -PRED SEM <A c1 COUCH1>; VP SEM (λ a (& (DECIDES1 d1 a) (ON-LOC1 d1 <A c1 COUCH1>))) with PP +PRED SEM (λ x (ON-LOC1 x <A c1 COUCH1>)); V SEM DECIDES1; P SEM ON-LOC1 Figure 8.6: The two readings of the VP decide on a couch (NP WH Q AGR ?a) (S INV + GAP (NP AGR ?a)) To make the semantic interpretation work, we add the SEM of the gap:" }, { "page_index": 378, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_026.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_026.png", "page_index": 378, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:37:03+07:00" }, "raw_text": "Linking Syntax and Semantics 8.5 Handling Simple Questions (S INV - SEM (WH-QUERY ?sems)) -> (NP WH Q AGR ?a SEM ?semnp) (S INV + SEM ?sems GAP (NP AGR ?a SEM ?semnp)) Grammar 8.7 shows the rules that are added to handle questions. The lexical entries for the wh-words would have to be extended with a SEM, for instance: who: (PRO WH Q SEM WHO1 AGR {3s, 3p}) Figure 8.8 is an example: the syntactic tree of the question Who did Jill see?
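One way to picture how a fronted wh-phrase's SEM reaches the verb's argument position through the gap is the following sketch. The `see_gap` and `wh_question` helpers and the tuple encoding are assumptions for illustration only, not the grammar's actual machinery.

```python
# Sketch of gap filling in "Who did Jill see?": the wh-phrase supplies the
# SEM that the embedded S uses for its missing object NP.

who = ("WH", "p1", "WHO1")
jill = ("NAME", "j1", "Jill")

# A VP with a gapped object: a function still waiting for the gap's SEM.
see_gap = lambda ag, gap_sem: ("SEES1", ag, gap_sem)

def wh_question(wh_sem, subject, gapped_vp):
    """Fill the gap with the wh-phrase SEM, then wrap in WH-QUERY."""
    body = ("<PAST DO>", gapped_vp(subject, wh_sem))
    return ("WH-QUERY", body)

print(wh_question(who, jill, see_gap))
```

In the grammar itself this threading is done declaratively by the GAP feature (rules 14-16 of Grammar 8.7) rather than by an explicit function call.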
}, { "page_index": 379, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_027.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_027.png", "page_index": 379, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:37:09+07:00" }, "raw_text": "14. (S INV - SEM (WH-QUERY ?sems)) -> (NP WH Q AGR ?a SEM ?semnp) (S INV + SEM ?sems GAP (NP AGR ?a SEM ?semnp)) 15. (S INV + GAP ?g SEM (?semaux (?semvp ?semnp))) -> (AUX AGR ?a SUBCAT ?s SEM ?semaux) (NP AGR ?a GAP - SEM ?semnp) (VP VFORM ?s GAP ?g SEM ?semvp) 16. (NP WH Q VAR ?v SEM <WH ?v ?sempro>) -> (PRO WH Q SEM ?sempro) Grammar 8.7: Rules to handle simple wh-questions" }, { "page_index": 380, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_028.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_028.png", "page_index": 380, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:37:13+07:00" }, "raw_text": "S2 SEM (WH-QUERY (SEES1 (NAME j1 \"Jill\") <WH p1 WHO1>)); S1 GAP SEM (SEES1 (NAME j1 \"Jill\") ?semnp); VP1 GAP SEM (λ ag (SEES1 ag ?semnp)); V SEM SEES1; Who: (PRO WH Q SEM WHO1 AGR {3s, 3p}) Applying and combining properties from the node Who, we get the new node NP1 with SEM <WH p1 (WHO1 p1)>.
and rule S1 has the constituents AUX, NP and VP on the right-hand side, such as:" }, { "page_index": 382, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_030.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_030.png", "page_index": 382, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:37:22+07:00" }, "raw_text": "Linking Syntax and Semantics 8.5 Handling Simple Questions (S1 INV + GAP ?g SEM (?semaux (?semvp ?semnp))) -> (i) (AUX AGR ?a SUBCAT ?s SEM ?semaux) (ii) (NP AGR ?a GAP-SEM ?semnp) (iii) (VP VFORM ?s GAP ?g SEM ?semvp) We determine the SEM of these constituents as follows: (i) has the SEM <PAST DO>; (ii) has the SEM (NAME j1 \"Jill\"); (iii) uses the rule (VP VFORM ?s GAP ?g SEM ?semvp) -> (V[_np] SEM ?semv), with the gap filling the object position, where ?semv is SEES1 and the GAP SEM is <WH p1 (WHO1 p1)>, so ?semvp = (λ ag (?semv ag ?semnp))" }, { "page_index": 383, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_031.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_031.png", "page_index": 383, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:37:26+07:00" }, "raw_text": "Linking Syntax and Semantics 8.5 Handling Simple Questions where ?semv is SEES1, the GAP SEM is <WH p1 (WHO1 p1)>, and ?semvp = (λ ag (?semv ag ?semnp)). Reducing to the node S1: S1 GAP <WH p1 (WHO1 p1)> SEM (<PAST DO> (SEES1 (NAME j1 \"Jill\") ?semnp)) And reducing to the node S2: SEM (WH-QUERY (<PAST DO> (SEES1 (NAME j1 \"Jill\") <WH p1 WHO1>)))" }, { "page_index": 384, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_032.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_032.png", "page_index": 384, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:37:31+07:00" }, "raw_text": "Linking Syntax and Semantics 8.6 Semantic Interpretation Using Feature Unification The basic idea is to introduce a new feature for each argument position that earlier would have been filled using lambda reduction. Example: rule 1 in Grammar 8.3 has the form: (S SEM (?semvp ?semnp)) -> (NP SEM ?semnp) (VP SEM ?semvp) A new feature SUBJ is added to rule 1, and the rule becomes: (S SEM ?semvp) -> (NP SEM ?semnp) (VP SUBJ ?semnp SEM ?semvp) The SEM of the subject is passed into the VP constituent as the SUBJ feature, and the SEM equations for the VP insert the subject in the correct position." }, { "page_index": 385, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_033.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_033.png", "page_index": 385, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:37:37+07:00" }, "raw_text": "Figure: the SEM of Jill saw the dog built with the SUBJ feature. S SEM (<PAST SEES1> s1 (NAME j1 \"Jill\") <THE d1 (DOG1 d1)>) VP1 SUBJ (NAME j1 \"Jill\") SEM (<PAST SEES1> s1 (NAME j1 \"Jill\") <THE d1 (DOG1 d1)>) VAR s1 NP1 SEM (NAME j1 \"Jill\") VAR j1 NP3 SEM <THE d1 (DOG1 d1)> VAR d1 V SEM <PAST SEES1> The figure shows how this rule builds the SEM of the sentence Jill saw the dog. Compare this to the analysis built using Grammar 8.3, shown in Figure 8.6; the differences appear in the treatment of the VP. Grammar 8.11 is a version of Grammar 8.3 reformulated using this technique" }, { "page_index": 388, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_036.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_036.png", "page_index": 388, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:37:51+07:00" }, "raw_text": "Linking Syntax and Semantics An advantage of the feature SUBJ technique: no special mechanism need be introduced to handle semantic interpretation; in particular, there is no need for a lambda reduction step. A grammar specified in this form is reversible and hence can be used to generate sentences as well as parse them. But not all lambda expressions can be eliminated using these techniques. Example: Sue and Sam saw Jack. The meaning of the VP must still be a lambda expression" }, { "page_index": 389, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_037.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_037.png", "page_index": 389, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:37:58+07:00" }, "raw_text": "1. (S SEM ?semvp) -> (NP SEM ?semsubj) (VP SUBJ ?semsubj SEM ?semvp) 2. (VP VAR ?v SUBJ ?semsubj SEM (?semv ?v ?semsubj)) -> (V[_none] SEM ?semv) 3. (VP VAR ?v SUBJ ?semsubj SEM (?semv ?v ?semsubj ?semnp)) -> (V[_np] SEM ?semv) (NP SEM ?semnp) 4. (NP VAR ?v SEM (PRO ?v ?sempro)) -> (PRO SEM ?sempro) 5. (NP VAR ?v SEM (NAME ?v ?semname)) -> (NAME SEM ?semname) 6. (NP VAR ?v SEM <?semart ?v ?semcnp>) -> (ART SEM ?semart) (CNP SEM ?semcnp) 7. (CNP VAR ?v SEM (?semn ?v)) -> (N SEM ?semn) Head features for S, VP, NP, CNP: VAR Grammar 8.11: A simple grammar with SEM features" }, { "page_index": 390, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_038.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_038.png", "page_index": 390, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:38:01+07:00" }, "raw_text": "Linking Syntax and Semantics 8.7 Generating Sentence from Logical Form Example: generate a sentence from a logical form using Grammar 8.3: (<PAST SEES1> s1 (NAME j1 \"Jill\") <THE d1 (DOG1 d1)>) Grammar 8.3 has only one rule for S. If we try to unify the SEM value in rule 1 with this logical form, it will fail. Rule 1 in Grammar 8.3: (S SEM (?semvp ?semnp)) -> (NP SEM ?semnp) (VP SEM ?semvp)" }, { "page_index": 391, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_039.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_039.png", "page_index": 391, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:38:05+07:00" }, "raw_text": "Linking Syntax and Semantics 8.7 Generating Sentence from Logical Form Lambda reduction was used to convert the logical form, which was: ((λ a (<PAST SEES1> s1 a <THE d1 (DOG1 d1)>)) (NAME j1 \"Jill\")) There is an inverse operation to lambda reduction, called lambda abstraction, that could be used to find a match.
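The lambda reduction step discussed here can be sketched in code. A minimal illustration, assuming SEM terms are encoded as nested tuples with ('lambda', var, body) for lambda terms; the encoding and helper names are illustrative, not part of the grammar formalism:

```python
def substitute(expr, var, value):
    # Replace free occurrences of var in expr by value.
    if expr == var:
        return value
    if isinstance(expr, tuple):
        if expr[0] == 'lambda' and expr[1] == var:
            return expr  # var is rebound here, leave the body alone
        return tuple(substitute(e, var, value) for e in expr)
    return expr

def reduce_apply(fn, arg):
    # One beta-reduction step: apply a lambda term to an argument.
    tag, var, body = fn
    assert tag == 'lambda'
    return substitute(body, var, arg)

# (lambda a (<PAST SEES1> s1 a <THE d1 (DOG1 d1)>)) applied to (NAME j1 Jill)
vp = ('lambda', 'a', ('PAST-SEES1', 's1', 'a', ('THE', 'd1', ('DOG1', 'd1'))))
subj = ('NAME', 'j1', 'Jill')
print(reduce_apply(vp, subj))
# ('PAST-SEES1', 's1', ('NAME', 'j1', 'Jill'), ('THE', 'd1', ('DOG1', 'd1')))
```

Lambda abstraction runs this operation in reverse: it picks a subterm of the result and reintroduces a bound variable in its place, which is why several distinct abstractions of one logical form exist.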
There are three possible lambda abstractions of the logical form, namely: (λ e (e (NAME j1 \"Jill\") <THE d1 (DOG1 d1)>)) (λ a (<PAST SEES1> s1 a <THE d1 (DOG1 d1)>)) (λ o (<PAST SEES1> s1 (NAME j1 \"Jill\") o))" }, { "page_index": 392, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_040.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_040.png", "page_index": 392, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:38:09+07:00" }, "raw_text": "Linking Syntax and Semantics 8.7 Generating Sentence from Logical Form Only the second abstraction will yield an appropriate sentence. In many ways, parsing and realization are very similar processes. A realizer starts with the logical form, tries to find a tree that accounts for it, and hence determines the words that realize it. Example: realizing an S with SEM (<PAST SEES1> s1 (NAME j1 \"Jill\") <THE d1 (DOG1 d1)>)" }, { "page_index": 393, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_041.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_041.png", "page_index": 393, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:38:13+07:00" }, "raw_text": "Linking Syntax and Semantics 8.7 Generating Sentence from Logical Form Rule 1 in Grammar 8.11: (S SEM ?semvp) -> (NP SEM ?semsubj) (VP SUBJ ?semsubj SEM ?semvp) While the SEM of the VP is (<PAST SEES1> s1 (NAME j1 \"Jill\") <THE d1 (DOG1 d1)>), the SEM of the NP is unconstrained.
So the technique may fall into an infinite loop. One method for avoiding this problem is to expand the constituents in a different order" }, { "page_index": 394, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_042.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_042.png", "page_index": 394, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:38:16+07:00" }, "raw_text": "Linking Syntax and Semantics 8.7 Generating Sentence from Logical Form The algorithm in Figure 8.12 gives a process to perform realization. The realization algorithm operates on a list of constituents, much like the basic top-down parser described in Chapter 3. It continues to rewrite constituents in this list until the list consists only of lexical constituents, at which point the words can be generated. Consider this algorithm operating with Grammar 8.11 and the initial input: (S SEM (<PAST SEES1> s1 (NAME j1 \"Jill\") <THE d1 (DOG1 d1)>))" }, { "page_index": 395, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_043.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_043.png", "page_index": 395, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:38:22+07:00" }, "raw_text": "Initialization: Set L to a list containing the constituent that you wish to generate. Do until L contains no nonlexical constituents: 1. If L contains a constituent C that is marked as a nonlexical head, then use a rule in the grammar to rewrite C. Any variables in C that are bound in the rewrite should be instantiated throughout the entire list. 2. Else choose a nonlexical constituent C, giving preference to one whose SEM feature is bound if one exists. Use a rule in the grammar to rewrite C. Any variables in C that are bound in the rewrite should be instantiated throughout the entire list. Figure 8.12: A head-driven realization algorithm" }, { "page_index": 396, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_044.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_044.png", "page_index": 396, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:38:26+07:00" }, "raw_text": "Linking Syntax and Semantics 8.7 Generating Sentence from Logical Form The S constituent is rewritten based on rule 1 to produce the following constituent list: (NP SEM ?semsubj) (VP SUBJ ?semsubj SEM (<PAST SEES1> s1 (NAME j1 \"Jill\") <THE d1 (DOG1 d1)>)) Expanding the VP based on rule 3, the SEM of the VP is (?semv ?v ?semsubj ?semnp). As a result of the match, the following variables are bound: ?semv < <PAST SEES1> ?v < s1 ?semsubj < (NAME j1 \"Jill\") ?semnp < <THE d1 (DOG1 d1)>" }, { "page_index": 397, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_045.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_045.png", "page_index": 397, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:38:30+07:00" }, "raw_text": "Linking Syntax and Semantics 8.7 Generating Sentence from Logical Form Thus we obtain the following list after rewriting the VP and instantiating the variables throughout the list: (NP SEM (NAME j1 \"Jill\")) (V[_np] SEM <PAST SEES1>) (NP SEM <THE d1 (DOG1 d1)>) Since there is no nonlexical head, the algorithm now picks any nonlexical constituent with a bound SEM, say the first NP. Only rule 5 will match, giving: (NAME SEM \"Jill\") (V[_np] SEM <PAST SEES1>) (NP SEM <THE d1 (DOG1 d1)>)" }, { "page_index": 398, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_046.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_046.png", "page_index": 398, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:38:34+07:00" }, "raw_text": "Linking Syntax and Semantics 8.7 Generating Sentence from Logical Form The remaining NP is selected next, and rule 6 is matched: (NAME SEM \"Jill\") (V[_np] SEM <PAST SEES1>) (ART SEM THE) (CNP SEM DOG1) The CNP is selected next and rewritten as a common noun with SEM DOG1, and the algorithm is completed.
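The realization loop just traced can be sketched in code. A minimal sketch of the Figure 8.12 control strategy, assuming constituents are plain dicts and rewrite_with_grammar stands in for real rule unification; both encodings are illustrative, not from the text:

```python
def realize(constituents, rewrite_with_grammar):
    # Rewrite nonlexical constituents until only lexical ones remain,
    # preferring heads, then constituents with a bound SEM (Figure 8.12).
    while any(not c['lexical'] for c in constituents):
        nonlex = [c for c in constituents if not c['lexical']]
        heads = [c for c in nonlex if c.get('head')]
        bound = [c for c in nonlex if c['sem'] is not None]
        target = (heads or bound or nonlex)[0]
        i = constituents.index(target)
        constituents[i:i + 1] = rewrite_with_grammar(target)
    return [c['cat'] for c in constituents]

def toy_rewrite(c):
    # Illustrative stand-in for rule matching: an S with a bound SEM
    # rewrites directly to lexical NAME, V and NP children.
    assert c['cat'] == 'S'
    return [
        {'cat': 'NAME', 'sem': 'Jill', 'lexical': True},
        {'cat': 'V', 'sem': 'PAST-SEES1', 'lexical': True},
        {'cat': 'NP', 'sem': ('THE', 'd1', 'DOG1'), 'lexical': True},
    ]

print(realize([{'cat': 'S', 'sem': 'LF', 'lexical': False}], toy_rewrite))
# ['NAME', 'V', 'NP']
```

In a full realizer, toy_rewrite would unify the target against the grammar rules and instantiate any newly bound variables throughout the whole list, as the figure requires.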
The constituent list is now a sequence of the lexical categories:" }, { "page_index": 399, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_047.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_047.png", "page_index": 399, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:38:38+07:00" }, "raw_text": "Linking Syntax and Semantics 8.7 Generating Sentence from Logical Form The lexical categories are: (NAME SEM \"Jill\") (V[_np] SEM <PAST SEES1>) (ART SEM THE) (N SEM DOG1) It is then simple to produce the sentence Jill saw the dog from these lexical constituents (choosing words that realize the SEMs)" }, { "page_index": 400, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_048.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_048.png", "page_index": 400, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:38:41+07:00" }, "raw_text": "EXERCISE OF CHAPTER 8 1) Simplify the following formulas using lambda reduction: ((λ x (P x)) A) ((λ x (x A)) (λ y (Q y))) ((λ x ((λ y (P y)) x)) A) 2) Using the interpretation rules defined in this chapter, and defining any rules that you need, give a detailed trace of the interpretation of the sentence The man gave the apple to Bill. Give the analysis of each constituent and show its SEM feature."
}, { "page_index": 401, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_049.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_8/slide_049.png", "page_index": 401, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:38:44+07:00" }, "raw_text": "EXERCISE OF CHAPTER 8 3) Draw the parse trees showing the semantic interpretation of the constituents for the following questions. Give the lexical entries showing the SEM feature for each word used that is not defined in this chapter, and define any additional rules needed that are not specified in this chapter. Who saw the dog? Who did John give the book to?" }, { "page_index": 402, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_001.png", "page_index": 402, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:38:48+07:00" }, "raw_text": "Chapter 9: Ambiguity Resolution 9.1 Selectional Restrictions Word senses can be related in different ways based on the object classes they describe. Some senses are disjoint; that is, no object can be in both classes at the same time: DOG1 (a sense of dog) and CAT1 (a sense of cat). Other senses are subclasses of other senses: the class DOG1 will be a subclass of the class MAMMAL1, and a subclass of the class PET1 (house pets). Other senses will overlap, such as MAMMAL1 and PET1.
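These class relations can be sketched as a small type hierarchy in code. A minimal illustration, assuming each sense lists its direct supertypes; the table entries and helper names are illustrative:

```python
SUPER = {  # direct supertypes of each sense (illustrative fragment)
    'DOG1': ['MAMMAL1', 'PET1'],
    'CAT1': ['MAMMAL1', 'PET1'],
    'MAMMAL1': ['ANIMAL'],
    'PET1': ['ANIMAL'],
    'ANIMAL': [],
}

def ancestors(sense):
    # All supertypes of a sense, including the sense itself.
    seen, stack = {sense}, [sense]
    while stack:
        for parent in SUPER[stack.pop()]:
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

def is_a(sense, cls):
    # Subclass test over the hierarchy. Disjointness (DOG1 vs CAT1)
    # needs explicit knowledge; it is not derivable from supertypes alone.
    return cls in ancestors(sense)

print(is_a('DOG1', 'PET1'), is_a('DOG1', 'CAT1'))  # True False
```

Because a sense may have several supertypes, the structure is a lattice rather than a tree, which is exactly what the restriction checks below rely on.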
All this knowledge can play a role in semantic disambiguation" }, { "page_index": 403, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_002.png", "page_index": 403, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:38:52+07:00" }, "raw_text": "Ambiguity Resolution 9.1 Selectional Restrictions The subset relation defines an abstraction hierarchy on the word senses. This relation is very important, as it allows restrictions to be stated in terms of very broad classes. For instance: the adjective purple makes sense if it is modifying a physical object; it does not make sense in purple ideas or purple event. The adjective precise makes sense modifying an idea or action. The adjective unfortunate makes sense modifying an event or situation" }, { "page_index": 404, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_003.png", "page_index": 404, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:38:58+07:00" }, "raw_text": "Figure 9.1: A word sense hierarchy (a fragment including PERSON, ANIMAL, VEGETATIVE, ANIMATE, LIVING, NON-LIVING, PHYSOBJ, ABSTRACTOBJ, INDIVIDUAL, EVENT, SITUATION, LOCATION, TIME, COLLECTION)" }, { "page_index": 405, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_004.png", "page_index": 405, "language": "en",
"ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:39:02+07:00" }, "raw_text": "Ambiguity Resolution 9.1 Selectional Restrictions Figure 9.1 shows a fragment of the top of the type hierarchy. The hierarchy is not a strict tree; that is, senses may have multiple supertypes. Example: MALE and FEMALE apply at the level of ANIMATE/VEGETATIVE, and ANIMATE and VEGETATIVE combine with these subclasses under the class LIVING. Consider the verb read. It has two principal arguments: the agent and the theme. The agent must be an object capable of reading (something of type PERSON)" }, { "page_index": 406, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_005.png", "page_index": 406, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:39:06+07:00" }, "raw_text": "Ambiguity Resolution 9.1 Selectional Restriction The theme must be an object that contains text (book, newspaper, ...)
To handle the verb read correctly, we introduce a new type TEXTOBJ under NON-LIVING; TEXTOBJ is a superset of BOOK1 and ARTICLE/TEXT. Example: the noun dishwasher has two senses, either a machine (DISHWASH/MACH1) or a person (DISHWASH/PERS). The noun article can be a paper (ARTICLE/TEXT) or a part of speech (ARTICLE1)" }, { "page_index": 407, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_006.png", "page_index": 407, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:39:09+07:00" }, "raw_text": "Ambiguity Resolution 9.1 Selectional Restriction These senses are shown in figure 9.2. Since these two words are ambiguous, the sentence The dishwasher read the article may have four distinct semantic meanings, but only one reading makes sense, namely: (READS1 r1 [AGENT <THE d1 DISHWASH/PERS>] [THEME <THE p1 ARTICLE/TEXT>]) Unpacking the notation of the initial, still ambiguous form, the unary and binary relations are: (READS1 r1) ({DISHWASH/MACH1 DISHWASH/PERS} d1) ({ARTICLE/TEXT ARTICLE1} p1) (AGENT r1 d1) (THEME r1 p1)" }, { "page_index": 410, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_009.png", "page_index": 410, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:39:23+07:00" }, "raw_text": "Ambiguity Resolution 9.1 Selectional Restriction The allowable combinations can be viewed as a constraint satisfaction problem.
The selectional restrictions of READS1 are expressed as follows: (AGENT READS1 PERSON) (THEME READS1 TEXTOBJ) For (AGENT r1 d1) to be valid, d1 must be a PERSON. Thus the unary constraint on d1 can be simplified from ({DISHWASH/MACH1 DISHWASH/PERS} d1) to (DISHWASH/PERS d1). Similarly, the interpretation of p1 is simplified to (ARTICLE/TEXT p1)" }, { "page_index": 411, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_010.png", "page_index": 411, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:39:28+07:00" }, "raw_text": "Ambiguity Resolution 9.1 Selectional Restriction By transferring these constraints back into the logical form, we end up with a single unambiguous reading, as desired. Note that the verb read also has a second sense, READS2, a form of understanding a person's intentions, as in Jill can read John's mind. The selectional restrictions for READS2 might be: (AGENT READS2 PERSON) (THEME READS2 MENTAL-STATE) With the additional sense, the initial logical form of The dishwasher read the article is: ({READS1 READS2} r1 [AGENT <THE d1 {DISHWASH/MACH1 DISHWASH/PERS}>] [THEME <THE p1 {ARTICLE/TEXT ARTICLE1}>])" }, { "page_index": 412, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_011.png", "page_index": 412, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:39:31+07:00" }, "raw_text": "Ambiguity Resolution 9.1 Selectional Restriction This additional ambiguity does not affect the final result, because READS2 requires a MENTAL-STATE as a THEME. We also need to extend this technique to pronouns, proper nouns, and adjectives. Example 1: proper names: John might be MALE, that is, an animate object.
An unknown name might just default to the type INDIVIDUAL. Example 2: pronouns: SHE1 should be a subclass of FEMALE; IT1 would be anything but PERSON" }, { "page_index": 413, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_012.png", "page_index": 413, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:39:34+07:00" }, "raw_text": "Ambiguity Resolution 9.1 Selectional Restriction Example 3: adjectives: we use the state-variable representation and a new thematic relation MOD. For happy dishwasher, instead of using the predicate-argument form (HAPPY1 d1), we use the unary relation (HAPPY-STATE h1) and the binary relation (MOD h1 d1). Example 4: the set of relations derived from the sentence The happy dishwasher read the paper would be: (READS1 r1) ({DISHWASH/MACH1 DISHWASH/PERS} d1) ({ARTICLE/TEXT ARTICLE1} p1) (HAPPY-STATE h1) (AGENT r1 d1) (THEME r1 p1) (MOD h1 d1)" }, { "page_index": 414, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_013.png", "page_index": 414, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:39:38+07:00" }, "raw_text": "Ambiguity Resolution 9.1 Selectional Restriction The selectional restriction for happy dishwasher would be: (MOD HAPPY-STATE ANIMATE) HAPPY-STATE must modify an animate object. We explore the constraint satisfaction algorithm in a little more detail in Figure 9.3.
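The pruning loop of this constraint satisfaction algorithm can be sketched in code on the running example. A minimal illustration, assuming selectional restrictions are (relation, sense, required_class) triples and a flattened ISA table stands in for the full hierarchy; all encodings are illustrative:

```python
RESTRICTIONS = [
    ('AGENT', 'READS1', 'PERSON'),
    ('THEME', 'READS1', 'TEXTOBJ'),
    ('AGENT', 'READS2', 'PERSON'),
    ('THEME', 'READS2', 'MENTAL-STATE'),
]

ISA = {  # flattened is-a facts, standing in for the Figure 9.1 hierarchy
    'DISHWASH/PERS': {'PERSON'},
    'DISHWASH/MACH1': {'MACHINE'},
    'ARTICLE/TEXT': {'TEXTOBJ'},
    'ARTICLE1': {'WORD'},
}

def propagate(types, relations):
    # Prune the sense sets until a fixed point is reached (Figure 9.3).
    changed = True
    while changed:
        changed = False
        for rel, v1, v2 in relations:
            keep1, keep2 = set(), set()
            for s1 in types[v1]:
                for s2 in types[v2]:
                    for r, rs, rc in RESTRICTIONS:
                        if r == rel and rs == s1 and rc in ISA.get(s2, set()):
                            keep1.add(s1)
                            keep2.add(s2)
            if keep1 != types[v1] or keep2 != types[v2]:
                types[v1], types[v2] = keep1, keep2
                changed = True
    return types

types = {
    'r1': {'READS1', 'READS2'},
    'd1': {'DISHWASH/PERS', 'DISHWASH/MACH1'},
    'p1': {'ARTICLE/TEXT', 'ARTICLE1'},
}
result = propagate(types, [('AGENT', 'r1', 'd1'), ('THEME', 'r1', 'p1')])
print(result['r1'], result['d1'], result['p1'])
# {'READS1'} {'DISHWASH/PERS'} {'ARTICLE/TEXT'}
```

The fixed-point loop mirrors the termination test of the figure: iteration repeats only while some sense set changed, and an empty set would signal failure.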
As an example, consider running this algorithm on the sentence The dishwasher read the article" }, { "page_index": 415, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_014.png", "page_index": 415, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:39:44+07:00" }, "raw_text": "Initialization Step: Assign types(variable_i) to the list of possible senses for variable_i. Iteration Step: Iterate through each binary relation (rel variable1 variable2): 1. For each sense1 in types(variable1), find all selectional restrictions (rel sense1 sense2) where sense2 intersects with some sense in types(variable2). If none is found, remove sense1 from types(variable1). 2. Eliminate from types(variable2) any sense that did not match at least one restriction in step 1. Termination Step: If any changes were made to the types of the variables in the last iteration, then perform the iteration step once again. Otherwise, if types(variable_i) is empty for any i, then fail. Figure 9.3: A simple constraint satisfaction algorithm" }, { "page_index": 416, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_015.png", "page_index": 416, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:39:48+07:00" }, "raw_text": "Ambiguity Resolution 9.1 Selectional Restriction The initialization step produces the following types: types(r1) = {READS1, READS2} types(p1) = {ARTICLE/TEXT, ARTICLE1} types(d1) = {DISHWASH/PERS, DISHWASH/MACH1} Iteration step (first time): There are two binary relations, (AGENT r1 d1) and (THEME r1 p1). For (AGENT r1 d1), we iterate through the senses of r1, READS1 and READS2:" }, { "page_index": 417, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_016.png", "page_index": 417, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:39:52+07:00" }, "raw_text": "Ambiguity Resolution 9.1 Selectional Restriction + READS1 - we find the selectional restriction (AGENT READS1 PERSON); PERSON matches only DISHWASH/PERS (with result DISHWASH/PERS). + READS2 - we find the selectional restriction (AGENT READS2 PERSON), and PERSON matches DISHWASH/PERS (with result DISHWASH/PERS). Thus types(d1) becomes {DISHWASH/PERS}; that is, DISHWASH/MACH1 has been eliminated because it cannot satisfy any binary constraint."
}, { "page_index": 418, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_017.png", "page_index": 418, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:39:55+07:00" }, "raw_text": "Ambiguity Resolution 9.1 Selectional Restriction For (THEME r1 p1), we iterate through the senses of r1: + READS1 - we find the selectional restriction (THEME READS1 TEXTOBJ); TEXTOBJ matches ARTICLE/TEXT (with result ARTICLE/TEXT). + READS2 - we find no matching selectional restriction; that is, (THEME READS2 MENTAL-STATE) cannot be satisfied. Thus types(r1) becomes {READS1} (READS2 is eliminated), and types(p1) becomes {ARTICLE/TEXT} (ARTICLE1 is eliminated). Since changes were made, we iterate again" }, { "page_index": 419, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_018.png", "page_index": 419, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:39:59+07:00" }, "raw_text": "Ambiguity Resolution 9.1 Selectional Restriction Iteration step (second time): For (AGENT r1 d1), only one sense of r1 remains: + READS1 - we find the selectional restriction (AGENT READS1 PERSON). For (THEME r1 p1): + READS1 - we find the selectional restriction (THEME READS1 TEXTOBJ). No further changes are made, so the final types are:" }, { "page_index": 420, "chapter_num": 9, "source_file": 
"/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_019.png", "page_index": 420, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:40:03+07:00" }, "raw_text": "Ambiguity Resolution 9.1 Selectional Restriction type(r1) = READS1 type(p1) = ARTICLE/TEXT type(d1) = DISHWASH/PERS Selectional restrictions are also very useful for further refining the type of an unknown object. Example: He read it. Assuming just the READS1 sense of the verb, the logical form of He read it would be: (READS1 r3 [AGENT (PRO i1 HE1)] [THEME (PRO n1 (IT1 n1))]) The unary and binary constraints on the objects are: (READS1 r3) (AGENT r3 i1) (THEME r3 n1) (MALE i1) (IT1 n1)" }, { "page_index": 421, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_020.png", "page_index": 421, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:40:08+07:00" }, "raw_text": "Ambiguity Resolution 9.1 Selectional Restriction From (READS1 r3) and (AGENT r3 i1), together with (MALE i1) from the pronoun he in [AGENT (PRO i1 HE1)], the type of he will be constrained to MALE-PERSON (the intersection of MALE and PERSON). From (READS1 r3) and (THEME r3 n1), together with [THEME (PRO n1 (IT1 n1))], the type of it will be constrained to be a TEXTOBJ (the intersection of IT1 and TEXTOBJ). Thus, after applying the selectional restrictions, the logical form of the sentence would be: (READS1 r3 [AGENT (PRO i1 (& (MALE i1) (PERSON i1)))] [THEME (PRO n1 (& (IT1 n1) (TEXTOBJ n1)))])" }, { "page_index": 422, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_021.png", "metadata": { "doc_type": "slide", "course_id": 
"CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_021.png", "page_index": 422, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:40:12+07:00" }, "raw_text": "Ambiguity Resolution 9.2 Semantic Filtering Using Selectional Restrictions Two ways that selectional restrictions can be added to a parser: sequential model and incremental model. An incremental model Consider the sentence He booked a flight to the city for me PPs to be attached to either VPs or NPs. PP - to the city may modify the verb booked or noun flight PP- for me may modify noun city or verb booked. There are five ways these possibilities into a legal syntactic structure, however there is only one plausible reading, that would be: flight to the city and booked for me" }, { "page_index": 423, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_022.png", "page_index": 423, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:40:17+07:00" }, "raw_text": "Ambiguity Resolution 9.2 Semantic Filtering Using Selectional Restrictions The selectional restriction is implemented on verb booked, nouns flight and city. 
The selectional restrictions for the sentence He booked a flight to the city for me would be: (AGENT BOOKS1 PERSON1), (THEME BOOKS1 FLIGHT1), (BENEFICIARY ACTION1 PERSON1), (DESTINATION FLIGHT1 CITY1), (NEARBY PHYSOBJ PHYSOBJ), (NEARBY ACTION PHYSOBJ)" }, { "page_index": 424, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_023.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_023.png", "page_index": 424, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:40:24+07:00" }, "raw_text": "1. (S SEM ?semvp) -> (NP SEM ?semsubj) (VP SUBJ ?semsubj SEM ?semvp) 2. (VP SUBJ ?semsubj SEM ?semv) -> (V[_none] SUBJ ?semsubj SEM ?semv) 3. (VP SUBJ ?semsubj SEM ?semv) -> (V[_np] SUBJ ?semsubj OBJ ?semnp SEM ?semv) (NP SEM ?semnp) 4. (NP VAR ?v SEM (PRO ?v (?sempro ?v))) -> (PRO SEM ?sempro) 5. (NP VAR ?v SEM <?semart ?v (?semcnp ?v)>) -> (ART SEM ?semart) (CNP SEM ?semcnp) 6. (CNP VAR ?v SEM (?semn ?v)) -> (N SEM ?semn) 7. (CNP SEM (& ?semcnp ?sempp)) -> (CNP VAR ?v SEM ?semcnp) (PP PRED + ARGVAR ?v SEM ?sempp) 8. (VP SEM (& ?semvp ?sempp)) -> (VP VAR ?v SEM ?semvp) (PP PRED + ARGVAR ?v SEM ?sempp) 9. 
(PP PRED + ARGVAR ?v1 SEM (?semp ?v1 ?semnp)) -> (P SEM ?semp) (NP SEM ?semnp) Head features for S, VP, NP, CNP: VAR. Grammar 9.4: A small grammar allowing PP attachment ambiguity" }, { "page_index": 425, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_024.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_024.png", "page_index": 425, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:40:32+07:00" }, "raw_text": "Word sense hierarchy: TOP subsumes PHYSOBJ and ACTION; PHYSOBJ subsumes PERSON (HE1, ME1) and INANIMATE (FLIGHT1, COLLEGE1, CITY1); ACTION subsumes BOOKS1. Lexicon: a (ART AGR 3s SEM INDEF1); booked (V SUBCAT _np VFORM past SUBJ ?subj OBJ ?obj SEM (& (BOOKS1 *) (AGENT * ?subj) (THEME * ?semobj))); city (N AGR 3s SEM CITY1); college (N AGR 3s SEM COLLEGE1); flight (N AGR 3s SEM FLIGHT1); for (P PFORM for SEM BENEFICIARY); he (PRO AGR 3s SEM HE1); me (PRO AGR 1s SEM ME1); near (P PFORM near SEM NEARBY); the (ART AGR (3s 3p) SEM THE); to (P PFORM to SEM DESTINATION) Figure 9.5: A small lexicon and word sense hierarchy" }, { "page_index": 426, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_025.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_025.png", "page_index": 426, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:40:36+07:00" }, "raw_text": "Ambiguity Resolution 9.2 Semantic Filtering Using Selectional Restrictions Given Grammar 9.4, a bottom-up chart parser run on the sentence He booked the flight to the city for me generates 52 constituents on the chart. With semantic filtering, the parser finds the one plausible interpretation and generates only 33 constituents. 
Consider the first constituent suggested by the parser that is rejected by semantic filtering: (VP SEM (BOOKS1 v258 [AGENT ?semsubj] [THEME <INDEF1 v260 (FLIGHT1 v260)>] [DESTINATION]) VAR v258 SUBJ ?semsubj)" }, { "page_index": 427, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_026.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_026.png", "page_index": 427, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:40:38+07:00" }, "raw_text": "Ambiguity Resolution 9.2 Semantic Filtering Using Selectional Restrictions This constituent combines the VP booked the flight with the PP to the city. It is rejected because it violates the selectional restrictions on the DESTINATION predicate; that is, (DESTINATION BOOKS1 CITY1) does not match any selectional restriction." }, { "page_index": 428, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_027.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_027.png", "page_index": 428, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:40:44+07:00" }, "raw_text": "Ambiguity Resolution 9.3 Semantic Networks Semantic networks ease the construction of the lexicon by supporting inheritance of properties. 
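The inheritance idea of section 9.3 can be sketched with a small subtype table. The node names follow Figures 9.6 through 9.8, while the ISA/ROLES dictionaries and the inherited_roles helper are illustrative assumptions.

```python
# Sketch of role inheritance over a semantic network. ISA encodes the
# s (subtype) arcs; ROLES attaches selectional restrictions to nodes.
# Node names follow the figures; the representation is an assumption.

ISA = {
    "DOG": "ANIMATE", "PERSON": "ANIMATE",
    "ANIMATE": "PHYSOBJ", "PHYSOBJ": "ALL",
    "RUNS1": "ACTION", "ACTION": "SITUATION", "SITUATION": "ALL",
}

ROLES = {"ACTION": {"AGENT": "ANIMATE"}}  # all actions: animate agent

def inherited_roles(node):
    """Collect role restrictions from node and all of its supertypes;
    a restriction declared lower in the hierarchy takes precedence."""
    collected = {}
    while node is not None:
        for role, filler in ROLES.get(node, {}).items():
            collected.setdefault(role, filler)
        node = ISA.get(node)
    return collected
```

With this table, inherited_roles("RUNS1") yields {"AGENT": "ANIMATE"}: RUNS1 inherits the restriction from ACTION, as described for Figure 9.8.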
[Figure 9.6 diagram: a type hierarchy connected by s (subtype) arcs over the nodes ALL, SITUATION, EVENT, PHYSOBJ, ANIMATE, NON-ANIMATE, DOG, PERSON, VEGETABLE, NON-LIVING] Figure 9.6: Part of a type hierarchy" }, { "page_index": 429, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_028.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_028.png", "page_index": 429, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:40:48+07:00" }, "raw_text": "Ambiguity Resolution 9.3 Semantic Networks In Figure 9.6, the s arc indicates the subtype relationship. Selectional restrictions for semantic relations can also be expressed in network form using arcs. [Figure 9.7 diagram: an ACTION node with an AGENT arc to ANIMATE] Figure 9.7: All actions have an animate agent" }, { "page_index": 430, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_029.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_029.png", "page_index": 430, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:40:53+07:00" }, "raw_text": "Ambiguity Resolution 9.3 Semantic Networks Figure 9.7 introduces a new node type, an existential node, depicted by a square, which represents a particular value. 
[Figure 9.8 diagram: ACTION with an AGENT arc to ANIMATE, and RUNS1 as a subtype of ACTION] Figure 9.8: A network showing inheritance of roles" }, { "page_index": 431, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_030.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_030.png", "page_index": 431, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:40:56+07:00" }, "raw_text": "Ambiguity Resolution 9.3 Semantic Networks - An important property of semantic networks is the inheritance of properties. Given the network shown in Figure 9.8, the action class RUNS1 would inherit the property that every instance has an AGENT role filled by an ANIMATE object. Inheritance hierarchies are extremely useful for expressing selectional restrictions across broad classes of verbs. Figure 9.9 shows the selectional restrictions for a set of verb senses that are subclasses of ACTION" }, { "page_index": 432, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_031.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_031.png", "page_index": 432, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:41:03+07:00" }, "raw_text": "[Figure 9.9 diagram, partial: ACTION carries AGENT (ANIMATE), AT-TIME (TIME), and AT-LOC (LOCATION); OBJ/ACTION adds THEME (PHYSOBJ) and INSTR (NON-ANIMATE); TRANSFER-ACTION adds TO-POSS (ANIMATE); the verb senses shown include READS1, PUSHES1, GIVES1] 
TEXTOBJ is the THEME restriction of READS1, and PUTS1 adds TO-LOC (LOCATION). Figure 9.9: Action hierarchy with roles" }, { "page_index": 433, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_032.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_032.png", "page_index": 433, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:41:06+07:00" }, "raw_text": "Ambiguity Resolution 9.3 Semantic Networks In Figure 9.9, using the inheritance mechanism, we can see that the action class TRANSFER-ACTION allows the semantic relations AGENT, AT-TIME, and AT-LOC, inherited from the class ACTION, and THEME and INSTR, inherited from the class OBJ/ACTION. The case TO-POSS is explicitly defined for TRANSFER-ACTION. Another important hierarchy is the part hierarchy, in which objects are related to their subparts (Figure 9.10). Example: The desk drawer (the drawer is a part of the desk) - The man's head (the head is part of the man) - The handle of the drawer (the handle is a part of the drawer)" }, { "page_index": 434, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_033.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_033.png", "page_index": 434, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:41:12+07:00" }, "raw_text": "Ambiguity Resolution 9.3 Semantic Networks [Figure 9.10 diagram: ISA and SUBPART arcs relating DOORHANDLE, DOOR, ROOM, and HOUSE] Figure 9.10: Some subpart relationships" }, { "page_index": 435, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_034.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": 
"/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_034.png", "page_index": 435, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:41:16+07:00" }, "raw_text": "Ambiguity l Resolution 9.4 Statistical Word Sense Disambiguity Selectional restrictions provide only a coarse classification of acceptable and unacceptable form, many cases of sense ambiguity cannot be resolved. To better model human processing, more predictive techniques must be developed that give a preference for the common interpretation of senses over rarer senses. Thus, the way to use is statistic technique The simplest techniques are based on simple unigram statistics. Given a suitable labeled corpus. We collect information on usage of the different senses of each word. Example: there are 5845 uses of the word bridge. 5651 uses of STRUCTURE1 194 uses of DENTAL - DEV37" }, { "page_index": 436, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_035.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_035.png", "page_index": 436, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:41:20+07:00" }, "raw_text": "Ambiguity Resolution 9.4 Statistical Word Sense Disambiguity Given this data, we would guess that bridge occurs in the STRUCTURE1 sense every time and has 97% ( 5651 times/5845 times). We would like to do much better than this by including some effect of context. Consider the rare sense DENTAL-DEV37, it occurs very rarely in the entire corpus. But in the certain texts ( dentistry or orthodontics), it will be the most common sense of the word It is concerned with word collocations. Collocation: that is what words would tend to appear together. 
We may consider bigram probabilities, trigrams, or larger groups, say five surrounding words. The amount of text examined for each word is called the window." }, { "page_index": 437, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_036.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_036.png", "page_index": 437, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:41:24+07:00" }, "raw_text": "Ambiguity Resolution 9.4 Statistical Word Sense Disambiguation To adapt part-of-speech tagging techniques to use word senses rather than syntactic categories: + we need a corpus of words tagged with their senses; + then we could compute unigram and bigram statistics (the probability that word w has sense s). We estimate the probability of the senses of a word w relative to a window of text centered on w. Given a window of size n centered on the word w, the words in the window are indicated as follows: w1 w2 ... wn/2 w w(n/2)+1 ... wn-1" }, { "page_index": 438, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_037.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_037.png", "page_index": 438, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:41:28+07:00" }, "raw_text": "Ambiguity Resolution 9.4 Statistical Word Sense Disambiguation We want to compute the sense s of word w that maximizes the formula PROB(w/s | w1 w2 ... wn/2 w w(n/2)+1 ... wn-1). Rewriting the formula using Bayes' rule and making independence assumptions, it becomes PROB(w/s) * PROB(w1 ... wn-1 | w/s) / PROB(w1 ... wn-1). PROB(w1 ... wn-1) does not change from sense to sense, so it can be ignored. 
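The Bayes rewrite above, combined with an independence assumption over the window words, gives the naive-Bayes scorer sketched below. The counts come from Figure 9.11; the table layout and function names are illustrative assumptions.

```python
# Naive-Bayes window scoring:
#   score(s) = PROB(w/s) * prod_i PROB(w_i | w/s),
# with probabilities estimated from window counts (Figure 9.11).

window_counts = {  # times w_i occurs in a window centered on bridge/sense
    ("teeth", "STRUCTURE1"): 1,   ("teeth", "DENTAL-DEV37"): 10,
    ("the", "STRUCTURE1"): 5500,  ("the", "DENTAL-DEV37"): 180,
    ("dentist", "STRUCTURE1"): 2, ("dentist", "DENTAL-DEV37"): 35,
}
center_counts = {"STRUCTURE1": 5651, "DENTAL-DEV37": 194}
TOTAL = 501_500  # "total occurrences" column of Figure 9.11

def score(sense, window_words):
    """Unnormalized naive-Bayes score for one sense of bridge."""
    p = center_counts[sense] / TOTAL              # PROB(w/s)
    for w_i in window_words:
        p *= window_counts.get((w_i, sense), 0) / center_counts[sense]
    return p
```

For the window words dentist and teeth, score("DENTAL-DEV37", ...) comes out well above score("STRUCTURE1", ...), preferring the rare sense as in the slides' dentist example.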
We assume that each word wi appears independently of the other words in the window." }, { "page_index": 439, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_038.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_038.png", "page_index": 439, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:41:31+07:00" }, "raw_text": "Ambiguity Resolution 9.4 Statistical Word Sense Disambiguation PROBn(wi | w/S) is the probability that word wi occurs in an n-word window centered on word w in sense S. The best sense S will be the one that maximizes the product of the PROBn(wi | w/S) terms. PROBn(wi | w/S) = Count(# times wi occurs in a window centered on w/S) / Count(# times w/S is the center of a window). Given the data in Figure 9.11, we will find a sense for the word bridge by using a window size of 11 words in a corpus of 10,000,000 words" }, { "page_index": 440, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_039.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_039.png", "page_index": 440, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:41:38+07:00" }, "raw_text": "Counts (with bridge/STRUCTURE1, with bridge/DENTAL-DEV37, in any window): teeth 1, 10, 300; suspension 200, 1, 2000; the 5500, 180, 500,000; dentist 2, 35, 900; total occurrences 5651, 194, 501,500. Figure 9.11: The counts for the senses of bridge in a hypothetical corpus" }, { "page_index": 441, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_040.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": 
"/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_040.png", "page_index": 441, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:41:42+07:00" }, "raw_text": "Ambiguity Resolution 9.4 Statistical Word Sense Disambiguity Given the data in figure10.11, we get the following estimates: PROBn (teeth/bridge/ STRUCTURED ) = 1/ 5651 = 1.77 * 10 -4 PROB, (teeth/bridge/ DENTAL - DEV37 ) = 10/ 194 = 0.052 PROB, (suspension/bridge/ STRUCTURE ) = 200/ 5651 = 0.35 PROBn (suspension/bridge/DENTAL - DEV37) = 1/194 = 5.15 * 10-3 PROB,(the/bridge/ STRUCTURE1 ) = 5500/ 5651 = 0. 97 PROB, (the/bridge/ DENTAL - DEV37 ) = 180/194 = 0.93 PROB,(dentist/ bridge / STRUCTURE1 ) = 2/5651 = 3.54.10 -4 PROB, (dentist/ bridge / DENTAL- DEV37) = 35/194 = 0.18 PROB,(bridge/ STRUCTURE1 ) = 5651 / 501500 = 0.113 PROBn (bridge/ DENTAL - DEV37 ) = 194/ 501500 = 3.87 * 10 -4 PROBn(the/bridge/STRUCTURE1)*PROB(bridge/STRUCTURE1)= 0.97* 0.113 = 0.109 PROB, (the/bridge/ DENTAL- DEV37 ) * PROB (bridge/ DENTAL- DEV37) = .93 * 3.87 * 10 -4 = 3.6 * 10-4" }, { "page_index": 442, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_041.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_041.png", "page_index": 442, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:41:46+07:00" }, "raw_text": "Ambiguity Resolution 9.4 Statistical Word Sense Disambiguity The context independent probabilities of the word senses are easily estimated: PROB (bridge/ STRUCTURE1) = 5651/501500 = 0.113 PROB(bridge/ DENTAL - DEV37) = 194/501500 = 3.878*10-4 Note that the probability estimates for the senses in the window that contains the word the are very similar to the no-context estimate: PROB,(the/bridge/ STRUCTURE1)* PROB,(bridge/ 
STRUCTURE1) = 0.97 * 0.0113 = 0.0109; PROBn(the | bridge/DENTAL-DEV37) * PROB(bridge/DENTAL-DEV37) = 0.93 * 3.87 * 10^-4 = 3.6 * 10^-4" }, { "page_index": 443, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_042.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_042.png", "page_index": 443, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:41:50+07:00" }, "raw_text": "Ambiguity Resolution 9.4 Statistical Word Sense Disambiguation It is content words, like teeth in this example, that have the most dramatic effect. For instance: PROBn(dentist | bridge/STRUCTURE1) * PROB(bridge/STRUCTURE1) = 3.54 * 10^-4 * 0.0113 = 4.0 * 10^-6; PROBn(dentist | bridge/DENTAL-DEV37) * PROB(bridge/DENTAL-DEV37) = 0.18 * 3.87 * 10^-4 = 6.97 * 10^-5. Of course, with a larger window, there are many more chances for content words to strongly affect the decision. Example: The dentist put a bridge on my teeth. The words teeth and dentist together in the same window combine to strongly prefer the rare sense of the word bridge." }, { "page_index": 444, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_043.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_043.png", "page_index": 444, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:41:54+07:00" }, "raw_text": "Ambiguity Resolution 9.4 Statistical Word Sense Disambiguation In fact, the estimate for the sense DENTAL-DEV37 would be 3.6 * 10^-6, considerably greater than the estimate of 7.08 * 10^-7 for STRUCTURE1. Collocations and Mutual Information Work in this area uses collocations, which measure how likely two words are to co-occur in a window of text. 
One way to compute such a measure is to consider a correlation statistic (where n is the window size): Cn(w/S, w') = PROB(w/S & w' are in the same window) / (PROB(w/S in the window) * PROB(w' in the window))" }, { "page_index": 445, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_044.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_044.png", "page_index": 445, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:41:58+07:00" }, "raw_text": "Ambiguity Resolution 9.4 Statistical Word Sense Disambiguation If K is the number of windows in the corpus, then each of the probabilities above can be estimated as Count(# times the event occurs in a window) / K. After substituting such estimates for each probability used in Cn(w/S, w') and simplifying, we get the formula: Cn(w/S, w') = K * Count(# times w/S & w' co-occur in a window) / (Count(# times w/S in a window) * Count(# times w' in a window)). In our sample corpus K is 10^7. 
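The simplified count formula for Cn, and its log (the mutual information of section 9.4), can be sketched directly. K and the counts follow the slides; the function names are ours, and the natural log is assumed, since it reproduces the quoted I3 values (e.g. ln 5.9 is approximately 1.77).

```python
# Collocation statistic Cn and mutual information I_n = log Cn,
# computed from window counts as in the simplified formula above.

import math

K = 10_000_000  # number of windows in the sample corpus (K = 10^7)

def c_n(cooccur_count, center_count, word_count):
    """Cn(w/S, w') = K * #co-occurrences /
    (#windows centered on w/S * #windows containing w')."""
    return K * cooccur_count / (center_count * word_count)

def mutual_information(cooccur_count, center_count, word_count):
    # natural log assumed, matching ln 5.9 ~ 1.77 on the slides
    return math.log(c_n(cooccur_count, center_count, word_count))

# Cn(bridge/STRUCTURE1, teeth) = 10^7 * 1 / (5651 * 300), about 5.9,
# so I(bridge/STRUCTURE1, teeth) is about 1.77.
```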
Based on the data in Figure 9.11, the estimates for Cn are as follows:" }, { "page_index": 446, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_045.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_045.png", "page_index": 446, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:42:01+07:00" }, "raw_text": "Ambiguity Resolution 9.4 Statistical Word Sense Disambiguation Cn(bridge/STRUCTURE1, teeth) = (10^7 * 1)/(5651 * 300) = 5.9; Cn(bridge/DENTAL-DEV37, teeth) = (10^7 * 10)/(194 * 300) = 171.9; Cn(bridge/STRUCTURE1, suspension) = (10^7 * 200)/(5651 * 2000) = 17.7; Cn(bridge/DENTAL-DEV37, suspension) = (10^7 * 1)/(194 * 2000) = 2.5; Cn(bridge/STRUCTURE1, the) = (10^7 * 5500)/(5651 * 500,000) = 1.94; Cn(bridge/DENTAL-DEV37, the) = (10^7 * 180)/(194 * 500,000) = 1.84; Cn(bridge/STRUCTURE1, dentist) = (10^7 * 2)/(5651 * 900) = 3.9; Cn(bridge/DENTAL-DEV37, dentist) = (10^7 * 35)/(194 * 900) = 200" }, { "page_index": 447, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_046.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_046.png", "page_index": 447, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:42:05+07:00" }, "raw_text": "Ambiguity Resolution 9.4 Statistical Word Sense Disambiguation - To better distinguish statistics based on ratios, work in this area is often presented in terms of the log of the ratio. - For word ratios as described in this section, this measure is called the mutual information of the two words and is written as In(w1, w2). 
In(w1, w2) = log Cn(w1, w2). For the example involving the two senses of bridge, the mutual information statistics are: I3(bridge/STRUCTURE1, teeth) = 1.77; I3(bridge/DENTAL-DEV37, teeth) = 5.14; I3(bridge/STRUCTURE1, the) = 0.66" }, { "page_index": 448, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_047.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_047.png", "page_index": 448, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:42:08+07:00" }, "raw_text": "Ambiguity Resolution 9.4 Statistical Word Sense Disambiguation Note that words that have no association with each other, co-occurring only at chance rates, will have a mutual information number close to zero. If words are anticorrelated, that is, they co-occur at a rate less than chance, then the mutual information number will be negative" }, { "page_index": 449, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_048.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_9/slide_048.png", "page_index": 449, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:42:12+07:00" }, "raw_text": "EXERCISE OF CHAPTER 9 1) Using the Cn function described in section 9.4, compute the score of each of the senses of the word bridge in the five-word window \"the suspension bridge the construction\". 
2) Extend the grammar, lexicon, sense hierarchy, and selectional restrictions given in section 9.2 as necessary to appropriately interpret the following sentences: He gave the book to the college / He knows the route to the college. 3) The technique for disambiguation in section 9.4 was based only on the probability of binary relations. How might you extend it to account for unary relations as well? Describe your algorithm in detail and show how it would operate given the sentence: He painted the suspension bridge at night" }, { "page_index": 450, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_10/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_10/slide_001.png", "page_index": 450, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:42:15+07:00" }, "raw_text": "Chapter 10: Other Strategies for Semantic Interpretation Some techniques allow for the relatively rapid development of systems for a specific application. These techniques range from loosely coupled syntax and semantics to techniques that are essentially semantically driven and use minimal syntactic information. 
Such techniques parse based on semantic structure rather than syntactic structure" }, { "page_index": 451, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_10/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_10/slide_002.png", "page_index": 451, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:42:19+07:00" }, "raw_text": "Other Strategies for Semantic Interpretation 10.1 Grammatical Relations The idea underlying this approach is that the parser produces an output that abstracts away the details of the actual sentence but retains the structure important for semantics, as a set of grammatical relations or grammatical dependencies. The semantic interpreter then produces a meaning representation in a separate interpretation process that uses the grammatical relations as its input. The grammatical relations would be as follows: logical subject (LSUBJ), 
logical object (LOBJ), indirect object (IOBJ)" }, { "page_index": 452, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_10/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_10/slide_003.png", "page_index": 452, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:42:26+07:00" }, "raw_text": "Other Strategies for Semantic Interpretation 10.1 Grammatical Relations Jack bought a ticket: (s1 PRED BUYS1) (s1 TNS PAST) (s1 LSUBJ (NAME j2 \"Jack\")) (s1 LOBJ <A t1 TICKET1>). A ticket was bought by Jill: (s2 PRED BUYS1) (s2 TNS PAST) (s2 LSUBJ (NAME j1 \"Jill\")) (s2 LOBJ <A t2 TICKET1>). Jill gave Jack a book: (s3 PRED GIVES1) (s3 TNS PAST) (s3 LSUBJ (NAME j1 \"Jill\")) (s3 LOBJ <A b2 BOOK1>) (s3 IOBJ (NAME j2 \"Jack\")). Jill gave a book to Jack: (s4 PRED GIVES1) (s4 TNS PAST) (s4 LSUBJ (NAME j1 \"Jill\")) (s4 LOBJ <A b3 BOOK1>) (s4 TO (NAME j2 \"Jack\")). Jill thinks that Jack stole the book: (s5 PRED THINKS1) (s5 TNS PAST) (s5 LSUBJ (NAME j1 \"Jill\")) (s5 LOBJ s6) (s6 PRED STEALS1) (s6 TNS PAST) (s6 LSUBJ (NAME j2 \"Jack\")) (s6 LOBJ <THE b1 BOOK1>). Figure 10.1: A representation based on grammatical relations" }, { "page_index": 453, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_10/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_10/slide_004.png", "page_index": 453, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:42:34+07:00" }, "raw_text": "Other Strategies for Semantic Interpretation 10.1 Grammatical Relations Pattern -> Logical Form: 1. (1 PRED 3) -> (3 1) 2. (1 AT 3) -> (AT-LOC 1 3) 3. (1<ACTION-VERB> LSUBJ 3) -> (AGENT 1 3) 4. (1<ACTION-VERB> LOBJ 3) -> (THEME 1 3) 5. (1<TO-VERB> TO 3) -> (TO-POSS 1 3) 6. (1<TO-VERB> IOBJ 3) -> (TO-POSS 1 3) 7. (1<ATTITUDE-VERB> LSUBJ 3) -> (EXPERIENCER 1 3) 8. 
(1<ATTITUDE-VERB> LOBJ 3) -> (THEME 1 3) 9. (1 TNS PAST) -> (PAST 1) Figure 10.2: Some patterns for interpreting grammatical relations" }, { "page_index": 454, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_10/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_10/slide_005.png", "page_index": 454, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:42:38+07:00" }, "raw_text": "Other Strategies for Semantic Interpretation 10.1 Grammatical Relations [Figure 10.3 diagram: VERB subsumes ACTION-VERB, ATTITUDE-VERB, and TO-VERB; GIVE, STEAL, and BUY fall under ACTION-VERB (GIVE also under TO-VERB), and THINK under ATTITUDE-VERB] Figure 10.3: A verb hierarchy for pattern matching" }, { "page_index": 455, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_10/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_10/slide_006.png", "page_index": 455, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:42:42+07:00" }, "raw_text": "Other Strategies for Semantic Interpretation 10.1 Grammatical Relations Figure 10.1 shows some sample sentences and their representations in terms of grammatical relations. Each relation is of the form (discourse-variable relation value), where the value may be another discourse variable or a SEM structure. One method is to represent each of the grammatical relations as a feature. The resulting feature structure is easily converted into triples. 
The features (PRED BUYS1 LSUBJ (NAME j2 \"Jack\") LOBJ <A t1 TICKET1>) are easily converted into the triples (s1 PRED BUYS1), (s1 LSUBJ (NAME j2 \"Jack\")), (s1 LOBJ <A t1 TICKET1>)" }, { "page_index": 456, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_10/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_10/slide_007.png", "page_index": 456, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:42:45+07:00" }, "raw_text": "Other Strategies for Semantic Interpretation 10.1 Grammatical Relations The rules consist of a pattern, in which <T> matches any element of type T. The right-hand side of the rule specifies the semantic interpretation, where the number n indicates the value of the n'th element in the pattern (Figure 10.2). For the patterns, verbs must be classified as shown in Figure 10.3: ACTION-VERB (for buy, give, steal), ATTITUDE-VERB (for believe, think), TO-VERB (for give, donate, throw)" }, { "page_index": 457, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_10/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_10/slide_008.png", "page_index": 457, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:42:50+07:00" }, "raw_text": "Other Strategies for Semantic Interpretation 10.1 Grammatical Relations Example: Jack bought a ticket, which, omitting the tense information, has the structure: (1) (s1 PRED BUYS1) (2) (s1 LSUBJ (NAME j2 \"Jack\")) (3) (s1 LOBJ <A t1 TICKET1>). (1) matches the first pattern in Figure 10.2, (2) matches the third pattern, and (3) matches the fourth pattern, producing the three logical form fragments (BUYS1 s1), (AGENT s1 (NAME j2 \"Jack\")), (THEME s1 <A t1 TICKET1>)" }, { 
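The triple-to-logical-form step worked through above can be sketched as a small pattern interpreter. The dispatch logic and verb classes below are simplified assumptions modeled on Figures 10.2 and 10.3, not the course's actual matcher.

```python
# Sketch of interpreting grammatical-relation triples with patterns in
# the spirit of Figure 10.2. Verb classification (Figure 10.3) decides
# whether LSUBJ maps to AGENT or EXPERIENCER.

ACTION_VERBS = {"BUYS1", "GIVES1", "STEALS1"}
ATTITUDE_VERBS = {"THINKS1", "BELIEVES1"}

def pred_of(triples, var):
    """Find the PRED sense recorded for a discourse variable."""
    return next((s for v, r, s in triples if v == var and r == "PRED"), None)

def interpret(triples):
    fragments = []
    for var, rel, val in triples:
        pred = pred_of(triples, var)
        if rel == "PRED":
            fragments.append((val, var))             # e.g. (BUYS1 s1)
        elif rel == "LSUBJ" and pred in ACTION_VERBS:
            fragments.append(("AGENT", var, val))
        elif rel == "LSUBJ" and pred in ATTITUDE_VERBS:
            fragments.append(("EXPERIENCER", var, val))
        elif rel == "LOBJ":
            fragments.append(("THEME", var, val))
        elif rel == "TNS" and val == "PAST":
            fragments.append(("PAST", var))
    return fragments

triples = [("s1", "PRED", "BUYS1"),
           ("s1", "LSUBJ", '(NAME j2 "Jack")')]
# interpret(triples) produces the fragments (BUYS1 s1) and
# (AGENT s1 (NAME j2 "Jack")), as in the worked example.
```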
"page_index": 458, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_10/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_10/slide_009.png", "page_index": 458, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:42:54+07:00" }, "raw_text": "Other Strategies for Semantic Interpretation 10.1 Grammatical Relations These three fragments conjoined give the full logical form (BUYS1 s1 [AGENT (NAME j2 \"Jack\")] [THEME <THE t1 TICKET1>]) Example: Jill thinks that Jack stole the book (s5 PRED THINKS1) -> (THINKS1 s5) (s5 LSUBJ (NAME j1 \"Jill\")) -> (EXPERIENCER s5 (NAME j1 \"Jill\")) (s5 LOBJ s6) -> (THEME s5 s6) (s6 PRED STEALS1) -> (STEALS1 s6) (s6 TNS PAST) -> (PAST s6) (s6 LSUBJ (NAME j2 \"Jack\")) -> (AGENT s6 (NAME j2 \"Jack\")) (s6 LOBJ <THE b1 BOOK1>) -> (THEME s6 <THE b1 BOOK1>)" }, { "page_index": 459, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_10/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_10/slide_010.png", "page_index": 459, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:42:56+07:00" }, "raw_text": "Other Strategies for Semantic Interpretation 10.1 Grammatical Relations Merging these semantic translations would produce the following logical form in abbreviated form (ignoring tense): (THINKS1 s5 [EXPERIENCER (NAME j1 \"Jill\")] [THEME (STEALS1 s6 [AGENT (NAME j2 \"Jack\")] [THEME <THE b1 BOOK1>])])" }, { "page_index": 460, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_10/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file":
"/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_10/slide_011.png", "page_index": 460, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:43:00+07:00" }, "raw_text": "Other Strategies for Semantic Interpretation 10.2 Semantic Grammars - When building a system for a particular application, there are often techniques that can be used to improve the efficiency and performance of the parsing and semantic interpretation. This section describes a technique for building a custom-tailored grammar for the application. A general grammar of a natural language will contain many constructs that are necessary for wide coverage of the language but may not be needed in the application at hand. In these circumstances the general syntactic rule might be replaced in the grammar with a more specific, semantically motivated rule" }, { "page_index": 461, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_10/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_10/slide_012.png", "page_index": 461, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:43:04+07:00" }, "raw_text": "Other Strategies for Semantic Interpretation 10.2 Semantic Grammars Consider an application that supports queries to an airline database about flights. The following noun phrases referring to flights occur in this domain.
the flight to Chicago; the 8 o'clock flight; the first flight out; flight 457 to Chicago. To handle these noun phrases, a general grammar must contain the following rules: NP -> DET CNP (the flight) CNP -> N PART (flight out) CNP -> N (flight) CNP -> CNP PP (flight to Chicago) CNP -> PRE-MOD CNP (8 o'clock flight) NP -> N NUMB (flight 457)" }, { "page_index": 462, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_10/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_10/slide_013.png", "page_index": 462, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:43:08+07:00" }, "raw_text": "Other Strategies for Semantic Interpretation 10.2 Semantic Grammars For cities in this domain, we find the following types of noun phrases: Chicago, the nearest city to Dallas. These phrases can be handled by the general grammar we just created, with the addition of one more rule to handle proper names. The problem with this is that now we have to restrict rules to apply to the appropriate categories.
Example of anomalous phrases the unrestricted grammar would accept: *the city to Chicago (cf. the flight to Chicago); *the 8 o'clock city (cf. the 8 o'clock flight); *the first city out (cf. the first flight out); *city 567 (cf. flight 457)" }, { "page_index": 463, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_10/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_10/slide_014.png", "page_index": 463, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:43:13+07:00" }, "raw_text": "Other Strategies for Semantic Interpretation 10.2 Semantic Grammars In the limited domain, it is often simpler to introduce new specialized lexical categories based on their semantic properties, such as FLIGHT-N. With these lexical categories, the general grammar might be rewritten as follows: FLIGHT-NP -> DET FLIGHT-CNP (the flight) FLIGHT-CNP -> FLIGHT-N (flight) FLIGHT-CNP -> FLIGHT-CNP FLIGHT-DEST (flight to Chicago) FLIGHT-CNP -> FLIGHT-CNP FLIGHT-SOURCE (flight from Boston) FLIGHT-CNP -> FLIGHT-N FLIGHT-PART (flight out) FLIGHT-CNP -> FLIGHT-PRE-MOD FLIGHT-CNP (8 o'clock flight) FLIGHT-NP -> FLIGHT-N NUMBER (flight 457) CITY-NP -> CITY-NAME (Boston) CITY-NP -> DET CITY-CNP (the city) CITY-CNP -> CITY-N (city) CITY-CNP -> CITY-MOD CITY-CNP (nearest city to Dallas)" }, { "page_index": 464, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_10/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_10/slide_015.png", "page_index": 464, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:43:17+07:00" }, "raw_text": "Other Strategies for Semantic Interpretation 10.2 Semantic Grammars Of course, many other rules are needed. Using semantic categories allows rules that admit only
prepositional phrases that specify the destination cities of flights: FLIGHT-DEST -> to CITY-NP FLIGHT-SOURCE -> from CITY-NP Higher-level syntactic structures can be similarly tailored to these categories, such as the rule: TIME-QUERY -> when does FLIGHT-NP FLIGHT-VP A grammar that is cast in terms of the major semantic categories of the domain is called a semantic grammar." }, { "page_index": 465, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_10/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_10/slide_016.png", "page_index": 465, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:43:21+07:00" }, "raw_text": "Other Strategies for Semantic Interpretation 10.2 Semantic Grammars - We can augment a semantic grammar to produce a logical form in the normal way. An alternative method, however, is simply to use the parse tree itself as the logical form. Example: Figure 10.4 shows the parse tree for the query: When does the flight to Chicago leave?
The tree structure in figure 10.4 might be represented as (TIME-QUERY (FLIGHT-NP (DET the) (FLIGHT-CNP (FLIGHT-CNP flight) (FLIGHT-DEST to (CITY-NP Chicago)))) (FLIGHT-VP leave))" }, { "page_index": 466, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_10/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_10/slide_017.png", "page_index": 466, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:42:27+07:00" }, "raw_text": "Other Strategies for Semantic Interpretation 10.2 Semantic Grammars Figure 10.4: The parse tree for \"When does the flight to Chicago leave?\"" }, { "page_index": 467, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_10/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_10/slide_018.png", "page_index": 467, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:43:31+07:00" }, "raw_text": "Other Strategies for Semantic Interpretation 10.2 Semantic Grammars This structure would be at least as easy to convert to the database query as the full logical form for the sentence, so there is no advantage to further semantic analysis. Semantic grammars combine aspects of syntax, semantics, and selectional restrictions in a simple uniform framework. Semantic grammars have proven useful for the rapid development of parsers in limited application domains. The downside, however, is that they do not port well to new domains and cannot handle applications in broad domains.
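The semantic-grammar idea above can be sketched as a tiny top-down parser whose output tree doubles as the logical form. This is an illustrative toy under stated assumptions: the rule set is abbreviated, the left-recursive FLIGHT-CNP rule is recast as FLIGHT-N FLIGHT-DEST so the naive matcher terminates, and none of the names come from a real library:

```python
# Sketch: a naive top-down matcher over a few hard-coded semantic-grammar
# rules for flight queries. The parse tree itself serves as the "logical
# form", as the text describes.

GRAMMAR = {
    "TIME-QUERY": [["when", "does", "FLIGHT-NP", "FLIGHT-VP"]],
    "FLIGHT-NP": [["DET", "FLIGHT-CNP"]],
    "FLIGHT-CNP": [["FLIGHT-N", "FLIGHT-DEST"], ["FLIGHT-N"]],
    "FLIGHT-DEST": [["to", "CITY-NP"]],
    "DET": [["the"]],
    "FLIGHT-N": [["flight"]],
    "CITY-NP": [["chicago"]],
    "FLIGHT-VP": [["leave"]],
}

def parse(cat, words, i):
    """Try to parse category `cat` at position i.
    Returns (tree, next_position) or None on failure."""
    if cat not in GRAMMAR:                      # terminal: match the word
        if i < len(words) and words[i] == cat:
            return cat, i + 1
        return None
    for rhs in GRAMMAR[cat]:                    # try each alternative
        children, j, ok = [], i, True
        for sym in rhs:
            result = parse(sym, words, j)
            if result is None:
                ok = False
                break
            subtree, j = result
            children.append(subtree)
        if ok:
            return [cat] + children, j
    return None

query = "when does the flight to chicago leave".split()
tree, end = parse("TIME-QUERY", query, 0)
print(tree)
```

Running this yields a bracketed tree directly analogous to the (TIME-QUERY ...) structure above, which a back end could translate straight into a database query.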
A new domain will require a completely new semantic grammar, whereas most of a syntactic grammar for one domain will carry over to another domain" }, { "page_index": 468, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_10/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_10/slide_019.png", "page_index": 468, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:43:33+07:00" }, "raw_text": "Other Strategies for Semantic Interpretation 10.3 Template Matching (Self-study) 10.4 Semantically Driven Parsing Techniques (Self-study)" }, { "page_index": 469, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_10/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_10/slide_020.png", "page_index": 469, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:43:38+07:00" }, "raw_text": "EXERCISE OF CHAPTER 10 1) Show the logical form generated from the grammatical dependency representation of the sentence A ticket was bought at the theater, using the data shown in figures 10.1, 10.2, and 10.3. 2) Draw the parse tree produced by the semantic grammar described in section 10.2 for the query When does the 8 pm train from Boston arrive in Chicago?. Show the additional rules that you have to add to the grammar so that it will accept this sentence. 3) Extend the treatment based on grammatical relations described in section 10.1 so that it handles the beneficiary case, as in: Jack bought me a ticket.
Jack bought a ticket for me" }, { "page_index": 470, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_001.png", "page_index": 470, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:43:41+07:00" }, "raw_text": "Chapter 11: Knowledge Representation and Reasoning Many problems that arose in earlier chapters were not resolved because they required knowledge of context. Two important aspects of context are general knowledge and specific knowledge. To analyze these we need a formalism for representing knowledge and reasoning. This area of study is called knowledge representation (KR)" }, { "page_index": 471, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_002.png", "page_index": 471, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:43:44+07:00" }, "raw_text": "Knowledge Representation and Reasoning 11.1 Knowledge Representation There are two forms of knowledge that are crucial in any knowledge system: general knowledge and specific knowledge. For instance, general knowledge might encode that OWN1 is a relation between people and objects, without referring to any individual person.
General world knowledge is essential for solving many language interpretation problems, one of the most important being disambiguation" }, { "page_index": 472, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_003.png", "page_index": 472, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:43:48+07:00" }, "raw_text": "Knowledge Representation and Reasoning 11.1 Knowledge Representation For example, the proper attachment of the final PP in the following two sentences depends solely on the reader's background knowledge of the appropriate time needed for reading and for evolution: I read a story about evolution in ten minutes I read a story about evolution in the last million years Specific knowledge is important for many issues, including determining the referent of noun phrases and disambiguating word senses based on what makes sense in the current situation - The knowledge representation encodes the knowledge and beliefs of the understanding system." }, { "page_index": 473, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_004.png", "page_index": 473, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:43:52+07:00" }, "raw_text": "Knowledge Representation and Reasoning 11.1 Knowledge Representation - A knowledge representation consists of a database of sentences called the knowledge base (KB) and a set of inference techniques that can be used to derive new sentences given the current KB.
- The language in which the sentences in the KB are defined is called the knowledge representation language (KRL). The KRL could be the same as the logical form language, but there are practical reasons why they often differ. The logical form language has a wide range of quantifiers, closely corresponding to the different word senses of English quantifiers. - In most current KRLs, however, there are usually only a few forms of quantification." }, { "page_index": 474, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_005.png", "page_index": 474, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:43:55+07:00" }, "raw_text": "Knowledge Representation and Reasoning 11.1 Knowledge Representation The KB must support two forms of inference: entailment and implication. Entailment: if a formula P must be true given the formulas in the KB, or given the formulas representing the meaning of a sentence, then we say that the KB (or sentence) entails P. Implications are conclusions that can typically be drawn from a sentence but that could be explicitly denied in specific circumstances. Example: Jack owns two cars - entails that: Jack owns a car (that is, this fact cannot be denied) - but only implies that he does not own three cars, as we could continue by saying: In fact, he owns three cars."
}, { "page_index": 475, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_006.png", "page_index": 475, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:43:59+07:00" }, "raw_text": "Knowledge Representation and Reasoning 11.1 Knowledge Representation Types of Inference: Many different forms of inference are necessary to understand natural language. Inference techniques can be classified into deductive and non-deductive forms. Deductive inference: given a set of facts, a deductive inference process will make only conclusions that logically follow from those facts. Non-deductive inference falls into several classes, including inductive inference and abductive inference. Inductive inference is learning generalities from examples. Abductive inference is inferring causes from effects." }, { "page_index": 476, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_007.png", "page_index": 476, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:44:03+07:00" }, "raw_text": "Knowledge Representation and Reasoning 11.1 Knowledge Representation Abductive inference can be contrasted with deductive inference by considering the axiom A ⊃ B. Deductive inference would use this axiom to infer B when given A. Abductive inference would use it to infer A given B, since A is a reason that B is true. Many systems allow the use of default information. A default rule is an inference rule to which there may be exceptions; thus it is defeasible.
If we write a default rule using the notation A => B, then the default inference rule could be stated as follows: if A => B, and A is true, and the negation of B is not provable, then conclude B." }, { "page_index": 477, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_008.png", "page_index": 477, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:44:06+07:00" }, "raw_text": "Knowledge Representation and Reasoning 11.1 Knowledge Representation Example: The sentence Birds fly could be represented by the formula: ∀x. BIRD(x) => FLIES(x) This has the effect that whenever there is a bird B for which the negation of FLIES(B) is not provable, it can be inferred that FLIES(B). Defeasible rules introduce a new set of complexities into a representation. Without such rules, most representations are monotonic, because adding a new assertion only increases the number of formulas entailed" }, { "page_index": 478, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_009.png", "page_index": 478, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:44:11+07:00" }, "raw_text": "Knowledge Representation and Reasoning 11.1 Knowledge Representation For instance, if the knowledge base KB1 entails a conclusion C, and if we add an additional formula to KB1 to form a new consistent knowledge base KB2, then KB2 will also entail C. This is not true of representations that use defeasible rules, and hence they are called nonmonotonic representations.
Consider a knowledge base KB consisting of the formulas: Cat(Sampson) (Sampson is a cat) TabbyCat(Sampson) (Sampson is a tabby cat) ∀c. Cat(c) => Purrs(c) (Cats purr) Given this KB we can conclude Purrs(Sampson) using the default rule, because there is no information to contradict Purrs(Sampson)" }, { "page_index": 479, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_010.png", "page_index": 479, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:44:14+07:00" }, "raw_text": "Knowledge Representation and Reasoning 11.1 Knowledge Representation On the other hand, if we add a new fact that no tabby cats purr, then the extended knowledge base would no longer entail that Sampson purrs. Inference Techniques The two main classes of inference techniques found in knowledge representation systems are procedural and declarative. Most systems combine these techniques to some extent, forming a continuum from purely declarative representations to purely procedural ones.
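The Sampson example above can be sketched in Python to make the nonmonotonic behavior concrete. This is a minimal illustration under assumed data structures (the `kb` dictionary layout, the exception encoding, and the function name are hypothetical, not a standard KR system):

```python
# Sketch: the defeasible rule Cat(x) => Purrs(x). The default conclusion
# holds unless an exception rule lets the KB prove the contrary. Adding
# "tabby cats do not purr" retracts the conclusion -- the hallmark of
# nonmonotonic reasoning: more facts, fewer conclusions.

def purrs(kb, individual):
    """Apply the default rule, defeated if an exception proves not-Purrs."""
    if ("Cat", individual) not in kb["facts"]:
        return False
    # Check whether any exception rule proves not-Purrs(individual).
    for premise, blocked in kb["exceptions"]:
        if (premise, individual) in kb["facts"] and blocked == "Purrs":
            return False
    return True

kb1 = {
    "facts": {("Cat", "Sampson"), ("TabbyCat", "Sampson")},
    "exceptions": [],
}
print(purrs(kb1, "Sampson"))        # default applies: Sampson purrs

# Extend the KB with the fact that no tabby cats purr.
kb2 = {
    "facts": kb1["facts"],
    "exceptions": [("TabbyCat", "Purrs")],
}
print(purrs(kb2, "Sampson"))        # conclusion withdrawn
```

Note that kb2 strictly extends kb1, yet entails fewer conclusions, which is exactly why such representations are called nonmonotonic.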
" }, { "page_index": 480, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_011.png", "page_index": 480, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:44:16+07:00" }, "raw_text": "Knowledge Representation and Reasoning 11.1 Knowledge Representation [Network: OBJ/ACTION linked by an S arc to ACTION; ACTION has an AGENT role restricted to ANIMATE; OBJ/ACTION has a THEME role restricted to PHYSOBJ] Figure 11.1: An example of simple inheritance" }, { "page_index": 481, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_012.png", "page_index": 481, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:44:21+07:00" }, "raw_text": "Knowledge Representation and Reasoning 11.1 Knowledge Representation In chapter 10 the technique of inheritance was introduced for semantic networks. This inference process can be realized procedurally or declaratively. Given the simple network in figure 11.1, the following FOPC axioms might represent this information. 1. ∀x. ACTION(x) ⊃ ∃a. AGENT(x,a) & ANIMATE(a) 2. ∀a,x. ACTION(x) & AGENT(x,a) ⊃ ANIMATE(a) 3. ∀x. OBJ/ACTION(x) ⊃ ACTION(x) 4. ∀x. OBJ/ACTION(x) ⊃ ∃o. THEME(x,o) & PHYSOBJ(o) 5. ∀o,
x. OBJ/ACTION(x) & THEME(x,o) ⊃ PHYSOBJ(o)" }, { "page_index": 482, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_013.png", "page_index": 482, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:44:25+07:00" }, "raw_text": "Knowledge Representation and Reasoning 11.1 Knowledge Representation Declarative system Using these axioms, we can prove that the class OBJ/ACTION inherits the AGENT role. In other words, for any A such that OBJ/ACTION(A) is true, we could prove that A has an AGENT role, that is (by using axioms 3 and 1): ∃a. AGENT(A,a) & ANIMATE(a) Procedural system (1) As described in chapter 9, a procedural version of this would be a program that starts at the specified node OBJ/ACTION, finds all roles attached at that node, and then follows the S arc up to the supertype ACTION and finds all the roles attached there." }, { "page_index": 483, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_014.png", "page_index": 483, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:44:28+07:00" }, "raw_text": "Knowledge Representation and Reasoning 11.1 Knowledge Representation Procedural system (2) The complete set of roles gathered by this procedure is the answer. Thus any OBJ/ACTION has an AGENT role inherited from the class ACTION. Note that both these techniques compute the same result, but the first does it by deduction over logical formulas, while the second uses a program that performs a graph traversal.
The first technique seems more rigorously defined, but the second is probably more efficient." }, { "page_index": 484, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_015.png", "page_index": 484, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:44:32+07:00" }, "raw_text": "Knowledge Representation and Reasoning 11.2 A Representation Based on FOPC The KRL will be an extended version of the first-order predicate calculus. Note that by choosing this language, we are not committed to any particular form of inference. We will focus on the extensions to standard FOPC that are needed to represent the meaning of natural language sentences. + The terms in the language are constants (such as John1), functions (such as father(John1)), and variables (such as x and y) + The logical form language did not use constants." }, { "page_index": 485, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_016.png", "page_index": 485, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:44:35+07:00" }, "raw_text": "Knowledge Representation and Reasoning 11.2 A Representation Based on FOPC Example: - logical form term (NAME j1 \"John\") - in the KB, the actual person referred to in a given context might be represented by the constant John1. Restricted quantifications are used in the KRL that are similar to the generalized quantifier notation in the logical form language.
Restrictions follow the quantifier variable, separated by ':'. Example: ∃x: Man(x). Happy(x) is equivalent to ∃x. Man(x) & Happy(x) ∀x: Man(x). Happy(x) is equivalent to ∀x. Man(x) ⊃ Happy(x)" }, { "page_index": 486, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_017.png", "page_index": 486, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:44:39+07:00" }, "raw_text": "Knowledge Representation and Reasoning 11.2 A Representation Based on FOPC The equality predicate (a = b) states that terms a and b have the same denotation. Given a simple proposition Pa involving a constant a, if Pa is true and a = b, then Pb must be true as well, where Pb is the same as Pa except that a has been replaced by b" }, { "page_index": 487, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_018.png", "page_index": 487, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:44:41+07:00" }, "raw_text": "Knowledge Representation and Reasoning 11.3 Frames: Representing Stereotypical Information (Self-study)" }, { "page_index": 488, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_019.png", "page_index": 488, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:44:45+07:00" },
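The equivalences for restricted quantifiers can be checked directly over a finite domain. A small sketch (the domain and the predicates are invented for illustration):

```python
# Sketch: restricted quantifiers as shorthand for plain FOPC when the
# domain is finite. "exists x: Man(x). Happy(x)" unfolds to a conjunction
# inside an existential; "forall x: Man(x). Happy(x)" unfolds to an
# implication inside a universal.

domain = ["john", "bill", "mary"]
is_man = lambda x: x in {"john", "bill"}
is_happy = lambda x: x in {"john"}

def exists_restricted(restriction, body):
    """exists x: R(x). B(x)  ==  exists x. R(x) & B(x)"""
    return any(restriction(x) and body(x) for x in domain)

def forall_restricted(restriction, body):
    """forall x: R(x). B(x)  ==  forall x. R(x) => B(x)"""
    return all((not restriction(x)) or body(x) for x in domain)

# john is a happy man, so the existential holds...
print(exists_restricted(is_man, is_happy))
# ...but bill is a man who is not happy, so the universal fails.
print(forall_restricted(is_man, is_happy))
```

This mirrors why a KRL can keep just a few quantifier forms: the restriction is compiled away into ordinary connectives.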
"raw_text": "Knowledge Representation and Reasoning 11.4 Handling Natural Language Quantifications We can now consider some issues in mapping the logical form language into the KRL. One of the most obvious differences between the two languages is the treatment of quantifiers. Significant progress can be made in reducing the differences, however, by extending the ontology of the KRL to allow sets as objects. A set is a collection of objects treated as a unit. + Sets may be finite (such as the set of John and Mary) or + infinite (such as the set of numbers greater than 7). We will use only finite sets in the KRL. + A set can be indicated by listing its members in curly brackets" }, { "page_index": 489, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_020.png", "page_index": 489, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:44:50+07:00" }, "raw_text": "Knowledge Representation and Reasoning 11.4 Handling Natural Language Quantifications Example: {John1, Mary1}. The order does not matter: {John1, Mary1} = {Mary1, John1} - We also allow constants to denote sets. Thus S1 might be a set defined by the formula S1 = {John1, Mary1} Sets might be members of other sets. Sets will usually be defined in terms of some property.
This will be written in the form {y | Py} In addition, we use the following predicates to relate sets and individuals: S1 ⊆ S2 iff all the elements of S1 are in S2 x ∈ S iff x is a member of the set S" }, { "page_index": 490, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_021.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_021.png", "page_index": 490, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:44:53+07:00" }, "raw_text": "Knowledge Representation and Reasoning 11.4 Handling Natural Language Quantifications An interpretation for Some men met at three is as follows: ∃M: M ⊆ {x | Man(x)}. Meet1(M, 3PM) Consider the different formulas that arise from the collective/distributive readings. Example: There are two interpretations of the sentence Some men bought a suit, which has the following logical form (omitting the tense operator): (SOME m1 : (PLUR MAN1) (A s1 : SUIT1 (BUY1 m1 s1)))" }, { "page_index": 491, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_022.png", "page_index": 491, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:44:57+07:00" }, "raw_text": "Knowledge Representation and Reasoning 11.4 Handling Natural Language Quantifications The collective reading would map to: ∃M1: M1 ⊆ {z | Man(z)}. ∃s: Suit(s). Buy1(M1, s) that is, there is a subset of the set of all men who together bought a suit.
The distributive reading would map to: ∃M2 : M2 ⊆ {z | Man(z)} . ∀m : m ∈ M2 . ∃s : Suit(s) . Buy1(m, s) that is, there are some men who each individually bought a suit. Note that the collective and distributive readings share a common core meaning involving a subset of the men. The only difference is whether we use the set as a unit or quantify over all members of the set." }, { "page_index": 492, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_023.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_023.png", "page_index": 492, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:45:01+07:00" }, "raw_text": "Knowledge Representation and Reasoning 11.4 Handling Natural Language Quantifications A new function: the cardinality of a set. For any given set S, let |S| be the number of elements in S. For example, the meaning of Three men entered the room would be as follows (tense information is omitted): ∃M : (M ⊆ {y | Man(y)} & |M| = 3) . ∀m : m ∈ M . Enter1(m, Room1) By changing the restriction to |M| ≥ 3, we get the meaning of At least three men entered the room.
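The distributive and cardinality readings above can be evaluated mechanically over a toy fact base. A sketch under assumed individuals m1–m4 and hypothetical Buy1/Enter1 facts (not from the slides):

```python
# Sketch: evaluating the distributive and cardinality readings.
# MEN plays the role of {z | Man(z)}; the fact sets are hypothetical.

MEN = {"m1", "m2", "m3", "m4"}
bought = {("m1", "suit1"), ("m2", "suit2")}                    # Buy1(m, s)
entered = {("m1", "Room1"), ("m2", "Room1"), ("m3", "Room1")}  # Enter1(m, r)

def some_men_bought_suits_distributive(M):
    # ∃M2 : M2 ⊆ {z | Man(z)} . ∀m : m ∈ M2 . ∃s : Suit(s) . Buy1(m, s)
    # Each man in M must individually have bought some suit.
    return M <= MEN and all(any(b == m for (b, _s) in bought) for m in M)

def three_men_entered(room):
    # ∃M : (M ⊆ {y | Man(y)} & |M| = 3) . ∀m : m ∈ M . Enter1(m, room)
    # Such a 3-element M exists iff at least three men entered the room.
    return len({m for m in MEN if (m, room) in entered}) >= 3

print(some_men_bought_suits_distributive({"m1", "m2"}))  # True
print(three_men_entered("Room1"))                        # True
```

The collective reading would instead need a fact relating the whole set to a single suit, so the two readings check genuinely different facts even though they share the subset-of-men core.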
If we define most as being true if more than half of some set has a given property, then Most men laughed might have the meaning:" }, { "page_index": 493, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_024.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_024.png", "page_index": 493, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:45:04+07:00" }, "raw_text": "Knowledge Representation and Reasoning 11.4 Handling Natural Language Quantifications If we define most as being true if more than half of some set has a given property, then Most men laughed might have the meaning: ∃M : (M ⊆ {y | Man(y)} & |M| > |{y | Man(y)}| / 2) . ∀m : m ∈ M . Laughed(m)" }, { "page_index": 494, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_025.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_025.png", "page_index": 494, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:45:08+07:00" }, "raw_text": "Knowledge Representation and Reasoning 11.5 Time and Aspectual Classes of Verbs One of the central components of any knowledge representation that supports natural language is the treatment of verbs and time. Time In the logical form language, temporal information was handled in several ways.
There were modal operators to represent tense: PAST, PRES, PROG, FUT, and temporal connectives: BEFORE, DURING. There are several different types of time: time points, intervals, and durations. The following predicates are allowed for temporal relations: t1 < t2 - point/interval t1 is before point/interval t2" }, { "page_index": 495, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_026.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_026.png", "page_index": 495, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:45:12+07:00" }, "raw_text": "Knowledge Representation and Reasoning 11.5 Time and Aspectual Classes of Verbs t1 : t2 - interval t1 meets interval t2, or point t1 defines the beginning of interval t2, or point t2 defines the end of interval t1. t1 ⊆ t2 - point/interval t1 is contained in interval t2. Some predicates can be true only over intervals of time, whereas others can be true only at points, and others can be true at either.
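One way to make the three temporal predicates concrete is to encode a point or interval as a (start, end) pair, with a point being an interval whose endpoints coincide. This encoding is an illustrative assumption, not the book's:

```python
# Sketch: the slide's temporal relations over (start, end) pairs of
# integers with start <= end; a point has start == end.

def before(t1, t2):
    # t1 < t2 : point/interval t1 is before point/interval t2
    return t1[1] < t2[0]

def meets(t1, t2):
    # t1 : t2 — t1 meets t2 (t1 ends exactly where t2 begins)
    return t1[1] == t2[0]

def contained_in(t1, t2):
    # t1 ⊆ t2 : point/interval t1 is contained in interval t2
    return t2[0] <= t1[0] and t1[1] <= t2[1]

morning = (8, 12)
noon = (12, 12)          # a time point
afternoon = (12, 18)

print(before(morning, (13, 14)))      # True
print(meets(morning, afternoon))      # True
print(contained_in(noon, afternoon))  # True
```

Under this encoding, a predicate that holds only over intervals simply rejects arguments with start == end, which matches the slide's distinction between point- and interval-based predicates.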
Aspectual Classes of Verbs Sentences describe propositions that fall into at least three distinct classes: + those that describe states (stative propositions); + those that describe ongoing activities (activity propositions); + those that describe completed events (telic propositions)" }, { "page_index": 496, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_027.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_027.png", "page_index": 496, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:45:15+07:00" }, "raw_text": "Knowledge Representation and Reasoning 11.5 Time and Aspectual Classes of Verbs Example: - Sentences describing states: Jack is happy. I believe the world is flat. The door was swinging to and fro. - Sentences describing completed events: Jack fell asleep. They climbed the mountain in two days. Encoding Tense Tense operators can also be represented directly in the temporal logic without the need for modal operators." }, { "page_index": 497, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_028.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_028.png", "page_index": 497, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:45:19+07:00" }, "raw_text": "Knowledge Representation and Reasoning 11.5 Time and Aspectual Classes of Verbs Let's assume that the constant NOW1 denotes the current time. Given this, we could map the PAST operator into a formula that existentially quantifies over a time before now.
The sentence Jack was happy would map to the KR expression: ∃T1 : T1 < NOW1 . Happy(Jack1, T1) The same sentence in the simple present, Jack is happy: ∃T1 : NOW1 ⊆ T1 . Happy(Jack1, T1) And the simple future, Jack will be happy: ∃T1 : T1 > NOW1 . Happy(Jack1, T1)" }, { "page_index": 498, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_029.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_029.png", "page_index": 498, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:45:22+07:00" }, "raw_text": "Knowledge Representation and Reasoning 11.5 Time and Aspectual Classes of Verbs Specifically, Reichenbach (1947) developed a theory that tense gives information about three times: S - the time of speech E - the time of the event/state R - the reference time Example: Jack sings simple present: S = R, E = R Jack sang simple past: R < S, E = R (Test <literal>1, ..., <literal>n) - return true if there is some binding of the variables such that each literal is found in the database" }, { "page_index": 502, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_033.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_033.png", "page_index": 502, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:45:40+07:00" }, "raw_text": "Knowledge Representation and Reasoning 11.6 Procedural Semantics and Question Answering (Retrieve <var> <literal>1, ..., <literal>n) - like Test, but if it succeeds it returns every instance of the indicated variable that provides a solution. - For example, given the database in figure 11.3, the query (Retrieve ?x (FLIGHT ?x) (ATIME ?x CHI 1000HR)) will return the list (F2) because F2 is the only binding of ?x for which both these literals are in the database. - The procedural semantics approach gets more interesting as it interprets logical connectives and quantifiers, which have no corresponding construct in the relational database" }, { "page_index": 503, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_034.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_034.png", "page_index": 503, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:45:43+07:00" }, "raw_text": "(FLIGHT F1) (ATIME F1 CHI 1700HR) (FLIGHT F2) (ATIME F2 CHI 1000HR) (FLIGHT F3) (ATIME F3 CHI 900HR) (FLIGHT F4) (ATIME F4 BOS 1700HR) (AIRPORT BOS) (AIRPORT CHI) (DTIME F1 BOS 1600HR) (DTIME F2 BOS 900HR) (DTIME F3 BOS 800HR) (DTIME F4 CHI 1600HR) Figure 11.3 A simple database of airline schedules" }, { "page_index": 504, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_035.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_035.png", "page_index": 504, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:45:47+07:00" }, "raw_text": "Knowledge Representation and Reasoning 11.6 Procedural Semantics and Question Answering The logical operators are interpreted as follows. + Conjunctions: (& R1, ..., Rn) - will translate into a program of the form (CHECK-ALL-TRUE T(R1), ..., T(Rn)) + Disjunctions: (OR R1, ..., Rn) - will translate into a program of the form (FIND-ONE-TRUE T(R1), ..., T(Rn)) The procedure for negation is interpreted as follows.
(NOT R) - translates into a program of the form (UNLESS T(R))" }, { "page_index": 505, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_036.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_036.png", "page_index": 505, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:45:51+07:00" }, "raw_text": "Knowledge Representation and Reasoning 11.6 Procedural Semantics and Question Answering - Three quantifiers are important in question-answering applications: THE, EACH, and WH. + (THE x : Rx Px) - translates into a program (FIND-THE ?x T(R ?x) T(P ?x)) + (EACH x : Rx Px) - translates to a program (ITERATE ?x T(R ?x) T(P ?x)) + (WH x : Rx Px) - translates to a program (PRINT-ALL ?x T(R ?x) T(P ?x)) We now show some examples using the database in figure 11.3." }, { "page_index": 506, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_037.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_037.png", "page_index": 506, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:45:54+07:00" }, "raw_text": "Knowledge Representation and Reasoning 11.6 Procedural Semantics and Question Answering Example: the query Which flight to Chicago leaves at 4 PM? would have the logical form (after scoping) (WH f1 : (& (FLIGHT f1) (DEST f1 (NAME c1 \"Chicago\")) (LEAVE f1 (NAME t1 \"4 PM\")))) This would translate into a query of the form (PRINT-ALL ?f (FLIGHT ?f) (ATIME ?f CHI ?t) (DTIME ?f ?s 1600HR)) The DEST relation maps to an ATIME relation and the LEAVE predicate maps into the DTIME relation.
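The Retrieve/PRINT-ALL behavior described here can be sketched as a tiny evaluator over the figure 11.3 database: facts are tuples, variables are "?x" strings, and a conjunction of literals is solved by extending a binding one literal at a time (a CHECK-ALL-TRUE analog). The function names and data layout below are illustrative assumptions, not the book's implementation.

```python
# Sketch: a minimal procedural-semantics evaluator over figure 11.3.

DB = {
    ("FLIGHT", "F1"), ("ATIME", "F1", "CHI", "1700HR"),
    ("FLIGHT", "F2"), ("ATIME", "F2", "CHI", "1000HR"),
    ("FLIGHT", "F3"), ("ATIME", "F3", "CHI", "900HR"),
    ("FLIGHT", "F4"), ("ATIME", "F4", "BOS", "1700HR"),
    ("AIRPORT", "BOS"), ("AIRPORT", "CHI"),
    ("DTIME", "F1", "BOS", "1600HR"), ("DTIME", "F2", "BOS", "900HR"),
    ("DTIME", "F3", "BOS", "800HR"), ("DTIME", "F4", "CHI", "1600HR"),
}

def is_var(term):
    return isinstance(term, str) and term.startswith("?")

def match(pattern, fact, binding):
    # Return an extended binding if `pattern` unifies with `fact`, else None.
    if len(pattern) != len(fact):
        return None
    b = dict(binding)
    for p, f in zip(pattern, fact):
        if is_var(p):
            if b.get(p, f) != f:   # variable already bound to something else
                return None
            b[p] = f
        elif p != f:
            return None
    return b

def solve(literals, binding=None):
    # Yield every binding satisfying the conjunction of literals.
    if binding is None:
        binding = {}
    if not literals:
        yield binding
        return
    for fact in DB:
        b = match(literals[0], fact, binding)
        if b is not None:
            yield from solve(literals[1:], b)

def print_all(var, *literals):
    # (PRINT-ALL ?x ...) analog: every value of var in some solution.
    return sorted({b[var] for b in solve(list(literals))})

# "Which flight to Chicago leaves at 4 PM?"
print(print_all("?f", ("FLIGHT", "?f"),
                ("ATIME", "?f", "CHI", "?t"),
                ("DTIME", "?f", "?s", "1600HR")))  # ['F1']
```

The earlier Retrieve example works the same way: print_all("?x", ("FLIGHT", "?x"), ("ATIME", "?x", "CHI", "1000HR")) returns ['F2'], since F2 is the only binding making both literals database facts.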
Note that the departure location was not specified in the logical form and so is treated as a variable (?s) in the query." }, { "page_index": 507, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_038.png", "metadata": { "doc_type": "slide", "course_id": "CO3085", "source_file": "/workspace/data/converted/CO3085_Natural_Language_Processing/Chapter_11/slide_038.png", "page_index": 507, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:45:59+07:00" }, "raw_text": "EXERCISE OF CHAPTER 11 1) Give a plausible logic-based representation for the meaning of the following sentences. Focus on the interpretation of the plural noun phrases; where a sentence is ambiguous, give both interpretations. - Several men cried. - Seven men in the book met in the park. - All but three men bought a suit. 2) Given the sentence What is the arrival time of each flight from Boston? a) Translate the query into a logical form. b) Translate the logical form of a) into a procedural semantics program based on the database in figure 11.3. c) Show step by step how the computer executes the procedural semantics program to produce the answer." } ] }