{
"paper_id": "Y08-1048",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:38:00.498648Z"
},
"title": "Automatically Extracting Templates from Examples for NLP Tasks *",
"authors": [
{
"first": "Ethel",
"middle": [],
"last": "Ong",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "De La Salle University",
"location": {
"settlement": "Manila",
"country": "Philippines"
}
},
"email": "onge@dlsu.edu.ph"
},
{
"first": "Bryan",
"middle": [
"Anthony"
],
"last": "Hong",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "De La Salle University",
"location": {
"settlement": "Manila",
"country": "Philippines"
}
},
"email": ""
},
{
"first": "Vince",
"middle": [
"Andrew"
],
"last": "Nu\u00f1ez",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "De La Salle University",
"location": {
"settlement": "Manila",
"country": "Philippines"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we present the approaches used by our NLP systems to automatically extract templates for example-based machine translation and pun generation. Our translation system is able to extract an average of 73.25% correct translation templates, resulting in a translation quality that has a low word error rate of 18% when the test document contains sentence patterns matching the training set, to a high 85% when the test document is different from the training corpus. Our pun generator is able to extract 69.2% usable templates, resulting in computer-generated puns that received an average score of 2.13 as compared to 2.7 for human-generated puns from user feedback.",
"pdf_parse": {
"paper_id": "Y08-1048",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we present the approaches used by our NLP systems to automatically extract templates for example-based machine translation and pun generation. Our translation system is able to extract an average of 73.25% correct translation templates, resulting in a translation quality that has a low word error rate of 18% when the test document contains sentence patterns matching the training set, to a high 85% when the test document is different from the training corpus. Our pun generator is able to extract 69.2% usable templates, resulting in computer-generated puns that received an average score of 2.13 as compared to 2.7 for human-generated puns from user feedback.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Templates have been used in IE as extraction patterns to retrieve relevant information from documents (Muslea, 1999) , and in NLG as forms that can be filled in to generate syntactically correct and coherent text for human readers. They have also been used in machine translation (Cicekli and Guvenir, 2003) and (McTait, 2001) , and in pun generation (Ritchie et al, 2006) .",
"cite_spans": [
{
"start": 102,
"end": 116,
"text": "(Muslea, 1999)",
"ref_id": "BIBREF12"
},
{
"start": 280,
"end": 307,
"text": "(Cicekli and Guvenir, 2003)",
"ref_id": null
},
{
"start": 312,
"end": 326,
"text": "(McTait, 2001)",
"ref_id": "BIBREF11"
},
{
"start": 351,
"end": 372,
"text": "(Ritchie et al, 2006)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In this paper, we present two NLP systems. TExt (Go et al, 2006 and Nunez, 2008) , a bidirectional English-Filipino machine translator, extracts translation templates from a bilingual corpus, and together with a bilingual lexicon, uses these templates to translate an input text to another language. T-Peg (Hong, 2008) utilizes semantic and phonetic knowledge to capture the wordplay used in a training set of human jokes, resulting in templates that contain variables, tags, and word relationships that are used to generate punning riddles.",
"cite_spans": [
{
"start": 48,
"end": 67,
"text": "(Go et al, 2006 and",
"ref_id": "BIBREF5"
},
{
"start": 68,
"end": 80,
"text": "Nunez, 2008)",
"ref_id": null
},
{
"start": 306,
"end": 318,
"text": "(Hong, 2008)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Because manually creating templates can be tedious and time consuming, several researches have worked on automatically extracting templates from training examples that have been preprocessed. In our previous example-based MT, SalinWika (Bautista et al, 2005) , templates are extracted from a bilingual corpus that has been pre-tagged and manually annotated with word features, resulting in a long training process. Its successor, TExt (Go et al, 2006) , did away with a tagger and instead requires a parallel bilingual corpus and an English-Filipino lexicon to align pairs of untagged sentences to extract translation templates.",
"cite_spans": [
{
"start": 222,
"end": 258,
"text": "MT, SalinWika (Bautista et al, 2005)",
"ref_id": null
},
{
"start": 435,
"end": 451,
"text": "(Go et al, 2006)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Our pun generator, T-Peg (Hong, 2008) , on the other hand, subjects the training examples through a pre-processing stage to identify nouns, verbs and adjectives. Instead of manually annotating the example set, the training algorithm relies on existing linguistic resources and tools to perform its task.",
"cite_spans": [
{
"start": 25,
"end": 37,
"text": "(Hong, 2008)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "TExt Translation (Go et al, 2006 ) is an EBMT system that automatically extracts translation templates from a bilingual corpus and uses these to translate English text to Filipino and vice versa. It relies on a bilingual corpus for its training examples, which contains a set of sentences in the source language with a corresponding translation in the target language. Correspondences between the sentences are learned and stored in a database of translation templates. A translation template is a bilingual pair of patterns where corresponding words and phrases are aligned and replaced with variables. Each template is a sentence preserving the syntactic structure and ordering of words in the source text, regardless of the variance in the sentence structures of the source and target languages. During translation, the input sentence is used to find a matching source template, while the target template is used to generate the translation.",
"cite_spans": [
{
"start": 17,
"end": 32,
"text": "(Go et al, 2006",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting and Using Templates for Machine Translation",
"sec_num": "2."
},
{
"text": "TExt learns two types of translation templates using the Translation Template Learner heuristic presented in (Cicekli and G\u00fcvenir, 2003) . A similarity translation template contains a sequence of similar items learned from a pair of input sentences and variables representing the differences. A difference translation template contains a sequence of differing items from the pair of input sentences, and variables representing the similarities. Consider the sentence pairs S1 and S2, and the learned similarity (T1) and difference templates (T2 and T3).",
"cite_spans": [
{
"start": 109,
"end": 136,
"text": "(Cicekli and G\u00fcvenir, 2003)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "2.1.Translation Templates and Chunks",
"sec_num": null
},
{
"text": "Naglalakad ang batang lalaki. S2: The teacher is walking.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "S1: The boy is walking.",
"sec_num": null
},
{
"text": "Naglalakad ang guro. Using the lexicon to align the corresponding English and Filipino words in the input sentences, the tokens \"The\", \"is walking\", and \"Naglalakad ang\" are retained as constants of T1, while \"boy/batang lalaki\", and \"teacher/guro\" are retained as constants of T2 and T3, respectively. [1], [2] , and [3] are variables in the template. A template variable, called chunk, is represented by a numeric value, e.g., [1] , to refer to its domain. The domain allows chunks to have a reference from their source template. Specific chunks are labelled as [X.n] , where X is its domain and n is its sequence number in the domain. Only the domain is needed to identify if a chunk can be used in translation. For example, if the domain in a template is [X] , then any chunk with a domain \"X\" can be used to fill the variables in the template. From S1 and S2, chunks [1.1] and [1.2] are learned. If another chunk [1.3] is learned from a different set of input sentence pairs in a later training session, then all these chunks can be used during translation to fill variable [1] If refinement cannot be performed, DTS is performed to compare the new sentences pair with other aligned sentence pairs. Both Similarity Template Learning (STL) and Difference Template Learning (DTL), as presented in Cicekli and Guvenir (2003) , are performed. The differing elements in the input are created as chunks for the similarity templates, while the similar elements are created as chunks for the difference templates. DTL always generates two difference templates for each matching input sentence pairs. Consider sentence pairs S4 and S5. S4: My favorite pet is a dog.",
"cite_spans": [
{
"start": 308,
"end": 311,
"text": "[2]",
"ref_id": null
},
{
"start": 318,
"end": 321,
"text": "[3]",
"ref_id": null
},
{
"start": 429,
"end": 432,
"text": "[1]",
"ref_id": null
},
{
"start": 564,
"end": 569,
"text": "[X.n]",
"ref_id": null
},
{
"start": 759,
"end": 762,
"text": "[X]",
"ref_id": null
},
{
"start": 1079,
"end": 1082,
"text": "[1]",
"ref_id": null
},
{
"start": 1300,
"end": 1326,
"text": "Cicekli and Guvenir (2003)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "S1: The boy is walking.",
"sec_num": null
},
{
"text": "Aso ang aking paboritong alaga. S5: My favorite color is red.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "S1: The boy is walking.",
"sec_num": null
},
{
"text": "Pula ang aking paboritong kulay. Templates can also be derived from chunks using DTC. Consider the new sentence pair S6 and existing chunks [10] and [11] . DTC simply takes matching chunks from the knowledge base and uses them as variables to replace parts of S6, resulting in template T8.",
"cite_spans": [
{
"start": 140,
"end": 144,
"text": "[10]",
"ref_id": null
},
{
"start": 149,
"end": 153,
"text": "[11]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "S1: The boy is walking.",
"sec_num": null
},
{
"text": "Filipinos are known to be cheerful and hospitable. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "S6:",
"sec_num": null
},
{
"text": "Input sentence tokens are analyzed to collect candidate templates and chunks, which must have at least one word used in the input sentence. The candidates are assigned scores according to the structure of the template or chunk, the presence or absence of chunk variables in templates, and the presence of word matches in templates. The translation output that produces the highest total score is used. In case of a tie, the first candidate with the highest score is selected.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2.3.Using the Learned Templates in Translation",
"sec_num": null
},
{
"text": "T-Peg (Hong, 2008) generates punning riddles using templates learned from training examples of human-generated puns. Punning riddles are jokes that use wordplay and covers pronunciation, spelling, and possible semantic similarities and differences. Various resources are utilized by the learning algorithm, namely the Unisyn phonetic lexicon (Fitt, 2002) that provides the phonological information of words, the MontyTagger (Liu, 2003) for POS tagging, the Electronic Lexical Knowledge Base (Jarmasz and Szpakowicz, 2006) to get the base form of words, the WordNet (2006) for synonym lookup, and the ConceptNet (Liu, et. al. 2004) for semantic analysis to describe the relations between objects.",
"cite_spans": [
{
"start": 6,
"end": 18,
"text": "(Hong, 2008)",
"ref_id": "BIBREF6"
},
{
"start": 342,
"end": 354,
"text": "(Fitt, 2002)",
"ref_id": "BIBREF4"
},
{
"start": 424,
"end": 435,
"text": "(Liu, 2003)",
"ref_id": "BIBREF10"
},
{
"start": 491,
"end": 521,
"text": "(Jarmasz and Szpakowicz, 2006)",
"ref_id": "BIBREF8"
},
{
"start": 611,
"end": 630,
"text": "(Liu, et. al. 2004)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting and Using Templates for Pun Generation",
"sec_num": "3."
},
{
"text": "A T-Peg template contains the source pun (in question-answer format) with variables replacing keywords in the pun. Variables are of three types. Similar-sounding variables represent words with the same pronunciation as the regular variable, for example, waist and waste. Compound word variables are two variables that combine to form a word, for example sun and burn.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3.1.Extracting Punning Templates",
"sec_num": null
},
{
"text": "A template is annotated with word relationships, represented as <varName1> <relationship type> <varName2>, to show how one variable is related to another. Synonym relationships denote that the first variable is synonymous with the second variable. Isa-word relationships denote that the first variable combined with the second variable should form a word. Sounds-like relationships denote that the first variable should have the same pronunciation with the second variable. Semantic relationships show how the first variable is related to the second variable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3.1.Extracting Punning Templates",
"sec_num": null
},
{
"text": "The training corpus is preprocessed by the tagger, stemmer, and synonym finder. The tagged corpus undergoes valid word selection to identify which of the nouns, verbs, and adjectives in the punning riddle are candidate variables. Word relationships between these variables are then determined by the phonetic checker, synonym checker, and semantic analyzer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3.1.Extracting Punning Templates",
"sec_num": null
},
{
"text": "Consider the pun P1 and its corresponding template T1, where \"Xn\" represents question-side variables, and \"Yn\" represents answer side variables. \"<var>-0\" represents the similar sounding word of <var> from Unisyn, for example, Y1-0 represents the word \"sun\" which has the same pronunciation as the keyword \"son\" (variable Y1). Table 1 lists the semantic word relationships derived from ConceptNet for the variables of P1.",
"cite_spans": [],
"ref_spans": [
{
"start": 327,
"end": 334,
"text": "Table 1",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "3.1.Extracting Punning Templates",
"sec_num": null
},
{
"text": "A son-burn. (Binsted, 1996) T1: What kind of <X3> <X4>? A <Y1>-<Y2>. A compound word (word with a dash \"-\") is also checked and marked if at least one of its parts has an existing word relationship. From P1, the compound word relationship extracted is Y1-0 IsAWord Y2 (sun IsAWord burn).",
"cite_spans": [
{
"start": 12,
"end": 27,
"text": "(Binsted, 1996)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "P1: What kind of boy burns?",
"sec_num": null
},
{
"text": "The extracted templates are then validated for usability. A template is usable if all of the word relationships form a complete chain. If the chain is incomplete, the template cannot be used in the generation phase since not all of the variables will be filled with possible values.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "P1: What kind of boy burns?",
"sec_num": null
},
{
"text": "Generation of puns starts with a keyword input, which is tried with all of the available templates, by substituting it on each variable that has the same POS tag. Word relationship grouping is then performed. Given two variables, say X1 and Y4, there may be more than one word relationship connecting these two variables, e.g., X1 IsA Y4 and X1 ConceptuallyRelatedTo Y4. A word relationship group is satisfied if at least one of the word relationships in the group is satisfied. Consider the pun P3 and its word relationship groupings shown in Table 2. P3: How is a window like a headache? They are both panes. (Binsted, 1996) T3: How is a <X3> like a <X5>? They are both <Y4>.",
"cite_spans": [
{
"start": 611,
"end": 626,
"text": "(Binsted, 1996)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 544,
"end": 552,
"text": "Table 2.",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "3.2.Using the Learned Templates in Generation",
"sec_num": null
},
{
"text": "The possible word generator checks if the variables can be populated with values to satisfy the word relationships starting with the keyword, while the possible word connector connects the possible words together to form groups of variables to be used for a sentence. The surface form generator takes the groups of variables and substitutes them to the slots in the template to form the punning riddle, before passing to the surface realizer for output to the user. Given the keyword \"garbage\", the possible values for the variables of template T3 and the sequence of their derivation from the linguistic resources are as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3.2.Using the Learned Templates in Generation",
"sec_num": null
},
{
"text": "X5 ==> Y4-0 ==> Y4 ==> X3 Garbage ==> waste ==> waist ==> trunk",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3.2.Using the Learned Templates in Generation",
"sec_num": null
},
{
"text": "The word relationships that were satisfied and the filled template are shown in Table 3 , resulting in the T-Peg generated pun \"How is a trunk like a garbage? They are both waists.\" ",
"cite_spans": [],
"ref_spans": [
{
"start": 80,
"end": 87,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "3.2.Using the Learned Templates in Generation",
"sec_num": null
},
{
"text": "TExt was trained with four sets of bilingual corpora containing a total of 163 sentence pairs. Corpora#1-3, containing 49, 15 and 41 sentences, respectively, were created by the proponents and verified by a linguist; they contain sentences that have similar structures so that templates can be learned. Corpus #4, containing 58 sentences, was adapted from an essay given by the Filipino Department of De La Salle University -Manila. Using a Strict Chunk Alignment with Splitting (SCAS) approach in deriving templates from sentences requires all tokens to be aligned and the number of chunks in the source to be equal to that in the target. This resulted in learning more templates that are of good quality, as shown in Table 4 (for Corpus #4), compared to the Loose Chunk Alignment approach (LCA). Correctness refers to the actual templates and chunks learned as well as the proper alignment of tokens in the source and target template or chunk. Notice that LCA has a high error rate, and learning is not bi-directional as it did not learn the same number of templates and chunks. The extracted templates also contained too many frequently occurring words which were filtered to prevent learning templates that have small coverage during translation since they contain only common words as constants. Table 5 shows the results of performing common words filtering combined with SCAS for all four corpora. NCWF (no common words filtering) generated fewer templates and more chunks. CWF (common words filtering) generated more templates and fewer chunks which is preferable because templates are able to capture proper sentence structures that preserve word order in the resulting translation. More templates would also mean more candidates for refinement in subsequent training. The last column in Table 5 shows the number of templates learned from Corpora #1-4 when both similarity and difference template learning algorithms are used. 
The additional templates were mostly derived from existing chunks (Nunez, 2007) .",
"cite_spans": [
{
"start": 2002,
"end": 2015,
"text": "(Nunez, 2007)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 719,
"end": 726,
"text": "Table 4",
"ref_id": "TABREF7"
},
{
"start": 1301,
"end": 1308,
"text": "Table 5",
"ref_id": "TABREF8"
},
{
"start": 1797,
"end": 1804,
"text": "Table 5",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Translation Quality using Extracted Templates",
"sec_num": "4."
},
{
"text": "To determine the translation quality using the learned templates and chunks, Corpus #5 containing 30 sentences was derived from Corpora #1-4. Table 6 shows the number of sentences that were translated using templates alone, chunks alone, word-for-word translation, and combination of all three. The STL approach was able to match more templates to the input text while the DTL approach utilizes more chunks. These results correspond to the training results, where STL learned more templates and DTL learned more chunks. Table 7 shows the evaluated translation output of Corpora #5 and #6 (containing 126 sentence pairs whose patterns and words do not match the training set). The automatic evaluation methods used were word error rate (WER), sentence error rate (SER), and bilingual evaluation understudy (BLEU). For Corpus #5, since STL was able to match more templates to the input text, the translation is of better quality with lower error rates. In the translation of Corpus #6, cases arise when no matching templates can be found for an input sentence. Chunks are then used, resulting in poorer quality translation with 100% sentence error rate.",
"cite_spans": [],
"ref_spans": [
{
"start": 142,
"end": 149,
"text": "Table 6",
"ref_id": "TABREF9"
},
{
"start": 520,
"end": 527,
"text": "Table 7",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Translation Quality using Extracted Templates",
"sec_num": "4."
},
{
"text": "T-Peg was trained with a corpus of 39 punning riddles derived from JAPE (Binsted, 1996) and The Crack-a-Joke Book (Webb, 1978) . Each riddle generates one template, and of these, only 27 (69.2%) are usable. The unusable templates contain missing relationships due to two factors. The phonetic lexicon (Unisyn) contains entries only for valid words and not for syllables. Thus, in P4, the \"house-wall\" relationship is missing because \"wal\" is not found in Unisyn to produce the word \"wall\". The semantic analyzer (ConceptNet) is also unable to determine the relationship between two words, for example, in P5, the \"infantry-army\" relationship.",
"cite_spans": [
{
"start": 72,
"end": 87,
"text": "(Binsted, 1996)",
"ref_id": "BIBREF1"
},
{
"start": 114,
"end": 126,
"text": "(Webb, 1978)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Quality of Puns Generated from Learned Templates",
"sec_num": "5."
},
{
"text": "P4: What nuts can you use to build a house? Wal-nuts. (Binsted, 1996) P5: What part of the army could a baby join? The infant-ry. (Webb, 1978) The usable templates were manually verified if they contain sufficient information in capturing the wordplay. The rating used is based on the word relationships in the pun. 10 templates were chosen based on their completeness and correctness in capturing the most crucial word relationships. The 10 templates received an average score of 4.0 out of 5, with missing word relationships due to limitations of Unisyn and ConceptNet, for example, in P6, between \"heaviest\" and \"weight\"; while in P7, between \"tap\" and \"plumber\" and the syllable \"ber\" that was incorrectly classified as a valid word.",
"cite_spans": [
{
"start": 54,
"end": 69,
"text": "(Binsted, 1996)",
"ref_id": "BIBREF1"
},
{
"start": 130,
"end": 142,
"text": "(Webb, 1978)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Quality of Puns Generated from Learned Templates",
"sec_num": "5."
},
{
"text": "P6: Which bird can lift the heaviest weights? The crane. (Webb, 1978) T6: Which <X1> can <X3> the heaviest <X6>? The <Y1>. P7: What kind of fruit fixes taps? The plum-ber. (Binsted, 1996) T7: What kind of <X3> <X4> taps? A <Y1>-<Y2>. Table 8 lists sample puns in the training set and the corresponding generated puns. User feedback gave an average score of 2.7 to the original puns, while the generated puns received an average score of 2.13, showing that computer puns are almost at par with human-made puns. What do you call a lizard on the wall? A rep-tile. (Binsted, 1996) What do you call a movie on the floor? A holly-wood.. What part of a fish weighs the most? The scales. (Webb, 1978) What part of a man lengthens the most? The shadow. What keys are furry? Mon-keys. (Webb, 1978) What verses are endless? Uni -verses .",
"cite_spans": [
{
"start": 57,
"end": 69,
"text": "(Webb, 1978)",
"ref_id": "BIBREF16"
},
{
"start": 172,
"end": 187,
"text": "(Binsted, 1996)",
"ref_id": "BIBREF1"
},
{
"start": 561,
"end": 576,
"text": "(Binsted, 1996)",
"ref_id": "BIBREF1"
},
{
"start": 680,
"end": 692,
"text": "(Webb, 1978)",
"ref_id": "BIBREF16"
},
{
"start": 775,
"end": 787,
"text": "(Webb, 1978)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 234,
"end": 241,
"text": "Table 8",
"ref_id": "TABREF11"
}
],
"eq_spans": [],
"section": "Quality of Puns Generated from Learned Templates",
"sec_num": "5."
},
{
"text": "The works presented here explored the use of learning algorithms to automatically extract templates from training examples provided by the user. TExt demonstrated that similarity and difference bilingual translation templates can be extracted from an unannotated and untagged corpus. The learning algorithm also performs template refinement and extracts chunks to supplement the limited lexicon and for deriving additional templates. Further work on TExt may involve semantic analysis of the words in the input sentences in order to select the most appropriate translation for a given word that has different meanings depending on its context in the sentence. The addition of a morphological analyzer for English and Filipino can also help the alignment process of the system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6."
},
{
"text": "T-peg demonstrated that computers can be trained to be as humorous as humans by automatically extracting patterns of human-created jokes and using these as templates for the system to create its own jokes, utilizing various linguistic resources. Computer-generated jokes can find application in human-computer dialog systems, to make the conversation and interaction between the human and the computer sound more natural. Future work for T-Peg involves exploring template refinement or merging, which could improve the quality of the learned templates. Some form of manual intervention may also be added to increase the number of usable templates by addressing the missing word relationships caused by limitations of the external linguistic resources. We are planning to explore automatic extraction of story patterns for use by our children story generation system, Picture Books (Hong et al, 2008) . The templates pair approach of TExt can be used to present a basic story structure in different forms suitable for various reading age groups. The approach of T-Peg in extracting and storing word relationships can be explored further as a means of teaching vocabulary and related concepts to young readers.",
"cite_spans": [
{
"start": 881,
"end": 899,
"text": "(Hong et al, 2008)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6."
},
{
"text": "22nd Pacific Asia Conference on Language, Information and Computation, pages 452-459",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "SalinWika: An Example-Based Machine Translation System Using Templates",
"authors": [
{
"first": "M",
"middle": [],
"last": "Bautista",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Fule",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Gaw",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Hernandez",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bautista, M., Fule, M., Gaw, K., and K.L Hernandez. 2004. SalinWika: An Example-Based Machine Translation System Using Templates. Undergraduate Thesis. De La Salle University, Manila.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Machine Humour: An Implemented Model of Puns",
"authors": [
{
"first": "K",
"middle": [],
"last": "Binsted",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Binsted, K. 1996. Machine Humour: An Implemented Model of Puns. Ph.D. Thesis. University of Edinburgh.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Learning Translation Templates from Bilingual Translation Examples. Recent Advances in Example-Based Machine Translation",
"authors": [
{
"first": "I",
"middle": [],
"last": "Cicekli",
"suffix": ""
},
{
"first": "H",
"middle": [
"A"
],
"last": "G\u00fcvenir",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "255--286",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cicekli, I. and H.A. G\u00fcvenir. 2003. Learning Translation Templates from Bilingual Translation Examples. Recent Advances in Example-Based Machine Translation, pp. 255-286. Kluwer Publishers.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "An Introduction to Natural Language Generation",
"authors": [
{
"first": "R",
"middle": [],
"last": "Dale",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dale, R. 1995. An Introduction to Natural Language Generation. Technical Report, Microsoft Research Institute (MRI). Macquarie University, Australia.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Unisyn Lexicon Release",
"authors": [
{
"first": "S",
"middle": [],
"last": "Fitt",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fitt, S. 2002. Unisyn Lexicon Release. http://www.cstr.ed.ac.uk/projects/unisyn/.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "TExt Translation: Template Extraction for a Bidirectional English-Filipino Example-Based Machine Translation",
"authors": [
{
"first": "K",
"middle": [],
"last": "Go",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Morga",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Nunez",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Veto",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Go, K., Morga, M., Nunez, V. and F. Veto. 2006. TExt Translation: Template Extraction for a Bidirectional English-Filipino Example-Based Machine Translation. Undergraduate Thesis. De La Salle University, Manila.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Template-Based Pun Extractor and Generator",
"authors": [
{
"first": "B",
"middle": [],
"last": "Hong",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hong, B. 2008. Template-Based Pun Extractor and Generator. MSCS Thesis. De La Salle University, Manila.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Picture Books: An Automated Story Generator",
"authors": [
{
"first": "A",
"middle": [],
"last": "Hong",
"suffix": ""
},
{
"first": "J",
"middle": [
"T"
],
"last": "Siy",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Solis",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Tabirao",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hong, A., Siy, J.T., Solis, C. and E. Tabirao. 2008. Picture Books: An Automated Story Generator. Ongoing Undergraduate Thesis. De La Salle University, Manila.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Roget's Thesaurus -Electronic Lexical Knowledge Base ELKB",
"authors": [
{
"first": "M",
"middle": [],
"last": "Jarmasz",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Szpakowicz",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jarmasz, M. and S. Szpakowicz. 2006. Roget's Thesaurus -Electronic Lexical Knowledge Base ELKB. http://www.nzdl.org/ELKB/.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "MontyTagger",
"authors": [
{
"first": "H",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liu, H. 2003. MontyTagger. http://web.media.mit.edu/~hugo/montytagger/.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Linguistic Knowledge and Complexity in an EBMT System Based on Translation Patterns",
"authors": [
{
"first": "K",
"middle": [],
"last": "Mctait",
"suffix": ""
}
],
"year": 2001,
"venue": "MT Summit VIII",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "McTait, K. 2001. Linguistic Knowledge and Complexity in an EBMT System Based on Translation Patterns. In MT Summit VIII, September 2001, Spain.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Extraction Patterns for Information Extraction Tasks: A Survey",
"authors": [
{
"first": "I",
"middle": [],
"last": "Muslea",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings AAAI-99 Workshop on Machine Learning for Information Extraction",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Muslea, I. 1999. Extraction Patterns for Information Extraction Tasks: A Survey. Proceedings AAAI-99 Workshop on Machine Learning for Information Extraction.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Combining Similarity and Difference Templates for a Bidirectional Example-Based Machine Translation",
"authors": [
{
"first": "V",
"middle": [],
"last": "Nunez",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nunez, V. 2007. Combining Similarity and Difference Templates for a Bidirectional Example- Based Machine Translation. MSCS Thesis. De La Salle University, Manila.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Extracting and Using Translation Templates in an Example-Based Machine Translation System",
"authors": [
{
"first": "E",
"middle": [],
"last": "Ong",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Go",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Morga",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Nunez",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Veto",
"suffix": ""
}
],
"year": 2007,
"venue": "Journal of Research in Science, Computing, and Engineering",
"volume": "4",
"issue": "3",
"pages": "81--98",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ong E., Go, K., Morga, M., Nunez, V. and F. Veto. 2007. Extracting and Using Translation Templates in an Example-Based Machine Translation System. Journal of Research in Science, Computing, and Engineering, 4(3), 81-98. De La Salle University, Manila.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The STANDUP Interactive Riddle Builder",
"authors": [
{
"first": "G",
"middle": [],
"last": "Ritchie",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Manurung",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Pain",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Waller",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Mara",
"suffix": ""
}
],
"year": 2006,
"venue": "IEEE Intelligent Systems",
"volume": "21",
"issue": "",
"pages": "67--69",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ritchie, G., Manurung, R., Pain, H., Waller, A., and O'Mara, D. (2006). The STANDUP Interactive Riddle Builder. In IEEE Intelligent Systems, 21(2), 67-69, March/April 2006.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "The Crack-a-Joke Book. Puffin Books",
"authors": [
{
"first": "K",
"middle": [],
"last": "Webb",
"suffix": ""
}
],
"year": 1978,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Webb, K. 1978. The Crack-a-Joke Book. Puffin Books. London, England.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "WordNet: A Lexical Database for the English Language",
"authors": [
{
"first": "",
"middle": [],
"last": "Wordnet",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "WordNet, 2006. WordNet: A Lexical Database for the English Language. Princeton University.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"content": "<table><tr><td colspan=\"2\">[4.2]: boy</td><td colspan=\"2\">lalaki</td><td>[5.2]: hopping</td><td>nagkakandirit</td></tr><tr><td colspan=\"2\">[6.2]: park</td><td colspan=\"2\">parke</td></tr><tr><td/><td/><td/><td/><td>in T1.</td></tr><tr><td colspan=\"2\">[1.1]: boy</td><td colspan=\"2\">batang lalaki</td><td>[1.2]: teacher</td><td>guro</td></tr><tr><td colspan=\"3\">[1.3]: carpenter</td><td>karpentero</td></tr><tr><td colspan=\"4\">2.2.Learning Translation Templates</td></tr><tr><td colspan=\"5\">Aligned sentence pairs are analyzed and translation templates are extracted following three</td></tr><tr><td colspan=\"5\">steps, namely template refinement (TR), deriving templates from sentences (DTS), and deriving</td></tr><tr><td colspan=\"5\">templates from chunks (DTC). TR compares an aligned sentence pair against existing templates</td></tr><tr><td colspan=\"5\">in the database. An aligned sentence pair is said to match a given template if it contains a token</td></tr><tr><td colspan=\"5\">that matches exactly with a corresponding token in the template itself. There must be a</td></tr><tr><td colspan=\"5\">corresponding match in both the source and target languages for the template to be considered.</td></tr><tr><td colspan=\"5\">Through these similarities, a candidate refinement is identified. Consider the input sentence</td></tr><tr><td colspan=\"5\">pair S3, and the existing template T4 and chunks [4], [5] and [6].</td></tr><tr><td>S3:</td><td colspan=\"4\">The boy is hopping in the park.</td></tr><tr><td/><td colspan=\"4\">Nagkakandirit ang lalaki sa parke.</td></tr><tr><td>T4:</td><td colspan=\"4\">The [4] is [5] in the [6].</td><td>[5] ang [4] sa [6].</td></tr><tr><td colspan=\"2\">[4.1]: girl</td><td colspan=\"2\">babae</td><td>[5.1]: walking</td><td>naglalakad</td></tr><tr><td colspan=\"3\">[6.1]: street</td><td>kalsada</td></tr></table>",
"html": null,
"text": "TR considers S3 as a candidate refinement for T4 because of their matching tokens (in italics). The identified differences are used to create new chunks, namely[4.2], [5.2] and[6.2].",
"num": null,
"type_str": "table"
},
"TABREF2": {
"content": "<table><tr><td>T5:</td><td colspan=\"4\">My favorite [7] is [8]</td><td>[8] ang aking paboritong [7]</td></tr><tr><td colspan=\"2\">[7.1]: pet</td><td>alaga</td><td/><td>[7.2]: color</td><td>kulay</td></tr><tr><td colspan=\"2\">[8.1]: a dog</td><td>aso</td><td/><td>[8.2]: red</td><td>pula</td></tr><tr><td>T6:</td><td colspan=\"3\">[9] pet is a dog</td><td>Aso ang [9] alaga.</td></tr><tr><td>T7:</td><td colspan=\"3\">[9] color is red</td><td>Pula ang [9] kulay.</td></tr><tr><td colspan=\"3\">[9.1]: My favorite</td><td colspan=\"2\">aking paboritong</td></tr></table>",
"html": null,
"text": "All similar tokens between S4 and S5 (in italics) are preserved as constants in the new similarity template T5 while the differing elements are created as chunks[7] and[8]. On the other hand, all differing tokens are preserved as constants in the new difference templates T6 and T7 while the similar element is created as a new chunk[9].",
"num": null,
"type_str": "table"
},
"TABREF4": {
"content": "<table><tr><td>Word Relationship</td><td>For Readability</td></tr><tr><td>X3 ConceptuallyRelatedTo Y1</td><td>boy ConceptuallyRelatedTo son</td></tr><tr><td>X4 ConceptuallyRelatedTo Y1-0</td><td>burn ConceptuallyRelatedTo sun</td></tr><tr><td>Y1-0 CapableOf Y2</td><td>sun CapableOf burn</td></tr></table>",
"html": null,
"text": "Word relationships extracted from P1 using ConceptNet",
"num": null,
"type_str": "table"
},
"TABREF5": {
"content": "<table><tr><td>Word Relationship</td><td>For Readability</td></tr><tr><td>X3 ConceptuallyRelatedTo Y4</td><td>window ConceptuallyRelatedTo pane</td></tr><tr><td>Y4 ConceptuallyRelatedTo X3</td><td>pane ConceptuallyRelatedTo window</td></tr><tr><td>Y4 PartOf X3</td><td>pane PartOf window</td></tr><tr><td>X5 ConceptuallyRelatedTo Y4-0</td><td>headache ConceptuallyRelatedTo pain</td></tr><tr><td>X5 IsA Y4-0</td><td>headache IsA pain</td></tr><tr><td>Y4-0 ConceptuallyRelatedTo X5</td><td>pain ConceptuallyRelatedTo headache</td></tr><tr><td>Y4-0 SoundsLike Y4</td><td>pain SoundsLike panes</td></tr></table>",
"html": null,
"text": "Word relationship groupings for P3",
"num": null,
"type_str": "table"
},
"TABREF6": {
"content": "<table><tr><td>Word Relationship</td><td>Filled Template</td></tr><tr><td>X3 ConceptuallyRelatedTo Y4</td><td/></tr><tr><td>Y4 ConceptuallyRelatedTo X3</td><td/></tr><tr><td>Y4 PartOf X3</td><td>waist PartOf trunk</td></tr><tr><td>X5 ConceptuallyRelatedTo Y4-0</td><td/></tr><tr><td>X5 IsA Y4-0</td><td>garbage IsA waste</td></tr><tr><td>Y4-0 ConceptuallyRelatedTo X5</td><td/></tr><tr><td>Y4-0 SoundsLike Y4</td><td>waste SoundsLike waist</td></tr></table>",
"html": null,
"text": "Filled template T3 for keyword \"garbage\"",
"num": null,
"type_str": "table"
},
"TABREF7": {
"content": "<table><tr><td>LCA</td><td>SCAS</td></tr></table>",
"html": null,
"text": "Test results for chunk alignment algorithms applied on Corpus #4",
"num": null,
"type_str": "table"
},
"TABREF8": {
"content": "<table><tr><td>SCAS with</td><td>NCWF</td><td/><td>CWF</td></tr><tr><td>Template Learning Algorithm</td><td>STL</td><td>STL</td><td>STL + DTL</td></tr><tr><td>Total # of template pairs learned</td><td>59</td><td>73</td><td>119</td></tr><tr><td>Total # of chunk pairs learned</td><td>237</td><td>210</td><td>218</td></tr></table>",
"html": null,
"text": "Test results for common words filtering with strict chunk alignment algorithm",
"num": null,
"type_str": "table"
},
"TABREF9": {
"content": "<table/>",
"html": null,
"text": "Using templates in the translation of Corpus #5",
"num": null,
"type_str": "table"
},
"TABREF10": {
"content": "<table><tr><td/><td/><td>Corpus #5</td><td/><td colspan=\"2\">Corpus #6</td><td/></tr><tr><td>Approach</td><td colspan=\"2\">WER (%) SER (%)</td><td>BLEU</td><td colspan=\"2\">WER (%) SER (%)</td><td>BLEU</td></tr><tr><td/><td/><td colspan=\"3\">English to Filipino Translation</td><td/><td/></tr><tr><td>STL + DTL</td><td>15.17</td><td>73.33</td><td>0.7126</td><td>89.90</td><td>100.00</td><td>0.0523</td></tr><tr><td>STL</td><td>13.49</td><td>60.00</td><td>0.7470</td><td>89.96</td><td>100.00</td><td>0.0517</td></tr><tr><td>DTL</td><td>43.25</td><td>86.67</td><td>0.4531</td><td>91.69</td><td>100.00</td><td>0.0299</td></tr><tr><td/><td/><td colspan=\"3\">Filipino to English Translation</td><td/><td/></tr><tr><td>STL + DTL</td><td>21.85</td><td>63.33</td><td>0.6771</td><td>80.78</td><td>100.00</td><td>0.0334</td></tr><tr><td>STL</td><td>18.12</td><td>56.67</td><td>0.6990</td><td>83.19</td><td>100.00</td><td>0.0322</td></tr><tr><td>DTL</td><td>55.49</td><td>83.33</td><td>0.3455</td><td>85.46</td><td>100.00</td><td>0.0337</td></tr></table>",
"html": null,
"text": "Error rates in the translation of Corpora #5 and #6",
"num": null,
"type_str": "table"
},
"TABREF11": {
"content": "<table><tr><td>Training</td></tr></table>",
"html": null,
"text": "Examples of generated punning riddles",
"num": null,
"type_str": "table"
}
}
}
}