| { |
| "paper_id": "A97-1016", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T02:14:27.072341Z" |
| }, |
| "title": "Automatic Acquisition of Two-Level Morphological Rules", |
| "authors": [ |
| { |
| "first": "Pieter", |
| "middle": [], |
| "last": "Theron", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Stellenbosch University", |
| "location": { |
| "postCode": "7600", |
| "settlement": "Stellenbosch", |
| "country": "South Africa" |
| } |
| }, |
"email": "ptheron@cs.sun.ac.za"
| }, |
| { |
| "first": "Ian", |
| "middle": [], |
| "last": "Cloete", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Stellenbosch University", |
| "location": { |
"postCode": "7600",
| "settlement": "Stellenbosch", |
| "country": "South Africa" |
| } |
| }, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
"abstract": "We describe and experimentally evaluate a complete method for the automatic acquisition of two-level rules for morphological analyzers/generators. The input to the system is sets of source-target word pairs, where the target is an inflected form of the source. There are two phases in the acquisition process: (1) segmentation of the target into morphemes and (2) determination of the optimal two-level rule set with minimal discerning contexts. In phase one, a minimal acyclic finite state automaton (AFSA) is constructed from string edit sequences of the input pairs. Segmentation of the words into morphemes is achieved through viewing the AFSA as a directed acyclic graph (DAG) and applying heuristics using properties of the DAG as well as the elementary edit operations. For phase two, the determination of the optimal rule set is made possible with a novel representation of rule contexts, with morpheme boundaries added, in a new DAG. We introduce the notion of a delimiter edge. Delimiter edges are used to select the correct two-level rule type as well as to extract minimal discerning rule contexts from the DAG. Results are presented for English adjectives, Xhosa noun locatives and Afrikaans noun plurals.",
| "pdf_parse": { |
| "paper_id": "A97-1016", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
"text": "We describe and experimentally evaluate a complete method for the automatic acquisition of two-level rules for morphological analyzers/generators. The input to the system is sets of source-target word pairs, where the target is an inflected form of the source. There are two phases in the acquisition process: (1) segmentation of the target into morphemes and (2) determination of the optimal two-level rule set with minimal discerning contexts. In phase one, a minimal acyclic finite state automaton (AFSA) is constructed from string edit sequences of the input pairs. Segmentation of the words into morphemes is achieved through viewing the AFSA as a directed acyclic graph (DAG) and applying heuristics using properties of the DAG as well as the elementary edit operations. For phase two, the determination of the optimal rule set is made possible with a novel representation of rule contexts, with morpheme boundaries added, in a new DAG. We introduce the notion of a delimiter edge. Delimiter edges are used to select the correct two-level rule type as well as to extract minimal discerning rule contexts from the DAG. Results are presented for English adjectives, Xhosa noun locatives and Afrikaans noun plurals.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Computational systems based on the two-level model of morphology (Koskenniemi, 1983) have been remarkably successful for many languages (Sproat, 1992) . The language specific information of such a system is stored as 1. a morphotactic description of the words to be processed as well as 2. a set of two-level morphonological (or spelling) rules.", |
| "cite_spans": [ |
| { |
| "start": 65, |
| "end": 84, |
| "text": "(Koskenniemi, 1983)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 136, |
| "end": 150, |
| "text": "(Sproat, 1992)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "Up to now, these two components had to be coded largely by hand, since no automated method existed to acquire a set of two-level rules for input source-target word pairs. To hand-code a 100% correct rule set from word pairs becomes almost impossible when a few hundred pairs are involved. Furthermore, there is no guarantee that such a hand-coded lexicon does not contain redundant rules or rules with too large contexts. The usual approach is rather to construct general rules from small subsets of the input pairs. However, these general rules usually allow overrecognition and overgeneration, even on the subsets from which they were inferred. Simons (Simons, 1988) describes methods for studying morphophonemic alternations (using annotated interlinear text) and Grimes (Grimes, 1983) presents a program for discovering affix positions and cooccurrence restrictions. Koskenniemi (Koskenniemi, 1990) provides a sketch of a discovery procedure for phonological two-level rules. Golding and Thompson (Golding and Thompson, 1985) and Wothke (Wothke, 1986) present systems to automatically calculate a set of word-formation rules. These rules are, however, ordered one-level rewrite rules and not unordered two-level rules, as in our system. Kuusik (Kuusik, 1996) also acquires ordered one-level rewrite rules, for stem sound changes in Estonian. Daelemans et al. (Daelemans et al., 1996) use a general symbolic machine learning program to acquire a decision tree for matching Dutch nouns to their correct diminutive suffixes. The input to their process is the syllable structure of the nouns and a given set of five suffix allomorphs. They do not learn rules for possible sound changes. Our process automatically acquires the necessary two-level sound changing rules for prefix and suffix allomorphs, as well as the rules for stem sound changes. Connectionist work on the acquisition of morphology has been more concerned with implementing psychologically motivated models than with acquisition of rules for a practical system ((Sproat, 1992, p.216) and (Gasser, 1994)).",
| "cite_spans": [ |
| { |
| "start": 653, |
| "end": 667, |
| "text": "(Simons, 1988)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 773, |
| "end": 787, |
| "text": "(Grimes, 1983)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 882, |
| "end": 900, |
| "text": "(Koskenniemi, 1990", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 980, |
| "end": 1029, |
| "text": "Golding and Thompson (Golding and Thompson, 1985)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 1041, |
| "end": 1054, |
| "text": "(Wothke, 1986", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 1249, |
| "end": 1262, |
| "text": "(Kuusik, 1996", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 1347, |
| "end": 1388, |
"text": "Daelemans et al. (Daelemans et al., 1996)",
| "ref_id": null |
| }, |
| { |
| "start": 2031, |
| "end": 2052, |
| "text": "(Sproat, 1992, p.216)", |
| "ref_id": null |
| }, |
| { |
| "start": 2057, |
| "end": 2071, |
| "text": "(Gasser, 1994)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "The contribution of this paper is to present a complete method for the automatic acquisition of an optimal set of two-level rules (i.e. the second component above) for source-target word pairs. It is assumed that the target word is formed from the source through the addition of a prefix and/or a suffix 1. Furthermore, we show how a partial acquisition of the morphotactic description (component one) results as a by-product of the rule-acquisition process. For example, the morphotactic description of the target word in the input pair y:i <=> p:p can be derived. These processes are described in detail in the rest of the paper: Section 2 provides an overview of the two-level rule formalism, Section 3 describes the acquisition of morphotactics through segmentation and Section 4 presents the method for computing the optimal two-level rules. Section 5 evaluates the experimental results and Section 6 summarizes.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "Two-level rules view a word as having a lexical and a surface representation, with a correspondence between them (Antworth, 1990), e.g.:",
| "cite_spans": [ |
| { |
| "start": 113, |
| "end": 129, |
| "text": "(Antworth, 1990)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Two-level Rule Formalism", |
| "sec_num": "2" |
| }, |
| { |
"text": "Lexical: h a p p y + e r Surface: h a p p i 0 e r Each pair of lexical and surface characters is called a feasible pair. A feasible pair can be written as lexical-character:surface-character. Such a pair is called a default pair when the lexical character and surface character are identical (e.g. h:h). When the lexical and surface character differ, it is called a special pair (e.g. y:i). The null character (0) may appear as either a lexical character (as in +:0) or a surface character, but not as both.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "[51", |
| "sec_num": null |
| }, |
| { |
"text": "1 Non-linear operations (such as infixation) are not considered here, since the basic two-level model deals with them in a round-about way. We can note that extensions to the basic two-level model have been proposed to handle non-linear morphology (Kiraz, 1996).",
| "cite_spans": [ |
| { |
| "start": 245, |
| "end": 258, |
| "text": "(Kiraz, 1996)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "[51", |
| "sec_num": null |
| }, |
| { |
"text": "Two-level rules have the following syntax (Sproat, 1992, p.145) : [ 6] CP op LC _ RC. CP (correspondence part), LC (left context) and RC (right context) are regular expressions over the alphabet of feasible pairs. In most, if not all, implementations based on the two-level model, the correspondence part consists of a single special pair. We also consider only single-pair CPs in this paper. The operator op is one of four types:",
| "cite_spans": [ |
| { |
| "start": 42, |
| "end": 63, |
| "text": "(Sproat, 1992, p.145)", |
| "ref_id": null |
| }, |
| { |
| "start": 66, |
| "end": 70, |
| "text": "[ 6]", |
| "ref_id": null |
| }, |
| { |
| "start": 135, |
| "end": 150, |
| "text": "(right contez~)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "[51", |
| "sec_num": null |
| }, |
| { |
"text": "The exclusion rule (/<=) is used to prohibit the application of another, too general, rule in a particular subcontext. Since our method does not overgeneralize, we will consider only the =>, <= and <=> rule types.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
"section": "Composite rule: a:b <=> LC _ RC",
| "sec_num": "4." |
| }, |
| { |
| "text": "3", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
"section": "Composite rule: a:b <=> LC _ RC",
| "sec_num": "4." |
| }, |
| { |
| "text": "The morphotactics of the input words are acquired by (1) computing the string edit difference between each source-target pair and (2) merging the edit sequences as a minimal acyclic finite state automaton. The automaton, viewed as a DAG, is used to segment the target word into its constituent morphemes.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acquisition of Morphotactics", |
| "sec_num": null |
| }, |
| { |
| "text": "A string edit sequence is a sequence of elementary operations which change a source string into a target string (Sankoff and Kruskal, 1983 , Chapter 1).", |
| "cite_spans": [ |
| { |
| "start": 112, |
| "end": 138, |
| "text": "(Sankoff and Kruskal, 1983", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Determining String Edit Sequences", |
| "sec_num": "3.1" |
| }, |
| { |
"text": "The elementary operations used in this paper are single character deletion (DELETE), insertion (INSERT) and replacement (REPLACE). We indicate the copying of a character by NOCHANGE. A cost is associated with each elementary operation. Typically, INSERT and DELETE have the same (positive) cost and NOCHANGE has a cost of zero. REPLACE could have the same or a higher cost than INSERT or DELETE. Edit sequences can be ranked by the sum of the costs of the elementary operations that appear in them. The interesting edit sequences are those with the lowest total cost. For most word pairs, more than one edit sequence (or mapping) with the same minimal total cost is possible. To select a single edit sequence which will most likely result in a correct segmentation, we added a morphology-specific heuristic to a general string edit algorithm (Vidal et al., 1995). This heuristic always selects an edit sequence containing two subsequences which identify prefix-root and root-suffix boundaries. The heuristic depends on the elementary operations being limited only to INSERT, DELETE and NOCHANGE, i.e. no REPLACEs are allowed. We assume that the target word contains more morphemes than the source word. It therefore follows that there are more INSERTs than DELETEs in an edit sequence. Furthermore, the letters forming the morphemes of the target word appear only as the right-hand components of INSERT operations. Consider the edit sequence to change the string happy into the string unhappier:",
| "cite_spans": [ |
| { |
| "start": 857, |
| "end": 877, |
| "text": "(Vidal et al., 1995)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Determining String Edit Sequences", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "0:u INSERT 0:n INSERT h:h NOCHANGE a:a NOCHANGE p:p NOCHANGE p:p NOCHANGE y:0 DELETE 0:i INSERT 0:e INSERT 0:r INSERT [7]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Determining String Edit Sequences", |
| "sec_num": "3.1" |
| }, |
| { |
"text": "Note that the prefix un- as well as the suffix -er consist only of INSERTs. Furthermore, the prefix-root morpheme boundary is associated with an INSERT followed by a NOCHANGE and the root-suffix boundary by a NOCHANGE-DELETE-INSERT sequence.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Determining String Edit Sequences", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "In general, the prefix-root boundary is just the reverse of the root-suffix boundary, i.e. INSERT-DELETE-NOCHANGE, with the DELETE operation being optional. The heuristic resulting from this observation is a bias giving highest precedence to INSERT operations, followed by DELETE and NOCHANGE, in the first half of the edit sequence. In the second half, the precedence is reversed.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Determining String Edit Sequences", |
| "sec_num": "3.1" |
| }, |
| { |
"text": "A single source-target edit sequence may contain spurious INSERTs which are not considered to form part of a morpheme. For example, the 0:i insertion in Example 7 should not contribute to the suffix -er to form -ier, since -ier is an allomorph of -er. To combat these spurious INSERTs, all the edit sequences for a set of source-target words are merged as follows: A minimal acyclic finite state automaton (AFSA) is constructed which accepts all and only the edit sequences as input strings. This AFSA is then viewed as a DAG, with the elementary edit operations as edge labels. For each edge a count is kept of the number of different edit sequences which pass through it. A path segment in the DAG consisting of one or more INSERT operations having a similar count is then considered to be associated with a morpheme in the target word. The 0:e 0:r INSERT sequence associated with the -er suffix appears more times than the 0:i 0:e 0:r INSERT sequence associated with the -ier suffix, even in a small set of adjectivally related source-target pairs. This means that there is a rise in the edge counts from 0:i to 0:e (indicating a root-suffix boundary), while 0:e and 0:r have similar frequency counts. For prefixes a fall in the edge frequency count of an INSERT sequence indicates a prefix-root boundary.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Merging Edit Sequences", |
| "sec_num": "3.2" |
| }, |
| { |
"text": "To extract the morphemes of each target word, every path through the DAG is followed and only the target side of the elementary operations serving as edge labels is written out, with the null characters (0) discarded. Phase one can segment only one layer of affix additions at a time. However, once the morpheme boundary markers (+) have been inserted, phase two should be able to acquire the correct two-level rules for an arbitrary number of affix additions:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Merging Edit Sequences", |
| "sec_num": "3.2" |
| }, |
| { |
"text": "prefix1+prefix2+...+root+suffix1+suffix2+...",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Merging Edit Sequences", |
| "sec_num": "3.2" |
| }, |
| { |
"text": "To acquire the optimal rules, we first determine the full length lexical-surface representation of each word pair. This representation is required for writing two-level rules (Section 2). The morphotactic descriptions from the previous section provide source-target input pairs from which new string edit sequences are computed: The right-hand side of the morphotactic description is used as the source and the left-hand side as the target string. For instance, Example 8 is written as: Lexical: u n + h a p p y + e r Surface: u n 0 h a p p i 0 e r The REPLACE elementary string edit operations (e.g. y:i) are now allowed since the morpheme boundary markers (+) are already present in the source string. REPLACEs allow shorter edit sequences to be computed, since one REPLACE does the same work as an adjacent INSERT-DELETE pair. REPLACE, INSERT and DELETE have the same associated cost and NOCHANGE has a cost of zero. The morpheme boundary marker (+) is always mapped to the null character (0), which makes for linguistically more understandable mappings. Under these conditions, the selection of any minimal cost string edit mapping provides an acceptable lexical-surface representation 2. To formulate a two-level rule for the source-target pair happy-unhappier, we need a correspondence pair (CP) and a rule type (op), as well as a left context (LC) and a right context (RC) (see Section 2). Rules need only be coded for special pairs, i.e. INSERTs, DELETEs or REPLACEs. The only special pair in the above example is y:i, which will be the CP of the rule. The question now arises as to how large the context of this rule must be. It should be large enough to uniquely specify the positions in the lexical-surface input stream where the rule is applied. On the other hand, the context should not be too large, resulting in an overspecified context which prohibits the application of the rule to unseen, but similar, words. Thus to make a rule as general as possible, its context (LC and RC) should be as short as possible 3. By inspecting the edit sequence in Example 10, we see that y changes into i when y is preceded by a p:p, which serves as our first attempt at a (left) context for y:i. Two questions must be asked to determine the correct rule type to be used (Antworth, 1990, p.53) : Question 1 Is E the only environment in which L:S is allowed? Question 2 Must L always be realized as S in E? The term environment denotes the combined left and right contexts of a special pair. E in our example is \"p:p_\", L is y and S is i. Thus the answer to question one is true, since y:i only occurs after p:p in our example. The answer to question two is also true, since y is always realized as i after a p:p in the above edit sequence. Which rule type to use is gleaned from Table 1. Thus, to continue our example, we should use the composite rule type (<=>):",
| "cite_spans": [ |
| { |
| "start": 2269, |
| "end": 2291, |
| "text": "(Antworth, 1990, p.53)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acquiring Optimal Rules", |
| "sec_num": null |
| }, |
| { |
"text": "[ 12] y:i <=> p:p _",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acquiring Optimal Rules", |
| "sec_num": null |
| }, |
| { |
"text": "2 Our assumption is that such a minimal cost mapping will lead to an optimal rule set. In most (if not all) of the examples seen, a minimal mapping was also intuitively acceptable. 3 If abstractions (e.g. sets such as V denoting vowels) over the regular pairs are introduced, it will not be so simple to determine what is \"a more general context\". However, current implementations require abstractions to be explicitly instantiated during the compilation process ((Karttunen and Beesley, 1992, pp.19-21) and (Antworth, 1990, pp.49-50)). This example shows how to go about coding the set of two-level rules for a single source-target pair.",
| "cite_spans": [ |
| { |
| "start": 463, |
| "end": 502, |
| "text": "(Karttunen and Beesley, 1992, pp.19-21)", |
| "ref_id": null |
| }, |
| { |
| "start": 507, |
| "end": 533, |
| "text": "(Antworth, 1990, pp.49-50)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acquiring Optimal Rules", |
| "sec_num": null |
| }, |
| { |
"text": "However, this soon becomes a tedious and error-prone task when the number of source-target pairs increases, due to the complex interplay of rules and their contexts.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acquiring Optimal Rules", |
| "sec_num": null |
| }, |
| { |
"text": "It is important to acquire the minimal discerning context for each rule. This ensures that the rules are as general as possible (to work on unseen words as well) and prevents rule conflicts. Recall that one need only code rules for the special pairs. Thus it is necessary to determine a rule type with associated minimal discerning context for each occurrence of a special pair in the final edit sequences. This is done by comparing all the possible contiguous 4 contexts of a special pair against all the possible contexts of all the other feasible pairs. To enable the computational comparison of the growing left and right contexts around a feasible pair, we developed a \"mixed-context\" representation. We call the particular feasible pair for which a mixed-context is to be constructed a marker pair (MP), to distinguish it from the feasible pairs in its context. The mixed-context representation is created by writing the first feasible pair to the left of the marker pair, then the first right-context pair, then the second left-context pair and so forth:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Minimal Discerning Rule Contexts", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "[ 13] LC1, RC1, LC2, RC2, LC3, RC3, ..., MP", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Minimal Discerning Rule Contexts", |
| "sec_num": "4.1" |
| }, |
| { |
"text": "The marker pair at the end serves as a label. Special symbols indicate the start (SOS) and end (EOS) of an edit sequence. If, say, the right-context of a MP is shorter than the left-context, an out-of-bounds symbol (OOB) is used to maintain the mixed-context format. For example, the mixed-context of y:i in the edit sequence in Example 10 is represented as: p:p, +:0, p:p, e:e, a:a, r:r, h:h, EOS, +:0, OOB, n:n, OOB, u:u, SOS, OOB, y:i The common prefixes of the mixed-contexts are merged by constructing a minimal AFSA which accepts all and only these mixed-context sequences.",
"cite_spans": [],
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Minimal Discerning Rule Contexts", |
| "sec_num": "4.1" |
| }, |
| { |
"text": "[ 14]",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Minimal Discerning Rule Contexts", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "4A two-level rule requires a contiguous context.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Minimal Discerning Rule Contexts", |
| "sec_num": "4.1" |
| }, |
| { |
"text": "The transitions (or edges, when viewed as a DAG) of the AFSA are labeled with the feasible pairs and special symbols in the mixed-context sequences. There is only one final state for this minimal AFSA. Note that all and only the terminal edges leading to this final state will be labeled with the marker pairs, since they appear at the end of the mixed-context sequences. More than one terminal edge may be labeled with the same marker pair. All the possible (mixed) contexts of a specific marker pair can be recovered by following every path from the root to the terminal edges labeled with that marker pair. If a path is traversed only up to an intermediate edge, a shortened context surrounding the marker pair can be extracted. We will call such an intermediate edge a delimiter edge, since it delimits a shortened context. For example, traversing the mixed-context path of y:i in Example 14 up to e:e would result in an (unmixed) shortened context, which yields a more general rule than one using the full context:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Question 2 have the", |
| "sec_num": null |
| }, |
| { |
| "text": "[ 27] y:i op SOS u:u n:n h:h a:a p:p p:p _ +:0 e:e r:r", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Question 2 have the", |
| "sec_num": null |
| }, |
| { |
"text": "For each marker pair in the DAG which is also a special pair, we want to find those delimiter edges which produce the shortest contexts providing a true answer to at least one of the two rule type decision questions given above. The mixed-context prefix-merged AFSA, viewed as a DAG, allows us to rephrase the two questions in order to find answers in a procedural way:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "EOS", |
| "sec_num": null |
| }, |
| { |
"text": "Question 1 Traverse all the paths from the root to the terminal edges labeled with the marker pair L:S. Is there an edge e1 in the DAG which all these paths have in common? If so, then question one is true for the environment E constructed from the shortened mixed-contexts associated with the path prefixes delimited by e1.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "EOS", |
| "sec_num": null |
| }, |
| { |
"text": "Question 2 Consider the terminal edges which have the same L-component as the marker pair L:S and which are reachable from a common edge e2 in the DAG. Do all of these terminal edges also have the same S-component as the marker pair? If so, then question two is true for the environment E constructed from the shortened mixed-contexts associated with the path prefixes delimited by e2.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "EOS", |
| "sec_num": null |
| }, |
| { |
"text": "For each marker pair, we traverse the DAG and mark the delimiter edges nearest to the root which allow a true answer to either question one, question two or both (i.e. e1 = e2). This means that each path from the root to a terminal edge can have at most three marked delimiter edges: one delimiting a context for a => rule, one delimiting a context for a <= rule and one delimiting a context for a <=> rule. The marker pair used to answer the two questions serves as the correspondence part (Section 2) of the rule. To continue with Example 14, let us assume that the DAG edge labeled with e:e is the closest edge to the root which answers true only to question one. Then the => rule is indicated; this rule type seems to be preferred in systems designed by linguistic experts.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "EOS", |
| "sec_num": null |
| }, |
| { |
| "text": "[ IS]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "EOS", |
| "sec_num": null |
| }, |
| { |
"text": "Furthermore, from inspecting examples, a delimiter edge indicating a <= rule generally delimits the shortest contexts, followed by the delimiter for <=> and the delimiter for =>. The shorter the selected context, the more generally applicable is the rule. We therefore select only one rule per path, in the following preference order: (1) <=>, (2) <= and (3) =>. Note that any of the six possible precedence orders would provide an accurate analysis and generation of the pairs used for learning. However, our suggested precedence seems to strike the best balance between over- or underrecognition and over- or undergeneration when the rules would be applied to unseen pairs. The mixed-context representation has one obvious drawback: If an optimal rule has only a left or only a right context, it cannot be acquired. To solve this problem, two additional minimal AFSAs are constructed: One containing only the left context information for all the marker pairs and one containing only the right context information. The same process is then followed as with the mixed contexts. The final set of rules is selected from the output of all three AFSAs: For each special pair 1. we select any of the <=> rules with the shortest contexts of which the special pair is the left-hand side, or 2. if no <=> rules were found, we select the shortest => and <= rules for each occurrence of the special pair. They are then merged into a single <=> rule with disjuncted contexts. The rule set learned is complete since all possible combinations of marker pairs, rule types and contexts are considered by traversing all three DAGs. Furthermore, the rules in the set have the shortest possible contexts, since, for a given DAG, there is only one delimiter edge closest to the root for each path, marker pair and rule type combination.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "EOS", |
| "sec_num": null |
| }, |
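| { |
| "text": "The per-path rule selection described above can be sketched as follows. This is a minimal illustration with hypothetical data shapes, not the authors' implementation; the rule types are written in ASCII as '=>', '<=' and '<=>', and the preference order with shortest-context tie-break follows the text:
```python
# Select one rule per DAG path: prefer the composite <=> type, then <=,
# then =>; among rules of the same type, prefer the shortest context.
PREFERENCE = {'<=>': 0, '<=': 1, '=>': 2}

def select_rule(candidates):
    # candidates: list of (rule_type, context) pairs found at delimiter edges
    return min(candidates, key=lambda rc: (PREFERENCE[rc[0]], len(rc[1])))

candidates = [
    ('=>', 'p:p p:p _ +:0 e:e'),
    ('<=>', 'a:a p:p p:p _ +:0 e:e r:r'),
]
print(select_rule(candidates))  # the <=> rule wins despite its longer context
```
Applied to the two candidate rules for y:i from the running example, the composite rule is chosen even though its context is longer.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "EOS", |
| "sec_num": null |
| }, |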
| { |
| "text": "Our process works correctly for examples given in (Antworth, 1990). There were two incorrect segmentations in the twenty-one adjective pairs given on page 106. They resulted from an incorrect string edit mapping of (un)happy to (un)happily. For the suffix, the sequence ... 0:i 0:l y:y was generated instead of the sequence ... y:0 0:i 0:l 0:y. The reason for this is that the root word and the inflected form end in the same letter (y), and one NOCHANGE (y:y) has a lower cost than a DELETE (y:0) plus an INSERT (0:y). The acquired segmentation for the 21 pairs, with the suffix segmentation of (un)happily manually corrected, is:", |
| "cite_spans": [ |
| { |
| "start": 50, |
| "end": 65, |
| "text": "(Antworth, 1990", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results and Evaluation", |
| "sec_num": "5" |
| }, |
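| { |
| "text": "The mis-segmentation described above can be reproduced with a standard weighted edit-distance alignment in which NOCHANGE costs nothing while DELETE and INSERT cost one each. This is a sketch under those assumed costs, not the authors' exact implementation:
```python
def edit_sequence(src, tgt):
    # Dynamic-programming alignment with elementary operations only:
    # NOCHANGE (x:x, cost 0), DELETE (x:0, cost 1), INSERT (0:y, cost 1).
    n, m = len(src), len(tgt)
    INF = float('inf')
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    back = [[None] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0
    for i in range(n + 1):
        for j in range(m + 1):
            c = cost[i][j]
            if c == INF:
                continue
            if i < n and j < m and src[i] == tgt[j] and c < cost[i + 1][j + 1]:
                cost[i + 1][j + 1] = c
                back[i + 1][j + 1] = (i, j, src[i] + ':' + tgt[j])
            if i < n and c + 1 < cost[i + 1][j]:
                cost[i + 1][j] = c + 1
                back[i + 1][j] = (i, j, src[i] + ':0')
            if j < m and c + 1 < cost[i][j + 1]:
                cost[i][j + 1] = c + 1
                back[i][j + 1] = (i, j, '0:' + tgt[j])
    seq, i, j = [], n, m
    while (i, j) != (0, 0):
        i, j, op = back[i][j]
        seq.append(op)
    return seq[::-1]

# The cheapest alignment keeps y:y, so the suffix comes out ... 0:i 0:l y:y
# instead of the linguistically correct ... y:0 0:i 0:l 0:y.
print(edit_sequence('happy', 'happily'))
```
Because matching the final y (y:y) is cheaper than deleting and re-inserting it, the minimal-cost alignment ends in 0:i 0:l y:y, which is exactly the incorrect suffix mapping reported for (un)happily.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results and Evaluation", |
| "sec_num": "5" |
| }, |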
| { |
| "text": "[20] From these segmentations, the morphotactic component (Section 1) required by the morphological analyzer/generator is generated with uncomplicated text-processing routines. Three correct ⇔ rules, including two gemination rules, resulted for these twenty-one pairs5:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results and Evaluation", |
| "sec_num": "5" |
| }, |
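| { |
| "text": "The uncomplicated text-processing step that turns a segmentation into a morphotactic entry can be sketched in a few lines. The entry format follows the 'Target Word = Prefix + Source + Suffix' convention shown for unhappier; the function and argument names are illustrative, not the authors' code:
```python
def morphotactic_entry(target, prefix, root, suffix):
    # Format one 'Target Word = Prefix + Source + Suffix' line
    # (cf. unhappier = un + happy + er). Spelling changes such as y/i
    # are handled by the two-level rules, not by this concatenation.
    morphs = [m for m in (prefix, root, suffix) if m]
    return target + ' = ' + ' + '.join(morphs)

print(morphotactic_entry('unhappier', 'un', 'happy', 'er'))
# unhappier = un + happy + er
```
Empty prefix or suffix slots are simply omitted, so the same routine covers unprefixed forms.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results and Evaluation", |
| "sec_num": "5" |
| }, |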
| { |
| "text": "5The results in this paper were verified on the two-level processor PC-KIMMO (Antworth, 1990). To better illustrate the complexity of the rules that can be learned automatically by our process, consider the following set of fourteen Xhosa noun-locative pairs:", |
| "cite_spans": [ |
| { |
| "start": 76, |
| "end": 92, |
| "text": "(Antworth, 1990)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results and Evaluation", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Source Word → Target Word: inkosi → enkosini; iinkosi → ezinkosini; ihashe → ehasheni; imbewu → embewini; amanzi → emanzini; ubuchopho → ebucotsheni; ilizwe → elizweni; ilanga → elangeni; ingubo → engubeni; ingubo → engutyeni; indlu → endlini; indlu → endlwini; ikhaya → ekhayeni; ikhaya → ekhaya [22]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results and Evaluation", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Note that this set contains ambiguity: the locative of ingubo is either engubeni or engutyeni. Our process must learn the necessary two-level rules to map ingubo to engubeni and engutyeni, as well as to map both engubeni and engutyeni in the other direction, i.e. to ingubo. Similarly, indlu and ikhaya each have two different locative forms. Furthermore, the two source words inkosi and iinkosi (the plural of inkosi) differ only by a prefixed i, but they have different locative forms. This small difference between source words provides an indication of the sensitivity required of the acquisition process to provide the necessary discerning information to a two-level morphological processor. At the same time, our process needs to cope with possibly radical modifications between source and target words. Consider the mapping between ubuchopho and its locative ebucotsheni. Here, the only segments which stay the same from the source to the target word are the three letters -buc-, the letter -o- (the deletion of the first -h- is correct) and the second -h-. The rules can be merged because both questions become true for the disjuncted environment e:e _ | o:y _ | u:w _ | _ n:n. The vertical bar (\"|\") is the traditional two-level notation which indicates the disjunction of two (or more) contexts. The five ⇐ rules and the single ⇒ rule of the special pair i:0 in Example 24 can be merged in a similar way. In this instance, the context of the ⇒ rule (+:0 _) needs to be added to some of the contexts of the ⇐ rules of i:0. The following ⇔ rule results: [26] i:0 ⇔ +:0 _ n:n | +:0 _ h:h | +:0 _ k:k | +:0 _ l:l | +:0 _ m:m. In this way the 24 rules are reduced to a set of 16 rules which contain only a single ⇔ rule for each special pair. This merged set of 16 two-level rules analyzes and generates the input word pairs 100% correctly. The next step was to show the feasibility of automatically acquiring a minimal rule set for a wide-coverage parser. To get hundreds or even thousands of input pairs, we implemented routines to extract the lemmas (\"head words\") and their inflected forms from a machine-readable dictionary. In this way we extracted 3935 Afrikaans noun-plural pairs which served as the input to our process. Afrikaans plurals are almost always derived with the addition of a suffix (mostly -e or -s) to the singular form. Different sound changes may occur during this process. For example, gemination, which indicates the shortening of a preceding vowel, occurs frequently (e.g. kat → katte), as does consonant insertion (e.g. kas → kaste) and elision (e.g. ampseed → ampsede). Several sound changes may occur in the same word; for example, elision, consonant replacement and gemination occur in loof → lowwe. Afrikaans (a Germanic language) has borrowed a few words from Latin. Some of these words have two plural forms, which introduces ambiguity in the word mappings: one plural is formed with a Latin suffix (-a) (e.g. emetikum → emetika) and one with the indigenous suffix (-s) (e.g. emetikum → emetikums). During phase one, all but eleven (0.3%) of the 3935 input word pairs were segmented correctly. To facilitate the evaluation of phase two, we define a simple rule as a rule whose environment consists of a single context, in contrast with an environment consisting of two or more contexts disjuncted together. Phase two acquired 531 simple rules for 44 special pairs. Of these 531 simple rules, 500 are ⇐ rules, nineteen are ⇔ rules and twelve are ⇒ rules. The average length of the simple rule contexts is 4.2 feasible pairs; compare this with the average length of the 3935 final input edit sequences, which is 12.6 feasible pairs. (All the examples come from the 3935 input word pairs.) The 531 simple rules can be reduced to 44 ⇔ rules (i.e. one rule per special pair) with environments consisting of disjuncted contexts. These 44 ⇔ rules analyze and generate the 3935 word pairs 100% correctly. The total number of feasible pairs in the 3935 final input edit strings is 49657. In the worst case, all these feasible pairs would have to be present in the rule contexts to accurately model the sound changes which might occur in the input pairs. However, the actual result is much better: our process acquires a two-level rule set which accurately models the sound changes with only 4.5% (2227) of the input feasible pairs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results and Evaluation", |
| "sec_num": "5" |
| }, |
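| { |
| "text": "The merging step, one composite rule per special pair with the simple-rule contexts disjuncted by '|', can be sketched as follows. The data shapes are hypothetical and the composite operator is written in ASCII as '<=>'; the sample rules mirror Example 25:
```python
from collections import defaultdict

def merge_rules(simple_rules):
    # Group the contexts of all simple rules by their special pair and
    # disjunct them into a single <=> rule per pair.
    by_pair = defaultdict(list)
    for pair, _rule_type, context in simple_rules:
        if context not in by_pair[pair]:
            by_pair[pair].append(context)
    return {pair: pair + ' <=> ' + ' | '.join(ctxs)
            for pair, ctxs in by_pair.items()}

simple = [
    ('+:0', '<=', 'e:e _'),
    ('+:0', '<=', 'o:y _'),
    ('+:0', '<=', 'u:w _'),
    ('+:0', '=>', '_ n:n'),
]
print(merge_rules(simple)['+:0'])  # +:0 <=> e:e _ | o:y _ | u:w _ | _ n:n
```
Each special pair thus ends up with exactly one rule, which is how the 531 simple rules collapse to 44 merged rules.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results and Evaluation", |
| "sec_num": "5" |
| }, |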
| { |
| "text": "To obtain a prediction of the analysis and generation accuracy over unseen words, we divided the 3935 input pairs into five equal sections. Each fifth was held out in turn as test data, while a set of two-level rules was learned from the remaining four-fifths. The average recognition accuracy, as well as the generation accuracy, over the held-out test data is 93.9%.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results and Evaluation", |
| "sec_num": "5" |
| }, |
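| { |
| "text": "The hold-out evaluation can be sketched as a five-fold split. The striding partition below is illustrative only; the paper states just that the 3935 pairs were divided into five equal sections:
```python
def five_fold_splits(pairs, k=5):
    # Hold out each fifth in turn as test data; train on the other four.
    folds = [pairs[i::k] for i in range(k)]
    for held in range(k):
        train = [p for f in range(k) if f != held for p in folds[f]]
        yield train, folds[held]

data = list(range(3935))  # stand-in for the 3935 noun-plural pairs
for train, test in five_fold_splits(data):
    assert len(test) == 787 and len(train) == 3148
```
Since 3935 is divisible by five, every fold holds exactly 787 test pairs against 3148 training pairs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results and Evaluation", |
| "sec_num": "5" |
| }, |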
| { |
| "text": "We have described and experimentally evaluated, for the first time, a process which automatically acquires optimal two-level morphological rules from input word pairs. These rules can be used by a publicly available two-level morphological processor. We have demonstrated that our acquisition process is portable between at least three different languages and that an acquired rule set generalizes well to words not in the training corpus. Finally, we have shown the feasibility of automatically acquiring two-level rule sets for wide-coverage parsers, with word pairs extracted from a machine-readable dictionary.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Summary", |
| "sec_num": "6" |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "Part of this work was completed during the first author's stay as visiting researcher at ISSCO (University of Geneva). We gratefully acknowledge the support of ISSCO, as well as the Swiss Federal Government for providing a bursary which made this visit possible. For helpful comments on an earlier draft of the paper, we wish to thank Susan Armstrong and Sabine Lehmann as well as the anonymous reviewers.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgements", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "PC-KIMMO: A Two-level Processor for Morphological Analysis", |
| "authors": [ |
| { |
| "first": "Evan", |
| "middle": [ |
| "L" |
| ], |
| "last": "Antworth", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "Summer Institute of Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Evan L. Antworth. 1990. PC-KIMMO: A Two-level Processor for Morphological Analysis. Summer Institute of Linguistics, Dallas, Texas.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Unsupervised Discovery of Phonological Categories through Supervised Learning of Morphological Rules", |
| "authors": [ |
| { |
| "first": "Walter", |
| "middle": [], |
| "last": "Daelemans", |
| "suffix": "" |
| }, |
| { |
| "first": "Peter", |
| "middle": [], |
| "last": "Berck", |
| "suffix": "" |
| }, |
| { |
| "first": "Steven", |
| "middle": [], |
| "last": "Gillis", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "COLING-96: 16th International Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "95--100", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Walter Daelemans, Peter Berck and Steven Gillis. 1996. Unsupervised Discovery of Phonological Categories through Supervised Learning of Morphological Rules. In COLING-96: 16th International Conference on Computational Linguistics, pages 95-100, Copenhagen, Denmark.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Acquiring Receptive Morphology: A Connectionist Model", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Gasser", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "Proceedings of ACL-94. Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Gasser. 1994. Acquiring Receptive Morphology: A Connectionist Model. In Proceedings of ACL-94. Association for Computational Linguistics, Morristown, New Jersey.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "A morphology component for language programs", |
| "authors": [ |
| { |
| "first": "Andrew", |
| "middle": [ |
| "R" |
| ], |
| "last": "Golding", |
| "suffix": "" |
| }, |
| { |
| "first": "Henry", |
| "middle": [ |
| "S" |
| ], |
| "last": "Thompson", |
| "suffix": "" |
| } |
| ], |
| "year": 1985, |
| "venue": "Linguistics", |
| "volume": "23", |
| "issue": "", |
| "pages": "263--284", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andrew R. Golding and Henry S. Thompson. 1985. A morphology component for language programs. Linguistics, 23:263-284.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Affix positions and cooccurrences: the PARADIGM program", |
| "authors": [ |
| { |
| "first": "Joseph", |
| "middle": [ |
| "E" |
| ], |
| "last": "Grimes", |
| "suffix": "" |
| } |
| ], |
| "year": 1983, |
| "venue": "Summer Institute of Linguistics Publications in Linguistics No. 69", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Joseph E. Grimes. 1983. Affix positions and cooccurrences: the PARADIGM program. Summer Institute of Linguistics Publications in Linguistics No. 69. Dallas: Summer Institute of Linguistics and University of Texas at Arlington.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Two-level Rule Compiler", |
| "authors": [ |
| { |
| "first": "Lauri", |
| "middle": [], |
| "last": "Karttunen", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenneth", |
| "middle": [ |
| "R" |
| ], |
| "last": "Beesley", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lauri Karttunen and Kenneth R. Beesley. 1992. Two-level Rule Compiler. Technical Report ISTL-92-2. Xerox Palo Alto Research Center.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "SEMHE: A generalized two-level System", |
| "authors": [ |
| { |
| "first": "George", |
| "middle": [ |
| "Anton" |
| ], |
| "last": "Kiraz", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Proceedings of ACL-96", |
| "volume": "", |
| "issue": "", |
| "pages": "159--166", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "George Anton Kiraz. 1996. SEMHE: A generalized two-level System. In Proceedings of ACL-96. Association for Computational Linguistics, pages 159-166, Santa Cruz, California.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Two-level Morphology: A General Computational Model for Word-Form Recognition and Production. PhD Dissertation", |
| "authors": [ |
| { |
| "first": "Kimmo", |
| "middle": [], |
| "last": "Koskenniemi", |
| "suffix": "" |
| } |
| ], |
| "year": 1983, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kimmo Koskenniemi. 1983. Two-level Morphology: A General Computational Model for Word-Form Recognition and Production. PhD Dissertation. Department of General Linguistics, University of Helsinki.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "A discovery procedure for two-level phonology", |
| "authors": [ |
| { |
| "first": "Kimmo", |
| "middle": [], |
| "last": "Koskenniemi", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "Computational Lexicology and Lexicography: Special Issue dedicated to Bernard Quemada", |
| "volume": "VI", |
| "issue": "", |
| "pages": "451--465", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kimmo Koskenniemi. 1990. A discovery procedure for two-level phonology. Computational Lexicology and Lexicography: Special Issue dedicated to Bernard Quemada, Vol. I (Ed. L. Cignoni, C. Peters). Linguistica Computazionale, Pisa, Volume VI, 1990, pages 451-465.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Learning Morphology: Algorithms for the Identification of Stem Changes", |
| "authors": [ |
| { |
| "first": "Evelin", |
| "middle": [], |
| "last": "Kuusik", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "COLING-96: 16th International Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "1102--1105", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Evelin Kuusik. 1996. Learning Morphology: Algorithms for the Identification of Stem Changes. In COLING-96: 16th International Conference on Computational Linguistics, pages 1102-1105, Copenhagen, Denmark.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Time warps, string edits, and macromolecules: the theory and practice of sequence comparison", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Sankoff", |
| "suffix": "" |
| }, |
| { |
| "first": "Joseph", |
| "middle": [ |
| "B" |
| ], |
| "last": "Kruskal", |
| "suffix": "" |
| } |
| ], |
| "year": 1983, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David Sankoff and Joseph B. Kruskal. 1983. Time warps, string edits, and macromolecules: the theory and practice of sequence comparison. Addison-Wesley, Massachusetts.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Studying morphophonemic alternation in annotated text, parts one and two", |
| "authors": [ |
| { |
| "first": "Gary", |
| "middle": [ |
| "F" |
| ], |
| "last": "Simons", |
| "suffix": "" |
| } |
| ], |
| "year": 1988, |
| "venue": "Notes on Linguistics", |
| "volume": "41", |
| "issue": "", |
| "pages": "27--38", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gary F. Simons. 1988. Studying morphophonemic alternation in annotated text, parts one and two. Notes on Linguistics, 41:41-46; 42:27-38.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Morphology and Computation", |
| "authors": [ |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Sproat", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Richard Sproat. 1992. Morphology and Computation. The MIT Press, Cambridge, Massachusetts.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Fast Computation of Normalized Edit Distances", |
| "authors": [ |
| { |
| "first": "Enrique", |
| "middle": [], |
| "last": "Vidal", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrés", |
| "middle": [], |
| "last": "Marzal", |
| "suffix": "" |
| }, |
| { |
| "first": "Pablo", |
| "middle": [], |
| "last": "Aibar", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "IEEE Trans. Pattern Analysis and Machine Intelligence", |
| "volume": "17", |
| "issue": "", |
| "pages": "899--902", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Enrique Vidal, Andrés Marzal and Pablo Aibar. 1995. Fast Computation of Normalized Edit Distances. IEEE Trans. Pattern Analysis and Machine Intelligence, 17:899-902.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Machine learning of morphological rules by generalization and analogy", |
| "authors": [ |
| { |
| "first": "Klaus", |
| "middle": [], |
| "last": "Wothke", |
| "suffix": "" |
| } |
| ], |
| "year": 1986, |
| "venue": "COLING-86: 11th International Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "289--293", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Klaus Wothke. 1986. Machine learning of morphological rules by generalization and analogy. In COLING-86: 11th International Conference on Computational Linguistics, pages 289-293, Bonn.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "uris": null, |
| "text": "1. Exclusion rule: a:b /⇐ LC _ RC 2. Context restriction rule: a:b ⇒ LC _ RC 3. Surface coercion rule: a:b ⇐ LC _ RC", |
| "num": null, |
| "type_str": "figure" |
| }, |
| "FIGREF1": { |
| "uris": null, |
| "text": "u:u n:n +:0 h:h a:a p:p p:p y:i +:0 e:e r:r maps the source into the target and provides the lexical and surface representation required by the two-level rules: [11]", |
| "num": null, |
| "type_str": "figure" |
| }, |
| "FIGREF2": { |
| "uris": null, |
| "text": "y:i ⇐ p:p p:p _ +:0 e:e However, if the edge labeled with r:r answers true to both questions, we prefer the composite rule (⇔) associated with it although this results in a larger context: [19] y:i ⇔ a:a p:p p:p _ +:0 e:e r:r The reasons for this preference are that the ⇔ rule provides a more precise statement about the applicable environment of the rule and it", |
| "num": null, |
| "type_str": "figure" |
| }, |
| "FIGREF4": { |
| "uris": null, |
| "text": "is computed for all the ... all but ekhaya (a correct ...) have +ni as a suffix ... phase two correctly ... rules of a special pair can be merged into a single ⇔ rule. For example, the four rules above for the special pair +:0 can be merged into [25] +:0 ⇔ e:e _ | o:y _ | u:w _ | _ n:n", |
| "num": null, |
| "type_str": "figure" |
| }, |
| "FIGREF5": { |
| "uris": null, |
| "text": "emetikum → emetika) and one with an indigenous suffix (-s) (emetikum → emetikums). Allomorphs occur as well, for example -ens is an allomorph of the suffix -s in bed + s → beddens.", |
| "num": null, |
| "type_str": "figure" |
| }, |
| "TABREF1": { |
| "text": "on the target-side of DELETEs are ignored while the target-side of INSERTs are only written if their frequency counts indicate that they are not sporadic allomorph INSERT operations.", |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td/><td/><td colspan=\"2\">For Example 7, the</td></tr><tr><td colspan=\"4\">following morphotactic description results:</td></tr><tr><td/><td/><td/><td>is]</td></tr><tr><td colspan=\"4\">Target Word --Prefix + Source + Suffix</td></tr><tr><td>unhappier</td><td>= un</td><td>+ happy</td><td>+ er</td></tr></table>", |
| "html": null |
| }, |
| "TABREF2": { |
| "text": "Truth table to select the correct rule type.", |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td>Q1</td><td>Q2</td><td>op</td></tr><tr><td>false</td><td>false</td><td>none</td></tr><tr><td>true</td><td>false</td><td>⇒</td></tr><tr><td>false</td><td>true</td><td>⇐</td></tr><tr><td>true</td><td>true</td><td>⇔</td></tr></table>", |
| "html": null |
| } |
| } |
| } |
| } |