{ "paper_id": "P96-1020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:02:50.545138Z" }, "title": "Pattern-Based Context-Free Grammars for Machine Translation", "authors": [ { "first": "Koichi", "middle": [], "last": "Takeda", "suffix": "", "affiliation": { "laboratory": "Tokyo Research Laboratory", "institution": "IBM Research", "location": { "postCode": "1623-14, 242", "settlement": "Shimotsuruma, Yamato", "region": "Kanagawa", "country": "Japan" } }, "email": "takeda@trl.vnet.ibm.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper proposes the use of \"pattern-based\" context-free grammars as a basis for building machine translation (MT) systems, which are now being adopted as personal tools by a broad range of users in the cyberspace society. We discuss major requirements for such tools, including easy customization for diverse domains, the efficiency of the translation algorithm, and scalability (incremental improvement in translation quality through user interaction), and describe how our approach meets these requirements.", "pdf_parse": { "paper_id": "P96-1020", "_pdf_hash": "", "abstract": [ { "text": "This paper proposes the use of \"pattern-based\" context-free grammars as a basis for building machine translation (MT) systems, which are now being adopted as personal tools by a broad range of users in the cyberspace society. We discuss major requirements for such tools, including easy customization for diverse domains, the efficiency of the translation algorithm, and scalability (incremental improvement in translation quality through user interaction), and describe how our approach meets these requirements.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "With the explosive growth of the World-Wide Web (WWW) as an information source, it has become routine for Internet users to access textual data written in foreign languages.
In Japan, for example, a dozen or so inexpensive MT tools have recently been put on the market to help PC users understand English text in WWW home pages. The MT techniques employed in the tools, however, are fairly conventional. For reasons of affordability, their designers appear to have made no attempt to tackle the well-known problems in MT, such as how to ensure the learnability of correct translations and facilitate customization. As a result, users are forced to see the same kinds of translation errors over and over again, except in cases where fixing them involves merely adding a missing word or compound to a user dictionary, or specifying one of several word-to-word translations as a correct choice.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "There are several alternative approaches that might eventually liberate us from this limitation on the usability of MT systems:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Unification-based grammar formalisms and lexical-semantics formalisms (see LFG (Kaplan and Bresnan, 1982) , HPSG (Pollard and Sag, 1987) , and Generative Lexicon (Pustejovsky, 1991) , for example) have been proposed to facilitate computationally precise description of natural-language syntax and semantics. It is possible that, with the descriptive power of these grammars and lexicons, individual usages of words and phrases may be defined specifically enough to give correct translations.
Practical implementation of MT systems based on these formalisms, on the other hand, would not be possible without much more efficient parsing and disambiguation algorithms for these formalisms and a method for building a lexicon that is easy even for novices to use.", "cite_spans": [ { "start": 79, "end": 105, "text": "(Kaplan and Bresnan, 1982)", "ref_id": "BIBREF17" }, { "start": 113, "end": 136, "text": "(Pollard and Sag, 1987)", "ref_id": "BIBREF20" }, { "start": 162, "end": 181, "text": "(Pustejovsky, 1991)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Corpus-based or example-based MT (Sato and Nagao, 1990; Sumita and Iida, 1991) and statistical MT (Brown et al., 1993) systems provide the easiest customizability, since users have only to supply a collection of source and target sentence pairs (a bilingual corpus). Two open questions, however, have yet to be satisfactorily answered before we can confidently build commercial MT systems based on these approaches:", "cite_spans": [ { "start": 33, "end": 55, "text": "(Sato and Nagao, 1990;", "ref_id": "BIBREF22" }, { "start": 56, "end": 78, "text": "Sumita and Iida, 1991)", "ref_id": "BIBREF26" }, { "start": 98, "end": 118, "text": "(Brown et al., 1993)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Can the system be used for various domains without showing severe degradation of translation accuracy?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 What is the minimum number of examples (or training data) required to achieve reasonable MT quality for a new domain?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "TAG-based MT (Abeillé, Schabes, and Joshi, 1990) 1 and pattern-based translation (Maruyama, 1993) share many important properties for
successful implementation in practical MT systems, namely:", "cite_spans": [ { "start": 81, "end": 97, "text": "(Maruyama, 1993)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 The existence of a polynomial-time parsing algorithm", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 A capability for describing a larger domain of locality (Schabes, Abeillé, and Joshi, 1988) \u2022 Synchronization (Shieber and Schabes, 1990) of the source and target language structures. It should be noted, however, that the parsing algorithm for TAGs has O(|G|n^6) 2 worst case time complexity (Vijay-Shanker, 1987) , and that the \"patterns\" in Maruyama's approach are merely context-free grammar (CFG) rules. Thus, it has been a challenge to find a framework in which we can enjoy both a grammar formalism with better descriptive power than CFG and more efficient parsing/generation algorithms than those of TAGs. 3", "cite_spans": [ { "start": 58, "end": 93, "text": "(Schabes, Abeillé, and Joshi, 1988)", "ref_id": "BIBREF23" }, { "start": 112, "end": 139, "text": "(Shieber and Schabes, 1990)", "ref_id": "BIBREF25" }, { "start": 250, "end": 271, "text": "(Vijay-Shanker, 1987)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we will show that there exists a class of \"pattern-based\" grammars that is weakly equivalent to CFG (thus allowing the CFG parsing algorithms to be used for our grammars), but that it facilitates description of the domain of locality.
Furthermore, we will show that our framework can be extended to incorporate example-based MT and a powerful learning mechanism.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Pattern-Based", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2", "sec_num": null }, { "text": "Pattern-based context-free grammars (PCFG) consist of a set of translation patterns. A pattern is a pair of CFG rules together with zero or more syntactic head and link constraints on nonterminal symbols. 1. Head constraints: The nonterminal symbol V in the source rule must have the verb miss as a syntactic head. The symbol V in the target rule must have the verb manquer as a syntactic head. The head of symbol S in the source (target) rule is identical to the head of symbol V in the source (target) rule, as they are co-indexed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context-Free Grammars", "sec_num": null }, { "text": "2. Link constraints: Nonterminal symbols in source and target CFG rules are linked if they are given the same index \":i\". Linked nonterminals must be derived from a sequence of synchronized pairs. 2 Where |G| stands for the size of grammar G, and n is the length of an input string.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context-Free Grammars", "sec_num": null }, { "text": "3 Lexicalized CFG, or Tree Insertion Grammar (TIG) (Schabes and Waters, 1995) , has recently been introduced to achieve such efficiency and lexicalization.", "cite_spans": [ { "start": 50, "end": 76, "text": "(Schabes and Waters, 1995)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Context-Free Grammars", "sec_num": null }, { "text": "4 And its inflectional variants -- we will discuss inflections and agreement issues later. 5 The meaning of the word \"synchronized\" here is exactly the same as in STAG (Shieber and Schabes, 1990) . See also bilingual signs (Tsujii and Fujita, 1991) for a discussion of the importance of combining the appropriate domain of locality and synchronization. Thus, the first NP (NP:1) in the source rule corresponds to the second NP (NP:1) in the target rule, the Vs in both rules correspond to each other, and the second NP (NP:3) in the source rule corresponds to the first NP (NP:3) in the target rule.", "cite_spans": [ { "start": 165, "end": 192, "text": "(Shieber and Schabes, 1990)", "ref_id": "BIBREF25" }, { "start": 220, "end": 245, "text": "(Tsujii and Fujita, 1991)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Context-Free Grammars", "sec_num": null }, { "text": "The source and target rules are called the CFG skeletons of the pattern. The notion of a syntactic head is similar to that used in unification grammars, although the heads in our patterns are simply encoded as character strings rather than as complex feature structures. A head is typically introduced 6 in preterminal rules such as leave -> V V <- partir, where two verbs, \"leave\" and \"partir,\" are associated with the heads of the nonterminal symbol V. This is equivalently expressed as leave:1 -> V:1 V:1 <- partir:1, which is physically implemented as an entry of an English-French lexicon.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context-Free Grammars", "sec_num": null }, { "text": "A set T of translation patterns is said to accept an input s iff there is a derivation sequence Q for s using the source CFG skeletons of T, and every head constraint associated with the CFG skeletons in Q is satisfied. Similarly, T is said to translate s iff there is a synchronized derivation sequence Q for s such that T accepts s, and every head and link constraint associated with the source and target CFG skeletons in Q is satisfied.
The derivation Q then produces a translation t as the resulting sequence of terminal symbols included in the target CFG skeletons in Q. Translation of an input string s essentially consists of the following three steps:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context-Free Grammars", "sec_num": null }, { "text": "1. Parsing s by using the source CFG skeletons 2. Propagating link constraints from source to target CFG skeletons to build a target CFG derivation sequence", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context-Free Grammars", "sec_num": null }, { "text": "The third step is a trivial procedure once the target CFG derivation has been obtained.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating t from the target CFG derivation sequence", "sec_num": "3." }, { "text": "Theorem 1 Let T be a PCFG. Then, there exists a CFG GT such that for the two languages L(T) and L(GT) accepted by T and GT, respectively, L(T) = L(GT) holds. That is, T accepts a sentence s iff GT accepts s.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating t from the target CFG derivation sequence", "sec_num": "3." }, { "text": "Proof: We can construct a CFG GT as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating t from the target CFG derivation sequence", "sec_num": "3." }, { "text": "1. GT has the same set of terminal symbols as T.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating t from the target CFG derivation sequence", "sec_num": "3." }, { "text": "6 A nonterminal symbol X in a source or target CFG rule X -> X1 ... Xk can only be constrained to have one of the heads in the RHS X1 ... Xk. Thus, monotonicity of head constraints holds throughout the parsing process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating t from the target CFG derivation sequence", "sec_num": "3." }, { "text": "2.
For each nonterminal symbol X in T, GT includes a set of nonterminal symbols {Xw | w is either a terminal symbol in T or a special symbol e}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating t from the target CFG derivation sequence", "sec_num": "3." }, { "text": "3. For each preterminal rule", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating t from the target CFG derivation sequence", "sec_num": "3." }, { "text": "X:i -> w1:1 w2:2 ... wk:k (1 <= i <= k), GT includes7 Xwi -> w1 w2 ... wk (1 <= i <= k).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating t from the target CFG derivation sequence", "sec_num": "3." }, { "text": "If X is not co-indexed with any of the wi, GT includes", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating t from the target CFG derivation sequence", "sec_num": "3." }, { "text": "Xe -> w1 w2 ... wk.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating t from the target CFG derivation sequence", "sec_num": "3." }, { "text": "4. For each source CFG rule with head constraints (h1, h2, ..., hk) and indexes (i1, i2, ..., ik),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating t from the target CFG derivation sequence", "sec_num": "3." }, { "text": "Y:ij -> h1:X1:i1 ... hk:Xk:ik (1 <= j <= k), GT includes Yhj -> Xh1 Xh2 ... Xhk.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating t from the target CFG derivation sequence", "sec_num": "3." }, { "text": "If Y is not co-indexed with any of its children, we have", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating t from the target CFG derivation sequence", "sec_num": "3." }, { "text": "Ye -> Xh1 Xh2 ... Xhk.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating t from the target CFG derivation sequence", "sec_num": "3."
}, { "text": "If Xj has no head constraint in the above rule, GT includes a set of (N + 1) rules, where Xhj above is replaced with Xw for every terminal symbol w and Xe (Yhj will also be replaced if it is co-indexed with Xj).8 Now, L(T) ⊆ L(GT) is obvious, since GT can simulate the derivation sequence in T with corresponding rules in GT. L(GT) ⊆ L(T) can be proven, by mathematical induction, from the fact that every valid derivation sequence of GT satisfies the head constraints of the corresponding rules in T.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating t from the target CFG derivation sequence", "sec_num": "3." }, { "text": "□", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating t from the target CFG derivation sequence", "sec_num": "3." }, { "text": "Since a valid derivation sequence in T is always a valid derivation sequence in G, the proof is immediate. Similarly, we have Proposition 2 Let a CFG H be a subset of the source CFG skeletons in T such that a source CFG skeleton k is in H iff k has no head constraints associated with it. Then, L(H) ⊆ L(T).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposition 1 Let a CFG G be a set of source CFG skeletons in T. Then, L(T) ⊆ L(G).", "sec_num": null }, { "text": "7 Head constraints are trivially satisfied or violated in preterminal rules. Hence, we assume, without loss of generality, that no head constraint is given in preterminal rules. We also assume that \"X -> w\" implies \"X:1 -> w:1\". 8 Therefore, a single rule in T can be mapped to as many as (N + 1)^k rules in GT, where N is the number of terminal symbols in T. GT could be exponentially larger than T.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposition 1 Let a CFG G be a set of source CFG skeletons in T. Then, L(T) ⊆ L(G).", "sec_num": null }, { "text": "Two CFGs G and H define the range of the CFL L(T).
These two CFGs can be used to measure the \"default\" translation quality, since idioms and collocational phrases are typically translated by patterns with head constraints.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposition 1 Let a CFG G be a set of source CFG skeletons in T. Then, L(T) ⊆ L(G).", "sec_num": null }, { "text": "Theorem 2 Let a CFG G be a set of source CFG", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposition 1 Let a CFG G be a set of source CFG skeletons in T. Then, L(T) ⊆ L(G).", "sec_num": null }, { "text": "skeletons in T. Then, L(T) ⊂ L(G) is undecidable. Proof: The decision problem L(T) ⊂ L(G), for two CFLs such that L(T) ⊆ L(G), is solvable iff L(T) = L(G)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposition 1 Let a CFG G be a set of source CFG skeletons in T. Then, L(T) ⊆ L(G).", "sec_num": null }, { "text": "is solvable. This includes a known undecidable problem, L(T) = Σ*?, since we can choose a grammar U with L(U) = Σ*, nullify the entire set of rules in U by defining T to be a vacuous set {S:1 -> a:Sb:1, Sb:1 -> b:SU:1} ∪ U (SU and S are the start symbols of U and T, respectively), and, finally, let T further include an arbitrary CFG F.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposition 1 Let a CFG G be a set of source CFG skeletons in T. Then, L(T) ⊆ L(G).", "sec_num": null }, { "text": "L(G) = Σ* is obvious, since G has {S -> Sb, Sb -> SU} ∪ U. Now, we have L(G) = L(T) iff L(F) = Σ*.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposition 1 Let a CFG G be a set of source CFG skeletons in T. Then, L(T) ⊆ L(G).", "sec_num": null }, { "text": "□", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposition 1 Let a CFG G be a set of source CFG skeletons in T.
Then, L(T) ⊆ L(G).", "sec_num": null }, { "text": "Theorem 2 shows that the syntactic coverage of T is, in general, only computable by T itself, even though L(T) is merely a CFL. This may pose a serious problem when a grammar writer wishes to know whether there is a specific expression that is acceptable only by using at least one pattern with head constraints, for which the answer is \"no\" iff L(G) = L(T). One way to trivialize this problem is to let T include a pattern with a pair of pure CFG rules for every pattern with head constraints, which guarantees that L(H) = L(T) = L(G). In this case, we know that the coverage of the \"default\" patterns is always identical to L(T).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposition 1 Let a CFG G be a set of source CFG skeletons in T. Then, L(T) ⊆ L(G).", "sec_num": null }, { "text": "Although our \"patterns\" have no more theoretical descriptive power than CFG, they can provide considerably better descriptions of the domain of locality than ordinary CFG rules. For example, be:V:1 year:NP:2 old -> VP:1 VP:1 <- avoir:V:1 an:NP:2 can handle such NP pairs as \"one year\" and \"un an,\" and \"more than two years\" and \"plus que deux ans,\" which would have to be covered by a large number of plain CFG rules. TAGs, on the other hand, are known to be \"mildly context-sensitive\" grammars, and they can capture a broader range of syntactic dependencies, such as cross-serial dependencies. The computational complexity of parsing for TAGs, however, is O(|G|n^6), which is far greater than that of CFG parsing. Moreover, defining a new STAG rule is not as easy for the users as just adding an entry to a dictionary, because each STAG rule has to be specified as a pair of tree structures.
Our patterns, on the other hand, concentrate on specifying the linear ordering of source and target constituents, and can be written by the users as easily as the following examples. 9 By sacrificing linguistic accuracy for the description of syntactic structures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposition 1 Let a CFG G be a set of source CFG skeletons in T. Then, L(T) ⊆ L(G).", "sec_num": null }, { "text": "to leave * = de quitter * to be year:* old = d'avoir an:* Here, the wildcard \"*\" stands for an NP by default. The prepositions \"to\" and \"de\" are used to specify that the patterns are for VP pairs, and \"to be\" is used to show that the phrase is the BE-verb and its complement. A wildcard can be constrained with a head, as in \"house:*\" and \"maison:*\". The internal representations of these patterns are as described above. These patterns can be associated with an explicit nonterminal symbol such as \"V:*\" or \"ADJP:*\" in addition to head constraints (e.g., \"leave:V:*\"). By defining a few such notations, these patterns can be successfully converted into the formal representations defined in this section. Many of the divergences (Dorr, 1993) in source and target language expressions are fairly collocational, and can be appropriately handled by using our patterns. Note the simplicity that results from using a notation in which users only have to specify the surface ordering of words and phrases. More powerful grammar formalisms would generally require either a structural description or complex feature structures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposition 1 Let a CFG G be a set of source CFG skeletons in T.
Then, L(T) ⊆ L(G).", "sec_num": null }, { "text": "The Translation Algorithm", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3", "sec_num": null }, { "text": "The parsing algorithm for translation patterns can be any of the known CFG parsing algorithms, including the CKY and Earley algorithms.10 At this stage, head and link constraints are ignored. It is easy to show that the number of target charts for a single source chart increases exponentially if we build target charts simultaneously with source charts. For example, the two patterns A:1 B:2 -> B:2 B:2 <- A:1 B:2, and A:1 B:2 -> B:2 B:2 <- B:2 A:1 will generate 2^n synchronized pairs of charts for the sequence of (n+1) nonterminal symbols AAA...AB, for which no effective packing of the target charts is possible. Our strategy is thus to find a candidate set of source charts in polynomial time. We therefore apply heuristic measurements to identify the most promising patterns for generating translations. 10 Our prototype implementation was based on the Earley algorithm, since this does not require lexicalization of CFG rules. In this sense, the entire translation algorithm is not guaranteed to run in polynomial time. Practically, a timeout mechanism and a process for recovery from unsuccessful translation (e.g., applying the idea of fitted parse (Jensen and Heidorn, 1983) to target CFG rules) should be incorporated into the translation algorithm.", "cite_spans": [ { "start": 1162, "end": 1188, "text": "(Jensen and Heidorn, 1983)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "3", "sec_num": null }, { "text": "Some restrictions on patterns must be imposed to avoid infinitely many ambiguities and arbitrarily long translations.
The following kinds of patterns are therefore not allowed, where A, B, X, and Y are nonterminal symbols with or without head and link constraints, and the C's are either terminal or nonterminal symbols.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3", "sec_num": null }, { "text": "The basic strategy for choosing a candidate derivation sequence from ambiguous parses is as follows. 11 A simplified view of the Earley algorithm (Earley, 1970) consists of three major components, predict(i), complete(i), and scan(i), which are called at each position i = 0, 1, ..., n in an input string I = s1 s2 ... sn. Predict(i) returns a set of currently applicable CFG rules at position i. Complete(i) combines inactive charts ending at i with active charts that look for the inactive charts at position i to produce a new collection of active and inactive charts. Scan(i) tries to combine inactive charts with the symbol si+1 at position i. Complete(n) gives the set of possible parses for the input I. Now, for every inactive chart associated with a nonterminal symbol X for a span (i,j) (1 <= i, j <= n), there exists a set P of patterns with a source CFG skeleton ... -> X. We can define the following ordering of patterns in P; this gives patterns with which we can use head and link constraints for building target charts and translations. These candidate patterns can be arranged and associated with the chart in the complete() procedure.", "cite_spans": [ { "start": 146, "end": 160, "text": "(Earley, 1970)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "3", "sec_num": null }, { "text": "X -> X1 ... Xk over any other pattern q with the same source CFG skeleton X -> X1 ... Xk, such that p has a head constraint h:Xi if q has h:Xi (i = 1, ..., k). The pattern p is said to be more specific than q. For example, p = \"leave:V:1 house:NP -> VP:1\" is preferred to q = \"leave:V:1 NP -> VP:1\". 11 This strategy is similar to that of transfer-driven MT (TDMT) (Furuse and Iida, 1994). TDMT, however, is based on a combination of declarative/procedural knowledge sources for MT, and no clear computational properties have been investigated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prefer a pattern p with a source CFG skeleton", "sec_num": "1." }, { "text": "to any pattern q that has fewer terminal symbols in the source CFG skeleton than p. For example, prefer \"take:V:1 a walk\" to \"take:V:1 NP\" if these patterns give VP charts with the same span. 3. Prefer a pattern p that does not violate any head constraint over those that violate a head constraint. 4. Prefer the shortest derivation sequence for each input substring. A pattern for a larger domain of locality tends to give a shorter derivation sequence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prefer a pattern p with a source CFG skeleton", "sec_num": "2." }, { "text": "These preferences can be expressed as numeric values (costs) for patterns. 12 Thus, our strategy favors lexicalized (or head-constrained) and collocational patterns, which is exactly what we are going to achieve with pattern-based MT. Selection of patterns in the derivation sequence accompanies the construction of a target chart. Link constraints are propagated from source to target derivation trees. This is basically a bottom-up procedure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prefer a pattern p with a source CFG skeleton", "sec_num": "2." }, { "text": "Since the number M of distinct pairs (X,w), for a nonterminal symbol X and a subsequence w of the input string s, is bounded by Kn^2, we can compute the m-best choice of pattern candidates for every inactive chart in time O(|T|Kn^3), as claimed by Maruyama (Maruyama, 1993) , and Schabes and Waters (Schabes and Waters, 1995) . Here, K is the number of distinct nonterminal symbols in T, and n is the size of the input string.
Note that the head constraints associated with the source CFG rules can be incorporated in the parsing algorithm, since the number of triples (X,w,h), where h is a head of X, is bounded by Kn^3. We can modify the predict(), complete(), and scan() procedures to run in O(|T|Kn^4) while checking the source head constraints. Construction of the target charts, if possible, on the basis of the m best candidate patterns for each source chart takes O(Kn^2 m) time. Here, m can be larger than 2^n if we generate every possible translation.", "cite_spans": [ { "start": 251, "end": 267, "text": "(Maruyama, 1993)", "ref_id": "BIBREF19" }, { "start": 274, "end": 319, "text": "Schabes and Waters (Schabes and Waters, 1995)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Prefer a pattern p with a source CFG skeleton", "sec_num": "2." }, { "text": "The reader should note critical differences between lexicalized grammar rules (in the sense of LTAG and TIG) and translation patterns when they are used for MT.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prefer a pattern p with a source CFG skeleton", "sec_num": "2." }, { "text": "Firstly, a pattern is not necessarily lexicalized. An economical way of organizing translation patterns is to include non-lexicalized patterns as \"default\" translation rules.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prefer a pattern p with a source CFG skeleton", "sec_num": "2." }, { "text": "12 A similar preference can be defined for the target part of each pattern, but we found many counterexamples, where the number of nonterminal symbols shows no specificity of the patterns, in the target part of English-to-Japanese translation patterns.
Therefore, only the head constraint violation in the target part is accounted for in our prototype.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prefer a pattern p with a source CFG skeleton", "sec_num": "2." }, { "text": "Secondly, lexicalization might increase the size of STAG grammars (in particular, compositional grammar rules such as ADJP NP -> NP) considerably when a large number of phrasal variations (adjectives, verbs in present participle form, various numeric expressions, and so on), multiplied by the number of their translations, are associated with the ADJP part. The notion of structure sharing (Vijay-Shanker and Schabes, 1992) may have to be extended from lexical to phrasal structures, as well as from monolingual to bilingual structures.", "cite_spans": [ { "start": 410, "end": 423, "text": "Schabes, 1992", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Prefer a pattern p with a source CFG skeleton", "sec_num": "2." }, { "text": "Thirdly, a translation pattern can omit the tree structure of a collocation, and leave it as just a sequence of terminal symbols. The simplicity of this helps users to add patterns easily, although precise description of syntactic dependencies is lost.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prefer a pattern p with a source CFG skeleton", "sec_num": "2." }, { "text": "Translation patterns can be enhanced with unification and feature structures to give patterns additional power for describing gender, number, agreement, and so on. Since the descriptive power of unification-based grammars is considerably greater than that of CFG (Berwick, 1982) , feature structures have to be restricted to maintain the efficiency of parsing and generation algorithms. Shieber and Schabes briefly discuss the issue (Shieber and Schabes, 1990) .
We can also extend translation patterns as follows:", "cite_spans": [ { "start": 263, "end": 278, "text": "(Berwick, 1982)", "ref_id": "BIBREF10" }, { "start": 433, "end": 460, "text": "(Shieber and Schabes, 1990)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Features and Agreements", "sec_num": "4" }, { "text": "Each nonterminal node in a pattern can be associated with a fixed-length vector of binary features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features and Agreements", "sec_num": "4" }, { "text": "This will enable us to specify such syntactic dependencies as agreement and subcategorization in patterns. Unification of binary features, however, is much simpler: unification of a feature-value pair succeeds only when the pair is either (0,0) or (1,1). Since the feature vector has a fixed length, unification of two feature vectors is performed in constant time. For example, consider patterns13 in which \"-OBJ\" is a local feature for head VPs in the LHSs, while \"+OBJ\" is a local feature for VPs in the RHSs. 13 Again, these patterns can be mapped to a weakly equivalent set of CFG rules. See GPSG (Gazdar, Pullum, and Sag, 1985) for more details.", "cite_spans": [ { "start": 583, "end": 614, "text": "(Gazdar, Pullum, and Sag, 1985)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Features and Agreements", "sec_num": "4" }, { "text": "Unification of a local feature with +OBJ succeeds since it is not bound.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features and Agreements", "sec_num": "4" }, { "text": "Agreement on subjects (nominative NPs) and finite-form verbs (VPs, excluding the BE verb) is disjunctively specified in the same manner. Another way to extend our grammar formalism is to associate weights with patterns.
It is then possible to rank the matching patterns according to a linear ordering of the weights rather than the pairwise partial ordering of patterns described in the previous section. In our prototype system, each pattern has its original weight, and according to the preference measurement described in the previous section, a penalty is added to the weight to give the effective weight of the pattern in a particular context. Patterns with the lowest weight are chosen as the most preferred patterns.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features and Agreements", "sec_num": "4" }, { "text": "Numeric weights for patterns are extremely useful as a means of assigning higher priorities uniformly to user-defined patterns. Statistical training of patterns can also be incorporated to calculate such weights systematically (Fujisaki et al., 1989). Figure 1 shows a sample translation of the input \"He knows me well,\" using the following patterns. In the above example, the Earley-based algorithm with source CFG rules is used in Phase 1. In Phase 2, head and link constraints are examined, and unification of feature structures is performed by using the charts obtained in Phase 1. Candidate patterns are ordered by their weights and preferences. Finally, in Phase 3, the target charts are built to generate translations based on the selected patterns.", "cite_spans": [ { "start": 225, "end": 248, "text": "(Fujisaki et al., 1989)", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 251, "end": 259, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Features and Agreements", "sec_num": "4" }, { "text": "Integration of translation patterns with translation examples, or bilingual corpora, is the most important extension of our framework. There is no discrete line between patterns and bilingual corpora. Rather, we can view them together as a uniform set of translation pairs with varying degrees of lexicalization.
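The weight-based ranking described earlier in this section (effective weight = original weight + context-dependent penalty, lowest weight preferred) can be sketched as below. The class name, the example pattern names, and the concrete weight and penalty values are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of weight-based pattern ranking: each pattern has an
# original weight, a penalty from the preference measurement is added in
# context, and candidates are ordered by the resulting effective weight.
from dataclasses import dataclass

@dataclass
class Pattern:
    name: str
    weight: float            # original weight; user-defined patterns get low weights

def rank(candidates, penalties):
    # Sort candidates by effective weight = original weight + context penalty.
    return sorted(candidates, key=lambda p: p.weight + penalties.get(p.name, 0.0))

candidates = [Pattern('VP ADVP', 10.0), Pattern('know:VP well', 5.0)]
penalties = {'VP ADVP': 2.0}     # e.g. a penalty for a constraint violation
best = rank(candidates, penalties)[0]
print(best.name)  # 'know:VP well'
```

A linear ordering of this kind replaces the pairwise partial ordering of patterns, which makes it straightforward to prioritize user-defined patterns uniformly.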
Sentence pairs in the corpora, however, should not be just added as patterns, since they are often redundant, and such additions contribute to neither acquisition nor refinement of non-sentential patterns. Therefore, we have been testing the integration method with the following steps. Let T be a set of translation patterns, B be a bilingual corpus, and (s,t) be a pair of source and target sentences. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Integration of Bilingual Corpora", "sec_num": "5" }, { "text": "(a) [Lexicalization]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Integration of Bilingual Corpora", "sec_num": "5" }, { "text": "If there is a paired derivation sequence Q of (s,t) in T, create a new pattern p' for a pattern p used in Q such that every nonterminal symbol X in p with no head constraint is associated with h:X in p', where the head h is instantiated in X of p. Add p' to T if it is not already there. Repeat the addition of such patterns, and assign low weights to them, until the refined sequence Q becomes the most likely translation of s. For example, add leave:VP:1:+OBJ considerably:ADVP:2 -> VP:1 VP:1 <- laisser:VP:1:+OBJ considérablement:ADVP:2 if the existing VP ADVP pattern does not give a correct translation. (b) [Addition of New Patterns] If there is no such paired derivation sequence, add specific patterns, if possible, for idioms and collocations that are missing in T, or add the pair (s,t) to T as a translation pattern. For example, add leave:VP:1:+OBJ behind -> VP:1 VP:1 <- laisser:VP:1:+OBJ if the phrase \"leave it behind\" is not correctly translated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Integration of Bilingual Corpora", "sec_num": "5" }, { "text": "at all, add the pair (s,t) to T as a translation pattern.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "[Translation Failure] If T cannot translate s", "sec_num": "3."
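The three integration steps above can be rendered schematically as follows. This is a sketch, not the paper's implementation: translate, derivation, and lexicalize are placeholder callables for machinery the paper leaves abstract (the parser, the paired-derivation search, and the creation of head-lexicalized patterns p' with low weights).

```python
# Schematic sketch of integrating a bilingual corpus B into a pattern set T.
# The three branches correspond to steps (a) [Lexicalization],
# (b) [Addition of New Patterns], and 3. [Translation Failure].
def integrate(T, B, translate, derivation, lexicalize):
    for s, t in B:                       # each (source, target) sentence pair
        result = translate(T, s)
        if result is None:
            # 3. T cannot translate s at all: add (s, t) itself as a pattern.
            T.add((s, t))
        elif result != t:
            Q = derivation(T, s, t)
            if Q is not None:
                # (a) A paired derivation of (s, t) exists: refine its patterns
                # by head lexicalization, assigning them low weights.
                for p in Q:
                    T.add(lexicalize(p))
            else:
                # (b) No paired derivation: add missing idiom/collocation
                # patterns, or (s, t) itself.
                T.add((s, t))
    return T
```

Sentence pairs that are already translated correctly leave T unchanged, which reflects the remark above that raw corpus pairs should not simply be added as patterns.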
}, { "text": "The grammar acquisition scheme described above has not yet been automated, but has been manually simulated for a set of 770 English-Japanese simple sentence pairs designed for use in MT system evaluation, which is available from JEIDA (the Japan Electronic Industry Development Association, 1995), including: #100: Any question will be welcomed. #200: He kept calm in the face of great danger. #300: He is what is called \"the man in the news\". #400: Japan registered a trade deficit of $101 million, reflecting the country's economic sluggishness, according to government figures. #500: I also went to the beach 2 weeks earlier.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "[Translation Failure] If T cannot translate s", "sec_num": "3." }, { "text": "At an early stage of grammar acquisition, [Addition of New Patterns] was primarily used to enrich the set T of patterns, and many sentences were unambiguously and correctly translated. At a later stage, however, JEIDA sentences usually gave several translations, and [Lexicalization] with careful assignment of weights was the most critical task. Although these sentences are intended to test a system's ability to translate one basic linguistic phenomenon in each simple sentence, the result was strong evidence for our claim. Over 90% of JEIDA sentences were correctly translated. Among the failures were: #95: I see some stamps on the desk. #171: He is for the suggestion, but I'm against it. #244: She made him an excellent wife. #660: He painted the walls and the floor white.
Some (prepositional and sentential) attachment ambiguities need to be resolved on the basis of semantic information, and scoping of coordinated structures would have to be determined by using not only collocational patterns but also some measures of balance and similarities among constituents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "[Translation Failure] If T cannot translate s", "sec_num": "3." }, { "text": "Conclusions and Future Work Some assumptions about patterns should be reexamined when we extend the definition of patterns. The notion of head constraints may have to be extended to a set-membership constraint if we need to handle coordinated structures (Kaplan and Maxwell III, 1988). Some light-verb phrases cannot be correctly translated without \"exchanging\" several feature values between the verb and its object. A similar problem has been found in be-verb phrases. Grammar acquisition and corpus integration are fundamental issues, but automation of these processes (Watanabe, 1993) is still not complete. Development of an efficient translation algorithm, not just an efficient parsing algorithm, will make a significant contribution to research on synchronized grammars, including STAGs and our PCFGs.", "cite_spans": [ { "start": 263, "end": 293, "text": "(Kaplan and Maxwell III, 1988)", "ref_id": "BIBREF18" }, { "start": 582, "end": 598, "text": "(Watanabe, 1993)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "6", "sec_num": null }, { "text": "prototype. Their technical discussions and suggestions greatly helped me shape the idea of pattern-based CFGs. I would also like to thank Taijiro Tsutsumi, Masayuki Morohashi, Hiroshi Nomiyama, Tetsuya Nasukawa, and Naohiko Uramoto for their valuable comments.
Michael McDonald, as usual, helped me write the final version.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "6", "sec_num": null }, { "text": "See LTAG (Schabes, Abeillé, and Joshi, 1988) (Lexicalized TAG) and STAG (Shieber and Schabes, 1990) (Synchronized TAG) for each member of the TAG (Tree Adjoining Grammar) family.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Hideo Watanabe designed and implemented a prototype MT system for pattern-based CFGs, while Shiho Ogino developed a Japanese generator of the", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "> (h) V, (i) V (active arcs", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "---> (h) V, (i) V (active arcs [1 2] (d) V.NP, [1 2] (e) V.NP) [2 3] me ---> (g) NP (inactive arcs [1 3] (d) V NP, [1 3] (e) V NP)", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "He knows me ---> (a) S", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "He knows me ---> (a) S", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "> (j) ADVP, (k) ADVP (inactive arcs", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "4] well ---> (j) ADVP, (k) ADVP (inactive arcs [1 4] (b) VP ADVP, [1 4] (c) VP ADVP)", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "knows me well ---> (b), (c) VP", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "knows me well ---> (b), (c) VP (inactive arc [0 4] (a) NP VP)", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "He knows me well ---> (a) S Phase
2: Constraint Checking [0 I] He ---> (f) NP", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "He knows me well ---> (a) S Phase 2: Constraint Checking [0 I] He ---> (f) NP", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "> (i) V, (j) V [2 3] me ---> (g) NP", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "2] knows ---> (i) V, (j) V [2 3] me ---> (g) NP", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "He knows me ---> (a) S", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "He knows me ---> (a) S", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "> (i) ADVP, (j) ADVP [i 4] knows me well ---> (b), (c) VP (preference ordering (c)", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "4] well ---> (i) ADVP, (j) ADVP [i 4] knows me well ---> (b), (c) VP (preference ordering (c), (b))", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "> (c) VP well ---> bien", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "well ---> (c) VP well ---> bien", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Using Lexicalized Tags for Machine Translation", "authors": [ { "first": "@", "middle": [], "last": "Abeill", "suffix": "" }, { "first": "A", "middle": [], "last": "", "suffix": "" }, { "first": "Y", "middle": [], "last": "Schabes", "suffix": "" }, { "first": "A", "middle": [ "K" ], "last": "Joshi", "suffix": "" } ], "year": 1990, "venue": "Proc. 
of the 13th International Conference on Computational Linguistics", "volume": "3", "issue": "", "pages": "1--6", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abeillé, A., Y. Schabes, and A. K. Joshi. 1990. \"Using Lexicalized Tags for Machine Translation\". In Proc. of the 13th International Conference on Computational Linguistics, volume 3, pages 1-6, Aug.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Computational Complexity and Lexical-Functional Grammar", "authors": [ { "first": "R", "middle": [ "C" ], "last": "Berwick", "suffix": "" } ], "year": 1982, "venue": "American Journal of Computational Linguistics", "volume": "", "issue": "", "pages": "97--109", "other_ids": {}, "num": null, "urls": [], "raw_text": "Berwick, R.C. 1982. \"Computational Complexity and Lexical-Functional Grammar\". American Journal of Computational Linguistics, pages 97-109, July-Dec.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "The Mathematics of Statistical Machine Translation: Parametric Estimation", "authors": [ { "first": "P", "middle": [ "F" ], "last": "Brown", "suffix": "" }, { "first": "S", "middle": [ "A" ], "last": "Della Pietra", "suffix": "" }, { "first": "V", "middle": [ "J" ], "last": "Della Pietra", "suffix": "" }, { "first": "R", "middle": [ "L" ], "last": "Mercer", "suffix": "" } ], "year": 1993, "venue": "", "volume": "19", "issue": "", "pages": "263--311", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brown, P. F., S. A. Della Pietra, V. J. Della Pietra, and R. L. Mercer. 1993. \"The Mathematics of Statistical Machine Translation: Parametric Estimation\". Computational Linguistics, 19(2):263-311, June.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Machine Translation: A View from the Lexicon", "authors": [ { "first": "B", "middle": [ "J" ], "last": "Dorr", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dorr, B.
J. 1993. \"Machine Translation: A View from the Lexicon\". The MIT Press, Cambridge, Mass.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "An Efficient Context-free Parsing Algorithm", "authors": [ { "first": "J", "middle": [], "last": "Earley", "suffix": "" } ], "year": 1970, "venue": "Communications of the ACM", "volume": "6", "issue": "8", "pages": "94--102", "other_ids": {}, "num": null, "urls": [], "raw_text": "Earley, J. 1970. \"An Efficient Context-free Parsing Algorithm\". Communications of the ACM, 6(8):94-102, February.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Cooperation between Transfer and Analysis in Example-Based Framework", "authors": [ { "first": "T", "middle": [], "last": "Fujisaki", "suffix": "" }, { "first": "F", "middle": [], "last": "Jelinek", "suffix": "" }, { "first": "J", "middle": [], "last": "Cocke", "suffix": "" }, { "first": "E", "middle": [], "last": "Black", "suffix": "" }, { "first": "T", "middle": [], "last": "Nishino ; Furuse", "suffix": "" }, { "first": "O", "middle": [], "last": "", "suffix": "" }, { "first": "H", "middle": [], "last": "Iida", "suffix": "" } ], "year": 1989, "venue": "Proc. of the 15th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "645--651", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fujisaki, T., F. Jelinek, J. Cocke, E. Black, and T. Nishino. 1989. \"A Probabilistic Parsing Method for Sentence Disambiguation\". In Proc. of the International Workshop on Parsing Technologies, pages 85-94, Pittsburgh, Aug. Furuse, O. and H. Iida. 1994. \"Cooperation between Transfer and Analysis in Example-Based Framework\". In Proc.
of the 15th International Conference on Computational Linguistics, pages 645-651, Aug.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Generalized Phrase Structure Grammar", "authors": [ { "first": "G", "middle": [], "last": "Gazdar", "suffix": "" }, { "first": "G", "middle": [ "K" ], "last": "Pullum", "suffix": "" }, { "first": "I", "middle": [ "A" ], "last": "Sag", "suffix": "" } ], "year": 1985, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gazdar, G., G. K. Pullum, and I. A. Sag. 1985. \"Generalized Phrase Structure Grammar\". Har- vard University Press, Cambridge, Mass.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "The Fitted Parse: 100% Parsing Capability in a Syntactic Grammar of English", "authors": [ { "first": "K", "middle": [], "last": "Jensen", "suffix": "" }, { "first": "G", "middle": [ "E" ], "last": "Heidorn", "suffix": "" } ], "year": 1983, "venue": "Proc. of the 1st Conference on Applied NLP", "volume": "", "issue": "", "pages": "93--98", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jensen, K. and G. E. Heidorn. 1983. \"The Fit- ted Parse: 100% Parsing Capability in a Syntactic Grammar of English\". In Proc. of the 1st Confer- ence on Applied NLP, pages 93-98.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Lexical-Functional Grammar: A Formal System for Generalized Grammatical Representation", "authors": [ { "first": "R", "middle": [], "last": "Kaplan", "suffix": "" }, { "first": "J", "middle": [], "last": "Bresnan", "suffix": "" } ], "year": 1982, "venue": "", "volume": "", "issue": "", "pages": "173--281", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kaplan, R. and J. Bresnan. 1982. \"Lexical- Functional Grammar: A Formal System for Generalized Grammatical Representation\". In J. Bresnan, editor, \"Mental Representation of Grammatical Relations\". 
MIT Press, Cambridge, Mass., pages 173-281.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Constituent Coordination in Lexical-Functional Grammar", "authors": [ { "first": "R", "middle": [ "M" ], "last": "Kaplan", "suffix": "" }, { "first": "J", "middle": [ "T" ], "last": "Maxwell", "suffix": "" }, { "first": "Iii", "middle": [], "last": "", "suffix": "" } ], "year": 1988, "venue": "Proc. of the 12th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "303--305", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kaplan, R. M. and J. T. Maxwell III. 1988. \"Constituent Coordination in Lexical-Functional Grammar\". In Proc. of the 12th International Conference on Computational Linguistics, pages 303-305, Aug.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Pattern-Based Translation: Context-Free Transducer and Its Applications to Practical NLP", "authors": [ { "first": "H", "middle": [], "last": "Maruyama", "suffix": "" } ], "year": 1993, "venue": "Proc. of Natural Language Pacific Rim Symposium (NLPRS' 93)", "volume": "", "issue": "", "pages": "232--237", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maruyama, H. 1993. \"Pattern-Based Translation: Context-Free Transducer and Its Applications to Practical NLP\". In Proc. of Natural Language Pa- cific Rim Symposium (NLPRS' 93), pages 232- 237, Dec.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Fundamentals", "authors": [ { "first": "C", "middle": [], "last": "Pollard", "suffix": "" }, { "first": "I", "middle": [ "A" ], "last": "Sag", "suffix": "" } ], "year": 1987, "venue": "An Information-Based Syntax and Semantics", "volume": "1", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pollard, C. and I. A. Sag. 1987. \"An Information- Based Syntax and Semantics, Vol.1 Fundamen- tals\". 
CSLI Lecture Notes, Number 13.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "The Generative Lexicon", "authors": [ { "first": "J", "middle": [], "last": "Pustejovsky", "suffix": "" } ], "year": 1991, "venue": "Computational Linguistics", "volume": "17", "issue": "4", "pages": "409--441", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pustejovsky, J. 1991. \"The Generative Lexicon\". Computational Linguistics, 17(4):409-441, December.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Toward Memory-based Translation", "authors": [ { "first": "S", "middle": [], "last": "Sato", "suffix": "" }, { "first": "M", "middle": [], "last": "Nagao", "suffix": "" } ], "year": 1990, "venue": "Proc. of the 13th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "247--252", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sato, S. and M. Nagao. 1990. \"Toward Memory-based Translation\". In Proc. of the 13th International Conference on Computational Linguistics, pages 247-252, Helsinki, Aug.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Parsing Algorithm with 'lexicalized' grammars: Application to tree adjoining grammars", "authors": [ { "first": "Y", "middle": [], "last": "Schabes", "suffix": "" }, { "first": "A", "middle": [], "last": "Abeillé", "suffix": "" }, { "first": "A", "middle": [ "K" ], "last": "Joshi", "suffix": "" } ], "year": 1988, "venue": "Proc. of the 12th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "578--583", "other_ids": {}, "num": null, "urls": [], "raw_text": "Schabes, Y., A. Abeillé, and A. K. Joshi. 1988. \"Parsing Algorithm with 'lexicalized' grammars: Application to tree adjoining grammars\". In Proc.
of the 12th International Conference on Compu- tational Linguistics, pages 578-583, Aug.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Tree Insertion Grammar: A Cubic-Time, Parsable Formalism that Lexicalizes Context-Free Grammar without Changing the Trees Produced", "authors": [ { "first": "Y", "middle": [], "last": "Schabes", "suffix": "" }, { "first": "R", "middle": [ "C" ], "last": "Waters", "suffix": "" } ], "year": 1995, "venue": "Computational Linguistics", "volume": "21", "issue": "4", "pages": "479--513", "other_ids": {}, "num": null, "urls": [], "raw_text": "Schabes, Y. and R. C. Waters. 1995. \"Tree In- sertion Grammar: A Cubic-Time, Parsable For- malism that Lexicalizes Context-Free Grammar without Changing the Trees Produced\". Compu- tational Linguistics, 21(4):479-513, Dec.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Synchronous Tree-Adjoining Grammars", "authors": [ { "first": "S", "middle": [ "M" ], "last": "Shieber", "suffix": "" }, { "first": "Y", "middle": [], "last": "Schabes", "suffix": "" } ], "year": 1990, "venue": "Proc. of the 13th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "253--258", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shieber, S. M. and Y. Schabes. 1990. \"Synchronous Tree-Adjoining Grammars\". In Proc. of the 13th International Conference on Computational Lin- guistics, pages 253-258, August.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Experiments and Prospects of Example-Based Machine Translation", "authors": [ { "first": "E", "middle": [], "last": "Sumita", "suffix": "" }, { "first": "H", "middle": [], "last": "Iida", "suffix": "" } ], "year": 1991, "venue": "Proc. of the 29th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "185--192", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sumita, E. and H. Iida. 1991. 
\"Experiments and Prospects of Example-Based Machine Transla- tion\". In Proc. of the 29th Annual Meeting of the Association for Computational Linguistics, pages 185-192, Berkeley, June.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "JEIDA (the Japan Electronic Industry Development Association)", "authors": [], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "95--112", "other_ids": {}, "num": null, "urls": [], "raw_text": "JEIDA (the Japan Electronic Industry Develop- ment Association). 1995. \"Evaluation Standards for Machine Translation Systems (in Japanese)\". 95-COMP-17, Tokyo.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Lexical Transfer based on Bilingual Signs", "authors": [ { "first": "J", "middle": [], "last": "Tsujii", "suffix": "" }, { "first": "K", "middle": [], "last": "Fujita", "suffix": "" } ], "year": 1991, "venue": "Proc. of the 5th European ACL Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tsujii, J. and K. Fujita. 1991. \"Lexical Transfer based on Bilingual Signs\". In Proc. of the 5th European ACL Conference.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "A Study of Tree Adjoining Grammars", "authors": [ { "first": "K", "middle": [], "last": "Vijay-Shanker", "suffix": "" } ], "year": 1987, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vijay-Shanker, K. 1987. \"A Study of Tree Ad- joining Grammars\". Ph.D. thesis, Department of Computer and Information Science, University of Pennsylvania.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Structure Sharing in Lexicalized Tree-Adjoining Grammars", "authors": [ { "first": "K", "middle": [], "last": "Vijay-Shanker", "suffix": "" }, { "first": "Y", "middle": [], "last": "Schabes", "suffix": "" } ], "year": 1992, "venue": "Proc. 
of the 14th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "205--211", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vijay-Shanker, K. and Y. Schabes. 1992. \"Structure Sharing in Lexicalized Tree-Adjoining Grammars\". In Proc. of the 14th International Conference on Computational Linguistics, pages 205-211, Aug.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "A Method for Extracting Translation Patterns from Translation Examples", "authors": [ { "first": "H", "middle": [], "last": "Watanabe", "suffix": "" } ], "year": 1993, "venue": "Proc. of 5th Intl. Conf. on Theoretical and Methodological Issues in Machine Translation of Natural Languages", "volume": "", "issue": "", "pages": "292--301", "other_ids": {}, "num": null, "urls": [], "raw_text": "Watanabe, H. 1993. \"A Method for Extracting Translation Patterns from Translation Examples\". In Proc. of 5th Intl. Conf. on Theoretical and Methodological Issues in Machine Translation of Natural Languages, pages 292-301, July.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "num": null, "text": "For example, the English-French translation pattern NP:1 miss:V:2 NP:3 -> S:2 S:2 <- NP:3 manquer:V:2 à NP:1 essentially describes a synchronized pair consisting of a left-hand-side English CFG rule (called a source rule) NP V NP -> S and a French CFG rule (called a target rule) S -> NP V à NP accompanied by the following constraints." }, "FIGREF1": { "type_str": "figure", "uris": null, "num": null, "text": "leave:V:1 NP:2 -> VP:1 VP:1 <- quitter:V:1 NP:2 be:V:1 year:NP:2 old -> VP:1 VP:1 <- avoir:V:1 an:NP:2" }, "FIGREF2": { "type_str": "figure", "uris": null, "num": null, "text": "(A (A... (A B))) with (A (A... (A B))) (A (A... (A B))) with ((A ... (A B)) A) (A (A... (A B))) with (((B A) A)...
A)" }, "FIGREF3": { "type_str": "figure", "uris": null, "num": null, "text": "A -> X Y <- C1...B...Ck if there is a cycle of synchronized derivation such that A -> X... -> A and B (or C1...B...Ck) -> Y... -> B," }, "FIGREF4": { "type_str": "figure", "uris": null, "num": null, "text": "transitive and intransitive verbs, respectively. We can also distinguish local and head features, as postulated in HPSG. A simplified version of verb subcategorization is then encoded as VP:1:+TRANS-OBJ NP:2 -> VP:1:+OBJ VP:1:+OBJ <- VP:1:+TRANS-OBJ NP:2" }, "FIGREF5": { "type_str": "figure", "uris": null, "num": null, "text": "*AGRS and *AGRV are a pair of aggregate unification specifiers that succeed only when one of the above combinations of the feature values is unifiable." }, "FIGREF6": { "type_str": "figure", "uris": null, "num": null, "text": "NP:1:*AGRS VP:1:*AGRS -> S:1 S:1 <- NP:1:*AGRS VP:1:*AGRS ... (a) VP:1 ADVP:2 -> VP:1 VP:1 <- VP:1 ADVP:2 ... (b) know:VP:1:+OBJ well -> VP:1 VP:1 <- connaître:VP:1:+OBJ bien ... (c) V:1 NP:2 -> VP:1:+OBJ VP:1:+OBJ <- V:1 NP:2:-PRO ... (d) V:1 NP:2 -> VP:1:+OBJ VP:1:+OBJ <- NP:2:+PRO V:1 ... (e) To simplify the example, let us assume that we have the following preterminal rules: he -> NP:+PRO+NOMI+3RD+SG NP:+PRO+NOMI+3RD+SG <- il ... (f) me -> NP:+PRO+CAUS+SG-3RD NP:+PRO+CAUS+SG-3RD <- me ... (g) knows -> V:+FIN+3SG V:+FIN+3SG <- sait ... (h) knows -> V:+FIN+3SG V:+FIN+3SG <- connaît ... (i) Input: He knows me well (Phase 1, Sample Translation) well -> ADVP ADVP <- bien ... (j) well -> ADVP ADVP <- beaucoup ... (k)" } } } }