| { |
| "paper_id": "C00-1007", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T13:30:06.922361Z" |
| }, |
| "title": "Exploiting a Probabilistic Hierarchical Model for Generation", |
| "authors": [ |
| { |
| "first": "Srinivas", |
| "middle": [], |
| "last": "Bangalore", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "AT&T Labs Research", |
| "institution": "", |
| "location": { |
| "addrLine": "180 Park Avenue Florham Park", |
| "postCode": "07932", |
| "region": "NJ" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Owen", |
| "middle": [], |
| "last": "Rambow", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "AT&T Labs Research", |
| "institution": "", |
| "location": { |
| "addrLine": "180 Park Avenue Florham Park", |
| "postCode": "07932", |
| "region": "NJ" |
| } |
| }, |
| "email": "rambowg@research.att.com" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Previous stochastic approaches to generation do not include a tree-based representation of syntax. While this may be adequate or even advantageous for some applications, other applications pro t from using as much syntactic knowledge as is available, leaving to a stochastic model only those issues that are not determined by the grammar. We present initial results showing that a tree-based model derived from a tree-annotated corpus improves on a tree model derived from an unannotated corpus, and that a tree-based stochastic model with a handcrafted grammar outperforms both.", |
| "pdf_parse": { |
| "paper_id": "C00-1007", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Previous stochastic approaches to generation do not include a tree-based representation of syntax. While this may be adequate or even advantageous for some applications, other applications pro t from using as much syntactic knowledge as is available, leaving to a stochastic model only those issues that are not determined by the grammar. We present initial results showing that a tree-based model derived from a tree-annotated corpus improves on a tree model derived from an unannotated corpus, and that a tree-based stochastic model with a handcrafted grammar outperforms both.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "For many applications in natural language generation NLG, the range of linguistic expressions that must be generated is quite restricted, and a grammar for generation can be fully speci ed by hand. Moreover, in many cases it is very important not to deviate from certain linguistic standards in generation, in which case handcrafted grammars give excellent control. However, in other applications for NLG the variety of the output is much bigger, and the demands on the quality of the output somewhat less stringent. A typical example is NLG in the context of interlingua-or transfer-based machine translation. Another reason for relaxing the quality of the output may bethat not enough time is available to develop a full grammar for a new target language in NLG. In all these cases, stochastic empiricist\" methods provide an alternative to hand-crafted rationalist\" approaches to NLG. To our knowledge, the rst to use stochastic techniques in NLG were Knight 1998a and 1998b . In this paper, we present Fergus Flexible Empiricist Rationalist Generation Using Syntax.", |
| "cite_spans": [ |
| { |
| "start": 954, |
| "end": 970, |
| "text": "Knight 1998a and", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 971, |
| "end": 976, |
| "text": "1998b", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Fergus follows Langkilde and Knight's seminal work in using an n-gram language model, but we augment it with a tree-based stochastic model and a traditional tree-based syntactic grammar. More recent work on aspects of stochastic generation include Langkilde and Knight, 2000 , Malouf, 1999 and Ratnaparkhi, 2000 Before we describe in more detail how w e use stochastic models in NLG, we recall the basic tasks in NLG Rambow and Korelsky, 1992; Reiter, 1994 . During text planning, content and structure of the target text are determined to achieve the overall communicative goal. During sentence planning, linguistic means in particular, lexical and syntactic means are determined to convey smaller pieces of meaning.", |
| "cite_spans": [ |
| { |
| "start": 248, |
| "end": 274, |
| "text": "Langkilde and Knight, 2000", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 275, |
| "end": 289, |
| "text": ", Malouf, 1999", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 290, |
| "end": 311, |
| "text": "and Ratnaparkhi, 2000", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 417, |
| "end": 443, |
| "text": "Rambow and Korelsky, 1992;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 444, |
| "end": 456, |
| "text": "Reiter, 1994", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "During realization, the speci cation chosen in sentence planning is transformed into a surface string, by linearizing and in ecting words in the sentence and typically, adding function words. As in the work by Langkilde and Knight, our work ignores the text planning stage, but it does address the sentence planning and the realization stages.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The structure of the paper is as follows. In Section 2, we present the underlying grammatical formalism, lexicalized tree-adjoining grammar LTAG. In Section 3, we describe the architecture of the system, and some of the modules. In Section 4 we discuss three experiments. In Section 5 we compare our work to that of Langkilde and Knight 1998a . We conclude with a summary of on-going work.", |
| "cite_spans": [ |
| { |
| "start": 316, |
| "end": 342, |
| "text": "Langkilde and Knight 1998a", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In order to model syntax, we use an existing wide-coverage grammar of English, the XTAG grammar developed at the University of Pennsylvania XTAG-Group, 1999. XTAG is a treeadjoining grammar TAG Joshi, 1987a. In Other supertags for the lexemes found in the training corpus: none 10 more 4 more 11 more 5 more Figure 1 : An excerpt from the XTAG grammar to derive There was no cost estimate for the second phase; dotted lines show possible adjunctions that were not made a TAG, the elementary structures are phrasestructure trees which are composed using two operations, substitution which appends one tree at the frontier of another and adjunction which inserts one tree into the middle of another. In graphical representation, nodes at which substitution can take place are marked with down-arrows. In linguistic uses of TAG, we associate one lexical item its anchor with each tree, and one or typically more trees with each lexical item; as a result we obtain a lexicalized TAG or LTAG. Since each lexical item is associated with a whole tree rather than just a phrase-structure rule, for example, we can specify both the predicate-argument structure of the lexeme by including nodes at which its arguments must substitute and morphosyntactic constraints such as subject-verb agreement within the structure associated with the lexeme. This property is referred to as TAG's extended domain of locality. Note that in an LTAG, there is no distinction between lexicon and grammar. A sample grammar is shown in Figure 1 . We depart from XTAG in our treatment of trees for adjuncts such as adverbs, and instead follow McDonald and Pustejovsky 1985. 
While in XTAG the elementary tree for an adjunct contains phrase structure that attaches the adjunct to nodes in another tree with the stag anchored by adjoins to direction speci ed label say, VP from the speci ed direction say, from the left, in our system the trees for adjuncts simply express their active v alency, but not how they connect to the lexical item they modify. This information is kept in the adjunction table which is associated with the grammar; an excerpt is shown in Figure 2 . Trees that can adjoin to other trees and have e n tries in the adjunction table are called gamma-trees, the other trees which can only besubstituted into other trees are alpha-trees. Note that we can refer to a tree by a combination of its name, called its supertag, and its anchor. For example, 1 is the supertag of an alpha-tree anchored by a noun that projects up to NP, while 2 is the supertag of a gamma tree anchored by a noun that only projects to N we assume adjectives are adjoined at N, and, as the adjunction table shows, can right-adjoin to an N. So that estimate 2 is a particular tree in our LTAG grammar. Another tree that a supertag can be associated with is 2 , which represents the predicative use of a noun. 1 Not all nouns are associated with all nominal supertags: the expletive there is only an 1 .", |
| "cite_spans": [ |
| { |
| "start": 1613, |
| "end": 1643, |
| "text": "McDonald and Pustejovsky 1985.", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 308, |
| "end": 316, |
| "text": "Figure 1", |
| "ref_id": null |
| }, |
| { |
| "start": 1507, |
| "end": 1515, |
| "text": "Figure 1", |
| "ref_id": null |
| }, |
| { |
| "start": 2131, |
| "end": 2139, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Modeling Syntax", |
| "sec_num": "2" |
| }, |
| { |
| "text": "none D 2 \u03b3 1 \u03b1 1 \u03b1 \u03b3 2 2 2 \u03b1 1 \u03b1 1 \u03b1 \u03b3 \u03b1 2 \u03b1 1 \u03b1 2 Aux A AP", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Modeling Syntax", |
| "sec_num": "2" |
| }, |
| { |
| "text": "When we derive a sentence using an LTAG, we combine elementary trees from the grammar using adjunction and substitution. For example, to derive the sentence There was no cost estimate for the second phase from the grammar in Figure 1 , we substitute the tree for there into the tree for estimate. We then adjoin in the trees for the auxiliary was, the determiner no, and the modifying noun cost. Note that these adjunctions occur at di erent nodes: at VP, N P , and N, respectively. We then adjoin in the preposition, into which w e substitute phase, into which w e adjoin the and second. Note that all adjunctions are by gamma trees, and all substitution by alpha trees.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 225, |
| "end": 233, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Modeling Syntax", |
| "sec_num": "2" |
| }, |
| { |
| "text": "If we w ant to represent this derivation graphically, w e can do so in a derivation tree, which w e obtain as follows: whenever we adjoin or substitute a tree t 1 into a tree t 2 , we add a new daughter labeled t 1 to the node labeled t 2 . As explained above, the name of each tree used is the lexeme along with the supertag. We omit the address at which substitution or adjunction takes place. The derivation tree for our derivation is shown in Figure 3 . As can be seen, this structure is a dependency tree and resembles a representation of lexical argument structure.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 447, |
| "end": 455, |
| "text": "Figure 3", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Modeling Syntax", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Joshi 1987b claims that TAG's properties make it particularly suited as a syntactic representation for generation. Speci cally, its extended domain of locality is useful in generation for localizing syntactic properties including word order as well as agreement and other morphological processes, and lexicalization is useful for providing an interface from semantics the derivation tree represent the sentence's predicate-argument structure. Indeed, LTAG has been used extensively in generation, starting with McDonald and Pustejovsky, 1985. Fergus is composed of three modules: the Tree Chooser, the Unraveler, and the Linear Precedence LP Chooser. The input to the system is a dependency tree as shown in Figure 4 . Note that the nodes are labeled only with lexemes, not with supertags. 2 The Tree Chooser then uses a stochastic tree model to choose TAG trees for the nodes in the input structure. This step can be seen as analogous to supertagging\" Bangalore and Joshi, 1999, except that now supertags i.e., names of trees must be found for words in a tree rather than for words in a linear sequence. The Unraveler then uses the XTAG grammar to produce a lattice of all possible linearizations that are compatible with the supertagged tree and the XTAG. The LP Chooser then chooses the most likely traversal of this lattice, given a language model. We discuss the three components in more detail.", |
| "cite_spans": [ |
| { |
| "start": 511, |
| "end": 542, |
| "text": "McDonald and Pustejovsky, 1985.", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 790, |
| "end": 791, |
| "text": "2", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 708, |
| "end": 716, |
| "text": "Figure 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Modeling Syntax", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The Tree Chooser draws on a tree model, which is a representation of XTAG derivation for 1,000,000 words of the Wall Street Journal. 3 The Tree Chooser makes the simplifying as- sumptions that the choice of a tree for a node depends only on its daughter nodes, thus allowing for a top-down dynamic programming algorithm. Speci cally, a node in the input structure is assigned a supertag s so that the probability o f nding the treelet composed of with supertag s and all of its daughters as found in the input structure is maximized, and such that s is compatible with 's mother and her supertag s m . Here, compatible\" means that the tree represented by s can be adjoined or substituted into the tree represented by s m , according to the XTAG grammar. For our example sentence, the input to the system is the tree shown in Figure 4 , and the output from the Tree Chooser is the tree as shown in Figure 3 . Note that while a derivation tree in TAG fully speci es a derivation and thus a surface sentence, the output from the Tree Chooser does not. There are two reasons. Firstly, a s explained at the end of Section 2, for us trees corresponding to adjuncts are underspeci ed with respect to the adjunction site and or the adjunction direction from left or from right in the tree of the mother node, or they may beunordered with respect to other adjuncts for example, the famous adjective ordering problem. Secondly, supertags may have been chosen incorrectly or not at all.", |
| "cite_spans": [ |
| { |
| "start": 133, |
| "end": 134, |
| "text": "3", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 825, |
| "end": 833, |
| "text": "Figure 4", |
| "ref_id": null |
| }, |
| { |
| "start": 897, |
| "end": 905, |
| "text": "Figure 3", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Modeling Syntax", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The Unraveler takes as input the semispeci ed derivation tree Figure 3 and produces a word lattice. Each node in the derivation tree consists of a lexical item and a supertag. The linear order of the daughters with respect to the head position of a supertag is speci ed in the XTAG grammar. This information is consulted to order the daughter nodes Figure 5 : Architecture of Fergus with respect to the head at each level of the derivation tree. In cases where a daughter node can beattached at more than one place in the head supertag as is the case in our example for was and for, a disjunction of all these positions are assigned to the daughter node. A bottomup algorithm then constructs a lattice that encodes the strings represented by each level of the derivation tree. The lattice at the root of the derivation tree is the result of the Unraveler. The resulting lattice for the example sentence is shown in Figure 6 . The lattice output from the Unraveler encodes all possible word sequences permitted by the derivation structure. We rank these word sequences in the order of their likelihood by composing the lattice with a nitestate machine representing a trigram language model. This model has been constructed from 1,000,0000 words of Wall Street Journal corpus. We pick the best path through the lattice resulting from the composition using the Viterbi algorithm, and this top ranking word sequence is the output of the LP Chooser.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 62, |
| "end": 70, |
| "text": "Figure 3", |
| "ref_id": "FIGREF2" |
| }, |
| { |
| "start": 349, |
| "end": 357, |
| "text": "Figure 5", |
| "ref_id": null |
| }, |
| { |
| "start": 915, |
| "end": 923, |
| "text": "Figure 6", |
| "ref_id": "FIGREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Modeling Syntax", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In order to show that the use of a tree model and a grammar does indeed help performance, we performed three experiments: We call this the Baseline Left-Right LR Model. This model generates There was estimate for phase the second no cost . for our example input. In the second experiment, we derive the parameters for the LR model from an annotated corpus, in particular, the XTAG derivation tree corpus. This model generates There no estimate for the second phase was cost . for our example input.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments and Results", |
| "sec_num": "4" |
| }, |
| { |
| "text": "In the third experiment, as described in Section 3, we employ the supertag-based tree model whose parameters consist of whether a lexeme l d with supertag s d is a dependent of l m with supertag s m . Furthermore we use the supertag information provided by the XTAG grammar to order the dependents. This model generates There was no cost estimate for the second phase . for our example input, which i s i ndeed the sentence found in the WSJ. As in the case of machine translation, evaluation in generation is a complex issue. We use two metrics suggested in the MT literature Alshawi et al., 1998 based on string edit distance between the output of the generation system and the reference corpus string from the WSJ. These metrics, simple accuracy and generation accuracy, allow us to evaluate without human intervention, automatically and objectively. 4 Simple accuracy is the number of insertion I, deletion D and substitutions S errors between the target language strings in the test corpus and the strings produced by the generation model. The metric is summarized in Equation 1. R is the number of tokens in the target string. This metric is similar to the string distance metric used for measuring speech recognition accuracy. Table 1 : Performance results from the three tree models.", |
| "cite_spans": [ |
| { |
| "start": 853, |
| "end": 854, |
| "text": "4", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 1233, |
| "end": 1240, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiments and Results", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Unlike speech recognition, the task of generation involves reordering of tokens. The simple accuracy metric, however, penalizes a misplaced token twice, as a deletion from its expected position and insertion at a di erent position. We use a second metric, Generation Accuracy, shown in Equation 2, which treats deletion of a token at one location in the string and the insertion of the same token at another location in the string as one single movement error M. This is in addition to the remaining insertions I 0 and deletions D 0 . GenerationAccuracy = 1 , M + I 0 + D 0 + S R 2 The simple accuracy, generation accuracy and the average time for generation of each test sentence for the three experiments are tabulated in Table 1 . The test set consisted of 100 randomly chosen WSJ sentence with an average length of 16 words. As can beseen, the supertag-based model improves over the LR model derived from annotated data and both models improve over the baseline LR model. Supertags incorporate richer information such as argument and adjunct distinction, and number and types of arguments. We expect to improve the performance of the supertag-based model by taking these features into account.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 724, |
| "end": 731, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiments and Results", |
| "sec_num": "4" |
| }, |
| { |
| "text": "In ongoing work, we have developed treebased metrics in addition to the string-based presented here, in order to evaluate stochastic generation models. We have also attempted to correlate these quantitative metrics with human qualitative judgements. A detailed discussion of these experiments and results is presented in Bangalore et al., 2000. ", |
| "cite_spans": [ |
| { |
| "start": 321, |
| "end": 344, |
| "text": "Bangalore et al., 2000.", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments and Results", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Langkilde and Knight 1998a use a handcrafted grammar that maps semantic representations to sequences of words with linearization constraints. A complex semantic structure is translated to a lattice, and a bigram language model then chooses among the possible surface strings encoded in the lattice.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Comparison with Langkilde & Knight", |
| "sec_num": "5" |
| }, |
| { |
| "text": "The system of Langkilde & Knight, Nitrogen, is similar to Fergus in that generation is divided into two phases, the rst of which results in a lattice from which a surface string is chosen during the second phase using a language model in our case a trigram model, in Nitrogen's case a bigram model. However, the rst phases are quite di erent. In Fergus, w e start with a lexical predicate-argument structure, while in Nitrogen, a more semantic input is used. Fergus could easily be augmented with a preprocessor that maps a semantic representation to our syntactic input; this is not the focus of our research. However, there are two more important di erences. First, the hand-crafted grammar in Nitrogen maps directly from semantics to a linear representation, skipping the arborescent representation usually favored for the representation of syntax. There is no stochastic tree model, since there are no trees. In Fergus, initial choices are made stochastically based on the tree representation in the Tree Chooser. This allows us to capture stochastically certain longdistance e ects which n-grams cannot, such as separation of parts of a collocations such as perform an operation through interposing adjuncts John performed a long, somewhat tedious, and quite frustrating operation on his border collie. Second, the hand-crafted grammar used in Fergus was crafted independently from the need for generation and is a purely declarative representation of English syntax. As such, we can use it to handle morphological effects such as agreement, which cannot in general be done by an n-gram model and which are, at the same time, descriptively straightforward and which are handled by all non-stochastic generation modules.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Comparison with Langkilde & Knight", |
| "sec_num": "5" |
| }, |
| { |
| "text": "We have presented empirical evidence that using a tree model in addition to a language model can improve stochastic NLG.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion and Outlook", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Fergus as presented in this paper is not ready to be used as a module in applications. Speci cally, w e will add a morphological component, a component that handles function words auxiliaries, determiners, and a component that handles punctuation. In all three cases, we will provide both knowledge-based and stochastic components, with the aim of comparing their behaviors, and using one type as a back-up for the other type. Finally, we will explore Fergus when applied to a language for which a much more limited XTAG grammar is available for example, specifying only the basic sentence word order as, say, S V O, and specifying subjectverb agreement. In the long run, we intend Fergus to become a exible system which will use hand-crafted knowledge as much as possible and stochastic models as much as necessary.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion and Outlook", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Sentences such a s Peter is a doctor can be analyzed with with be as the head, as is more usual, or with doctor as the head, as is done in XTAG because the be really behaves like an auxiliary, not like a full verb.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "In the system that we used in the experiments described in Section 4, all words including function words need to be present in the input representation, fully inected. This is of course unrealistic for applications. In this paper, we only aim to show that the use of a Tree Model improves performance of a stochastic generator. See Section 6 for further discussion.3 This was constructed from the Penn Tree Bank using some heuristics, since the Penn Tree Bank does not contain full head-dependent information; as a result of the use of heuristics, the Tree Model is not fully correct.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We do not address the issue of whether these metrics can be used for comparative e v aluation of other generation systems.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Automatic acquisition of hierarchical transduction models for machine tr anslation", |
| "authors": [ |
| { |
| "first": "Hiyan", |
| "middle": [], |
| "last": "Alshawi", |
| "suffix": "" |
| }, |
| { |
| "first": "Srinivas", |
| "middle": [], |
| "last": "Bangalore", |
| "suffix": "" |
| }, |
| { |
| "first": "Shona", |
| "middle": [], |
| "last": "Douglas", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Proceedings of the 36th Annual Meeting Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hiyan Alshawi, Srinivas Bangalore, and Shona Douglas. 1998. Automatic acquisition of hi- erarchical transduction models for machine tr anslation. In Proceedings of the 36th Annual Meeting Association for Computational Lin- guistics, Montreal, Canada.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Supertagging: An approach to almost parsing", |
| "authors": [ |
| { |
| "first": "Srinivas", |
| "middle": [], |
| "last": "Bangalore", |
| "suffix": "" |
| }, |
| { |
| "first": "Aravind", |
| "middle": [], |
| "last": "Joshi", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Srinivas Bangalore and Aravind Joshi. 1999. Supertagging: An approach to almost pars- ing. Computational Linguistics, 252.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Evaluation Metrics for Generation", |
| "authors": [ |
| { |
| "first": "Srinivas", |
| "middle": [], |
| "last": "Bangalore", |
| "suffix": "" |
| }, |
| { |
| "first": "Owen", |
| "middle": [], |
| "last": "Rambow", |
| "suffix": "" |
| }, |
| { |
| "first": "Steve", |
| "middle": [], |
| "last": "Whittaker", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Proceedings of International Conference on Natural Language Generation", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Srinivas Bangalore, Owen Rambow, and Steve Whittaker. 2000. Evaluation Metrics for Generation. In Proceedings of International Conference on Natural Language Generation, Mitzpe Ramon, Isreal.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "An introduction to Tree Adjoining Grammars", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Aravind", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Joshi", |
| "suffix": "" |
| } |
| ], |
| "year": 1987, |
| "venue": "Mathematics of Language", |
| "volume": "87", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Aravind K. Joshi. 1987a. An introduction to Tree Adjoining Grammars. In A. Manaster- Ramer, editor, Mathematics of Language, pages 87 115. John Benjamins, Amsterdam.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "The relevance of tree adjoining grammar to generation", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Aravind", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Joshi", |
| "suffix": "" |
| } |
| ], |
| "year": 1987, |
| "venue": "Natural Language Generation: New Results in Arti cial Intelligence, Psychology and Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "233--252", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Aravind K. Joshi. 1987b. The relevance of tree adjoining grammar to generation. In Gerard Kempen, editor, Natural Language Generation: New Results in Arti cial In- telligence, Psychology and Linguistics, pages 233 252. Kluwer Academic Publishers, Dor- drecht Boston Lancaster.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Generation that exploits corpus-based statistical knowledge", |
| "authors": [ |
| { |
| "first": "Irene", |
| "middle": [], |
| "last": "Langkilde", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Knight", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "36th Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics COLING-ACL'98", |
| "volume": "704", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Irene Langkilde and Kevin Knight. 1998a. Gen- eration that exploits corpus-based statistical knowledge. In 36th Meeting of the Associa- tion for Computational Linguistics and 17th International Conference on Computational Linguistics COLING-ACL'98, pages 704 710, Montr eal, Canada.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "The practical value of n-grams in generation", |
| "authors": [ |
| { |
| "first": "Irene", |
| "middle": [], |
| "last": "Langkilde", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Knight", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Proceedings of the Ninth International Natural Language Generation Workshop INLG'98", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Irene Langkilde and Kevin Knight. 1998b. The practical value of n-grams in generation. In Proceedings of the Ninth International Natural Language Generation Workshop INLG'98, Niagara-on-the-Lake, Ontario.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Forest-based statistical sentence generation", |
| "authors": [ |
| { |
| "first": "Irene", |
| "middle": [], |
| "last": "Langkilde", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Knight", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Proceedings of First North American ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Irene Langkilde and Kevin Knight. 2000. Forest-based statistical sentence generation. In Proceedings of First North American ACL, Seattle, USA, May.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Two methods for predicting the order of prenominal adjectives in English", |
| "authors": [ |
| { |
| "first": "Robert", |
| "middle": [], |
| "last": "Malouf", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Proceedings of CLIN99", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Robert Malouf. 1999. Two methods for predicting the order of prenominal adjectives in English. In Proceedings of CLIN99.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "TAGs as a grammatical formalism for generation", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [ |
| "D" |
| ], |
| "last": "McDonald", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [ |
| "D" |
| ], |
| "last": "Pustejovsky", |
| "suffix": "" |
| } |
| ], |
| "year": 1985, |
| "venue": "23rd Meeting of the Association for Computational Linguistics ACL'85", |
| "volume": "", |
| "issue": "", |
| "pages": "94--103", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David D. McDonald and James D. Pustejovsky. 1985. TAGs as a grammatical formalism for generation. In 23rd Meeting of the Association for Computational Linguistics ACL'85, pages 94-103, Chicago, IL.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Applied text generation", |
| "authors": [ |
| { |
| "first": "Owen", |
| "middle": [], |
| "last": "Rambow", |
| "suffix": "" |
| }, |
| { |
| "first": "Tanya", |
| "middle": [], |
| "last": "Korelsky", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "Third Conference on Applied Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "40--47", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Owen Rambow and Tanya Korelsky. 1992. Applied text generation. In Third Conference on Applied Natural Language Processing, pages 40-47, Trento, Italy.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Trainable methods for surface natural language generation", |
| "authors": [ |
| { |
| "first": "Adwait", |
| "middle": [], |
| "last": "Ratnaparkhi", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Proceedings of First North American ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Adwait Ratnaparkhi. 2000. Trainable methods for surface natural language generation. In Proceedings of First North American ACL, Seattle, USA, May.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Has a consensus NL generation architecture appeared, and is it psycholinguistically plausible?", |
| "authors": [ |
| { |
| "first": "Ehud", |
| "middle": [], |
| "last": "Reiter", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "Proceedings of the 7th International Workshop on Natural Language Generation", |
| "volume": "", |
| "issue": "", |
| "pages": "163--170", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ehud Reiter. 1994. Has a consensus NL generation architecture appeared, and is it psycholinguistically plausible? In Proceedings of the 7th International Workshop on Natural Language Generation, pages 163-170, Maine. The XTAG-Group. 1999. A lexicalized Tree Adjoining Grammar for English. Technical Report http://www.cis.upenn.edu/~xtag/tech-report/tech-report.html, The Institute for Research in Cognitive Science, University of Pennsylvania.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF1": { |
| "num": null, |
| "text": "Adjunction table for grammar fragment", |
| "type_str": "figure", |
| "uris": null |
| }, |
| "FIGREF2": { |
| "num": null, |
| "text": "Derivation tree for LTAG derivation of There was no cost estimate for the second phase", |
| "type_str": "figure", |
| "uris": null |
| }, |
| "FIGREF3": { |
| "num": null, |
| "text": "Figure 4: Input to Fergus", |
| "type_str": "figure", |
| "uris": null |
| }, |
| "FIGREF5": { |
| "num": null, |
| "text": "Word lattice for example sentence after Tree Chooser and Unraveler using the supertag-based model. For the baseline experiment, we impose a random tree structure for each sentence of the corpus and build a Tree Model whose parameters consist of whether a lexeme l_d precedes or follows its mother lexeme l_m.", |
| "type_str": "figure", |
| "uris": null |
| } |
| } |
| } |
| } |