{
"paper_id": "P05-1009",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:38:28.275047Z"
},
"title": "Towards Developing Generation Algorithms for Text-to-Text Applications",
"authors": [
{
"first": "Radu",
"middle": [],
"last": "Soricut",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Southern California",
"location": {
"addrLine": "4676 Admiralty Way, Suite 1001 Marina del Rey",
"postCode": "90292",
"region": "CA"
}
},
"email": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Southern California",
"location": {
"addrLine": "4676 Admiralty Way, Suite 1001 Marina del Rey",
"postCode": "90292",
"region": "CA"
}
},
"email": "marcu@isi.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We describe a new sentence realization framework for text-to-text applications. This framework uses IDL-expressions as a representation formalism, and a generation mechanism based on algorithms for intersecting IDL-expressions with probabilistic language models. We present both theoretical and empirical results concerning the correctness and efficiency of these algorithms.",
"pdf_parse": {
"paper_id": "P05-1009",
"_pdf_hash": "",
"abstract": [
{
"text": "We describe a new sentence realization framework for text-to-text applications. This framework uses IDL-expressions as a representation formalism, and a generation mechanism based on algorithms for intersecting IDL-expressions with probabilistic language models. We present both theoretical and empirical results concerning the correctness and efficiency of these algorithms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Many of today's most popular natural language applications -Machine Translation, Summarization, Question Answering -are text-to-text applications. That is, they produce textual outputs from inputs that are also textual. Because these applications need to produce well-formed text, it would appear natural that they are the favorite testbed for generic generation components developed within the Natural Language Generation (NLG) community. Over the years, several proposals of generic NLG systems have been made: Penman (Matthiessen and Bateman, 1991) , FUF (Elhadad, 1991) , Nitrogen (Knight and Hatzivassiloglou, 1995) , Fergus (Bangalore and Rambow, 2000) , HALogen (Langkilde-Geary, 2002) , Amalgam (Corston-Oliver et al., 2002) , etc. Instead of relying on such generic NLG systems, however, most of the current text-to-text applications use other means to address the generation need. In Machine Translation, for example, sentences are produced using application-specific \"decoders\", inspired by work on speech recognition (Brown et al., 1993) , whereas in Summarization, summaries are produced as either extracts or using task-specific strategies (Barzilay, 2003) . The main reason for which text-to-text applications do not usually involve generic NLG systems is that such applications do not have access to the kind of information that the input representation formalisms of current NLG systems require. A machine translation or summarization system does not usually have access to deep subject-verb or verb-object relations (such as ACTOR, AGENT, PATIENT, POSSESSOR, etc.) as needed by Penman or FUF, or even shallower syntactic relations (such as subject, object, premod, etc.) as needed by HALogen.",
"cite_spans": [
{
"start": 520,
"end": 551,
"text": "(Matthiessen and Bateman, 1991)",
"ref_id": "BIBREF11"
},
{
"start": 554,
"end": 573,
"text": "FUF (Elhadad, 1991)",
"ref_id": null
},
{
"start": 585,
"end": 620,
"text": "(Knight and Hatzivassiloglou, 1995)",
"ref_id": "BIBREF8"
},
{
"start": 630,
"end": 658,
"text": "(Bangalore and Rambow, 2000)",
"ref_id": "BIBREF0"
},
{
"start": 669,
"end": 692,
"text": "(Langkilde-Geary, 2002)",
"ref_id": "BIBREF10"
},
{
"start": 703,
"end": 732,
"text": "(Corston-Oliver et al., 2002)",
"ref_id": "BIBREF4"
},
{
"start": 1029,
"end": 1049,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF2"
},
{
"start": 1154,
"end": 1170,
"text": "(Barzilay, 2003)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, following the recent proposal made by Nederhof and Satta (2004) , we argue for the use of IDL-expressions as an application-independent, information-slim representation language for text-to-text natural language generation. IDL-expressions are created from strings using four operators: concatenation (·), interleave (‖), disjunction (∨), and lock (×). We claim that the IDL formalism is appropriate for text-to-text generation, as it encodes meaning only via words and phrases, combined using a set of formally defined operators. Appropriate words and phrases can be, and usually are, produced by the applications mentioned above. The IDL operators have been specifically designed to handle natural constraints such as word choice and precedence, constructions such as phrasal combination, and underspecifications such as free word order. In Table 1 , we present a summary of the representation and generation characteristics of current NLG systems. We mark by ✓ characteristics that are needed/desirable in a generation component for text-to-text applications, and by ✗ characteristics that make the proposal inapplicable or problematic. For instance, as already argued, the representation formalism of all previous proposals except for IDL is problematic (✗) for text-to-text applications. The IDL formalism, while applicable to text-to-text applications, has the additional desirable property that it is a compact representation, while formalisms such as word-lattices and non-recursive CFGs can have exponential size in the number of words available for generation (Nederhof and Satta, 2004) .",
"cite_spans": [
{
"start": 53,
"end": 78,
"text": "Nederhof and Satta (2004)",
"ref_id": "BIBREF13"
},
{
"start": 1588,
"end": 1614,
"text": "(Nederhof and Satta, 2004)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 863,
"end": 870,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While the IDL representational properties are all desirable, the generation mechanism proposed for IDL by Nederhof and Satta (2004) is problematic (✗), because it does not allow for scoring and ranking of candidate realizations. Their generation mechanism, while computationally efficient, involves intersection with context free grammars, and therefore works by excluding all realizations that are not accepted by a CFG and including (without ranking) all realizations that are accepted. The approach to generation taken in this paper is presented in the last row in Table 1 , and can be summarized as a tiling of generation characteristics of previous proposals (see the shaded area in Table 1 ). Our goal is to provide an optimal generation framework for text-to-text applications, in which the representation formalism, the generation mechanism, and the computational properties are all needed and desirable (✓). Toward this goal, we present a new generation mechanism that intersects IDL-expressions with probabilistic language models. The generation mechanism implements new algorithms, which cover a wide spectrum of run-time behaviors (from linear to exponential), depending on the complexity of the input. We also present theoretical results concerning the correctness and the efficiency (relative to the complexity of the input IDL-expression) of our algorithms.",
"cite_spans": [
{
"start": 106,
"end": 131,
"text": "Nederhof and Satta (2004)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 569,
"end": 576,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 689,
"end": 696,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We evaluate these algorithms by performing experiments on a challenging word-ordering task. These experiments are carried out under a high-complexity generation scenario: find the most probable sentence realization under an n-gram language model for IDL-expressions encoding bags-of-words of size up to 25 (up to 10^25 possible realizations!). Our evaluation shows that the proposed algorithms are able to cope well with such orders of complexity, while maintaining high levels of accuracy. The concatenation (·) operator takes two arguments, and uses the strings encoded by its argument expressions to obtain concatenated strings that respect the order of the arguments; e.g., ·(a, b) encodes the set",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "{ab}. The interleave (‖) operator interleaves the strings encoded by its argument expressions; e.g., ‖(·(a, b), c) encodes the set {cab, acb, abc}. The d",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "isjunction (∨) operator allows a choice among the strings encoded by its argument expressions; e.g.,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "∨(a, b)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "encodes the set",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "{a, b}. The lock (×)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "operator takes only one argument, and \"locks-in\" the strings encoded by its argument expression, such that no additional material can be interleaved; e.g.,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "‖(×(·(a, b)), c) encodes the set {abc, cab}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": ". Consider the following IDL-expression:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "‖(finally, ∨(×(·(the, prisoners)), ×(·(the, captives))), ·(were, released)) (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The concatenation (·) operator captures precedence constraints, such as the fact that a determiner like the appears before the noun it determines. The lock (×) operator enforces phrase-encoding constraints, such as the fact that the captives is a phrase which should be used as a whole. The disjunction (∨) operator allows for multiple word/phrase choice (e.g., the prisoners versus the captives), and the interleave (‖) operator allows for word-order freedom, i.e., word order underspecification at meaning representation level. Among the strings encoded by IDL-expression 1 are the following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "finally the prisoners were released\nthe captives finally were released\nthe prisoners were finally released\nThe following strings, however, are not part of the language defined by IDL-expression 1:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "the finally captives were released\nthe prisoners were released\nfinally the captives released were\nThe first string is disallowed because the × operator locks the phrase the captives. The second string is not allowed because the ‖ operator requires all its arguments to be represented. The last string violates the order imposed by the precedence operator between were and released.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "IDL-expressions are a convenient way to compactly represent finite languages. However, IDL-expressions do not directly support algorithms for processing them. For this purpose, an equivalent representation, called IDL-graphs, is introduced by N&S. We refer the interested reader to the formal definition provided by N&S, and provide here only an intuitive description of IDL-graphs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IDL-graphs",
"sec_num": "2.2"
},
{
"text": "We illustrate in Figure 1 the IDL-graph corresponding to IDL-expression 1. operators, respectively. These latter vertices are also shown to have rank 1, as opposed to rank 0 (not shown) assigned to all other vertices.",
"cite_spans": [],
"ref_spans": [
{
"start": 17,
"end": 25,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "IDL-graphs",
"sec_num": "2.2"
},
{
"text": "The ranking of vertices in an IDL-graph is needed to enforce a higher priority on the processing of the higher-ranked vertices, such that the desired semantics for the lock operator is preserved.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IDL-graphs",
"sec_num": "2.2"
},
{
"text": "With each IDL-graph",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IDL-graphs",
"sec_num": "2.2"
},
{
"text": "G(π)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IDL-graphs",
"sec_num": "2.2"
},
{
"text": "we can associate a finite language: the set of strings that can be generated by an IDL-specific traversal of G(π). An IDL-expression π and its corresponding IDL-graph G(π) are said to be equivalent because they generate the same finite language, denoted L(π)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IDL-graphs",
"sec_num": "2.2"
},
{
"text": ".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IDL-graphs",
"sec_num": "2.2"
},
{
"text": "To make the connection with the formulation of our algorithms, in this section we link the IDL formalism with the more classical formalism of finite-state acceptors (FSA) (Hopcroft and Ullman, 1979) . The FSA representation can naturally encode precedence and multiple choice, but it lacks primitives corresponding to the interleave (‖) and lock (×) operators. These operators can, however, be simulated using the notion of a cut of an IDL-graph: intuitively, a set of vertices that can be reached simultaneously when traversing the graph from the initial vertex toward the final vertex. More precisely, the initial vertex v_s is considered a cut (Figure 2 (a)). For each vertex in a given cut, we create a new cut by replacing the start vertex of some edge with the end vertex of that edge, observing the following rules:",
"cite_spans": [
{
"start": 171,
"end": 198,
"text": "(Hopcroft and Ullman, 1979)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 396,
"end": 405,
"text": "(Figure 2",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "IDL-graphs and Finite-State Acceptors",
"sec_num": "2.3"
},
{
"text": "\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IDL-graphs and Finite-State Acceptors",
"sec_num": "2.3"
},
{
"text": "the vertex that is the start of several edges labeled using the special symbol ⟨ is replaced by a sequence of all the end vertices of these edges (for example, \u00a6 \u00a5 # \u00a2 is a cut derived from \u00a2 \u00a1 (Figure 2 (b))); a mirror rule handles the special symbol ⟩;",
"cite_spans": [],
"ref_spans": [
{
"start": 194,
"end": 203,
"text": "(Figure 2",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "IDL-graphs and Finite-State Acceptors",
"sec_num": "2.3"
},
{
"text": "\" the vertex that is the start of an edge labeled using a vocabulary item or ε is replaced by the end vertex of that edge (for example, [Figure 1: The IDL-graph corresponding to the IDL-expression (1).]",
"cite_spans": [],
"ref_spans": [
{
"start": 133,
"end": 290,
"text": "were captives prisoners the the v2 1 1 1 1 1 1 1 1 v20 v19 v18 v17 v16 v15 v14 v13 v12 v11 v10 v9 v8 v7 v6 v5 v4 v3",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "IDL-graphs and Finite-State Acceptors",
"sec_num": "2.3"
},
{
"text": "$ % \u00a2 , \u00a4 \u00a5 # , \u00a4 \u00a5 & \u00a3 , \u00a4 \u00a5 & \u00a4 ' are cuts derived from \u00a6 \u00a5 & \u00a2 , \u00a4 \u00a5 # \u00a2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IDL-graphs and Finite-State Acceptors",
"sec_num": "2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "‖(finally, ∨(×(·(the, prisoners)), ×(·(the, captives))), ·(were, released)).",
"eq_num": "(a"
}
],
"section": "IDL-graphs and Finite-State Acceptors",
"sec_num": "2.3"
},
{
"text": ", and \u00a5 \u00a3 , respectively, see Figure 2 (c-d)), only if the end vertex is not lower ranked than any of the vertices already present in the cut (for example, \u00a8 % \u00a4 ' is not a cut that can be derived from \u00a2 \u00a5 & \u00a4 ' , see Figure 2 (e)).",
"cite_spans": [],
"ref_spans": [
{
"start": 30,
"end": 38,
"text": "Figure 2",
"ref_id": "FIGREF3"
},
{
"start": 217,
"end": 225,
"text": "Figure 2",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "\u00a5 \u00a4",
"sec_num": null
},
{
"text": "Note the last part of the second rule, which restricts the set of cuts by using the ranking mechanism. If \u00a8 ' were allowed to be a cut, this would imply that finally may appear inserted between the words of the locked phrase the prisoners.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u00a5 \u00a4",
"sec_num": null
},
{
"text": "We now link the IDL formalism with the FSA formalism by providing a mapping from an IDL-graph G(π)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u00a5 \u00a4",
"sec_num": null
},
{
"text": "to an acyclic finite-state acceptor A(π)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u00a5 \u00a4",
"sec_num": null
},
{
"text": ". Because both formalisms are used for representing finite languages, they have equivalent representational power. The IDL representation is much more compact, however, as one can observe by comparing the IDL-graph in Figure 1 with the equivalent finite-state acceptor in Figure 3 . The initial state of the finite-state acceptor is the state corresponding to the cut (v_s), and the final states of the finite-state acceptor are the states corresponding to cuts that contain v_e. In what follows, we denote a state of A(π) by the cut to which it corresponds, as for the example in Figure 3 . It is not hard to see that the conversion from the IDL representation to the FSA representation destroys the compactness property of the IDL formalism, because of the explicit enumeration of all possible interleavings, which causes certain labels to appear repeatedly in transitions. For example, a transition labeled finally appears 11 times in the finite-state acceptor in Figure 3 , whereas an edge labeled finally appears only once in the IDL-graph in Figure 1.\n\n3 Computational Properties of IDL-expressions",
"cite_spans": [],
"ref_spans": [
{
"start": 218,
"end": 226,
"text": "Figure 1",
"ref_id": null
},
{
"start": 521,
"end": 529,
"text": "Figure 3",
"ref_id": "FIGREF5"
},
{
"start": 906,
"end": 914,
"text": "Figure 3",
"ref_id": "FIGREF5"
},
{
"start": 987,
"end": 996,
"text": "Figure 1.",
"ref_id": null
}
],
"eq_spans": [],
"section": "\u00a5 \u00a4",
"sec_num": null
},
{
"text": "As mentioned in Section 1, the generation mechanism we propose performs an intersection of IDL-expressions with n-gram language models. Following (Mohri et al., 2002; Knight and Graehl, 1998) , we implement language models using weighted finite-state acceptors (wFSA). In Section 2.3, we presented a mapping from an IDL-graph G(π) to a finite-state acceptor A(π). To intersect A(π) with an n-gram language model, each state must also record the most recent words used to reach it; for example, a state in Figure 3 must be split into three different states,",
"cite_spans": [
{
"start": 145,
"end": 165,
"text": "(Mohri et al., 2002;",
"ref_id": "BIBREF12"
},
{
"start": 166,
"end": 190,
"text": "Knight and Graehl, 1998)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 328,
"end": 336,
"text": "Figure 3",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "IDL-graphs and Weighted Finite-State Acceptors",
"sec_num": "3.1"
},
{
"text": "one reached via prisoners, one via captives",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IDL-graphs and Weighted Finite-State Acceptors",
"sec_num": "3.1"
},
{
"text": ", and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IDL-graphs and Weighted Finite-State Acceptors",
"sec_num": "3.1"
},
{
"text": "one via finally",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IDL-graphs and Weighted Finite-State Acceptors",
"sec_num": "3.1"
},
{
"text": ", according to which (non-epsilon) transition was last used to reach this state. The transitions leaving these states have the same labels as those leaving the original state",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IDL-graphs and Weighted Finite-State Acceptors",
"sec_num": "3.1"
},
{
"text": ", and are now weighted using the language model probability distributions",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IDL-graphs and Weighted Finite-State Acceptors",
"sec_num": "3.1"
},
{
"text": "p_LM(· | prisoners), p_LM(· | captives)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IDL-graphs and Weighted Finite-State Acceptors",
"sec_num": "3.1"
},
{
"text": ", and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IDL-graphs and Weighted Finite-State Acceptors",
"sec_num": "3.1"
},
{
"text": "p_LM(· | finally)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IDL-graphs and Weighted Finite-State Acceptors",
"sec_num": "3.1"
},
{
"text": ", respectively. Note that, at this point, we already have a na\u00efve algorithm for intersecting IDL-expressions with n-gram language models. From an IDL-expression π, following the mapping",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IDL-graphs and Weighted Finite-State Acceptors",
"sec_num": "3.1"
},
{
"text": "π → G(π) → W(π)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IDL-graphs and Weighted Finite-State Acceptors",
"sec_num": "3.1"
},
{
"text": ", we arrive at a weighted finite-state acceptor, on which we can use a single-source shortest-path algorithm for directed acyclic graphs (Cormen et al., 2001) to extract the realization corresponding to the most probable path. The problem with this algorithm, however, is that the premature unfolding of the IDL-graph into a finite-state acceptor destroys the compactness of the IDL representation. For this reason, we devise algorithms that, although similar in spirit to the single-source shortest-path algorithm for directed acyclic graphs, perform on-the-fly unfolding of the IDL-graph, with a mechanism to control the unfolding based on the scores of the paths already unfolded. Such an approach has the advantage that prefixes that are extremely unlikely under the language model may be regarded as not so promising, and parts of the IDL-expression that contain them may not be unfolded, leading to significant savings.",
"cite_spans": [
{
"start": 136,
"end": 157,
"text": "(Cormen et al., 2001)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "IDL-graphs and Weighted Finite-State Acceptors",
"sec_num": "3.1"
},
{
"text": "Algorithm IDL-NGLM-BFS The first algorithm that we propose is algorithm IDL-NGLM-BFS in Figure 4 . The algorithm builds a weighted finite-state acceptor corresponding to an IDL-graph incrementally, by keeping track of a set of active states, called",
"cite_spans": [],
"ref_spans": [
{
"start": 88,
"end": 96,
"text": "Figure 4",
"ref_id": "FIGREF7"
}
],
"eq_spans": [],
"section": "Generation via Intersection of IDL-expressions with Language Models",
"sec_num": "3.2"
},
{
"text": "active",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation via Intersection of IDL-expressions with Language Models",
"sec_num": "3.2"
},
{
"text": ". The incrementality comes from creating new transitions and states in W(π) originating in these active states, by unfolding the IDL-graph G(π); the set of newly unfolded states is called",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation via Intersection of IDL-expressions with Language Models",
"sec_num": "3.2"
},
{
"text": "unfold",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation via Intersection of IDL-expressions with Language Models",
"sec_num": "3.2"
},
{
"text": ". The new transitions in are weighted ac- set of states to be the next set of",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation via Intersection of IDL-expressions with Language Models",
"sec_num": "3.2"
},
{
"text": "IDL-NGLM-BFS(G, LM)\n1 active ← {s0}\n2 flag ← 1\n3 while flag\n4 do unfold ← UNFOLDIDLG(active, G)\n5 EVALUATENGLM(unfold, LM)\n6 if FINALIDLG(unfold, G)\n7 then flag ← 0\n8 active ← unfold\n9 return active",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation via Intersection of IDL-expressions with Language Models",
"sec_num": "3.2"
},
{
"text": "active",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation via Intersection of IDL-expressions with Language Models",
"sec_num": "3.2"
},
{
"text": "states. Note that this is actually a breadth-first search (BFS) with incremental unfolding. This algorithm still unfolds the IDL-graph completely, and therefore suffers from the same drawback as the na\u00efve algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation via Intersection of IDL-expressions with Language Models",
"sec_num": "3.2"
},
{
"text": "The interesting contribution of algorithm IDL-NGLM-BFS, however, is the incremental unfolding. If, instead of line 8 in Figure 4 , we introduce mechanisms to control which states are considered in the next unfolding iteration, we obtain a series of more effective algorithms.",
"cite_spans": [],
"ref_spans": [
{
"start": 120,
"end": 128,
"text": "Figure 4",
"ref_id": "FIGREF7"
}
],
"eq_spans": [],
"section": "Generation via Intersection of IDL-expressions with Language Models",
"sec_num": "3.2"
},
{
"text": "Algorithm IDL-NGLM-A* We arrive at algorithm IDL-NGLM-A* by modifying line 8 in Figure 4 , thus obtaining the algorithm in Figure 5 . We use as control mechanism a priority queue,",
"cite_spans": [],
"ref_spans": [
{
"start": 80,
"end": 88,
"text": "Figure 4",
"ref_id": "FIGREF7"
},
{
"start": 123,
"end": 131,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Generation via Intersection of IDL-expressions with Language Models",
"sec_num": "3.2"
},
{
"text": "astar, in which the states from unfold",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation via Intersection of IDL-expressions with Language Models",
"sec_num": "3.2"
},
{
"text": "are PUSH-ed, sorted according to an admissible heuristic function (Russell and Norvig, 1995). In the next iteration,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation via Intersection of IDL-expressions with Language Models",
"sec_num": "3.2"
},
{
"text": "active",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation via Intersection of IDL-expressions with Language Models",
"sec_num": "3.2"
},
{
"text": "is a singleton set containing the state POP-ed out from the top of the priority queue.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation via Intersection of IDL-expressions with Language Models",
"sec_num": "3.2"
},
{
"text": "Algorithm IDL-NGLM-BEAM We arrive at algorithm IDL-NGLM-BEAM by again modifying line 8 in Figure 4 , thus obtaining the algorithm in Figure 6 . We control the unfolding using a probabilistic beam ",
"cite_spans": [],
"ref_spans": [
{
"start": 90,
"end": 98,
"text": "Figure 4",
"ref_id": "FIGREF7"
},
{
"start": 133,
"end": 141,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Generation via Intersection of IDL-expressions with Language Models",
"sec_num": "3.2"
},
{
"text": "IDL-NGLM-A*(G, LM)\n1 active ← {s0}\n2 flag ← 1\n3 while flag\n4 do unfold ← UNFOLDIDLG(active, G)\n5 EVALUATENGLM(unfold, LM)\n6 if FINALIDLG(unfold, G)\n7 then flag ← 0\n8 for each state in unfold\n do PUSH(astar, state)\n active ← POP(astar)\n9 return active",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation via Intersection of IDL-expressions with Language Models",
"sec_num": "3.2"
},
{
"text": "Figure 5: Pseudo-code for intersecting an IDL-graph G with an n-gram language model LM using incremental unfolding and A* search. Figure 6 : Pseudo-code for intersecting an IDL-graph G with an n-gram language model LM using incremental unfolding and probabilistic beam search.",
"cite_spans": [],
"ref_spans": [
{
"start": 127,
"end": 135,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Generation via Intersection of IDL-expressions with Language Models",
"sec_num": "3.2"
},
{
"text": "IDL-NGLM-BEAM(G, LM, beam)\n1 active ← {s0}\n2 flag ← 1\n3 while flag\n4 do unfold ← UNFOLDIDLG(active, G)\n5 EVALUATENGLM(unfold, LM)\n6 if FINALIDLG(unfold, G)\n7 then flag ← 0\n8 active ← BEAMSTATES(unfold, beam)\n9 return active",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation via Intersection of IDL-expressions with Language Models",
"sec_num": "3.2"
},
{
"text": "; that is, we unfold only those states in unfold",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "#",
"sec_num": "7"
},
{
"text": "reachable with a probability higher than or equal to the current maximum probability times the probability beam.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "#",
"sec_num": "7"
},
{
"text": "The IDL representation is ideally suited for computing accurate admissible heuristics under language models. These heuristics are needed by the IDL-NGLM-A* algorithm, and are also employed for pruning by the IDL-NGLM-BEAM algorithm. It follows that arbitrarily accurate admissible heuristics exist for IDL-expressions, but computing them on-the-fly requires finding a balance between the time and space requirements for computing better heuristics and the speed-up obtained by using them in the search algorithms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing Admissible Heuristics for IDL-expressions",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u00a1 \" ! # \" \u00a3 % $ ' & ) ( 1 0 3 2 5 4 7 6 9 8 A @ \u00a5 C B D \u00a3 % $ ' & F E C ( 0 \u00a2 \u00a3 \u00a5 H G \u00a7I P G \" \u00a4 \"",
"eq_num": "(2)"
}
],
"section": "Computing Admissible Heuristics for IDL-expressions",
"sec_num": "3.3"
},
{
"text": "The following theorem states the correctness of our algorithms, in the sense that they find the maximum probability path encoded by an IDL-graph under an n-gram language model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal Properties of IDL-NGLM algorithms",
"sec_num": "3.4"
},
{
"text": "Let be an IDL-expression, G( ) its IDL-graph, and W( ) its wFSA under an n-gram language model LM. Algorithms IDL-NGLM-BFS and IDL-NGLM-A0 find the path of maximum probability under LM. Algorithm IDL-NGLM-BEAM finds the path of maximum probability under LM, if all states in W( ) along this path are selected by its BEAMSTATES function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Theorem 1",
"sec_num": null
},
{
"text": "The proof of the theorem follows directly from the correctness of the BFS and A0 search, and from the condition imposed on the beam search.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Theorem 1",
"sec_num": null
},
{
"text": "The next theorem characterizes the run-time complexity of these algorithms, in terms of an input IDLexpression and its corresponding IDL-graph",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Theorem 1",
"sec_num": null
},
{
"text": "complexity. There are three factors that linearly influence the run-time complexity of our algorithms: is the maximum number of nodes in We omit the proof here due to space constraints. The fact that the run-time behavior of our algorithms is linear in the complexity of the input IDL-expression (with an additional log factor in the case of A0 search due to priority queue management) allows us to say that our algorithms are efficient with respect to the task they accomplish. We note here, however, that depending on the input IDL-expression, the task addressed can vary in complexity from linear to exponential. That is, for the intersection of an IDL-expression complexity. This exponential complexity comes as no surprise given that the problem of intersecting an n-gram language model with a bag of words is known to be NP-complete (Knight, 1999 generation algorithm. In general, for IDL-expressions for which is bounded, which we expect to be the case for most practical problems, our algorithms perform in polynomial time in the number of words available for generation.",
"cite_spans": [
{
"start": 839,
"end": 852,
"text": "(Knight, 1999",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Theorem 1",
"sec_num": null
},
{
"text": "In this section, we present results concerning the performance of our algorithms on a wordordering task. This task can be easily defined as follows: from a bag of words originating from some sentence, reconstruct the original sentence as faithfully as possible. In our case, from an original sentence such as \"the gifts are donated by american companies\", we create the IDL-expression 3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation of IDL-NGLM Algorithms",
"sec_num": "4"
},
{
"text": "\u00a5 4 \u00a2 \u00a3 F E H G \u00a7 I $ % R & E H T \u00a9 d V 7 @ 9 % E I c d 1 \u00a9 X c V 5 4 W P 9 7 R I S T % \" 3 & C @ \u00a9 9 Q c I \u00a9 9 4 I & Q S R X 4 9 % 7 \" \u00a2 3 6 5 5 \u00a5 4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation of IDL-NGLM Algorithms",
"sec_num": "4"
},
{
"text": ", from which some algorithm realizes a sentence such as \"donated by the american companies are gifts\". Note the natural way we represent in an IDL-expression beginning and end of sentence constraints, using the \u00a2 operator. Since this is generation from bag-of-words, the task is known to be at the high-complexity extreme of the run-time behavior of our algorithms. As such, we consider it a good test for the ability of our algorithms to scale up to increasingly complex inputs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation of IDL-NGLM Algorithms",
"sec_num": "4"
},
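{
"text": "At the small end of this range, the brute-force approach against which our algorithms are measured can be written directly: score every permutation of the bag under the language model and keep the best. The sketch below uses an invented bigram scorer rather than a real trigram model; the point is the n! growth of the candidate set.

```python
import itertools
import math

# Toy bigram scores with invented values; unseen bigrams get a large penalty.
LOGPROB = {('<s>', 'the'): math.log(0.6), ('the', 'gifts'): math.log(0.5),
           ('gifts', 'are'): math.log(0.4), ('are', '</s>'): math.log(0.3)}

def score(seq):
    words = ('<s>',) + seq + ('</s>',)
    return sum(LOGPROB.get(bigram, math.log(1e-6))
               for bigram in zip(words, words[1:]))

bag = ['are', 'gifts', 'the']
# n! candidate orderings: feasible for bags of 3-7 words, hopeless at 25.
best = max(itertools.permutations(bag), key=score)
print(best)  # ('the', 'gifts', 'are')
```
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation of IDL-NGLM Algorithms",
"sec_num": "4"
},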
{
"text": "We use a state-of-the-art, publicly available toolkit 2 to train a trigram language model using Kneser-Ney smoothing, on 10 million sentences (170 million words) from the Wall Street Journal (WSJ), lower case and no final punctuation. The test data is also lower case (such that upper-case words cannot be hypothesized as first words), with final punctuation removed (such that periods cannot be hypothesized as final words), and consists of 2000 unseen WSJ sentences of length 3-7, and 2000 unseen WSJ sentences of length 10-25.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation of IDL-NGLM Algorithms",
"sec_num": "4"
},
{
"text": "The algorithms we tested in this experiments were the ones presented in Section 3.2, plus two baseline algorithms. The first baseline algorithm, L, uses an inverse-lexicographic order for the bag items as its output, in order to get the word the on sentence initial position. The second baseline algorithm, G, is a greedy algorithm that realizes sentences by maximizing the probability of joining any two word sequences until only one sequence is left.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation of IDL-NGLM Algorithms",
"sec_num": "4"
},
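{
"text": "A sketch of the greedy baseline G follows; the join-probability table is invented, and a real implementation would query the trigram model at each junction rather than a fixed dictionary.

```python
# Sketch of the greedy baseline G with invented join probabilities:
# repeatedly join the pair of sequences whose junction bigram has the
# highest probability, until a single sequence remains.
JOIN = {('the', 'gifts'): 0.5, ('gifts', 'are'): 0.4, ('are', 'the'): 0.05}

def greedy_join(bag):
    seqs = [(w,) for w in bag]
    while len(seqs) > 1:
        best = None  # (probability, left index, right index)
        for i, a in enumerate(seqs):
            for j, b in enumerate(seqs):
                if i == j:
                    continue
                p = JOIN.get((a[-1], b[0]), 1e-6)  # prob at the junction
                if best is None or p > best[0]:
                    best = (p, i, j)
        _, i, j = best
        joined = seqs[i] + seqs[j]
        seqs = [s for k, s in enumerate(seqs) if k not in (i, j)] + [joined]
    return seqs[0]

print(greedy_join(['gifts', 'are', 'the']))  # ('the', 'gifts', 'are')
```

Greedy joining commits early to locally good junctions, which is why G makes many more search errors than the exact algorithms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation of IDL-NGLM Algorithms",
"sec_num": "4"
},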
{
"text": "For the A0 algorithm, an admissible cost is computed for each state \u00a1 in a weighted finite-state automaton, as the sum (over all unused words) of the minimum language model cost (i.e., maximum probability) of each unused word when conditioning over all sequences of two words available at that particular state for future conditioning (see Equation 2, with",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation of IDL-NGLM Algorithms",
"sec_num": "4"
},
{
"text": "). These estimates are also used by the beam algorithm for deciding which IDL-graph nodes are not unfolded. We also test a greedy version of the A0 algorithm, denoted A0 , which considers for unfolding only the nodes extracted from the priority queue which already unfolded a path of length greater than or equal to the maximum length already unfolded minus (in this notation, the A0 algorithm would be denoted A0 \u00a1 ). For the beam algorithms, we use the notation B\u00a2 to specify a probabilistic beam of size \u00a2 , i.e., an algorithm that beams out the states reachable with probability less than the current maximum probability times \u00a2 . Our first batch of experiments concerns bags-ofwords of size 3-7, for which exhaustive search is possible. In Table 2 , we present the results on the word-ordering task achieved by various algorithms. We evaluate accuracy performance using two automatic metrics: an identity metric, ID, which measures the percent of sentences recreated exactly, and BLEU (Papineni et al., 2002) , which gives the geometric average of the number of uni-, bi-, tri-, and four-grams recreated exactly. We evaluate the search performance by the percent of Search Errors made by our algorithms, as well as a percent figure of Estimated Search Errors, computed as the percent of searches that result in a string with a lower probability than the probability of the original sentence. To measure the impact of using IDL-expressions for this task, we also measure the percent of unfolding of an IDL graph with respect to a full unfolding. We report speed results as the average number of seconds per bag-of-words, when using a 3.0GHz CPU machine under a Linux OS.",
"cite_spans": [
{
"start": 990,
"end": 1013,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 745,
"end": 752,
"text": "Table 2",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Evaluation of IDL-NGLM Algorithms",
"sec_num": "4"
},
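{
"text": "Two of the reported figures are straightforward percentages over the test set; a minimal sketch with made-up data (the function names and inputs are assumptions for illustration):

```python
# The ID metric and the Estimated Search Error rate are both simple
# percentages; the data below is made up to show the computation.
def id_metric(outputs, references):
    # Percentage of sentences recreated exactly.
    exact = sum(o == r for o, r in zip(outputs, references))
    return 100.0 * exact / len(references)

def estimated_search_errors(output_scores, reference_scores):
    # Percentage of outputs whose model score falls below the score of
    # the original sentence (a certain search error).
    worse = sum(o < r for o, r in zip(output_scores, reference_scores))
    return 100.0 * worse / len(reference_scores)

print(id_metric(['a b c', 'b a c'], ['a b c', 'a b c']))    # 50.0
print(estimated_search_errors([-3.0, -5.0], [-4.0, -4.0]))  # 50.0
```
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation of IDL-NGLM Algorithms",
"sec_num": "4"
},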
{
"text": "The first notable result in Table 2 : Bags-of-words of size 3-7: accuracy (ID, BLEU), Search Errors (and Estimated Search Errors), space savings (Unfold), and speed results.",
"cite_spans": [],
"ref_spans": [
{
"start": 28,
"end": 35,
"text": "Table 2",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Evaluation of IDL-NGLM Algorithms",
"sec_num": "4"
},
{
"text": "achieved by the A0 algorithm under the IDL representation. At no cost in accuracy, it unfolds only 12% of the edges, and achieves a 7 times speedup, compared to the BFS algorithm. The savings achieved by not unfolding are especially important, since the exponential complexity of the problem is hidden by the IDL representation via the folding mechanism of the \u00a3 operator. The algorithms that find sub-optimal solutions also perform well. While maintaining high accuracy, the A0 \u00a2 and B\u00a5 \u00a4 \u00a3 \u00a2 algorithms unfold only about 5-7% of the edges, at 12-14 times speed-up.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation of IDL-NGLM Algorithms",
"sec_num": "4"
},
{
"text": "Our second batch of experiments concerns bagof-words of size 10-25, for which exhaustive search is no longer possible (Table 3) . Not only exhaustive search, but also full A0 search is too expensive in terms of memory (we were limited to 2GiB of RAM for our experiments) and speed. Only the greedy versions A0 and A0 \u00a2 , and the beam search using tight probability beams (0.2-0.1) scale up to these bag sizes. Because we no longer have access to the string of maximum probability, we report only the percent of Estimated Search Errors. Note that, in terms of accuracy, we get around 20% Estimated Search Errors for the best performing algorithms (A0 \u00a2 and B\u00a5 \u00a4 \u00a3\u00a8 ) , which means that 80% of the time the algorithms are able to find sentences of equal or better probability than the original sentences.",
"cite_spans": [],
"ref_spans": [
{
"start": 118,
"end": 127,
"text": "(Table 3)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation of IDL-NGLM Algorithms",
"sec_num": "4"
},
{
"text": "In this paper, we advocate that IDL expressions can provide an adequate framework for develop- 19.9 36.7 Table 3 : Bags-of-words of size 10-25: accuracy (ID, BLEU), Estimated Search Errors, and speed results.",
"cite_spans": [],
"ref_spans": [
{
"start": 105,
"end": 112,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "ing text-to-text generation capabilities. Our contribution concerns a new generation mechanism that implements intersection between an IDL expression and a probabilistic language model. The IDL formalism is ideally suited for our approach, due to its efficient representation and, as we show in this paper, efficient algorithms for intersecting, scoring, and ranking sentence realizations using probabilistic language models. We present theoretical results concerning the correctness and efficiency of the proposed algorithms, and also present empirical results that show that our algorithms scale up to handling IDL-expressions of high complexity. Real-world text-to-text generation tasks, such as headline generation and machine translation, are likely to be handled graciously in this framework, as the complexity of IDL-expressions for these tasks tends to be lower than the complexity of the IDL-expressions we worked with in our experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "Actually, these are multisets, as we treat multiply-occurring labels as separate items.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.speech.sri.com/projects/srilm/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was supported by DARPA-ITO grant NN66001-00-1-9814.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgment",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Using TAG, a tree model, and a language model for generation",
"authors": [
{
"first": "Srinivas",
"middle": [],
"last": "Bangalore",
"suffix": ""
},
{
"first": "Owen",
"middle": [],
"last": "Rambow",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 1st International Natural Language Generation Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Srinivas Bangalore and Owen Rambow. 2000. Using TAG, a tree model, and a language model for genera- tion. In Proceedings of the 1st International Natural Language Generation Conference.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Information Fusion for Multidocument Summarization: Paraphrasing and Generation",
"authors": [
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Regina Barzilay. 2003. Information Fusion for Multi- document Summarization: Paraphrasing and Genera- tion. Ph.D. thesis, Columbia University.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The mathematics of statistical machine translation: Parameter estimation",
"authors": [
{
"first": "F",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"A Della"
],
"last": "Brown",
"suffix": ""
},
{
"first": "Vincent",
"middle": [
"J"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mercer",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "263--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estima- tion. Computational Linguistics, 19(2):263-311.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Introduction to Algorithms",
"authors": [
{
"first": "H",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "Charles",
"middle": [
"E"
],
"last": "Cormen",
"suffix": ""
},
{
"first": "Ronald",
"middle": [
"L"
],
"last": "Leiserson",
"suffix": ""
},
{
"first": "Clifford",
"middle": [],
"last": "Rivest",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Stein",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. 2001. Introduction to Al- gorithms. The MIT Press and McGraw-Hill. Second Edition.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "An overview of Amalgam: A machine-learned generation module",
"authors": [
{
"first": "Simon",
"middle": [],
"last": "Corston",
"suffix": ""
},
{
"first": "-",
"middle": [],
"last": "Oliver",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Gamon",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"K"
],
"last": "Ringger",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Moore",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the International Natural Language Generation Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simon Corston-Oliver, Michael Gamon, Eric K. Ringger, and Robert Moore. 2002. An overview of Amalgam: A machine-learned generation module. In Proceed- ings of the International Natural Language Genera- tion Conference.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "FUF User manual -version 5.0",
"authors": [
{
"first": "Michael",
"middle": [
"Elhadad"
],
"last": "",
"suffix": ""
}
],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Elhadad. 1991. FUF User manual -version 5.0. Technical Report CUCS-038-91, Department of Computer Science, Columbia University.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Introduction to automata theory, languages, and computation",
"authors": [
{
"first": "John",
"middle": [
"E"
],
"last": "Hopcroft",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [
"D"
],
"last": "Ullman",
"suffix": ""
}
],
"year": 1979,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John E. Hopcroft and Jeffrey D. Ullman. 1979. Introduc- tion to automata theory, languages, and computation. Addison-Wesley.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Machine transliteration",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Graehl",
"suffix": ""
}
],
"year": 1998,
"venue": "Computational Linguistics",
"volume": "24",
"issue": "4",
"pages": "599--612",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Knight and Jonathan Graehl. 1998. Machine transliteration. Computational Linguistics, 24(4):599- 612.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Two level, many-path generation",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Vasileios",
"middle": [],
"last": "Hatzivassiloglou",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the Association of Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Knight and Vasileios Hatzivassiloglou. 1995. Two level, many-path generation. In Proceedings of the As- sociation of Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Decoding complexity in wordreplacement translation models",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 1999,
"venue": "Computational Linguistics",
"volume": "25",
"issue": "4",
"pages": "607--615",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Knight. 1999. Decoding complexity in word- replacement translation models. Computational Lin- guistics, 25(4):607-615.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A foundation for generalpurpose natural language generation: sentence realization using probabilistic models of language",
"authors": [
{
"first": "Irene",
"middle": [],
"last": "Langkilde-Geary",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Irene Langkilde-Geary. 2002. A foundation for general- purpose natural language generation: sentence real- ization using probabilistic models of language. Ph.D. thesis, University of Southern California.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Text Generation and Systemic-Functional Linguistic",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Matthiessen",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Bateman",
"suffix": ""
}
],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christian Matthiessen and John Bateman. 1991. Text Generation and Systemic-Functional Linguistic. Pin- ter Publishers, London.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Weighted finite-state transducers in speech recognition",
"authors": [
{
"first": "Mehryar",
"middle": [],
"last": "Mohri",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Riley",
"suffix": ""
}
],
"year": 2002,
"venue": "Computer Speech and Language",
"volume": "16",
"issue": "1",
"pages": "69--88",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mehryar Mohri, Fernando Pereira, and Michael Ri- ley. 2002. Weighted finite-state transducers in speech recognition. Computer Speech and Language, 16(1):69-88.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "IDLexpressions: a formalism for representing and parsing finite languages in natural language processing",
"authors": [
{
"first": "Jan",
"middle": [],
"last": "Mark",
"suffix": ""
},
{
"first": "Giorgio",
"middle": [],
"last": "Nederhof",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Satta",
"suffix": ""
}
],
"year": 2004,
"venue": "Journal of Artificial Intelligence Research",
"volume": "21",
"issue": "",
"pages": "287--317",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark-Jan Nederhof and Giorgio Satta. 2004. IDL- expressions: a formalism for representing and parsing finite languages in natural language processing. Jour- nal of Artificial Intelligence Research, 21:287-317.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "BLEU: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the Association for Computational Linguistics (ACL-2002)",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. BLEU: a method for automatic evalu- ation of machine translation. In Proceedings of the As- sociation for Computational Linguistics (ACL-2002), pages 311-318, Philadelphia, PA, July 7-12.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Artificial Intelligence. A Modern Approach",
"authors": [
{
"first": "Stuart",
"middle": [],
"last": "Russell",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Norvig",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stuart Russell and Peter Norvig. 1995. Artificial Intelli- gence. A Modern Approach. Prentice Hall, Englewood Cliffs, New Jersey.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "have been proposed by Nederhof & Satta (2004) (henceforth N&S) as a representation for finite languages, and are created from strings using four operators: concatenation (\u00a2 ), interleave ( \u00a3 ), disjunction (\u00a4 ), and lock ( \u00a5 ). The semantics of IDL-expressions is given in terms of sets of strings.",
"uris": null
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"text": "such, an FSA representation must explicitly enumerate all possible interleavings, which are implicitly captured in an IDL representation. This correspondence between implicit and explicit interleavings is naturally handled by the notion of a cut of an IDL-graph of vertices that can be reached simultaneously when traversing $ \" from the initial node to the final node, following the branches as prescribed by the encoded ,",
"uris": null
},
"FIGREF3": {
"type_str": "figure",
"num": null,
"text": "Cuts of the IDL-graph inFigure 1 (a-d). A non-cut is presented in (e).",
"uris": null
},
"FIGREF5": {
"type_str": "figure",
"num": null,
"text": "name of the cut to which it corresponds. A transi-The finite-state acceptor corresponding to the IDL-graph inFigure 1.",
"uris": null
},
"FIGREF7": {
"type_str": "figure",
"num": null,
"text": "Pseudo-code for intersecting an IDL-graph with an n-gram language model ! using incremental unfolding and breadth-first search.cording to the language model. If a final state ofis not yet reached, the while loop is closed by making the",
"uris": null
},
"TABREF1": {
"type_str": "table",
"text": "Comparison of the present proposal with current NLG systems.",
"num": null,
"html": null,
"content": "<table/>"
},
"TABREF10": {
"type_str": "table",
"text": "is the savings",
"num": null,
"html": null,
"content": "<table><tr><td colspan=\"2\">ALG</td><td>ID</td><td>BLEU</td><td>Search</td><td>Unfold</td><td>Speed</td></tr><tr><td/><td/><td>(%)</td><td/><td>Errors (%)</td><td>(%)</td><td>(sec./bag)</td></tr><tr><td>L</td><td/><td>2.5</td><td colspan=\"3\">9.5 97.2 (95.8) N/A</td><td>.000</td></tr><tr><td colspan=\"2\">G</td><td colspan=\"4\">30.9 51.0 67.5 (57.6) N/A</td><td>.000</td></tr><tr><td colspan=\"4\">BFS 67.1 79.2</td><td>0.0 (0.0)</td><td colspan=\"2\">100.0 .072</td></tr><tr><td colspan=\"2\">A0</td><td colspan=\"2\">67.1 79.2</td><td>0.0 (0.0)</td><td>12.0</td><td>.010</td></tr><tr><td colspan=\"2\">A0</td><td colspan=\"3\">60.5 74.8 21.1 (11.9)</td><td>3.2</td><td>.004</td></tr><tr><td colspan=\"2\">A0 \u00a2</td><td colspan=\"2\">64.3 77.2</td><td>8.5 (4.0)</td><td>5.3</td><td>.005</td></tr><tr><td>B\u00a5 B\u00a5</td><td>\u00a2 \u00a4 \u00a3 \u00a4 \u00a3\u00a8</td><td colspan=\"2\">65.0 78.0 6 6.6 78.8</td><td>9.2 (5.0) 3.2 (1.7)</td><td>7.2 13.2</td><td>.006 .011</td></tr></table>"
}
}
}
}