{
"paper_id": "H92-1029",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:28:18.677477Z"
},
"title": "An Analogical Parser for Restricted Domains",
"authors": [
{
"first": "Donald",
"middle": [],
"last": "Hindle",
"suffix": "",
"affiliation": {
"laboratory": "AT&T Bell Labs",
"institution": "",
"location": {
"addrLine": "600 Mountain Ave. Murray Hill",
"postCode": "07974",
"region": "NJ"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This note describes the current development of an approach to parsing designed to overcome some of the problems of existing parsers, particularly with respect to their utility a~ language models. The parser combines lexical and grammatical constraints into a uniform grammatical representation, is readily trainable (since the parser output is indistinguishable from the grammar input), and uses analogy to guess about the likelihood of constructions outside the grammar.",
"pdf_parse": {
"paper_id": "H92-1029",
"_pdf_hash": "",
"abstract": [
{
"text": "This note describes the current development of an approach to parsing designed to overcome some of the problems of existing parsers, particularly with respect to their utility a~ language models. The parser combines lexical and grammatical constraints into a uniform grammatical representation, is readily trainable (since the parser output is indistinguishable from the grammar input), and uses analogy to guess about the likelihood of constructions outside the grammar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "A parser is a device that provides a description of the syntactic phrases that make up a sentence. For a speech understanding task such as ATIS, the parser has two roles. First, it should provide a description of the phrases in a sentence so these phrases can be interpreted by a subsequent semantic processor. The second function is to provide a language model -a model of the likelihood of a sentence -to constrain the speech recognition task. It is unfortunately the case that existing parsers developed for text fulfill neither of these roles very well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "THE PROBLEM WITH PARSERS",
"sec_num": "1."
},
{
"text": "It is useful to begin by reviewing some of the reasons for this failure. We can describe the situation in terms of three general problems that parsers face: the Lexicality Problem, the Tail Problem, and the Interpolation Problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "THE PROBLEM WITH PARSERS",
"sec_num": "1."
},
{
"text": "The most familiar way to think of a parser is as a device that provides a description of a sentence given some grammar. Consider for example a context free grammar, where nonterminal categories are rewritten as terminals or nonterminals, and terminals are rewritten as words. There typically is no way to express the constraints among individual words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Lexicality Problem",
"sec_num": null
},
{
"text": "Yet it is clear that much of our knowledge of language has to do with what words go together. [2] Merely knowing the grammatical rules of the language is not enough to predict which words can go together. So for example, general English grammatical rules admit premodification of a noun by another noun or by an adjective. It is possible to describe broad semantic constraints on such modification; so for example, early morning is a case of a time-adjective modifying a time-period, and morning flight is a time-period modifying an event. Already we are have an explosion of categories in the grammar, since we are talking not about nouns and adjectives, but about a fairly detailed subclassification of semantic types of nouns and adjectives.",
"cite_spans": [
{
"start": 94,
"end": 97,
"text": "[2]",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Lexicality Problem",
"sec_num": null
},
{
"text": "But the problem is worse than this. As Table 1 shows, even this rough characterization of semantic constraints on modification is insufficient, since the adjective-noun combination early night does not occur. This dependency of syntactic combinability on particular lexicM items is repeated across the grammar and lexicon.",
"cite_spans": [],
"ref_spans": [
{
"start": 39,
"end": 46,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Lexicality Problem",
"sec_num": null
},
{
"text": "The lexicality problem has two aspects. One is representing the information and the other is acquiring it. There has recently been increasing work on both aspects of the problem. The approach described in this paper is but one of many possible approaches, designed with an emphasis on facilitating efficient parsing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Lexicality Problem",
"sec_num": null
},
{
"text": "Most combinations of words never occur in a corpus, but many of these combinations are possible, but simply have not been observed yet. For a grammar (lexicalized or not) the problem presented by this tail of rare events is unavoidable. The grammar will always undercover the language. The solution to the tail problem involves training from text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Tail Problem",
"sec_num": null
},
{
"text": "While it is always useful to push a grammar out the tail, it is inevitable that a grammar will not cover everything encounted, and that a parser will have to deal with unforeseen constructions. This is of course the typical problem in language modeling, and it raises the problem of estimating the probabilities of structures that have not been seen -the Interpolation Problem. The rules of the grammar must be extendible to new constructions. In this parser the approach is through analogy, or memorybased reasoning. [ ",
"cite_spans": [
{
"start": 518,
"end": 519,
"text": "[",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Interpolation Problem",
"sec_num": null
},
{
"text": "Trees in the grammar are either terminal or nonterminal. Terminal trees are a pair of a syntactic feature specification and a word. Non-terminals are a pair of trees, with a specification of which tree is head -thus, this is a binary dependency grammar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Grammar",
"sec_num": "2.1."
},
{
"text": "terminal ~ (features word)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "t ~ terminal I (1 t t) I (2 t t)",
"sec_num": null
},
{
"text": "The category of a non-terminal is the category of its head.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "t ~ terminal I (1 t t) I (2 t t)",
"sec_num": null
},
{
"text": "The grammar for the parser is expressed as a set of trees that have lexically specified terminals, each with a frequency count. For example, in the ATIS grammar, the tree corresponding to the phrase book a flight is (1 (V \"book\") (2 (XI \"a\")(N \"flight\")))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "t ~ terminal I (1 t t) I (2 t t)",
"sec_num": null
},
{
"text": "It occurs 6 times. The grammar consists of a large set of such partial trees, which encode both the grammatical and the lexical constraints of the language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "t ~ terminal I (1 t t) I (2 t t)",
"sec_num": null
},
{
"text": "Following are examples of two trees that might be in the grammar for the parser.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "t ~ terminal I (1 t t) I (2 t t)",
"sec_num": null
},
{
"text": "(V 1 (V 1 (V 0 aIVZ) (XPII_O 0 NE)) (N 1 (N 2 (Xl o A) (N 0 LIST)) (P 1 (P o OF) (N2 2 (XQ 0 ALL) (N2 0 AIRFARES))))) (P 1 (P 0 FOR) (~2 2 (N 0 ROUND-TRIP) (N2 0 TICKETS))) 2.2. Parsing",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "t ~ terminal I (1 t t) I (2 t t)",
"sec_num": null
},
{
"text": "The basic parser operation is to combine subtrees by matching existing trees in the grammar. Consider, for example, parsing the fragment give me a list.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "t ~ terminal I (1 t t) I (2 t t)",
"sec_num": null
},
{
"text": "Initially, the parser focuses on the first word in the sentence, and tries to combine it with preceding and following nodes. Since give exists in the grammar as head of a tree with me as second element, the match is straightforward, and the node give me is built, directly copying the grammar. Nothing in the grammar leads to combining give me and a, so the parser attention moves forward, and a list is built, again, directly from the grammar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "t ~ terminal I (1 t t) I (2 t t)",
"sec_num": null
},
{
"text": "At this point, the parser will is looking at the fragments give me (with head give) and a list (with head list), and is faced again with the question: can these pieces be combined. Here the answer is not so obvious.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "t ~ terminal I (1 t t) I (2 t t)",
"sec_num": null
},
{
"text": "If we could guarantee that all trees that the parser must construct will exist in its grammar of trees, then the parsing procedure would be as described in the preceding section. Of course, we don't predict in advance all trees the parser might see. Rather, the parser has a grammar representing a subset of the trees it might see along with a measure of similarity between trees. When the parser finds no exact way to combine two nodes to match a tree that exists in the grammar, it looks for similar trees that combine. In particular, it looks at each of the two potential combined nodes in turn and tries to find a similar tree that does combine with the observed tree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing by analogy.",
"sec_num": "2.3."
},
{
"text": "So in our example, although give me a list does not occur,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing by analogy.",
"sec_num": "2.3."
},
{
"text": "give me occurs with a number of similar trees, including: One of these trees is selected to be the analog of a list, thus allowing give me to be combined as head with a list.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing by analogy.",
"sec_num": "2.3."
},
{
"text": "The parser uses a heuristically defined measure of similarity that depends on: category, root, type , specifier, and distribution. Obviously, much depends on the similarity metric used. The aim here is to combine our knowledge of language, to determine what in general contributes to the similarity of words, with patterns trained from the text. The details of the current similarity metric are largely arbitrary, and ways of training it are being investigated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing by analogy.",
"sec_num": "2.3."
},
{
"text": "Notice that this approach finds the closest exemplar, not average of behavior. (cf. [7, 81) 2.",
"cite_spans": [
{
"start": 84,
"end": 87,
"text": "[7,",
"ref_id": "BIBREF6"
},
{
"start": 88,
"end": 91,
"text": "81)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing by analogy.",
"sec_num": "2.3."
},
{
"text": "For words which are ambiguous among more than one possible terminal (e.g. to can be a preposition or an infinitival marker), the parser must assign a terminal tree. In this parser, the disambiguation process is part of the parsing process. That is, when the parser is focusing on the word to it selects the tree which best combines to with a neighboring node. If that tree has to as, for example, head of a prepositional phrase, then to is a preposition, and similarly if to is an infinitival marker.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Disambiguation",
"sec_num": "4."
},
{
"text": "Of course, if a word is not attached to any other constituent in the course of parsing, this method will not apply. Disambiguation is still necessary, to allow subsequent processing. In such cases, the parser reverts to its bigram model to make the best guess about the proper tree for a word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Disambiguation",
"sec_num": "4."
},
{
"text": "Developing a grammar for this parser means collecting a set of trees. There are 4 distinct sources of grammar trees.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DEVELOPING A GRAMMAR",
"sec_num": "3."
},
{
"text": "General English. The base set of trees for the parser is a set of general trees for the language as a whole, independent of the domain. These include standard sentence patterns as well as trees for the regular expressions of time, place, quantity, etc. For the current parser, these trees were written by hand (though in this set will over time be developed partly by hand and partly from text). This set of trees is independent of the domain, and available for any application. It forms part of a general model for English.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DEVELOPING A GRAMMAR",
"sec_num": "3."
},
{
"text": "The remaining three parts of the tree database are all specific to the particular restricted domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DEVELOPING A GRAMMAR",
"sec_num": "3."
},
{
"text": "Domain Database Specific. Trees specific to the subdomain, derived semi-automatically from the underlying database. Included are airline names, flight names and codes, aircraft names, etc. This can also include a set of typical sentences for the domain. In a sense, this set of trees provides information about the content of the messages in the domain, the things one is likely to talk about.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DEVELOPING A GRAMMAR",
"sec_num": "3."
},
{
"text": "Parsed Training Sentences. hand parsed text from the training sentences. These trees are fairly easy to produce through an incremental process of: a) parse a set of sentences, b) hand correct them, c) remake the parser, and d) repeat. About a thousand words an hour can be analyzed this way. (Thus for the ATIS task, it is easy to hand parse the entire training set, though this was not done for the experiment reported here.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DEVELOPING A GRAMMAR",
"sec_num": "3."
},
{
"text": "Unsupervised Parsed Text. also from the training sentences, but parsed by the existing parser and left uncorrected. (Note: given an existing database of parsed sentences, these could transformed into trees for the parser grammar.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DEVELOPING A GRAMMAR",
"sec_num": "3."
},
{
"text": "Obviously, one aim of this design is to make acquisition of the grammar easy. Indeed, the parser design is not English-specific, and in fact a Spanish version of the parser (under an earlier but related design) is currently being updated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DEVELOPING A GRAMMAR",
"sec_num": "3."
},
{
"text": "For The ATIS task, a vocabulary was defined consisting of 1842 distinct terminal symbols (a superset of the February 91 vocabulary, enhanced by adding words to regularize the grammar, and by distinguishing words with features; e.g. \"travel\" as a verb is a different terminal from \"travel\" as a noun). A grammar was derived, based on 1) a relatively small general English model including trees for general sentence structure as well as trees for dates, times, numbers, money, and cities, and 2) an ATIS specific set of trees covering types of objects in the database (aircraft, airports, airlines, flight info, ground transportation) and 3) sentences in the training set. In this experiment, approximately 10% of the grammar are language general, 10~ are database specific, 50% are supervised parsed trees and 30~ are unsupervised.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "THE ATIS EXPERIMENT",
"sec_num": "4."
},
{
"text": "The weighting of the various sources of grammar trees has not arisen here -all trees are weighted equally. But in the general case, where there is a pre-existing large general grammar, and a large corpus for unsupervised training, the weighting of grammar trees will become an issue.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "THE ATIS EXPERIMENT",
"sec_num": "4."
},
{
"text": "Given this grammar consisting of 14,000 trees, derived as described above, the grammar perplexity is 15.9 on the 138 February 91 test sentences. This compares to a perplexity of 18.9 for the bigram model (where bigrams are terminals). The grammar trees derived from the unsupervised parsing of the training sentences improve the model slightly (from 16.4 to 15.9 perplexity).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "THE ATIS EXPERIMENT",
"sec_num": "4."
},
{
"text": "The parse of a sentence consists of a sequence of N nodes. By convention, the first and last nodes in the sequence (nl and nN) are instances of the distinguished sentence boundary node. If all the words in a sentence are incorporated by the parser under a single root node, then the output will consist of a sequence of three nodes, of which the middle one covers the words of the sentence. But remember, the parser may emit a sequence of fragments; in the limiting case, the parser will emit one node for each word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SENTENCE PROBABILITY",
"sec_num": "5."
},
{
"text": "The tree grammar, consists of a set of tree specifications.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The tree grammar",
"sec_num": "5.1."
},
{
"text": "For each tree ti, the specification records: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The tree grammar",
"sec_num": "5.1."
},
{
"text": "In the following, rd, ld, re, and lc mean right daughter, left daughter, right corner and left corner respectively. The probability of a sentence s consisting of a sequence of n nodes (starting with the sentence boundary node, which we call nl) is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "probability calculation",
"sec_num": "5.2."
},
{
"text": "Pr(s) = prod_{i=1}^{N-1} Pr(bigram(rc(ni), lc(ni+1))) \u2022 Pr(not_attached(ni)) \u2022 Pr(ni+1 | lc(ni+1))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "probability calculation",
"sec_num": "5.2."
},
{
"text": "In this formula, the bigram probabilities are calculated on the terminals (word plus grammatical features), interpolating using feature similarity. The probability of a node n given its left corner is: Pr(n | lc(n)) = Pr(ld(n) | lc(ld(n))) \u2022 (1.0 - Pr(not_attached(ld(n)))) \u2022 Pr(tree(n) | ld(n)) \u2022 Pr(rd(n) | tree(n), ld(n))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "probability calculation",
"sec_num": "5.2."
},
{
"text": "In this formula, the first term is the recursion, which descends the left edge of the node to the left corner.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": ")) \u2022 Pr(tree(n) lld(n)) \u2022 Pr(rd(n) ltree(n), td(n))",
"sec_num": null
},
{
"text": "At each step in the descent, the second term in the formula takes account of the probability that the left daughter will be attached to something.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": ")) \u2022 Pr(tree(n) lld(n)) \u2022 Pr(rd(n) ltree(n), td(n))",
"sec_num": null
},
{
"text": "The third term is the probability that the tree tree(n) will be the parent given that node le(n) is the left daughter of a node.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": ")) \u2022 Pr(tree(n) lld(n)) \u2022 Pr(rd(n) ltree(n), td(n))",
"sec_num": null
},
{
"text": "The fourth term is the probability that node rd(n) will be the right daughter given that ld(n) is the left daughter and tree(n) is the parent tree corresponding to node n.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": ")) \u2022 Pr(tree(n) lld(n)) \u2022 Pr(rd(n) ltree(n), td(n))",
"sec_num": null
},
{
"text": "probability of tree(n) given ld(n) To find the Pr(tree(n)[ld(n)), we consider the two cases, depend-ing on whether there is a substitution for the left_tree of n:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": ")) \u2022 Pr(tree(n) lld(n)) \u2022 Pr(rd(n) ltree(n), td(n))",
"sec_num": null
},
{
"text": "Case: no left_substltution. If the left_tree(tree(n)) is equal to the tree(ld(n)) (i.e. if there is no substitution), then",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": ")) \u2022 Pr(tree(n) lld(n)) \u2022 Pr(rd(n) ltree(n), td(n))",
"sec_num": null
},
{
"text": "Pr(tree(n) l ld(n)) = (1.0 -prob_left_substitution(id(n))) Pr(tree(n) I ld(n), no.left_substitution)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": ")) \u2022 Pr(tree(n) lld(n)) \u2022 Pr(rd(n) ltree(n), td(n))",
"sec_num": null
},
{
"text": "The prob_left_substitution(ld(n)) is the probability that given the node ld(n) whose tree is tt, that node will be the left daughter in a node whose left_tree is is not the same as tt. That is, tt will realize the left_tree(n). We estimate this probability on the basis of the count(t 0 and the left_count(tt).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": ")) \u2022 Pr(tree(n) lld(n)) \u2022 Pr(rd(n) ltree(n), td(n))",
"sec_num": null
},
{
"text": "When there is no left_substitution, the probability of the parent tree is estimated directly from the counts of the trees that tree(id(n)) can be left_tree of:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": ")) \u2022 Pr(tree(n) lld(n)) \u2022 Pr(rd(n) ltree(n), td(n))",
"sec_num": null
},
{
"text": "Pr(tree(n) I Id(n), no_left_substitution) = eount( tree( n ) ) /le ft_count( tree( ld( n ) ) )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": ")) \u2022 Pr(tree(n) lld(n)) \u2022 Pr(rd(n) ltree(n), td(n))",
"sec_num": null
},
{
"text": "Case: left_substitution. If there is a substitution, then",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": ")) \u2022 Pr(tree(n) lld(n)) \u2022 Pr(rd(n) ltree(n), td(n))",
"sec_num": null
},
{
"text": "Pr(tree(n) l ld(n)) = prob_le ft_substitution( ld(n ) ) Pr(tree(n) I tree(td(n) ), left_substitution)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": ")) \u2022 Pr(tree(n) lld(n)) \u2022 Pr(rd(n) ltree(n), td(n))",
"sec_num": null
},
{
"text": "To estimate the Pr(tree(n) ] tree(id(n))) in case 2 (where we know there is a substitution for the left_ptree(n), we reason as follows. For each tree txs,l, that might substitute for tree(ld(n)), it will substitute only if tXlelt is observed as a left member of a tree that tree(leftdaughter(n)) is not observed with, and for txright, tXleyt is the best substitution. The total of such trees is called lsubs(t).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": ")) \u2022 Pr(tree(n) lld(n)) \u2022 Pr(rd(n) ltree(n), td(n))",
"sec_num": null
},
{
"text": "By this account,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": ")) \u2022 Pr(tree(n) lld(n)) \u2022 Pr(rd(n) ltree(n), td(n))",
"sec_num": null
},
{
"text": "The probability of the right daughter, given the left daughter and the tree similarly takes into account the probabilities of substitution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pr(tree(n) ] tree(id(n) ), left_substitution) = eount( tree( n ) ) / lsubs( tree( ld( n ) ) ).",
"sec_num": null
},
{
"text": "While current results for this parsing model look promising, there are several directions of further exploration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FURTHER WORK",
"sec_num": "6."
},
{
"text": "Integration in Speech Recognition. There are two obvious ways of incorporating this parser into the speech recognition task. First, it can be used to select among a set of candidate sentences proposed by a recognizer. The second, more interesting, approach is to embed the parser in the recognition process. Given the parser's localization of information and its deterministic beginningto-end processing, it can naturally be used to find a locally (where the domain of locality is adjacent trees) optimal path through an (appropriately sparse) lattice.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FURTHER WORK",
"sec_num": "6."
},
{
"text": "Development of Further Processing. This parser rests on the assumption, shared in a variety of recent work from quite different perspectives [1, 3, 4] , that a level of underspeeified syntactic description is efficiently obtainable and is useful. The current work supports a particular view of what partial syntactic descriptions are obtainable. It remains to show that the further processing components can be constructed to make these pieces useful.",
"cite_spans": [
{
"start": 141,
"end": 144,
"text": "[1,",
"ref_id": null
},
{
"start": 145,
"end": 147,
"text": "3,",
"ref_id": "BIBREF2"
},
{
"start": 148,
"end": 150,
"text": "4]",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "FURTHER WORK",
"sec_num": "6."
},
{
"text": "Implementation Details. A number of decisions in the implementation of the current parser are arbitrary, and further development demands exploring the optimal design. For example, we need to explore what the similarity function should look like, and what function should be used for comparing potential attachments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FURTHER WORK",
"sec_num": "6."
}
],
"back_matter": [],
"bib_entries": {
"BIBREF1": {
"ref_id": "b1",
"title": "Using statistics in lexical analysis",
"authors": [
{
"first": "Kenneth",
"middle": [
"W"
],
"last": "Church",
"suffix": ""
},
{
"first": "William",
"middle": [
"A."
],
"last": "Gale",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Hanks",
"suffix": ""
},
{
"first": "Donald",
"middle": [],
"last": "Hindle",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Church, Kenneth W., William A. Gale, Patrick Hanks, and Donald Hindle. (to appear). Using statistics in lex- ical analysis, in Zernik (ed.) Lexical acquisition: using on-line resources to build a lexicon.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "To parse or not to parse: relationdriven text skimming",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Jacobs",
"suffix": ""
}
],
"year": 1990,
"venue": "COLING 90",
"volume": "",
"issue": "",
"pages": "194--198",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacobs, Paul. 1990. To parse or not to parse: relation- driven text skimming. In COLING 90, 194-198, Helsinki, Finland.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Description Theory and Intonation Boundaries",
"authors": [
{
"first": "Mitchell",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "Donald",
"middle": [],
"last": "Hindle",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcus, Mitchell P. and Donald Hindle. 1990. Descrip- tion Theory and Intonation Boundaries. In Gerald Alt- mann (ed.), Computational and Cognitive Models of Speech. MIT Press.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Working with analogicalsemantics. Foris: Dordrecht",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Sadler",
"suffix": ""
}
],
"year": 1989,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sadler, Victor. 1989. Working with analogicalsemantics. Foris: Dordrecht.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Parsing strategies with 'lexicalized' grammars: application to tree adjoining grammars",
"authors": [],
"year": null,
"venue": "Proceedings fo the 12th International Conference on Computational Linguistics, COLING88",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Parsing strategies with 'lexicalized' grammars: applica- tion to tree adjoining grammars. In Proceedings fo the 12th International Conference on Computational Lin- guistics, COLING88, Budapest, Hungary.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Analogical modeling of language",
"authors": [
{
"first": "Royal",
"middle": [],
"last": "Skousen",
"suffix": ""
}
],
"year": 1989,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Skousen, Royal. 1989. Analogical modeling of language. Kluwer:Dordrecht.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Toward memorybased reasoning",
"authors": [
{
"first": "Craig",
"middle": [],
"last": "Stanfill",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Waltz",
"suffix": ""
}
],
"year": 1986,
"venue": "Communications of the ACM 29",
"volume": "12",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stanfill, Craig and David Waltz. 1986. Toward memory- based reasoning. Communications of the ACM 29.12.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "list of ground transportation a list of the cities serve a list of flights from philadelphia a list of all the flights a list of all flights a list of all aircraft type",
"type_str": "figure",
"uris": null,
"num": null
}
}
}
}