{
"paper_id": "U05-1006",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:08:30.817360Z"
},
"title": "A Statistical Approach towards Unknown Word Type Prediction for Deep Grammars",
"authors": [
{
"first": "Yi",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Saarland University Saarbr\u00fccken",
"location": {
"postCode": "D-66041",
"settlement": "Germany"
}
},
"email": ""
},
{
"first": "Valia",
"middle": [],
"last": "Kordoni",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Saarland University Saarbr\u00fccken",
"location": {
"postCode": "D-66041",
"settlement": "Germany"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents a statistical approach to unknown word type prediction for a deep HPSG grammar. Our motivation is to enhance robustness in deep processing. With a predictor which predicts lexical types for unknown words according to the context, new lexical entries can be generated on the fly. The predictor is a maximum entropy based classifier trained on an HPSG treebank. By exploring various feature templates and the feedback from parse disambiguation results, the predictor achieves precision over 60%. The models are general enough to be applied to other constraint-based grammar formalisms.",
"pdf_parse": {
"paper_id": "U05-1006",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents a statistical approach to unknown word type prediction for a deep HPSG grammar. Our motivation is to enhance robustness in deep processing. With a predictor which predicts lexical types for unknown words according to the context, new lexical entries can be generated on the fly. The predictor is a maximum entropy based classifier trained on an HPSG treebank. By exploring various feature templates and the feedback from parse disambiguation results, the predictor achieves precision over 60%. The models are general enough to be applied to other constraint-based grammar formalisms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Deep processing delivers fine-grained syntactic and semantic analyses which are desirable for advanced NLP applications. However, specificity and robustness are the major difficulties that deep processing has encountered for years.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Unlike shallow methods, which in most cases deliver an expected number of analyses, the number of outputs from deep processing is usually unpredictable, especially for open texts. The specificity problem arises when more analyses are generated than expected. The analyses might be linguistically sound, but practically uninteresting for real applications. Recently, with more deep processing resources made available, the specificity problem is being alleviated with statistical parse selection models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As to robustness, more open questions remain to be investigated. A deep grammar is normally a complicated rule system. Whenever the input varies, even slightly, beyond the grammar developers' expectations, the output becomes unpredictable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Closer studies of deep grammars have shown that lexicon coverage is one of the major barriers preventing deep grammars from being used for open text processing. Take the LinGO English Resource Grammar (ERG) (Copestake and Flickinger, 2000), for instance. The grammar has been developed for more than 10 years, and currently contains about 22K lexical entries. A recent test on the BNC corpus reported that only 32% of the strings have a full lexical span, of which 57% get at least one parse (Baldwin et al., 2004). About 40% of the parsing failures are due to missing lexical entries. Lexicalized deep grammars rely on a knowledge-rich lexicon. However, the construction of a lexicon with decent coverage requires a huge amount of human effort and considerable linguistic proficiency.",
"cite_spans": [
{
"start": 207,
"end": 239,
"text": "(Copestake and Flickinger, 2000)",
"ref_id": "BIBREF7"
},
{
"start": 491,
"end": 513,
"text": "(Baldwin et al., 2004)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A widely adopted approach towards robust deep processing is to integrate shallow methods (Callmeier et al., 2004) . However, most recent approaches still rely on various fall-back strategies: when a deep processing component fails to deliver output, intermediate or shallow components are invoked to provide compatible analyses. While practically valid, this approach does not directly help to enhance the robustness of deep processing itself.",
"cite_spans": [
{
"start": 89,
"end": 113,
"text": "(Callmeier et al., 2004)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Inspired by the statistical approaches to parse selection, we propose a statistical approach to unknown word type prediction. The experiments are carried out on a broad-coverage, linguistically precise HPSG grammar for English, the LinGO English Resource Grammar (ERG) (Copestake and Flickinger, 2000). However, the underlying statistical model is general enough to apply to other deep grammars. Also, by incorporating the parse disambiguation result, we show that robustness is in essence the dual of specificity, and that the two can benefit from each other's improvements.",
"cite_spans": [
{
"start": 270,
"end": 302,
"text": "(Copestake and Flickinger, 2000)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The remainder of the paper is structured as follows: Section 2 gives background on the lexicon in HPSG; Section 3 describes our statistical models for unknown word type prediction and the various feature templates we use; Section 4 shows how the parse selection model can be incorporated to enhance the precision of prediction; Section 5 reports on the experimental results; Section 6 compares our approach to other related work; Section 7 concludes and presents some aspects of our future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Definitions in HPSG Head-driven Phrase Structure Grammar (HPSG) (Pollard and Sag, 1994) is a widely adopted constraint-based grammar formalism. Based on typed feature structures (TFS) (Carpenter, 1992), HPSG is highly lexicalized, which means there is only a limited number of highly generalized rules (ID Schemata & LP rules). The knowledge-rich lexicon is organized into a complex type hierarchy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexicon Representation and",
"sec_num": "2"
},
{
"text": "In HPSG, all the linguistic objects are modeled by TFSs. Formally, a TFS is a directed acyclic graph (DAG). Each node in the DAG is labelled with a sort symbol (or type) corresponding to the category of the linguistic object. All the sort symbols are organized into an inheritance system, namely the type hierarchy. Two types are compatible if they share at least one common subtype in the hierarchy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexicon Representation and",
"sec_num": "2"
},
{
"text": "The lexicon is also organized into the type hierarchy. In principle, each lexical entry is a well-formed TFS, which conveys a set of constraints. The constraints include both feature-value appropriateness and type compatibility. For instance, Figure 1 shows the TFS for the proper name \"Mary\".",
"cite_spans": [],
"ref_spans": [
{
"start": 242,
"end": 250,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Lexicon Representation and",
"sec_num": "2"
},
{
"text": "[Figure 1: TFS (attribute-value matrix) of the lexical entry for \"Mary\"]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexicon Representation and",
"sec_num": "2"
},
{
"text": "Figure 1: TFS of lexical entry for \"Mary\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexicon Representation and",
"sec_num": "2"
},
{
"text": "However, in implementation, a complete description is rarely necessary. Most of the constraints are conveyed via type inheritance; only entry-specific information like the stem and the semantic relation is required. Figure 2 gives part of the lexical hierarchy in the ERG under the type basic noun word. Types with the suffix \"le\" are so-called leaf lexical types and should be directly assigned to lexical entries. These types are always mutually disjoint and incompatible. Notably, each lexical entry takes exactly one leaf lexical type. When a word has more than one syntactic and/or semantic behavior, different lexical entries are created separately.",
"cite_spans": [],
"ref_spans": [
{
"start": 219,
"end": 227,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Lexicon Representation and",
"sec_num": "2"
},
{
"text": "ERG 1 defines in total 741 leaf lexical types, of which 709 are actually used in its lexicon of 12347 entries. A large number of these lexical types are closed categories whose lexical entries should already exist in the grammar. The top 10 verbal types account for about 75% of the verbal entries; for nouns the figure is about 95%, and for adjectives 90%. Presumably, automated lexical extension will be easier for nouns. This is plausible because verbal lexical entries normally require more detailed subcategorization information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexicon Representation and",
"sec_num": "2"
},
{
"text": "For open text processing, a static lexicon inevitably becomes insufficient. A better strategy is to build an unknown word type predictor which can \"guess\" the lexical type from the available context, and generate lexical entries on the fly. As mentioned in Section 2, the lexicon of an HPSG grammar is organized into a type hierarchy. Each entry bears exactly one leaf lexical type. So the predictor is actually a classifier, which takes various context and morphological forms of the unknown word into consideration, and picks out the most suitable leaf lexical type as output.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Unknown Word Type Prediction Models",
"sec_num": "3"
},
{
"text": "Such an unknown word type predictor is essentially very similar to a part-of-speech (POS) tagger. A typical POS tagger assigns a (unique or ambiguous) part-of-speech tag to each token in the input, and a large number of current language processing systems use a POS tagger for pre-processing. One difference is that our unknown word type predictor has a much larger tagset: the tagset of a typical POS tagger usually contains tens of different tags, but our predictor needs to handle hundreds of possible types. In addition, an unknown word type predictor only predicts unknown words, while a typical POS tagger generates tags for every token in the input sequence. Another point is that our unknown word type predictor can use any context information available at the processing stage, whereas a POS tagger normally uses only surface context features, because it is usually run during pre-processing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Unknown Word Type Prediction Models",
"sec_num": "3"
},
{
"text": "Prediction Model Considering these differences, we have constructed our predictor based on a maximum entropy classifier. The advantages of a maximum entropy model lie in its general feature representation and in the absence of independence assumptions between features. A maximum entropy model can also easily handle thousands of features and large numbers of possible outputs. For our prediction model, the probability of a lexical type t given an unknown word and its context c is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum Entropy Classifier Based",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(t|c) = \\frac{\\exp(\\sum_i \\theta_i f_i(t, c))}{\\sum_{t' \\in T} \\exp(\\sum_i \\theta_i f_i(t', c))}",
"eq_num": "(1)"
}
],
"section": "Maximum Entropy Classifier Based",
"sec_num": "3.1"
},
{
"text": "where each feature f_i(t, c) may encode arbitrary characteristics of the context. The parameters <\u03b8_1, \u03b8_2, . . .> can be estimated by maximizing the pseudo-likelihood on a training corpus (see (Malouf, 2002)). The basic feature templates used in our ME-based model include the prefix and suffix of the unknown word, the context words within a window size of 5, and their corresponding lexical types.",
"cite_spans": [
{
"start": 194,
"end": 208,
"text": "(Malouf, 2002)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum Entropy Classifier Based",
"sec_num": "3.1"
},
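The conditional model in Equation (1) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature functions, lexical type names (underscored forms are assumed), and weights are invented for the example.

```python
import math

def predict_type_probs(weights, features, types, context):
    """Conditional ME model: p(t|c) = exp(sum_i theta_i f_i(t,c)) / Z(c).

    weights:  dict mapping an instantiated feature to its weight theta_i
    features: list of feature functions f(t, c) -> instantiated feature key
    """
    scores = {}
    for t in types:
        scores[t] = math.exp(sum(weights.get(f(t, context), 0.0)
                                 for f in features))
    z = sum(scores.values())                    # normalizing constant Z(c)
    return {t: s / z for t, s in scores.items()}

# Hypothetical toy features: a suffix template and a left-context-type template.
def f_suffix(t, c):
    return ("suffix=" + c["word"][-2:], t)

def f_left(t, c):
    return ("left_type=" + c["left_type"], t)

weights = {
    ("suffix=ly", "adv_int_vp_le"): 2.0,        # invented weights
    ("left_type=det_le", "n_intr_le"): 1.5,
}
ctx = {"word": "quickly", "left_type": "v_np_trans_le"}
probs = predict_type_probs(weights, [f_suffix, f_left],
                           ["n_intr_le", "v_np_trans_le", "adv_int_vp_le"], ctx)
```

The returned distribution sums to one, and the suffix feature pushes the probability mass toward the adverb type.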
{
"text": "Features Each lexical type is essentially a set of constraints on linguistic objects. If a word has a specific lexical type, it must conform to all the constraints demanded by the type, and hence it can only appear in some specific linguistic context. The constraints concern various linguistic aspects, among which syntactic constraints are predominant.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using Partial Parsing Results as",
"sec_num": "3.2"
},
{
"text": "One advantage of using a maximum entropy based model is that ME allows the combination of diverse forms of contextual information in a principled manner, and it does not impose any distributional assumptions on the training data. So far, only the surface context features (words and their lexical types) are used. It can be presumed that the precision can be enhanced by adding syntactic context as features in the prediction model. However, syntactic information is not available in a traditional pipeline processing model, where syntactic analysis is a post-processing module that runs after the predictor. Also, when there are unknown words in the input, a full analysis of the sentence is not possible.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using Partial Parsing Results as",
"sec_num": "3.2"
},
{
"text": "So we have modified our strategy by inserting a partial parsing stage before the lexical type predictor whenever there are unknown words in the input sequence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using Partial Parsing Results as",
"sec_num": "3.2"
},
{
"text": "The partial parse needs some clarification. A full parse can be represented by a set of edges as shown in Figure 3(a) . Each edge is derived from a rule application. There is no more than one edge between each pair of positions. And there is always exactly one full span edge in a full parse.",
"cite_spans": [],
"ref_spans": [
{
"start": 106,
"end": 117,
"text": "Figure 3(a)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Using Partial Parsing Results as",
"sec_num": "3.2"
},
{
"text": "A partial parse of an input sequence is a set of edges which composes a shortest path from the beginning to the end of the sequence 2 . There might be more than one partial parse for a given input sequence. As shown in Figure 3(b) , when the word between positions 2 and 3 is unknown, a dummy edge c is created. This dummy edge blocks further rule applications. Both a \u2212 c \u2212 d and b \u2212 c \u2212 d are partial parses. From the partial parses, we collect all edges that are adjacent to the left/right of the unknown word, respectively. Then the rules that generated these edges are counted according to their applications (once per edge). The most frequently used rules creating left/right adjacent edges are added as two features conveying syntactic information to the ME-based model. A complete list of all feature templates used in our predictor is given in Table 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 219,
"end": 230,
"text": "Figure 3(b)",
"ref_id": "FIGREF0"
},
{
"start": 861,
"end": 868,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Using Partial Parsing Results as",
"sec_num": "3.2"
},
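The shortest-path notion of a partial parse and the adjacent-rule features described above can be sketched as follows. This is a hypothetical illustration assuming chart edges are (start, end, rule) triples with start < end, not the actual PET data structures; the rule names are invented.

```python
from collections import Counter

def partial_parses(edges, n):
    """All fewest-edge paths over chart positions 0..n (edges run left to right)."""
    best, paths = {0: 0}, {0: [[]]}
    for pos in range(n + 1):                    # process positions in order
        if pos not in best:
            continue
        for edge in edges:
            i, j, rule = edge
            if i != pos:
                continue
            cand = best[pos] + 1
            if j not in best or cand < best[j]:
                best[j] = cand
                paths[j] = [p + [edge] for p in paths[pos]]
            elif cand == best[j]:               # keep ties: multiple partial parses
                paths[j].extend(p + [edge] for p in paths[pos])
    return paths.get(n, [])

def adjacent_rule_features(parses, unk_from, unk_to):
    """Most frequent rule among edges left-/right-adjacent to the unknown span."""
    left = Counter(r for p in parses for (i, j, r) in p if j == unk_from)
    right = Counter(r for p in parses for (i, j, r) in p if i == unk_to)
    return (left.most_common(1)[0][0] if left else None,
            right.most_common(1)[0][0] if right else None)

# Toy chart: the unknown word spans positions 2-3 (dummy edge "unk").
edges = [(0, 1, "sp-hd"), (1, 2, "hd-cmp"), (0, 2, "hd-cmp"),
         (2, 3, "unk"), (3, 4, "hd-adj")]
parses = partial_parses(edges, 4)
lp, rp = adjacent_rule_features(parses, 2, 3)
```

Here the three-edge path through the wide (0, 2) edge wins over the four-edge path, and the LP/RP features come out as "hd-cmp" and "hd-adj".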
{
"text": "As mentioned before, deep lexical types normally encode complicated constraints that only make sense when they work together with the grammar rules. And some subtle differences between lexical types do not show statistical significance in a corpus of limited size. So the feedback from later stages of deep processing is very important for predicting the lexical types of unknown words. The partial parsing results break the pipeline model. However, they tend to help only when the unknown word is not the head of the phrase; otherwise, the full parse breaks into small fragments, and the partial parsing results normally make no sense. An alternative way of breaking the pipeline model is to help the parser generate full parses in the first place, and let the parsing result tell which lexical entry is good.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating Parse Disambiguation Results",
"sec_num": "4"
},
{
"text": "Features: X is a prefix of w_i, |X| \u2264 4; X is a suffix of w_i, |X| \u2264 4; t_{i-1} = X; t_{i-2} t_{i-1} = XY; t_{i+1} = X; t_{i+1} t_{i+2} = XY;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating Parse Disambiguation Results",
"sec_num": "4"
},
{
"text": "w_{i-2} = X; w_{i-1} = X; w_{i+1} = X; w_{i+2} = X; LP is the most frequent left-adjacent edge of w_i; RP is the most frequent right-adjacent edge of w_i. Table 3 : Feature templates used in the ME-based prediction model for word w_i (t_j is the lexical type of w_j)",
"cite_spans": [],
"ref_spans": [
{
"start": 144,
"end": 151,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Incorporating Parse Disambiguation Results",
"sec_num": "4"
},
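The Table 3 templates might be instantiated roughly as follows. This is a sketch with hypothetical template names, assuming a plain token list and per-token lexical types (None for unknown words); it is not the authors' feature extractor.

```python
def extract_features(words, types, i):
    """Instantiate Table-3-style templates for the unknown word w_i.

    words: token list; types: lexical type per token (None for unknowns).
    Returns a list of (template, value) feature pairs.
    """
    w = words[i]
    feats = []
    # prefixes and suffixes of length up to 4
    for n in range(1, min(4, len(w)) + 1):
        feats.append(("prefix", w[:n]))
        feats.append(("suffix", w[-n:]))
    # lexical types of surrounding tokens (unigram and bigram templates)
    def t(j):
        return types[j] if 0 <= j < len(types) else "<bos/eos>"
    feats.append(("t-1", t(i - 1)))
    feats.append(("t-2,t-1", (t(i - 2), t(i - 1))))
    feats.append(("t+1", t(i + 1)))
    feats.append(("t+1,t+2", (t(i + 1), t(i + 2))))
    # surrounding words within a window of 5 centered on w_i
    def wd(j):
        return words[j] if 0 <= j < len(words) else "<bos/eos>"
    for off in (-2, -1, 1, 2):
        feats.append(("w%+d" % off, wd(i + off)))
    return feats
```

A boundary symbol stands in for positions outside the sentence, so the same templates apply at the edges.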
{
"text": "In order to help the parser generate a full parse of the sentence, we feed the newly generated lexical entries directly into the parser. Instead of generating only one entry for each occurrence of an unknown word, we pass on the top n most likely lexical entries. With these new entries, the sentence will receive one or more parses (assuming the sentence is grammatical and covered by the grammar). From the parsing results, a best parse is selected with the disambiguation model, and the corresponding lexical entry is taken as the final result of lexical extension. Within this processing model, incorrect types will be ruled out if they are not compatible with the syntactic context. Also, infrequent readings of the unknown word will be dispreferred by the disambiguation model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating Parse Disambiguation Results",
"sec_num": "4"
},
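The top-n generation plus parse-based filtering described above can be sketched as follows. The parse_and_score callback is a hypothetical stand-in for the parser plus the ME disambiguation model (returning a score for the best parse using that entry, or None on parse failure); the type names are invented.

```python
def select_entry_by_parsing(probs, parse_and_score, n=3):
    """Keep the candidate type whose temporary entry yields the best full parse.

    probs: dict mapping lexical type -> predictor probability.
    parse_and_score: callback returning a disambiguation score or None.
    """
    top_n = sorted(probs, key=probs.get, reverse=True)[:n]
    scored = [(parse_and_score(t), t) for t in top_n]
    scored = [(s, t) for s, t in scored if s is not None]  # drop parse failures
    # fall back to the predictor's best guess if no candidate parses
    return max(scored)[1] if scored else top_n[0]

# Toy run: the highest-probability type fails to parse and is ruled out.
probs = {"n_intr_le": 0.5, "v_np_trans_le": 0.3, "adj_intrans_le": 0.2}
parse_scores = {"n_intr_le": None, "v_np_trans_le": 2.0, "adj_intrans_le": 5.0}
best = select_entry_by_parsing(probs, parse_scores.get)
```

This mirrors the point in the text: syntactically incompatible candidates are eliminated by parse failure, and the disambiguation score decides among the survivors.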
{
"text": "Missing lexical entries can be discovered by lexicon checking, so precision is the only evaluation measure for the lexical type predictor. In this section we evaluate our models experimentally.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Redwoods is an HPSG treebank that records full analyses of sentences with the ERG. The genre of the texts includes email correspondence, travel planning dialogs, etc. The 5th growth of Redwoods contains about 16.5K sentences and 122K tokens 3 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Resources",
"sec_num": "5.1"
},
{
"text": "In all our experiments, we have performed 10-fold cross-validation on the Redwoods treebank. For each fold, words that do not occur in the training partition are assumed to be unknown.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Resources",
"sec_num": "5.1"
},
{
"text": "A modified version of the efficient HPSG parser PET (Callmeier, 2000; Callmeier, 2001) has been used to generate the derivation tree fragments of the partial parses.",
"cite_spans": [
{
"start": 52,
"end": 69,
"text": "(Callmeier, 2000;",
"ref_id": "BIBREF4"
},
{
"start": 70,
"end": 86,
"text": "Callmeier, 2001)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Resources",
"sec_num": "5.1"
},
{
"text": "We have also modified LexDB (Copestake et al., 2004) so that it can hold temporary lexical entries that are only active for a specific sentence.",
"cite_spans": [
{
"start": 28,
"end": 52,
"text": "(Copestake et al., 2004)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Resources",
"sec_num": "5.1"
},
{
"text": "The parse disambiguation model we have used is a maximum entropy based model that uses non-lexicalized features with 2 levels of grandparents (see for a detailed discussion of parse disambiguation models for HPSG grammars).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Resources",
"sec_num": "5.1"
},
{
"text": "For maximum entropy parameter estimation, we have used (Malouf, 2002) 's MaxEnt package.",
"cite_spans": [
{
"start": 55,
"end": 69,
"text": "(Malouf, 2002)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Resources",
"sec_num": "5.1"
},
{
"text": "For comparison, we have built a baseline system that always assigns a majority type to each unknown word according to its POS tag. More specifically, we tag the input sentence with a small Penn Treebank-like POS tagset. Each POS tag is then mapped to the most frequent lexical type for that POS. 4 Table 4 lists part of the mapping. POS \u2192 majority lexical type: noun \u2192 n intr le; verb \u2192 v np trans le; adj.",
"cite_spans": [],
"ref_spans": [
{
"start": 285,
"end": 294,
"text": "4 Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},
{
"text": "\u2192 adj intrans le; adv. \u2192 adv int vp le. Table 4 : Part of the POS tag to lexical type mapping. Again for comparison, we have built two further simple prediction models with two popular general-purpose POS taggers, TnT and MXPOST. TnT is an HMM-based trigram tagger while MXPOST is maximum entropy based. We have trained the tagging models using all the leaf lexical types as the tagset. The taggers tag the whole sentence, but only the output tags for the unknown words are used to generate lexical entries.",
"cite_spans": [],
"ref_spans": [
{
"start": 34,
"end": 41,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},
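A minimal sketch of the baseline mapper, assuming underscored forms of the Table 4 type names; the dictionary excerpt and the default fallback type are assumptions for illustration, not from the paper.

```python
# Hypothetical excerpt of the Table 4 mapping (POS tag -> majority lexical type).
MAJORITY_TYPE = {
    "noun": "n_intr_le",
    "verb": "v_np_trans_le",
    "adj.": "adj_intrans_le",
    "adv.": "adv_int_vp_le",
}

def baseline_predict(pos_tag, default="n_intr_le"):
    """Baseline: always return the majority lexical type for the POS tag."""
    return MAJORITY_TYPE.get(pos_tag, default)
```

The whole baseline is a table lookup, which is why its precision (around 30%) marks the floor for the learned models.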
{
"text": "The maximum entropy based model is tested both with and without partial parsing results as features. To incorporate disambiguation results, our predictor generates 3 entries for each unknown word and stores them as temporary entries in the LexDB.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},
{
"text": "Precisions of the different prediction models are shown in Table 5 . The baseline model achieves precision around 30%. This means that the task of unknown word type prediction for deep grammars is nontrivial. The models based on general-purpose POS taggers perform quite well, outperforming the baseline by 10%. Confirming (Elworthy, 1995)'s claim, a huge tagset does not imply that tagging will be very difficult. Our ME-based model significantly outperforms the tagger-based models by another 10%. This is a strong indication of our model's advantages.",
"cite_spans": [],
"ref_spans": [
{
"start": 59,
"end": 66,
"text": "Table 5",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},
{
"text": "By incorporating simple syntactic information into the ME-based model, we gain less than 1% extra precision. It is worth noticing that the syntactic features we used are still naive; better syntactic features remain to be explored in future work. Also, applying partial parsing increases the computational complexity significantly in comparison to our basic ME-based model. By incorporating the disambiguation results, the precision of the model improves by another 10%. The computational overhead is proportional to the number of candidate entries added for each unknown word. However, in most cases, introducing lexical entries with incorrect types leads to parsing failure and can be detected efficiently by quick checking. In such cases the slowdown is acceptable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},
{
"text": "In general, we have achieved up to 60% precision in unknown word type prediction for the ERG in these experiments. Given the complexity of the grammar and the huge number of possible lexical types, these results are satisfying. Also, in real cases of grammar adaptation to new domains, a large portion of the unknowns are proper names, which means that the precision might be even higher in real applications. A test on a small text collection with real unknown words 5 shows that the precision can easily go above 80% with the basic ME model without partial parsing features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},
{
"text": "It should also be mentioned that some of these experiments have also been carried out for the Dutch Alpino Grammar (Bouma et al., 2001), and similar results were obtained. This shows that our method may be grammar- and platform-independent.",
"cite_spans": [
{
"start": 105,
"end": 125,
"text": "(Bouma et al., 2001)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},
{
"text": "This work is in essence very similar to the work on deep lexical acquisition (DLA) in (Baldwin, 2005) . A minor difference is that our model always generates (at least) one lexical entry for the unknown word, so that deep processing does not halt at the very beginning. A more important difference is that, while (Baldwin, 2005) focuses on generalizing the method of deriving DLA models from various secondary language resources, our work focuses more on how to utilize the deep grammar itself as a source for enhancing robustness: the Redwoods Treebank is by nature the output of the deep grammar, and the parser, as well as the disambiguation model, is also part of the machinery that eventually contributes to the unknown word type prediction. (Erbach, 1990; Barg and Walther, 1998; Fouvry, 2003) followed a different approach to unknown word processing for unification-based grammars. The basic idea was to use underspecified lexical entries, namely TFSs with fewer constraints, in order to generate full parses for the sentences, and then extract the sub-TFS from the parses as a new lexical entry. However, lexical entries generated in this way might be both too general and too specific. Moreover, underspecified lexical entries with fewer constraints allow more grammar rules to be applied during parsing; it gets even worse when two unknown words occur next to each other, which might allow almost any constituent to be constructed. Underspecified lexical entries therefore significantly increase computational complexity. (van Schagen and Knott, 2004) took a similar approach of interactive unknown word acquisition in a dialogue context. (Thede and Harper, 1997) reported an empirical approach to unknown word analysis using morphological and syntactic information. That approach is similar to ours in spirit; however, the experiments were done for a shallow parser with a very limited number of word classes, and the applicability to lexicalist deep grammars with large numbers of lexical types is unknown.",
"cite_spans": [
{
"start": 86,
"end": 101,
"text": "(Baldwin, 2005)",
"ref_id": "BIBREF1"
},
{
"start": 312,
"end": 327,
"text": "(Baldwin, 2005)",
"ref_id": "BIBREF1"
},
{
"start": 748,
"end": 762,
"text": "(Erbach, 1990;",
"ref_id": "BIBREF10"
},
{
"start": 763,
"end": 786,
"text": "Barg and Walther, 1998;",
"ref_id": "BIBREF3"
},
{
"start": 787,
"end": 799,
"text": "Fouvry, 2003",
"ref_id": "BIBREF11"
},
{
"start": 1653,
"end": 1677,
"text": "(Thede and Harper, 1997)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Related Work",
"sec_num": "6"
},
{
"text": "In , the maximum entropy models were used for wide coverage parsing with the Alpino Dutch grammar (Bouma et al., 2001) . But the focus was on parse selection, not unknown word processing.",
"cite_spans": [
{
"start": 98,
"end": 118,
"text": "(Bouma et al., 2001)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Related Work",
"sec_num": "6"
},
{
"text": "Another related work is supertagging (Bangalore and Joshi, 1999). In supertagging, lexical items are assigned rich descriptions (supertags) that impose complex constraints in a local context. Some statistical techniques for assigning supertags to unknown words have been reported. For example, (Bangalore and Joshi, 1999) used a simple method of combining a probability estimate for unknown words P(UNK|T_i) with a probability estimate based on word features (capitalization, hyphenation, word ending):",
"cite_spans": [
{
"start": 302,
"end": 329,
"text": "(Bangalore and Joshi, 1999)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Related Work",
"sec_num": "6"
},
{
"text": "P(W_i|T_i) = P(UNK|T_i) * P(wfeat(W_i)|T_i)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Related Work",
"sec_num": "6"
},
{
"text": "(2) where UNK is a token associated with each supertag, whose count N_UNK is estimated by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Related Work",
"sec_num": "6"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P(UNK|T_j) = \\frac{N_1(T_j)}{N(T_j) + \\eta} \\quad (3) \\qquad N_{unk}(T_j) = \\frac{P(UNK|T_j) \\cdot N(T_j)}{1 - P(UNK|T_j)}",
"eq_num": "(4)"
}
],
"section": "Comparison with Related Work",
"sec_num": "6"
},
{
"text": "N_1(T_j) is the number of words associated with the supertag T_j that appear in the corpus exactly once. In some respects, this approach is similar to our work, but our ME-based model allows a more general feature representation. Also, the lexical types we use are more general in the sense that both local and non-local constraints are encoded.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Related Work",
"sec_num": "6"
},
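The estimate in equations (2)-(4) can be sketched in a few lines of Python. This is an illustrative reimplementation of the Bangalore and Joshi (1999) formulas quoted above, not code from either paper; the toy counts and the default smoothing constant eta are assumptions.

```python
# Sketch of the unknown-word estimate in equations (2)-(4).
from collections import Counter

def p_unk(tag, word_tag_counts, eta=1.0):
    """P(UNK|T_j) = N_1(T_j) / (N(T_j) + eta), where N_1 counts words
    seen exactly once with supertag T_j and N is the tag's total count."""
    words = word_tag_counts[tag]
    n1 = sum(1 for c in words.values() if c == 1)
    n = sum(words.values())
    return n1 / (n + eta)

def n_unk(tag, word_tag_counts, eta=1.0):
    """Pseudo-count for the UNK token:
    N_unk(T_j) = P(UNK|T_j) * N(T_j) / (1 - P(UNK|T_j))."""
    p = p_unk(tag, word_tag_counts, eta)
    n = sum(word_tag_counts[tag].values())
    return p * n / (1.0 - p)

def p_word_given_tag(word_feat_prob, tag, word_tag_counts, eta=1.0):
    """P(W_i|T_i) = P(UNK|T_i) * P(wfeat(W_i)|T_i) for an unseen word,
    where word_feat_prob stands in for the word-feature estimate."""
    return p_unk(tag, word_tag_counts, eta) * word_feat_prob

# Toy corpus: per-supertag word frequencies (invented for illustration).
counts = {"n_proper_le": Counter({"Norway": 3, "Rondane": 1, "Oslo": 1})}
```

With these counts, two of the five tokens for `n_proper_le` are hapaxes, so `p_unk` returns 2/(5+1) = 1/3.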
{
"text": "Several statistical unknown word type prediction models are implemented and evaluated for deep HPSG grammars. The general-purpose POS taggers based approach delivers satisfying precision. The maximum entropy based predictor allows for more general feature representation. By incorporating parse disambiguation results, the unknown word type predictor achieves precision over 60%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7"
},
{
"text": "Although the experiments are carried out with the ERG, the underlying model is general enough to be easily applied on other constraintbased lexicalist grammars, provided the lexical categories can be abstracted by a set of atomic types.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7"
},
{
"text": "Several aspects of this work need further exploration. More sophisticated syntactic features should be investigated. Besides, the deep grammar also provides semantic analyses which are not available in shallow processing. The general feature representation in our model allows the incorporation of this orthogonal dimension of information to enhance the precision of prediction. Also, larger corpora in more variety of genres are certain to generate better models. The application of the method to more deep grammars is anticipated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7"
},
{
"text": "The June 2004 release of ERG was used throughout this paper for experiments and statistics. This was also the version used for building the latest version of Redwood Treebank.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Note that the edges on the full parse of the sentence are not necessary in the corresponding partial parses if a word is assumed to be unknown. However, partial parses do reduce the number of candidate edges for consideration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Sentences without a full analysis are neither counted here nor used in experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This is similar to the built-in unknown word handling mechanism of the PET system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We used a text set named rondane for training and hike for testing. rondane contains 1424 sentences in formal written English about tourism in the norwegian mountain area, with an average sentence length of 16 words; hike contains 320 sentences about outdoor hiking in Norway with an average sentence length of 14.3 words. Both contain a lot of unknowns like location names, transliterations, etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Road-testing the English Resource Grammar over the British National Corpus",
"authors": [
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
},
{
"first": "Emily",
"middle": [
"M"
],
"last": "Bender",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Flickinger",
"suffix": ""
},
{
"first": "Ara",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Oepen",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC 2004)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Timothy Baldwin, Emily M. Bender, Dan Flickinger, Ara Kim, and Stephan Oepen. 2004. Road-testing the English Resource Grammar over the British National Corpus. In Proceedings of the Fourth International Conference on Language Resources and Eval- uation (LREC 2004), Lisbon, Portugal.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Bootstrapping deep lexical resources: Resources for courses",
"authors": [
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the ACL-SIGLEX Workshop on Deep Lexical Acquisition",
"volume": "",
"issue": "",
"pages": "67--76",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Timothy Baldwin. 2005. Bootstrapping deep lexical resources: Resources for courses. In Proceedings of the ACL-SIGLEX Workshop on Deep Lexical Acquisition, pages 67-76, Ann Arbor, Michigan, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Supertagging: an approach to almost parsing",
"authors": [
{
"first": "Srinivas",
"middle": [],
"last": "Bangalore",
"suffix": ""
},
{
"first": "Aravind",
"middle": [
"K"
],
"last": "Joshi",
"suffix": ""
}
],
"year": 1999,
"venue": "Computational Linguistics",
"volume": "25",
"issue": "2",
"pages": "237--265",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Srinivas Bangalore and Aravind K. Joshi. 1999. Supertagging: an approach to almost parsing. Computational Linguistics, 25(2):237-265.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Processing unkonwn words in HPSG",
"authors": [
{
"first": "Petra",
"middle": [],
"last": "Barg",
"suffix": ""
},
{
"first": "Markus",
"middle": [],
"last": "Walther",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 36th Conference of the ACL and the 17th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Petra Barg and Markus Walther. 1998. Pro- cessing unkonwn words in HPSG. In Pro- ceedings of the 36th Conference of the ACL and the 17th International Conference on Computational Linguistics, Montreal, Que- bec, Canada.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "PET -a platform for experimentation with efficient HPSG processing techniques",
"authors": [
{
"first": "Gosse",
"middle": [],
"last": "Bouma",
"suffix": ""
},
{
"first": "Gertjan",
"middle": [],
"last": "van Noord",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Malouf",
"suffix": ""
}
],
"year": 2000,
"venue": "Computational Linguistics in The Netherlands",
"volume": "6",
"issue": "",
"pages": "99--108",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gosse Bouma, Gertjan van Noord, and Robert Malouf. 2001. Alpino: Wide-coverage com- putational analysis of dutch. In Computa- tional Linguistics in The Netherlands 2000. Ulrich Callmeier, Andreas Eisele, Ulrich Sch\u00e4fer, and Melanie Siegel. 2004. The deepthought core architecture framework. In Proceedings of LREC 04, Lisbon, Portugal. Ulrich Callmeier. 2000. PET -a platform for experimentation with efficient HPSG process- ing techniques. Journal of Natural Language Engineering, 6(1):99-108.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Efficient parsing with large-scale unification grammars",
"authors": [
{
"first": "Ulrich",
"middle": [],
"last": "Callmeier",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ulrich Callmeier. 2001. Efficient parsing with large-scale unification grammars. Mas- ter's thesis, Universit\u00e4t des Saarlandes, Saarbr\u00fccken, Germany.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The Logic of Typed Feature Structures",
"authors": [
{
"first": "Bob",
"middle": [],
"last": "Carpenter",
"suffix": ""
}
],
"year": 1992,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bob Carpenter. 1992. The Logic of Typed Fea- ture Structures. Cambridge University Press, Cambridge, England.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "An open-source grammar development environment and broad-coverage english grammar using hpsg",
"authors": [
{
"first": "Ann",
"middle": [],
"last": "Copestake",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Flickinger",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the Second conference on Language Resources and Evaluation (LREC-2000)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ann Copestake and Dan Flickinger. 2000. An open-source grammar development environ- ment and broad-coverage english grammar using hpsg. In Proceedings of the Second con- ference on Language Resources and Evalua- tion (LREC-2000), Athens, Greece.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A lexicon module for a grammar development environment",
"authors": [
{
"first": "Ann",
"middle": [],
"last": "Copestake",
"suffix": ""
},
{
"first": "Fabre",
"middle": [],
"last": "Lambeau",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Waldron",
"suffix": ""
},
{
"first": "Francis",
"middle": [],
"last": "Bond",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Flickinger",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Oepen",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC-2004)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ann Copestake, Fabre Lambeau, Benjamin Waldron, Francis Bond, Dan Flickinger, and Stephan Oepen. 2004. A lexicon module for a grammar development environment. In Proceedings of the 4th International Confer- ence on Language Resources and Evaluation (LREC-2004), Lisbon, Portugal.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Tagset design and inflected languages",
"authors": [
{
"first": "David",
"middle": [],
"last": "Elworthy",
"suffix": ""
}
],
"year": 1995,
"venue": "EACL SIGDAT workshop \"From Texts to Tags: Issues in Multilingual Language Analysis",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Elworthy. 1995. Tagset design and in- flected languages. In EACL SIGDAT work- shop \"From Texts to Tags: Issues in Multilin- gual Language Analysis\", pages 1-10, Dublin, Ireland, April.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Syntactic processing of unknown words",
"authors": [
{
"first": "Gregor",
"middle": [],
"last": "Erbach",
"suffix": ""
}
],
"year": 1990,
"venue": "IWBS Report",
"volume": "131",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gregor Erbach. 1990. Syntactic processing of unknown words. IWBS Report 131, IBM, Stuttgart.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Lexicon acquisition with a large-coverage unification-based grammar",
"authors": [
{
"first": "Frederik",
"middle": [],
"last": "Fouvry",
"suffix": ""
}
],
"year": 2003,
"venue": "Companion to the 10th of EACL",
"volume": "",
"issue": "",
"pages": "87--90",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frederik Fouvry. 2003. Lexicon acquisition with a large-coverage unification-based gram- mar. In Companion to the 10th of EACL, pages 87-90, ACL, Budapest, Hungary.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Wide coverage parsing with stochastic attribute value grammars",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Malouf",
"suffix": ""
},
{
"first": "Gertjan",
"middle": [],
"last": "van Noord",
"suffix": ""
}
],
"year": 2004,
"venue": "IJCNLP-04 Workshop: Beyond shallow analyses -Formalisms and statistical modeling for deep analyses",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Malouf and Gertjan van Noord. 2004. Wide coverage parsing with stochastic at- tribute value grammars. In IJCNLP-04 Workshop: Beyond shallow analyses -For- malisms and statistical modeling for deep analyses.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A comparison of algorithms for maximum entropy parameter estimation",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Malouf",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the Sixth Conferencde on Natural Language Learning",
"volume": "",
"issue": "",
"pages": "49--55",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Malouf. 2002. A comparison of al- gorithms for maximum entropy parameter estimation. In Proceedings of the Sixth Conferencde on Natural Language Learning (CoNLL-2002), pages 49-55.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The LinGO Redwoods treebank: Motivation and preliminary applications",
"authors": [
{
"first": "Stephan",
"middle": [],
"last": "Oepen",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "Stuart",
"middle": [],
"last": "Shieber",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Flickinger",
"suffix": ""
},
{
"first": "Thorsten",
"middle": [],
"last": "Brants",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of COLING 2002: The 17th International Conference on Computational Linguistics: Project Notes",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephan Oepen, Kristina Toutanova, Stu- art Shieber, Christopher Manning, Dan Flickinger, and Thorsten Brants. 2002. The LinGO Redwoods treebank: Motiva- tion and preliminary applications. In Pro- ceedings of COLING 2002: The 17th Inter- national Conference on Computational Lin- guistics: Project Notes, Taipei.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Head-Driven Phrase Structure Grammar",
"authors": [
{
"first": "Carl",
"middle": [
"J"
],
"last": "Pollard",
"suffix": ""
},
{
"first": "Ivan",
"middle": [
"A"
],
"last": "Sag",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carl J. Pollard and Ivan A. Sag. 1994. Head- Driven Phrase Structure Grammar. Univer- sity of Chicago Press, Chicago, Illinois.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Analysis of unknown lexical items using morphological and syntactic information with the timit corpus",
"authors": [
{
"first": "Scott",
"middle": [
"M"
],
"last": "Thede",
"suffix": ""
},
{
"first": "Mary",
"middle": [],
"last": "Harper",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the Fifth Workshop on Very Large Corpora",
"volume": "",
"issue": "",
"pages": "261--272",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Scott M. Thede and Mary Harper. 1997. Analy- sis of unknown lexical items using morpholog- ical and syntactic information with the timit corpus. In Proceedings of the Fifth Workshop on Very Large Corpora, pages 261-272.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Parse ranking for a rich HPSG grammar",
"authors": [
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "Christoper",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Stuart",
"middle": [
"M"
],
"last": "Shieber",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Flickinger",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Oepen",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the First Workshop on Treebanks and Linguistic Theories (TLT2002)",
"volume": "",
"issue": "",
"pages": "253--263",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kristina Toutanova, Christoper D. Manning, Stuart M. Shieber, Dan Flickinger, and Stephan Oepen. 2002. Parse ranking for a rich HPSG grammar. In Proceedings of the First Workshop on Treebanks and Linguis- tic Theories (TLT2002), pages 253-263, So- zopol, Bulgaria.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Tauira: A tool for acquiring unknown words in a dialogue context",
"authors": [
{
"first": "Maarten",
"middle": [],
"last": "van Schagen",
"suffix": ""
},
{
"first": "Alistair",
"middle": [],
"last": "Knott",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 2004 Australasian Language Technology Workshop (ALTW2004), Macquarie University",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maarten van Schagen and Alistair Knott. 2004. Tauira: A tool for acquiring unknown words in a dialogue context. In Proceedings of the 2004 Australasian Language Technology Workshop (ALTW2004), Macquarie Univer- sity, Australia.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Parsing edges: (a) edges in a full parse; (b) edges in partial parses.",
"type_str": "figure",
"num": null,
"uris": null
},
"TABREF2": {
"content": "<table><tr><td colspan=\"2\">General Cat. Leaf Lex Types Num.</td></tr><tr><td>verb</td><td>261</td></tr><tr><td>noun</td><td>177</td></tr><tr><td>adjective</td><td>78</td></tr><tr><td>adverb</td><td>53</td></tr></table>",
"html": null,
"num": null,
"text": "It is obvious that missing lexical entries, in most cases, should be in open categories. Verb, noun, adjective and adverb are the major open categories. In ERG, the number of leaf lexical types under these general categories are shown in Table 1.",
"type_str": "table"
},
"TABREF3": {
"content": "<table><tr><td/><td>basic noun word</td><td/></tr><tr><td/><td>noun noninfl word</td><td>basic n proper lexent</td></tr><tr><td>basic intr noun word</td><td>n mass le</td><td>n proper lexent</td></tr><tr><td>basic intr lex entry</td><td/><td>n proper le</td></tr><tr><td>n intr lex entry</td><td>n ppof meas le</td><td/></tr><tr><td>n intr le</td><td>n intr nosort le</td><td/></tr><tr><td/><td colspan=\"2\">Figure 2: Part of the Lexical Hierarchy in ERG</td></tr><tr><td colspan=\"2\">2 lists the top 10 lexical types with maximum</td><td/></tr><tr><td colspan=\"2\">number of entries in the ERG lexicon.</td><td/></tr><tr><td colspan=\"2\">Leaf Lexical Type Num. of Entries</td><td/></tr><tr><td>n intr le</td><td>1742</td><td/></tr><tr><td>n proper le</td><td>1463</td><td/></tr><tr><td>adj intrans le</td><td>1386</td><td/></tr><tr><td>v np trans le</td><td>732</td><td/></tr><tr><td>n ppof le</td><td>728</td><td/></tr><tr><td>adv int vp le</td><td>390</td><td/></tr><tr><td>v np* trans le</td><td>342</td><td/></tr><tr><td>n mass count le</td><td>292</td><td/></tr><tr><td>v particle np le</td><td>242</td><td/></tr><tr><td>n mass le</td><td>226</td><td/></tr></table>",
"html": null,
"num": null,
"text": "Number of Leaf Lexical Types under Major Open Categories in ERG However, even for the open categories, the distribution of existing lexical entries over different lexical types varies significantly. Table",
"type_str": "table"
},
"TABREF4": {
"content": "<table/>",
"html": null,
"num": null,
"text": "",
"type_str": "table"
},
"TABREF6": {
"content": "<table><tr><td>: Precision of Unknown Word Type Pre-</td></tr><tr><td>dictors (+/-pp means w or w/o partial parsing</td></tr><tr><td>result features)</td></tr></table>",
"html": null,
"num": null,
"text": "",
"type_str": "table"
}
}
}
}