{
"paper_id": "S13-1010",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:42:07.766968Z"
},
"title": "Distinguishing Common and Proper Nouns",
"authors": [
{
"first": "Judita",
"middle": [],
"last": "Preiss",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Sheffield",
"location": {
"addrLine": "211 Portobello",
"postCode": "S1 4DP",
"settlement": "Sheffield",
"country": "United Kingdom"
}
},
"email": "j.preiss@sheffield.ac.uk"
},
{
"first": "Mark",
"middle": [],
"last": "Stevenson",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Sheffield",
"location": {
"addrLine": "211 Portobello",
"postCode": "S1 4DP",
"settlement": "Sheffield",
"country": "United Kingdom"
}
},
"email": "r.m.stevenson@sheffield.ac.uk"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We describe a number of techniques for automatically deriving lists of common and proper nouns, and show that the distinction between the two can be made automatically using a vector space model learning algorithm. We present a direct evaluation on the British National Corpus, and application based evaluations on Twitter messages and on automatic speech recognition (where the system could be employed to restore case).",
"pdf_parse": {
"paper_id": "S13-1010",
"_pdf_hash": "",
"abstract": [
{
"text": "We describe a number of techniques for automatically deriving lists of common and proper nouns, and show that the distinction between the two can be made automatically using a vector space model learning algorithm. We present a direct evaluation on the British National Corpus, and application based evaluations on Twitter messages and on automatic speech recognition (where the system could be employed to restore case).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Some nouns are homographs (they have the same written form but different meanings) and can be used to denote either a common or a proper noun, for example the word apple in the following examples:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1) Apple designs and creates iPod (2) The Apple II series is a set of 8-bit home computers (3) The apple is the pomaceous fruit of the apple tree (4) For apple enthusiasts -tasting notes and apple identification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The common and proper uses are not always as clearly distinct as in this example; for instance, a specific instance of a common noun, e.g., District Court, turns court into a proper noun.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While proper nouns often start with a capital letter in English, this heuristic is unreliable: capitalization can be inconsistent, incorrect or omitted, and the presence or absence of an article cannot be relied on.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The problem of distinguishing between common and proper usages of nouns has not received much attention within language processing, despite being an important component for many tasks including machine translation (Lopez, 2008; Hermjakob et al., 2008) , sentiment analysis (Pang and Lee, 2008; Wilson et al., 2009) and topic tracking (Petrovi\u0107 et al., 2010) . Approaches to the problem also have applications to tasks such as web search (Chen et al., 1998; Baeza-Yates and Ribeiro-Neto, 2011) , and case restoration (e.g., in automatic speech recognition output) (Baldwin et al., 2009) , but these frequently involve the manual creation of a gazetteer (a list of proper nouns), which suffers not only from omissions but also often does not allow the listed words to assume their common role in text.",
"cite_spans": [
{
"start": 214,
"end": 227,
"text": "(Lopez, 2008;",
"ref_id": "BIBREF13"
},
{
"start": 228,
"end": 251,
"text": "Hermjakob et al., 2008)",
"ref_id": "BIBREF12"
},
{
"start": 273,
"end": 293,
"text": "(Pang and Lee, 2008;",
"ref_id": "BIBREF14"
},
{
"start": 294,
"end": 314,
"text": "Wilson et al., 2009)",
"ref_id": "BIBREF16"
},
{
"start": 334,
"end": 357,
"text": "(Petrovi\u0107 et al., 2010)",
"ref_id": "BIBREF15"
},
{
"start": 437,
"end": 456,
"text": "(Chen et al., 1998;",
"ref_id": "BIBREF6"
},
{
"start": 457,
"end": 492,
"text": "Baeza-Yates and Ribeiro-Neto, 2011)",
"ref_id": "BIBREF1"
},
{
"start": 563,
"end": 585,
"text": "(Baldwin et al., 2009)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper presents methods for generating lists of nouns that have both common and proper usages (Section 2) and methods for identifying the type of usage (Section 3) which are evaluated using data derived automatically from the BNC (Section 4) and on two applications (Section 5). It shows that it is difficult to automatically construct lists of ambiguous nouns but also that they can be distinguished effectively using standard features from Word Sense Disambiguation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To our knowledge, no comprehensive list of common nouns with proper noun usage is available. We develop a number of heuristics to generate such lists automatically.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating Lists of Nouns",
"sec_num": "2"
},
{
"text": "Part of speech tags A number of part of speech (PoS) taggers assign different tags to common and proper nouns. Ambiguous nouns are identified by tagging a corpus and extracting those that have had both tags assigned, together with the frequency of occurrence of the common/proper usage. The CLAWS (Garside, 1987) and the RASP taggers (Briscoe et al., 2006) were applied to the British National Corpus (BNC) (Leech, 1992) to generate the lists BNCclaws and BNCrasp respectively. In addition the RASP tagger was also run over the 1.75 billion word Gigaword corpus (Graff, 2003) to extract the list Gigaword.",
"cite_spans": [
{
"start": 297,
"end": 312,
"text": "(Garside, 1987)",
"ref_id": "BIBREF10"
},
{
"start": 334,
"end": 356,
"text": "(Briscoe et al., 2006)",
"ref_id": "BIBREF5"
},
{
"start": 407,
"end": 420,
"text": "(Leech, 1992)",
"ref_id": null
},
{
"start": 562,
"end": 575,
"text": "(Graff, 2003)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generating Lists of Nouns",
"sec_num": "2"
},
{
"text": "Capitalization Nouns appearing intrasententially with both lower and upper case first letters are assumed to be ambiguous. This technique is applied to the 5-grams from the Google corpus (Brants and Franz, 2006) and the BNC (creating the lists 5-grams and BNCcaps).",
"cite_spans": [
{
"start": 187,
"end": 211,
"text": "(Brants and Franz, 2006)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generating Lists of Nouns",
"sec_num": "2"
},
{
"text": "Wikipedia includes disambiguation pages for ambiguous words which provide information about their potential usage. Wikipedia pages for nouns with senses (according to the disambiguation page) in a set of predefined categories were identified to form the list Wikipedia.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating Lists of Nouns",
"sec_num": "2"
},
{
"text": "Named entity recognition The Stanford Named Entity Recogniser (Finkel et al., 2005) was run over the BNC and any nouns that occur in the corpus with both named entity and non-named entity tags are extracted to form the list Stanford.",
"cite_spans": [
{
"start": 62,
"end": 83,
"text": "(Finkel et al., 2005)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generating Lists of Nouns",
"sec_num": "2"
},
{
"text": "WordNet The final heuristic makes use of WordNet (Fellbaum, 1998) which lists nouns that are often used as proper nouns with capitalisation. Nouns which appeared in both a capitalized and lowercased form were extracted to create the list WordNet. Table 1 shows the number of nouns identified by each technique in the column labeled words, which demonstrates that the number of nouns identified varies significantly depending upon which heuristic is used. A pairwise score is also shown to indicate the consistency between each list and two example lists, BNCclaws and Gigaword. It can be seen that the level of overlap is quite low and the various heuristics generate quite different lists of nouns. In particular, recall is low: in almost all cases fewer than a third of the nouns in one list appear in the other.",
"cite_spans": [],
"ref_spans": [
{
"start": 248,
"end": 255,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Generating Lists of Nouns",
"sec_num": "2"
},
{
"text": "One possible reason for the low overlap between the noun lists is mistakes by the heuristics used to extract them. For example, if a PoS tagger mistakenly tags just one instance of a common noun as proper then that noun will be added to the list extracted by the part of speech heuristic. Two filtering schemes were applied to improve the accuracy of the lists: (1) minimum frequency of occurrence, the noun must appear more than a set number of times in the corpus, and (2) bias, the least common type of noun usage (i.e., common or proper) must account for more than a set percentage of all usages. We experimented with various values for these filters and a selection of results is shown in the accompanying table. Precision (against BNCclaws) increased as the filters became more aggressive. However, comparison with Gigaword does not show such high precision, and recall is extremely low in all cases. Table 1: words BNCclaws Gigaword P R P R BNCclaws 41,110 100 100 31 2 BNCrasp 20,901 52 27 45 17 BNCcaps 18,524 56 26 66 21 5-grams 27,170 45 29 59 28 Gigaword 57,196 22 31 100 100 Wikipedia 7,351 49 9 59 8 WordNet 798 75 1",
"cite_spans": [],
"ref_spans": [
{
"start": 458,
"end": 709,
"text": "BNCclaws Gigaword P R P R BNCclaws 41,110 100 100 31 2 BNCrasp 20,901 52 27 45 17 BNCcaps 18,524 56 26 66 21 5-grams 27,170 45 29 59 28 Gigaword 57,196 22 31 100 100 Wikipedia 7,351 49 9 59 8 WordNet 798 75 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Generating Lists of Nouns",
"sec_num": "2"
},
{
"text": "These experiments demonstrate that it is difficult to automatically generate a list of nouns that exhibit both common and proper usages. Manual analysis of the generated lists suggests that the heuristics can identify ambiguous nouns, but intersecting the lists results in the loss of some obviously ambiguous nouns (however, their union introduces a large amount of noise). We select nouns from the lists created by these heuristics (such that the distribution of either the common or proper noun sense in the data was not less than 45%) for experiments in the following sections. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating Lists of Nouns",
"sec_num": "2"
},
{
"text": "We cast the problem of distinguishing between common and proper usages of nouns as a classification task and develop the following approaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying Noun Types",
"sec_num": "3"
},
{
"text": "A naive baseline is supplied by assigning each word its most frequent usage form (common or proper noun). The most frequent usage is derived from the training portion of labeled data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Most frequent usage",
"sec_num": "3.1"
},
{
"text": "A system based on n-grams was implemented using NLTK (Bird et al., 2009) . Five-grams, four-grams, trigrams and bigrams from the training corpus are matched against a test corpus sentence, and results of each match are summed to yield a preferred use in the given context with a higher weight (experimentally determined) being assigned to longer n-grams. The system backs off to the most frequent usage (as derived from the training data).",
"cite_spans": [
{
"start": 53,
"end": 72,
"text": "(Bird et al., 2009)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "n-gram system",
"sec_num": "3.2"
},
{
"text": "Distinguishing between common and proper nouns can be viewed as a classification problem. Treating the problem in this manner is reminiscent of techniques commonly employed in Word Sense Disambiguation (WSD). Our supervised approach is based on an existing WSD system (Agirre and Martinez, 2004) that uses a wide range of features:",
"cite_spans": [
{
"start": 268,
"end": 295,
"text": "(Agirre and Martinez, 2004)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Vector Space Model (VSM)",
"sec_num": "3.3"
},
{
"text": "\u2022 Word form, lemma or PoS bigrams and trigrams containing the target word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vector Space Model (VSM)",
"sec_num": "3.3"
},
{
"text": "\u2022 Preceding or following lemma (or word form) content word appearing in the same sentence as the target word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vector Space Model (VSM)",
"sec_num": "3.3"
},
{
"text": "\u2022 High-likelihood, salient, bigrams.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vector Space Model (VSM)",
"sec_num": "3.3"
},
{
"text": "\u2022 Lemmas of all content words in the same sentence as the target word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vector Space Model (VSM)",
"sec_num": "3.3"
},
{
"text": "\u2022 Lemmas of all content words within a \u00b14 word window of the target word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vector Space Model (VSM)",
"sec_num": "3.3"
},
{
"text": "\u2022 Non stopword lemmas which appear more than twice throughout the corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vector Space Model (VSM)",
"sec_num": "3.3"
},
{
"text": "Each occurrence of a common / proper noun is represented as a binary vector in which each position indicates the presence or absence of a feature. A centroid vector is created during the training phase for the common noun and the proper noun instances of a word. During the test phase, the centroids are compared to the vector of each test instance using the cosine metric, and the word is assigned the type of the closest centroid.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vector Space Model (VSM)",
"sec_num": "3.3"
},
{
"text": "The approaches described in the previous section are evaluated on two data sets extracted automatically from the BNC. The BNC-PoS data set is created using the output from the CLAWS tagger. Nouns assigned the tag NP0 are treated as proper nouns and those assigned any other nominal tag as common nouns. (According to the BNC manual, the NP0 tag has a precision of 83.99% and a recall of 97.76%. 2 ) This data set consists of all sentences in the BNC in which the target word appears. The second data set, BNC-Capital, is created using capitalisation information and consists of instances of the target noun that do not appear sentence-initially. Any instances that are capitalised are treated as proper nouns and those which are non-capitalised as common nouns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
{
"text": "Experiments were carried out using capitalised and decapitalised versions of the two test corpora. The decapitalised versions were created by lowercasing each corpus and using it for both training and testing. Results are presented in Table 3 . Ten fold cross validation is used for all experiments: i.e. 9/10ths of the corpus were used to acquire the training data centroids and 1/10th was used for evaluation. The average performance over the 10 experiments is reported. The vector space model (VSM) outperforms the other approaches on both corpora. Performance is particularly high when capitalisation is included (VSM w caps). However, the approach still outperforms the baseline even without case information (VSM w/o caps), demonstrating that this simple heuristic is less effective than making use of local context.",
"cite_spans": [],
"ref_spans": [
{
"start": 217,
"end": 224,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
{
"text": "Most frequent 79% 67% n-gram w caps 80% 77% n-gram w/o caps 68% 56% VSM w caps 90% 100% VSM w/o caps 86% 80% ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gold standard BNC-PoS BNC-Capital",
"sec_num": null
},
{
"text": "We also carried out experiments on two types of text in which capitalization information may not be available: social media and automatic speech recognition (ASR) output.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Applications",
"sec_num": "5"
},
{
"text": "As demonstrated in the BNC-based evaluations, the system can be applied to text which does not contain capitalization information to identify proper nouns (and, as a side effect, to enable the correction of capitalization). An example of such a dataset is the (up to) 140-character messages posted on Twitter. There are some interesting observations to be made on messages downloaded from Twitter. Although some users choose to always tweet in lower case, the overall rate of capitalization in tweets is high for the 100 words selected in Section 2, and only 3.7% of the downloaded tweets are entirely lower case. It also appeared that users who capitalize do so fairly consistently.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Twitter",
"sec_num": "5.1"
},
{
"text": "This allows the creation of a dataset based on downloaded Twitter data 3 :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Twitter",
"sec_num": "5.1"
},
{
"text": "1. Identify purely lower case tweets containing the target word. These will form the test data (and are manually assigned usage).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Twitter",
"sec_num": "5.1"
},
{
"text": "2. Any non-sentence initial occurrences of the target word are used as training instances: lower case indicating a common instance, upper case indicating a proper instance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Twitter",
"sec_num": "5.1"
},
{
"text": "14 words 4 were randomly selected from the list used in Section 4 and their lowercase tweet instances were manually annotated by a single annotator. The average proportion of proper nouns in the test data was 59%. Training corpus MF n-grams VSM Twitter 59% 40% 60% BNCclaw decap 59% 44% 79% Table 4 : Results on the Twitter data",
"cite_spans": [],
"ref_spans": [
{
"start": 230,
"end": 237,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Twitter",
"sec_num": "5.1"
},
{
"text": "The results for the three systems are presented in Table 4 . As the average sentence length in the Twitter data is only 15 words (compared to 27 words in the BNCclaws data for the same target words), the Twitter data is likely to suffer from sparseness issues. This hypothesis is partly supported by the increase in performance when the BNCclaws decapitalized data is added to the training data; however, the performance of the n-gram system remains below the most frequent usage baseline. On closer examination, this is likely due to the skew in the data: there are many more examples of the common use of each noun, and thus each context is much more likely to have been seen in this setting.",
"cite_spans": [],
"ref_spans": [
{
"start": 51,
"end": 58,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Twitter",
"sec_num": "5.1"
},
{
"text": "Most automatic speech recognition (ASR) systems do not provide capitalization. However, our system does not rely on capitalization information, and therefore can identify proper / common nouns even if capitalization is absent. Also, once proper nouns are identified, the system can be used to restore case, a feature which allows an evaluation to take place on this dataset. We use the TDT2 Text and Speech corpus (Cieri et al., 1999) , which contains ASR and a manually transcribed version of news texts from six different sources, to demonstrate the usefulness of this system for this task.",
"cite_spans": [
{
"start": 414,
"end": 434,
"text": "(Cieri et al., 1999)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic speech recognition",
"sec_num": "5.2"
},
{
"text": "The ASR corpus is restricted to those segments which contain an equal number of target word occurrences in the ASR text and the manually transcribed version, and all such segments are extracted. The gold standard, and the most frequent usage, are drawn from the manually transcribed data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic speech recognition",
"sec_num": "5.2"
},
{
"text": "Again, results are based on an average performance obtained using ten fold cross validation. Three versions of training data are used: the 9/10 of ASR data (with labels provided by the manual transcription), the equivalent 9/10 of lowercased manually transcribed data, and a combination of the two. The results can be seen in Table 5 . Training corpus MF n-grams VSM Manual 66% 42% 73% ASR 63% 41% 79% Table 5 : Results on the ASR data The performance rise obtained with the VSM model when the ASR data is used is likely due to repeated errors within the ASR output, which do not appear in the manually transcribed texts. The n-gram performance is greatly affected by the low volume of training data available and, again, by the large skew within it.",
"cite_spans": [],
"ref_spans": [
{
"start": 315,
"end": 322,
"text": "Table 5",
"ref_id": null
},
{
"start": 429,
"end": 436,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Automatic speech recognition",
"sec_num": "5.2"
},
{
"text": "We automatically generate lists of common and proper nouns using a number of different techniques. A vector space model technique for distinguishing common and proper nouns is found to achieve high performance when evaluated on the BNC. It greatly outperforms a simple n-gram based system, due to its better adaptability to sparse training data. Two application-based evaluations also demonstrate the system's performance and, as a side effect, the system could serve as a technique for automatic case restoration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "The 100 words selected for our evaluation are available at http://pastehtml.com/view/cjsbs4xvl.txt",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "No manual annotation of common and proper nouns in this corpus exists and thus an exact accuracy figure for this corpus cannot be obtained.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://search.twitter.com/api 4 abbot, bull, cathedral, dawn, herald, justice, knight, lily, lodge, manor, park, president, raven and windows",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors are grateful to the funding for this research received from Google (Google Research Award) and the UK Engineering and Physical Sciences Research Council (EP/J008427/1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The Basque Country University system: English and Basque tasks",
"authors": [
{
"first": "E",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Martinez",
"suffix": ""
}
],
"year": 2004,
"venue": "Senseval-3: Third International Workshop on the Evaluation of Systems for the Semantic Analysis of Text",
"volume": "",
"issue": "",
"pages": "44--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Agirre, E. and Martinez, D. (2004). The Basque Coun- try University system: English and Basque tasks. In Senseval-3: Third International Workshop on the Evaluation of Systems for the Semantic Analysis of Text, pages 44-48.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Modern Information Retrieval: The Concepts and Technology Behind Search",
"authors": [
{
"first": "R",
"middle": [],
"last": "Baeza-Yates",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Ribeiro-Neto",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Baeza-Yates, R. and Ribeiro-Neto, B. (2011). Modern Information Retrieval: The Concepts and Technology Behind Search. Addison Wesley Longman Limited, Essex.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Restoring punctuation and casing in English text",
"authors": [
{
"first": "T",
"middle": [],
"last": "Baldwin",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Paul",
"suffix": ""
},
{
"first": "Joseph",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 22nd Australian Joint Conference on Artificial Intelligence (AI09)",
"volume": "",
"issue": "",
"pages": "547--556",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Baldwin, T., Paul, M., and Joseph, A. (2009). Restoring punctuation and casing in English text. In Proceedings of the 22nd Australian Joint Conference on Artificial Intelligence (AI09), pages 547-556.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Natural Language Processing with Python -Analyzing Text with the Natural Language Toolkit",
"authors": [
{
"first": "S",
"middle": [],
"last": "Bird",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Loper",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bird, S., Klein, E., and Loper, E. (2009). Natural Lan- guage Processing with Python -Analyzing Text with the Natural Language Toolkit. O'Reilly.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Web 1T 5-gram v1",
"authors": [
{
"first": "T",
"middle": [],
"last": "Brants",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Franz",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brants, T. and Franz, A. (2006). Web 1T 5-gram v1.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The second release of the RASP system",
"authors": [
{
"first": "T",
"middle": [],
"last": "Briscoe",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Carroll",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Watson",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the COLING/ACL 2006 Interactive Presentation Sessions",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Briscoe, T., Carroll, J., and Watson, R. (2006). The sec- ond release of the RASP system. In Proceedings of the COLING/ACL 2006 Interactive Presentation Sessions.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Proper name translation in cross-language information retrieval",
"authors": [
{
"first": "H",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Tsai",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "232--236",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen, H., Huang, S., Ding, Y., and Tsai, S. (1998). Proper name translation in cross-language information retrieval. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Lin- guistics, Volume 1, pages 232-236, Montreal, Canada.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The TDT-2 text and speech corpus",
"authors": [
{
"first": "C",
"middle": [],
"last": "Cieri",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Graff",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Liberman",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Martey",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Strassel",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of DARPA Broadcast News Workshop",
"volume": "",
"issue": "",
"pages": "57--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cieri, C., Graff, D., Liberman, M., Martey, N., and Strassel, S. (1999). The TDT-2 text and speech cor- pus. In Proceedings of DARPA Broadcast News Work- shop, pages 57-60.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "WordNet: An Electronic Lexical Database and some of its Applications",
"authors": [],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fellbaum, C., editor (1998). WordNet: An Electronic Lexical Database and some of its Applications. MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Incorporating non-local information into information extraction systems by Gibbs sampling",
"authors": [
{
"first": "J",
"middle": [
"R"
],
"last": "Finkel",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Grenager",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43nd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "363--370",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Finkel, J. R., Grenager, T., and Manning, C. (2005). In- corporating non-local information into information ex- traction systems by Gibbs sampling. In Proceedings of the 43nd Annual Meeting of the Association for Com- putational Linguistics, pages 363-370.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The CLAWS word-tagging system",
"authors": [
{
"first": "R",
"middle": [
"; R"
],
"last": "Garside",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Leech",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Sampson",
"suffix": ""
}
],
"year": 1987,
"venue": "The Computational Analysis of English: A Corpusbased Approach",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Garside, R. (1987). The CLAWS word-tagging system. In Garside, R., Leech, G., and Sampson, G., editors, The Computational Analysis of English: A Corpus- based Approach. London: Longman.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "English Gigaword",
"authors": [
{
"first": "D",
"middle": [],
"last": "Graff",
"suffix": ""
}
],
"year": 2003,
"venue": "Linguistic Data Consortium",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Graff, D. (2003). English Gigaword. Technical report, Linguistic Data Consortium.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Name translation in statistical machine translationlearning when to transliterate",
"authors": [
{
"first": "U",
"middle": [],
"last": "Hermjakob",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Iii",
"middle": [],
"last": "Daum\u00e9",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of ACL-08: HLT",
"volume": "28",
"issue": "",
"pages": "1--13",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hermjakob, U., Knight, K., and Daum\u00e9 III, H. (2008). Name translation in statistical machine translation - learning when to transliterate. In Proceedings of ACL- 08: HLT, pages 389-397, Columbus, Ohio. Leech, G. (1992). 100 million words of English: the British National Corpus. Language Research, 28(1):1-13.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Statistical machine translation",
"authors": [
{
"first": "A",
"middle": [],
"last": "Lopez",
"suffix": ""
}
],
"year": 2008,
"venue": "ACM Computing Surveys",
"volume": "40",
"issue": "3",
"pages": "1--49",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lopez, A. (2008). Statistical machine translation. ACM Computing Surveys, 40(3):1-49.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Opinion mining and sentiment analysis",
"authors": [
{
"first": "B",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2008,
"venue": "Foundations and Trends in Information Retrieval",
"volume": "2",
"issue": "1-2",
"pages": "1--135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pang, B. and Lee, L. (2008). Opinion mining and senti- ment analysis. Foundations and Trends in Information Retrieval, Vol. 2(1-2):pp. 1-135.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Streaming first story detection with application to twitter",
"authors": [
{
"first": "S",
"middle": [],
"last": "Petrovi\u0107",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Osborne",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Lavrenko",
"suffix": ""
}
],
"year": 2010,
"venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "181--189",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Petrovi\u0107, S., Osborne, M., and Lavrenko, V. (2010). Streaming first story detection with application to twit- ter. In Human Language Technologies: The 2010 An- nual Conference of the North American Chapter of the Association for Computational Linguistics, pages 181-189, Los Angeles, California.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Recognizing contextual polarity: an exploration of features for phrase-level sentiment analysis",
"authors": [
{
"first": "T",
"middle": [],
"last": "Wilson",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Wiebe",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Hoffman",
"suffix": ""
}
],
"year": 2009,
"venue": "Computational Linguistics",
"volume": "35",
"issue": "5",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wilson, T., Wiebe, J., and Hoffman, P. (2009). Recogniz- ing contextual polarity: an exploration of features for phrase-level sentiment analysis. Computational Lin- guistics, 35(5).",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"type_str": "table",
"num": null,
"text": "Pairwise comparison of lists. The nouns in each list are compared against the BNCclaws and Gigaword lists. Results are computed for P(recision) and R(ecall).",
"content": "<table/>",
"html": null
},
"TABREF2": {
"type_str": "table",
"num": null,
"text": "where freq is the minimum frequency of occurrence filter and bias indicates the percentage of the less frequent noun type.",
"content": "<table><tr><td/><td>bias</td><td colspan=\"6\">freq words BNCclaws Gigaword</td></tr><tr><td/><td/><td/><td/><td>P</td><td>R</td><td>P</td><td>R</td></tr><tr><td>BNCclaws</td><td>40</td><td>100</td><td>274</td><td>100</td><td>1</td><td>53</td><td>1</td></tr><tr><td>BNCrasp</td><td>30</td><td>100</td><td>253</td><td>94</td><td>1</td><td>85</td><td>0</td></tr><tr><td>5-grams</td><td>40</td><td>150</td><td>305</td><td>80</td><td>1</td><td>67</td><td>0</td></tr><tr><td>Stanford</td><td>40</td><td>200</td><td>260</td><td>87</td><td>1</td><td>47</td><td>0</td></tr></table>",
"html": null
},
"TABREF3": {
"type_str": "table",
"num": null,
"text": "Pairwise comparison of lists with filtering",
"content": "<table/>",
"html": null
},
"TABREF4": {
"type_str": "table",
"num": null,
"text": "BNC evaluation results",
"content": "<table/>",
"html": null
}
}
}
}