{
"paper_id": "H05-1046",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:33:51.977365Z"
},
"title": "Disambiguating Toponyms in News",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Garbin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Georgetown University",
"location": {
"postCode": "20057",
"settlement": "Washington",
"region": "DC",
"country": "USA"
}
},
"email": "egarbin@cox.net"
},
{
"first": "Inderjeet",
"middle": [],
"last": "Mani",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Georgetown University",
"location": {
"postCode": "20057",
"settlement": "Washington",
"region": "DC",
"country": "USA"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This research is aimed at the problem of disambiguating toponyms (place names) in terms of a classification derived by merging information from two publicly available gazetteers. To establish the difficulty of the problem, we measured the degree of ambiguity, with respect to a gazetteer, for toponyms in news. We found that 67.82% of the toponyms found in a corpus that were ambiguous in a gazetteer lacked a local discriminator in the text. Given the scarcity of human-annotated data, our method used unsupervised machine learning to develop disambiguation rules. Toponyms were automatically tagged with information about them found in a gazetteer. A toponym that was ambiguous in the gazetteer was automatically disambiguated based on preference heuristics. This automatically tagged data was used to train a machine learner, which disambiguated toponyms in a human-annotated news corpus at 78.5% accuracy.",
"pdf_parse": {
"paper_id": "H05-1046",
"_pdf_hash": "",
"abstract": [
{
"text": "This research is aimed at the problem of disambiguating toponyms (place names) in terms of a classification derived by merging information from two publicly available gazetteers. To establish the difficulty of the problem, we measured the degree of ambiguity, with respect to a gazetteer, for toponyms in news. We found that 67.82% of the toponyms found in a corpus that were ambiguous in a gazetteer lacked a local discriminator in the text. Given the scarcity of human-annotated data, our method used unsupervised machine learning to develop disambiguation rules. Toponyms were automatically tagged with information about them found in a gazetteer. A toponym that was ambiguous in the gazetteer was automatically disambiguated based on preference heuristics. This automatically tagged data was used to train a machine learner, which disambiguated toponyms in a human-annotated news corpus at 78.5% accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Place names, or toponyms, are ubiquitous in natural language texts. In many applications, including Geographic Information Systems (GIS), it is necessary to interpret a given toponym mention as a particular entity in a geographical database or gazetteer. Thus the mention \"Washington\" in \"He visited Washington last year\" will need to be interpreted as a reference to either the city Washington, DC or the U.S. state of Washington, and \"Berlin\" in \"Berlin is cold in the winter\" could mean Berlin, New Hampshire or Berlin, Germany, among other possibilities. While there has been a considerable body of work distinguishing between a toponym and other kinds of names (e.g., person names), there has been relatively little work on resolving which place and what kind of place given a classification of kinds of places in a gazetteer. Disambiguated toponyms can be used in a GIS to highlight a position on a map corresponding to the coordinates of the place, or to draw a polygon representing the boundary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we describe a corpus-based method for disambiguating toponyms. To establish the difficulty of the problem, we began by quantifying the degree of ambiguity of toponyms in a corpus with respect to a U.S. gazetteer. We then carried out a corpus-based investigation of features that could help disambiguate toponyms. Given the scarcity of human-annotated data, our method used unsupervised machine learning to develop disambiguation rules. Toponyms were automatically tagged with information about them found in a gazetteer. A toponym that was ambiguous in the gazetteer was automatically disambiguated based on preference heuristics. This automatically tagged data was used to train the machine learner. We compared this method with a supervised machine learning approach trained on a corpus annotated and disambiguated by hand.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our investigation targeted toponyms that name cities, towns, counties, states, countries or national capitals. We sought to classify each toponym as a national capital, a civil political/administrative region, or a populated place (administration unspecified) . In the vector model of GIS, the type of place crucially determines the geometry chosen to represent it (e.g., point, line or polygon) as well as any reasoning about geographical inclusion. The class of the toponym can be useful in \"grounding\" the toponym to latitude and longitude coordinates, but it can also go beyond grounding to support spatial reasoning. For example, if the province is merely grounded as a point in the data model (e.g., if the gazetteer states that the centroid of a province is located at a particular latitude-longitude) then without the class information, the inclusion of a city within a province can't be established. Also, resolving multiple cities or a unique capital to a political region mentioned in the text can be a useful adjunct to a map that lacks political boundaries or whose boundaries are dated. It is worth noting that our classification is more fine-grained than efforts like the EDT task in Automatic Content Extraction 1 program (Mitchell and Strassel 2002) , which distinguishes between toponyms that are a Facility \"Alfredo Kraus Auditorium\", a Location \"the Hudson River\", and Geo-Political Entities that include territories \"U.S. heartland\", and metonymic or other derivative place references \"Russians\", \"China (offered)\", \"the U.S. company\", etc. Our classification, being gazetteer based, is more suited to GIS-based applications.",
"cite_spans": [
{
"start": 1238,
"end": 1266,
"text": "(Mitchell and Strassel 2002)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We used a month's worth of articles from the New York Times (September 2001), part of the English Gigaword (LDC 2003) . This corpus consisted of 7,739 documents and, after SGML stripping, 6.51 million word tokens with a total size of 36.4 MB. We tagged the corpus using a list of place names from the USGS Concise Gazetteer (GNIS). The resulting corpus is called MAC1, for \"Machine Annotated Corpus 1\". GNIS covers cities, states, and counties in the U.S., which are classified as \"civil\" and \"populated place\" geographical entities. A geographical entity is an entity on the Earth's surface that can be represented by some geometric specification in a GIS; for example, as a point, line or polygon. GNIS also covers 53 other types of geo-entities, e.g., \"valley,\" \"summit\", \"water\" and \"park.\" GNIS has 37,479 entries, with 27,649 distinct toponyms, of which 13,860 toponyms had multiple entries in the GNIS (i.e., were ambiguous according to GNIS). Table 1 shows the entries in GNIS for an ambiguous toponym.",
"cite_spans": [
{
"start": 107,
"end": 117,
"text": "(LDC 2003)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 951,
"end": 958,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "2.1"
},
{
"text": "Let E be a set of elements, and let F be a set of features. We define a feature g in F to be a disambiguator for E iff for all pairs <e_x, e_y> in E \u00d7 E, g(e_x) \u2260 g(e_y) and neither g(e_x) nor g(e_y) is null-valued. As an example, consider the GNIS gazetteer in Table 1 , let F = {U.S. County, U.S. State, Lat-Long, and Elevation}. We can see that each feature in F is a disambiguator for the set of entries E = {110, 111, 112}.",
"cite_spans": [],
"ref_spans": [
{
"start": 262,
"end": 269,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "2.2"
},
{
"text": "Let us now characterize the mapping between texts and gazetteers. A string s1 in a text is said to be a discriminator within a window w for another string s2 no more than w words away if s1 matches a disambiguator d for s2 in a gazetteer. For example, \"MT\" is a discriminator within a window 5 for the toponym \"Acton\" in \"Acton, MT,\" since \"MT\" occurs within a \u00b15-word window of \"Acton\" and matches, via an abbreviation, \"Montana\", the value of a GNIS disambiguator U.S. State (here the tokenized words are \"Acton\", \",\", and \"MT\").",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "2.2"
},
{
"text": "A trie-based lexical lookup tool (called LexScan) was used to match each toponym in GNIS against the corpus MAC1. Of the 27,649 distinct toponyms in GNIS, only 4553 were found in the corpus (note that GNIS has only U.S. toponyms). Of the 4553 toponyms, 2911 (63.94%) were \"bare\" toponyms, lacking a local discriminator within a \u00b15-word window that could resolve the name.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "2.2"
},
{
"text": "Of the 13,860 toponyms that were ambiguous according to GNIS, 1827 of them were found in MAC1, of which only 588 had discriminators within a \u00b15-word window (i.e., discriminators which matched gazetteer features that disambiguated the toponym). Thus, 67.82% of the 1827 toponyms found in MAC1 that were ambiguous in GNIS lacked a discriminator.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "2.2"
},
{
"text": "This 67.82% proportion is only an estimate of true toponym ambiguity, even for the sample MAC1. There are several sources of error in this estimate: (i) World cities, capitals and countries were not yet considered, since GNIS only covered U.S. toponyms. (ii) In general, a single feature (e.g., County or State) may not be sufficient to disambiguate a set of entries. It is of course possible for two different places named by a common toponym to be located in the same county in the same state. However, there were no toponyms with this property in GNIS. (iii) A string in MAC1 tagged by GNIS lexical lookup as a toponym may not have been a place name at all (e.g., \"Lord Acton lived \u2026\"). Of the toponyms that were spurious, most were judged by us to be common words and person names. This should not be surprising, as 5341 toponyms in GNIS are also person names according to the U.S. Census Bureau (www.census.gov/genealogy/www/freqnames.html). (iv) LexScan was not perfect, for the following reasons. First, it sought only exact matches. Second, the matching relied on expansion of standard abbreviations. Due to non-standard abbreviations, the number of true U.S. toponyms in the corpus likely exceeded 4553. Third, the matches were all case-sensitive: while case-insensitivity caused numerous spurious matches, case-sensitivity missed a more predictable set, i.e., all-caps dateline toponyms or lowercase toponyms in Internet addresses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "2.2"
},
{
"text": "Note that the 67.82% proportion is just an estimate of local ambiguity. Of course, there are often non-local discriminators (outside the \u00b15-word windows); for example, an initial place name reference could have a local discriminator, with subsequent references in the article lacking local discriminators while being coreferential with the initial reference. To estimate this, we selected cases where a toponym was discriminated on its first mention. In those cases, we counted the number of times the toponym was repeated in the same document without the discriminator. We found that 73% of the repetitions lacked a local discriminator, suggesting an important role for coreference (see Sections 4 and 5).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "2.2"
},
{
"text": "To prepare a toponym disambiguator, we required a gazetteer as well as corpora for training and testing it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Sources for Automatic Disambiguation",
"sec_num": "3"
},
{
"text": "To obtain a gazetteer that covered worldwide information, we harvested countries, country capitals, and populous world cities from two websites ATLAS 3 and GAZ 4 , to form a consolidated gazetteer (WAG) with four features G1,..,G4 based on geographical inclusion, and three classes, as shown in Table 2 . As an example, an entry for Aberdeen could be the following feature vector: G1=United States, G2=Maryland, G3=Harford County, G4=Aberdeen, CLASS=ppl.",
"cite_spans": [],
"ref_spans": [
{
"start": 295,
"end": 302,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Gazetteer",
"sec_num": "3.1"
},
{
"text": "We now briefly discuss the merging of ATLAS and GAZ to produce WAG. ATLAS provided a simple list of countries and their capitals. GAZ recorded the country as well as the population of 700 cities of at least 500,000 people. If a city was in both sources, we allowed two entries but ordered them in WAG to make the more specific type (e.g. \"capital\") the default sense, the one that LexScan would use. Accents and diacritics were stripped from WAG toponyms by hand, and aliases were associated with standard forms. Finally, we merged GNIS state names with these, as well as abbreviations discovered by our abbreviation expander.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gazetteer",
"sec_num": "3.1"
},
{
"text": "We selected a corpus consisting of 15,587 articles from the complete Gigaword Agence France Presse, May 2002. LexScan was used to tag, insensitive to case, all WAG toponyms found in this corpus, with the attributes in Table 2 . If there were multiple entries in WAG for a toponym, LexScan only tagged the preferred sense, discussed below. The resulting tagged corpus, called MAC-DEV,",
"cite_spans": [],
"ref_spans": [
{
"start": 218,
"end": 225,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Corpora",
"sec_num": "3.2"
},
{
"text": "was used as a development corpus for feature exploration. To disambiguate the sense for a toponym that was ambiguous in WAG, we used two preference heuristics. First, we searched MAC1 for two dozen highly frequent ambiguous toponym strings (e.g., \"Washington\", etc.), and observed by inspection which sense predominated in MAC1, preferring the predominant sense for each of these frequently mentioned toponyms. For example, in MAC1, \"Washington\" was predominantly a Capital. Second, for toponyms outside this most frequent set, we used the following specificity-based preference: Cap. > Ppl > Civil. In other words, we prefer the more specific sense; since there are a smaller number of Capitals than Populated Places, we prefer Capitals to Populated Places.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CLASS",
"sec_num": null
},
{
"text": "For machine learning, we used the Gigaword Associated Press Worldwide January 2002 (15,999 articles), tagged in the same way by LexScan as MAC-DEV was. This set was called MAC-ML. Thus, MAC1, MAC-DEV, and MAC-ML were all generated automatically, without human supervision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CLASS",
"sec_num": null
},
{
"text": "For a blind test corpus with human annotation, we opportunistically sampled three corpora: MAC1, TimeBank 1.2 5 and the June 2002 New York Times from the English Gigaword, with the first author tagging a random 28, 88, and 49 documents respectively from each. Each tag in the resulting human annotated corpus (HAC) had the WAG attributes from Table 2 with manual correction of all the WAG attributes. A summary of the corpora, their source, and annotation status is shown in Table 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 343,
"end": 350,
"text": "Table 2",
"ref_id": null
},
{
"start": 475,
"end": 482,
"text": "Table 3",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "CLASS",
"sec_num": null
},
{
"text": "We used the tagged toponyms in MAC-DEV to explore useful features for disambiguating the classes of toponyms. We identified single-word terms that co-occurred significantly with classes within a k-word window (we tried k = \u00b13 and k = \u00b120). These terms were scored for pointwise mutual information (MI) with the classes. Terms with an average tf.idf of less than 4 in the collection were filtered out, as these tended to be personal pronouns, articles and prepositions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Exploration",
"sec_num": "4"
},
{
"text": "To identify which terms helped select for particular classes of toponyms, the set of 48 terms whose MI scores were above a threshold (-11, chosen by inspection) were filtered using the Student's t-statistic, based on an idea in (Church and Hanks 1991) . The t-statistic was used to compare the distribution of the term with one class of toponym to its distribution with other classes to assess whether the underlying distributions were significantly different with at least 95% confidence. The results are shown in Table 4 , where scores for a term that occurred jointly in a window with at least one other class label are shown in bold. A t-score > 1.645 is a significant difference with 95% confidence. However, because joint evidence was scarce, we eventually chose not to eliminate Table 4 terms such as 'city' (t =1.19) as features for machine learning. Some of the terms were significant disambiguators between only one pair of classes, e.g. 'yen,' 'attack,' and 'capital,' but we kept them on that basis.",
"cite_spans": [
{
"start": 228,
"end": 251,
"text": "(Church and Hanks 1991)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 515,
"end": 523,
"text": "Table 4",
"ref_id": null
},
{
"start": 787,
"end": 794,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Feature Exploration",
"sec_num": "4"
},
{
"text": "Value is true iff the toponym is abbreviated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abbrev",
"sec_num": null
},
{
"text": "Value is true iff the toponym is all capital letters. Left/Right Pos{1, .., k}: Values are the ordered tokens up to k positions to the left/right. WkContext: Value is the set of MI-collocated terms found in windows of \u00b1k tokens (to the left and right).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AllCaps",
"sec_num": null
},
{
"text": "Value is the set of CLASS values represented by all toponyms from the document: e.g., the set {civil, capital, ppl}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TagDiscourse",
"sec_num": null
},
{
"text": "Value is the CLASS, if any, for a prior mention of a toponym in the document, or none (Table 5).",
"cite_spans": [],
"ref_spans": [
{
"start": 87,
"end": 94,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "CorefClass",
"sec_num": null
},
{
"text": "Based on the discovered terms in experiments with different window sizes, and an examination of MAC1 and MAC-DEV, we identified a final set of features that, it seemed, might be useful for machine learning experiments. These are shown in Table 5 . The features Abbrev and AllCaps describe evidence internal to the toponym: an abbreviation may indicate a state (Mass.), territory (N.S.W.), country (U.K.), or some other civil place; an all-caps toponym might be a capital or ppl in a dateline. The feature sets LeftPos and RightPos target the \u00b1k positions in each window as ordered tokens, but note that only windows with an MI term are considered. The domain of WkContext is the window of \u00b1k tokens around a toponym that contains an MI collocated term.",
"cite_spans": [],
"ref_spans": [
{
"start": 238,
"end": 245,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Features for Machine Learning",
"sec_num": null
},
{
"text": "We now turn to the global discourse-level features. The domain for TagDiscourse is the whole document, which is evaluated for the set of toponym classes present: this information may reflect the discourse topic, e.g. a discussion of U.S. sports teams will favor mentions of cities over states or capitals. The feature CorefClass implements a one sense per discourse strategy, motivated by our earlier observation (from Section 2) that 73% of subsequent mentions of a toponym that was discriminated on first mention were expressed without a local discriminator.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features for Machine Learning",
"sec_num": null
},
{
"text": "The features in Table 5 were used to code feature vectors for a statistical classifier. The results are shown in Table 6 . As an example, when the Ripper classifier (Cohen 1996) was trained on MAC-ML with a window of k = \u00b13 word tokens, the predictive accuracy when tested using cross-validation on MAC-ML was 88.39% \u00b10.24 (where 0.24 is the standard deviation across 10 folds).",
"cite_spans": [
{
"start": 165,
"end": 177,
"text": "(Cohen 1996)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 16,
"end": 23,
"text": "Table 5",
"ref_id": null
},
{
"start": 113,
"end": 120,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Machine Learning",
"sec_num": "5"
},
{
"text": "Cap r70 p88 f78 Civ. r94 p90 f92 Ppl r87 p82 f84 Avg. r84 p87 f85 80.97 \u00b1 0.33 (Civ.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "65.0)",
"sec_num": null
},
{
"text": "Cap r61 p77 f68 Civ. r83 p86 f84 Ppl r81 p72 f76 Avg. r75 p78 f76 MAC-DEV MAC-DEV (crossvalidation) 87.08 \u00b1 0.28 (Civ.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "57.1)",
"sec_num": null
},
{
"text": "Cap r74 p87 f80 Civ. r93 p88 f91 Ppl r82 p80 f81 Avg. r83 p85 f84 81.36 \u00b1 0.59 (Civ.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "57.8)",
"sec_num": null
},
{
"text": "Cap r49 p78 f60 Civ. r92 p81 f86 Ppl r56 p70 f59 Avg. r66 p77 f68 MAC-DEV HAC 68.66 (Civ.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "59.3)",
"sec_num": null
},
{
"text": "Cap r50 p71 f59 Civ. r93 p70 f80 Ppl r24 p57 f33 Avg. r56 p66 f57",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "59.7)",
"sec_num": null
},
{
"text": "65.33 (Civ. 50.7)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "59.7)",
"sec_num": null
},
{
"text": "Cap r100 p100 f100 Civ. r84 p62 f71 Cap r70 p89 f78 Civ. r94 p88 f91 Ppl r81 p80 f80 Avg. r82 p86 f83",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "59.7)",
"sec_num": null
},
{
"text": "79.70 \u00b1 0.30 (Civ.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "59.7)",
"sec_num": null
},
{
"text": "Cap r56 p73 f63 Civ. r83 p86 f84 Ppl r80 p68 f73 Avg. r73 p76 f73 MAC-DEV+MAC-ML HAC 73.07 (Civ.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "59.7)",
"sec_num": null
},
{
"text": "Cap r71 p83 f77 Civ. r91 p69 f79 Ppl r45 p81 f58 Avg. r69 p78 f71",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "51.7)",
"sec_num": null
},
{
"text": "78.30 (Civ. 50)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "51.7)",
"sec_num": null
},
{
"text": "Cap r100 p63 f77 Civ. r91 p75 f82 Ppl r63 p88 f73 Avg. r85 p75 f77 Table 6 . Machine Learning Accuracy",
"cite_spans": [],
"ref_spans": [
{
"start": 67,
"end": 74,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "51.7)",
"sec_num": null
},
{
"text": "The majority class (Civil) had the predictive accuracy shown in parentheses. (When tested on a different set from the training set, cross-validation wasn't used). Ripper reports a confusion matrix for each class; Recall, Precision, and F-measure for these classes are shown, along with their average across classes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "51.7)",
"sec_num": null
},
{
"text": "In all cases, Ripper is significantly better in predictive accuracy than the majority class. When testing using cross-validation on the same machine-annotated corpus as the classifier was trained on, performance is comparable across corpora, and is in the high 80s, e.g., 88.39% on MAC-ML (k=\u00b13). Performance drops substantially when we train on machine-annotated corpora but test on the human-annotated corpus (HAC) (the unsupervised approach), or when we both train and test on HAC (the supervised approach). The noise in the auto-generated classes in the machine-annotated corpus is a likely cause for the lower accuracy of the unsupervised approach. The poor performance of the supervised approach can be attributed to the lack of human-annotated training data: HAC is a small, 83,872-word corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "51.7)",
"sec_num": null
},
{
"text": "If not AllCaps(P) and Right-Pos1(P,'SINGLE_QUOTE') and Civil \u2208 TagDiscourse Then Civil(P).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coverage of Examples in Testing (Accuracy)",
"sec_num": null
},
{
"text": "If not AllCaps(P) and southern) and Civil \u2208 TagDiscourse Then Civil(P).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "5/67 (100%)",
"sec_num": null
},
{
"text": "13/67 (100%)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "5/67 (100%)",
"sec_num": null
},
{
"text": "TagDiscourse was a critical feature; ignoring it during learning dropped the accuracy nearly 9 percentage points. This indicates that prior mention of a class increases the likelihood of that class. (Note that when inducing a rule involving a set-valued feature, Ripper tests whether an element is a member of that set-valued feature, selecting the test that maximizes information gain for a set of examples.) Increasing the window size only lowered accuracy when tested on the same corpus (using crossvalidation); for example, an increase from \u00b13 words to \u00b120 words (intervening sizes are not shown for reasons of space) lowered the PA by 5.7 percentage points on MAC-DEV. However, increasing the training set size was effective, and this increase was more substantial for larger window sizes: combining MAC-ML with MAC-DEV improved accuracy on HAC by about 4.5% for k= \u00b13, but an increase of 13% was seen for k =\u00b120. In addition, F-measure for the classes was steady or increased. As Table 6 shows, this was largely due to the increase in recall on the non-majority classes. The best performance when training Ripper on the machine-annotated MAC-DEV+MAC-ML and testing on the human-annotated corpus HAC was 78.30.",
"cite_spans": [],
"ref_spans": [
{
"start": 986,
"end": 993,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Table 7. Sample Rules Learnt by Ripper",
"sec_num": null
},
{
"text": "Another learner we tried, the SMO supportvector machine from WEKA (Witten and Frank 2005) , was marginally better, showing 81.0 predictive accuracy training and testing on MAC-DEV+MAC-ML (ten-fold cross-validation, k=\u00b120) and 78.5 predictive accuracy training on MAC-DEV+MAC-ML and testing on HAC (k=\u00b120). Ripper rules are of course more transparent: example rules learned from MAC-DEV are shown in Table 7 , along with their coverage of feature vectors and accuracy on the test set HAC.",
"cite_spans": [
{
"start": 66,
"end": 89,
"text": "(Witten and Frank 2005)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 399,
"end": 406,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Table 7. Sample Rules Learnt by Ripper",
"sec_num": null
},
{
"text": "Work related to toponym tagging has included harvesting of gazetteers from the Web (Uryupina 2003) , hand-coded rules for place name disambiguation, e.g., (Li et al. 2003) (Zong et al. 2005) , and machine learning approaches to the problem, e.g., (Smith and Mann 2003) . There has of course been a large amount of work on the more general problem of word-sense disambiguation, e.g., (Yarowsky 1995) (Kilgarriff and Edmonds 2002) . We discuss the most relevant work here.",
"cite_spans": [
{
"start": 83,
"end": 98,
"text": "(Uryupina 2003)",
"ref_id": "BIBREF9"
},
{
"start": 155,
"end": 171,
"text": "(Li et al. 2003)",
"ref_id": "BIBREF4"
},
{
"start": 172,
"end": 190,
"text": "(Zong et al. 2005)",
"ref_id": "BIBREF11"
},
{
"start": 247,
"end": 268,
"text": "(Smith and Mann 2003)",
"ref_id": "BIBREF7"
},
{
"start": 383,
"end": 398,
"text": "(Yarowsky 1995)",
"ref_id": "BIBREF10"
},
{
"start": 399,
"end": 428,
"text": "(Kilgarriff and Edmonds 2002)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "While (Uryupina 2003) uses machine learning to induce gazetteers from the Internet, we merely download and merge information from two popular Web gazetteers. (Li et al. 2003 ) use a statistical approach to tag place names as a LOCation class. They then use a heuristic approach to location normalization, based on a combination of hand-coded pattern-matching rules as well as discourse features based on co-occurring toponyms (e.g., a document with \"Buffalo\", \"Albany\" and \"Rochester\" will likely have those toponyms disambiguated to New York state). Our TagDiscourse feature is more coarse-grained. Finally, they assume one sense per discourse in their rules, whereas we use it as a feature CorefClass for use in learning. Overall, our approach is based on unsupervised machine learning, rather than hand-coded rules for location normalization. (Smith and Mann 2003) use a \"minimally supervised\" method that exploits as training data toponyms that are found locally disambiguated, e.g., \"Nashville, Tenn.\"; their disambiguation task is to identify the state or country associated with the toponym in test data that has those disambiguators stripped off. Although they report 87.38% accuracy on news, they address an easier problem than ours, since: (i) our earlier local ambiguity estimate suggests that as many as two-thirds of the gazetteer-ambiguous toponyms may be excluded from their test on news, as they would lack local discriminators; (ii) the classes our tagger uses (Table 3) are more fine-grained. Finally, they use one sense per discourse as a bootstrapping strategy to expand the machine-annotated data, whereas in our case CorefClass is used as a feature.",
"cite_spans": [
{
"start": 6,
"end": 21,
"text": "(Uryupina 2003)",
"ref_id": "BIBREF9"
},
{
"start": 158,
"end": 173,
"text": "(Li et al. 2003",
"ref_id": "BIBREF4"
},
{
"start": 846,
"end": 867,
"text": "(Smith and Mann 2003)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "Our approach is distinct from other work in that it firstly, attempts to quantify toponym ambiguity, and secondly, it uses an unsupervised approach based on learning from noisy machine-annotated corpora using publicly available gazetteers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "This research provides a measure of the degree of of ambiguity with respect to a gazetteer for toponyms in news. It has developed a toponym disambiguator that, when trained on entirely machine annotated corpora that avail of easily available Internet gazetteers, disambiguates toponyms in a human-annotated corpus at 78.5% accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Our current project includes integrating our disambiguator with other gazetteers and with a geovisualization system. We will also study the effect of other window sizes and the combination of this unsupervised approach with minimally-supervised approaches such as (Brill 1995) (Smith and Mann 2003) . To help mitigate against data sparseness, we will cluster terms based on stemming and semantic similarity.",
"cite_spans": [
{
"start": 264,
"end": 276,
"text": "(Brill 1995)",
"ref_id": "BIBREF0"
},
{
"start": 277,
"end": 298,
"text": "(Smith and Mann 2003)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "The resources and tools developed here may be obtained freely by contacting the authors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "www.ldc.upenn.edu/Projects/ACE/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": ". www.worldatlas.com 4 www.worldgazetteer.com",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "www.timeml.org",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Unsupervised learning of disambiguation rules for part of speech tagging",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Brill",
"suffix": ""
}
],
"year": 1995,
"venue": "ACL Third Workshop on Very Large Corpora",
"volume": "",
"issue": "",
"pages": "1--13",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Brill. 1995. Unsupervised learning of disambigua- tion rules for part of speech tagging. ACL Third Workshop on Very Large Corpora, Somerset, NJ, p. 1-13.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Using Statistics in Lexical Analysis",
"authors": [
{
"first": "Ken",
"middle": [],
"last": "Church",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Hanks",
"suffix": ""
},
{
"first": "Don",
"middle": [],
"last": "Hindle",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Gale",
"suffix": ""
}
],
"year": 1991,
"venue": "Lexical Acquisition: Using On-line Resources to Build a Lexicon",
"volume": "",
"issue": "",
"pages": "115--164",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ken Church, Patrick Hanks, Don Hindle, and William Gale. 1991. Using Statistics in Lexical Analysis. In U. Zernik (ed), Lexical Acquisition: Using On-line Resources to Build a Lexicon, Erlbaum, p. 115-164.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Learning Trees and Rules with Set-valued Features",
"authors": [
{
"first": "William",
"middle": [],
"last": "Cohen",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of AAAI 1996",
"volume": "",
"issue": "",
"pages": "709--716",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William Cohen. 1996. Learning Trees and Rules with Set-valued Features. Proceedings of AAAI 1996, Portland, Oregon, p. 709-716.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Introduction to the Special Issue on Evaluating Word Sense Disambiguation Systems",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Kilgarriff",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Edmonds",
"suffix": ""
}
],
"year": 2002,
"venue": "Journal of Natural Language Engineering",
"volume": "8",
"issue": "4",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Kilgarriff and Philip Edmonds. 2002. Introduc- tion to the Special Issue on Evaluating Word Sense Disambiguation Systems. Journal of Natural Lan- guage Engineering 8 (4).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A hybrid approach to geographical references in information extraction",
"authors": [
{
"first": "Huifeng",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Rohini",
"middle": [
"K"
],
"last": "Srihari",
"suffix": ""
},
{
"first": "Cheng",
"middle": [],
"last": "Niu",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2003,
"venue": "HLT-NAACL 2003 Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huifeng Li, Rohini K. Srihari, Cheng Niu, and Wei Li. 2003. A hybrid approach to geographical references in information extraction. HLT-NAACL 2003 Work- shop: Analysis of Geographic References, Edmonton, Alberta, Canada.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Linguistic Data Consortium: English Gigaword www.ldc.upenn.edu/Catalog/CatalogEntry.jsp?catalo gId=LDC2003T05",
"authors": [
{
"first": "",
"middle": [],
"last": "Ldc",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "LDC. 2003. Linguistic Data Consortium: English Giga- word www.ldc.upenn.edu/Catalog/CatalogEntry.jsp?catalo gId=LDC2003T05",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Corpus Development for the ACE (Automatic Content Extraction) Program. Linguistic Data Consortium www",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Stephanie",
"middle": [],
"last": "Strassel",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Mitchell and Stephanie Strassel. 2002. Corpus Development for the ACE (Automatic Content Ex- traction) Program. Linguistic Data Consortium www.ldc.upenn.edu/Projects/LDC_Institute/ Mitchell/ACE_LDC_06272002.ppt",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Bootstrapping toponym classifiers. HLT-NAACL 2003 Workshop: Analysis of Geographic References",
"authors": [
{
"first": "David",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Gideon",
"middle": [],
"last": "Mann",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "45--49",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Smith and Gideon Mann. 2003. Bootstrapping toponym classifiers. HLT-NAACL 2003 Workshop: Analysis of Geographic References, p. 45-49, Ed- monton, Alberta, Canada.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Data Mining: Practical machine learning tools and techniques, 2nd Edition",
"authors": [
{
"first": "Ian",
"middle": [],
"last": "Witten",
"suffix": ""
},
{
"first": "Eibe",
"middle": [],
"last": "Frank",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ian Witten and Eibe Frank. 2005. Data Mining: Practi- cal machine learning tools and techniques, 2nd Edi- tion. Morgan Kaufmann, San Francisco.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Semi-supervised learning of geographical gazetteers from the internet",
"authors": [
{
"first": "Olga",
"middle": [],
"last": "Uryupina",
"suffix": ""
}
],
"year": 2003,
"venue": "Analysis of Geographic References",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Olga Uryupina. 2003. Semi-supervised learning of geo- graphical gazetteers from the internet. HLT-NAACL 2003 Workshop: Analysis of Geographic References, Edmonton, Alberta, Canada.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Unsupervised Word Sense Disambiguation Rivaling Supervised Methods. Proceedings of ACL",
"authors": [
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Yarowsky. 1995. Unsupervised Word Sense Disambiguation Rivaling Supervised Methods. Pro- ceedings of ACL 1995, Cambridge, Massachusetts.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "On Assigning Place Names to Geography Related Web Pages. Joint Conference on Digital Libraries (JCDL2005)",
"authors": [
{
"first": "Wenbo",
"middle": [],
"last": "Zong",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Aixin",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Ee-Peng",
"middle": [],
"last": "Lim",
"suffix": ""
},
{
"first": "Dion",
"middle": [
"H"
],
"last": "Goh",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenbo Zong, Dan Wu, Aixin Sun, Ee-Peng Lim, and Dion H. Goh. 2005. On Assigning Place Names to Geography Related Web Pages. Joint Conference on Digital Libraries (JCDL2005), Denver, Colorado.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"content": "<table><tr><td>Entry</td><td>Topony</td><td>U.S.</td><td>U.S. State</td><td>Lat-Long</td><td colspan=\"2\">Elevation (ft.</td><td>Class</td></tr><tr><td>Number</td><td>m</td><td>County</td><td/><td>(dddmmss)</td><td>above</td><td>sea</td></tr><tr><td/><td/><td/><td/><td/><td>level)</td><td/></tr><tr><td>110</td><td>Acton</td><td>Middlesex</td><td>Massachu-</td><td>422906N-</td><td>260</td><td/><td>Ppl (popu-</td></tr><tr><td/><td/><td/><td>setts</td><td>0712600W</td><td/><td/><td>lated place)</td></tr><tr><td>111</td><td>Acton</td><td>Yellow-</td><td>Montana</td><td>455550N-</td><td>3816</td><td/><td>Ppl</td></tr><tr><td/><td/><td>stone</td><td/><td>1084048W</td><td/><td/></tr><tr><td>112</td><td>Acton</td><td>Los Ange-</td><td>California</td><td>342812N-</td><td>2720</td><td/><td>Ppl</td></tr><tr><td/><td/><td>les</td><td/><td>1181145W</td><td/><td/></tr></table>",
"html": null,
"text": "",
"type_str": "table",
"num": null
},
"TABREF1": {
"content": "<table><tr><td/><td/><td/><td colspan=\"7\">Political Region or Administrative Area, e.g. Country, Province, County), Ppl</td></tr><tr><td/><td colspan=\"9\">(Populated Place, e.g. City, Town), Cap (Country Capital, Provincial Capital, or County</td></tr><tr><td/><td colspan=\"2\">Seat)</td><td/><td/><td/><td/><td/><td/></tr><tr><td>G1</td><td colspan=\"3\">Country</td><td/><td/><td/><td/><td/></tr><tr><td>G2</td><td colspan=\"5\">Province (State) or Country-Capital</td><td/><td/><td/></tr><tr><td>G3</td><td colspan=\"5\">County or Independent City</td><td/><td/><td/></tr><tr><td>G4</td><td colspan=\"5\">City, Town (Within County)</td><td/><td/><td/></tr><tr><td/><td/><td/><td/><td colspan=\"5\">Table 2: WAG Gazetteer Attributes</td></tr><tr><td colspan=\"2\">Corpus Size</td><td/><td/><td/><td>Use</td><td/><td/><td/><td>How Annotated</td></tr><tr><td>MAC1</td><td colspan=\"4\">6.51 million words with</td><td colspan=\"4\">Ambiguity Study (Gigaword NYT Sept.</td><td>LexScan</td><td>of</td><td>all</td></tr><tr><td/><td colspan=\"4\">61,720 place names (4553</td><td colspan=\"2\">2001) (Section 2)</td><td/><td/><td>senses, no attributes</td></tr><tr><td/><td colspan=\"3\">distinct) from GNIS</td><td/><td/><td/><td/><td/><td>marked</td></tr><tr><td>MAC-</td><td colspan=\"4\">5.47 million words with</td><td colspan=\"4\">Development Corpus (Gigaword AFP</td><td>LexScan using at-</td></tr><tr><td>DEV</td><td colspan=\"4\">124,175 place names</td><td colspan=\"3\">May 2002) (Section 4)</td><td/><td>tributes from WAG,</td></tr><tr><td/><td colspan=\"2\">(1229</td><td>distinct)</td><td>from</td><td/><td/><td/><td/><td>with heuristic pref-</td></tr><tr><td/><td colspan=\"2\">WAG</td><td/><td/><td/><td/><td/><td/><td>erence</td></tr><tr><td>MAC-</td><td colspan=\"4\">6.21 million words with</td><td colspan=\"4\">Machine Learning Corpus (Gigaword AP</td><td>LexScan using at-</td></tr><tr><td>ML</td><td colspan=\"4\">181,866 place names</td><td 
colspan=\"4\">Worldwide January 2002) (Section 5)</td><td>tributes from WAG,</td></tr><tr><td/><td colspan=\"2\">(1322</td><td>distinct)</td><td>from</td><td/><td/><td/><td/><td>with heuristic pref-</td></tr><tr><td/><td colspan=\"2\">WAG</td><td/><td/><td/><td/><td/><td/><td>erence</td></tr><tr><td>HAC</td><td colspan=\"4\">83,872 words with 1275</td><td colspan=\"4\">Human Annotated Corpus (from Time-</td><td>LexScan</td><td>using</td></tr><tr><td/><td colspan=\"4\">place names (435 distinct)</td><td colspan=\"4\">Bank 1.2, and Gigaword NYT Sept. 2001</td><td>WAG, with attrib-</td></tr><tr><td/><td colspan=\"3\">from WAG.</td><td/><td colspan=\"3\">and June 2002) (Section 5)</td><td/><td>utes and sense being</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>manually corrected</td></tr><tr><td>'stock'</td><td/><td>4</td><td>4</td><td colspan=\"2\">'winter'</td><td>3.61</td><td>3.61</td><td>'air'</td><td>3.16</td><td>3.16</td></tr><tr><td>'exchange'</td><td/><td>4.24</td><td>4.24</td><td colspan=\"2\">'telephone'</td><td>3.16</td><td>3.16</td><td>'base'</td><td>3.16</td><td>3.16</td></tr><tr><td>'embassy'</td><td/><td>3.61</td><td>3.61</td><td>'port'</td><td/><td>3.46</td><td>3.46</td><td>'accuses'</td><td>3.61</td><td>3.61</td></tr><tr><td>'capital'</td><td/><td>1.4</td><td>2.2</td><td colspan=\"2\">'midfielder'</td><td>3.46</td><td>3.46</td><td>'northern'</td><td>5.57</td><td>5.57</td></tr><tr><td>'airport'</td><td/><td>3.32</td><td>3.32</td><td>'city'</td><td/><td>1.19</td><td>1.19</td><td>'airlines'</td><td>4.8</td><td>4.8</td></tr><tr><td>'summit'</td><td/><td>4</td><td>4</td><td colspan=\"2\">'near'</td><td>2.77</td><td>3.83</td><td>'invaded'</td><td>3.32</td><td>3.32</td></tr><tr><td>'lower'</td><td/><td>3.16</td><td>3.16</td><td colspan=\"2\">'times'</td><td>3.16</td><td>3.16</td><td>'southern'</td><td>3.87</td><td>6.71</td></tr><tr><td>'visit'</td><td/><td>4.61</td><td>4.69</td><td 
colspan=\"2\">'southern'</td><td>3.87</td><td>3.87</td><td>'friendly'</td><td>4</td><td>4</td></tr><tr><td colspan=\"3\">'conference' 4.24</td><td>4.24</td><td>'yen'</td><td/><td>4</td><td>0.56</td><td>'state-run'</td><td>3.32</td><td>3.32</td></tr><tr><td colspan=\"2\">'agreement'</td><td>3.16</td><td>3.16</td><td colspan=\"2\">'attack'</td><td>0.18</td><td>3.87</td><td>'border'</td><td>7.48</td><td>7.48</td></tr></table>",
"html": null,
"text": "",
"type_str": "table",
"num": null
}
}
}
}