{
"paper_id": "U07-1007",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:08:52.914208Z"
},
"title": "Exploring approaches to discriminating among near-synonyms",
"authors": [
{
"first": "Mary",
"middle": [],
"last": "Gardiner",
"suffix": "",
"affiliation": {
"laboratory": "Centre for Language Technology Macquarie University",
"institution": "",
"location": {}
},
"email": "gardiner@ics.mq.edu.au"
},
{
"first": "Mark",
"middle": [],
"last": "Dras",
"suffix": "",
"affiliation": {
"laboratory": "Centre for Language Technology Macquarie University",
"institution": "",
"location": {}
},
"email": "madras@ics.mq.edu.au"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Near-synonyms are words that mean approximately the same thing, and which tend to be assigned to the same leaf in ontologies such as WordNet. However, they can differ from each other subtly in both meaning and usage (consider the pair of near-synonyms frugal and stingy), and therefore choosing the appropriate near-synonym for a given context is not a trivial problem.",
"pdf_parse": {
"paper_id": "U07-1007",
"_pdf_hash": "",
"abstract": [
{
"text": "Near-synonyms are words that mean approximately the same thing, and which tend to be assigned to the same leaf in ontologies such as WordNet. However, they can differ from each other subtly in both meaning and usage (consider the pair of near-synonyms frugal and stingy), and therefore choosing the appropriate near-synonym for a given context is not a trivial problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Initial work by Edmonds (1997) suggested that corpus statistics methods would not be particularly effective, and led to subsequent work adopting methods based on specific lexical resources. In earlier work (Gardiner and Dras, 2007) we discussed the hypothesis that some kind of corpus statistics approach may still be effective in some situations, particularly if the near-synonyms differ in sentiment from each other, and we presented some preliminary confirmation of this hypothesis. This suggests that problems involving this type of near-synonym may be particularly amenable to corpus statistics methods.",
"cite_spans": [
{
"start": 16,
"end": 30,
"text": "Edmonds (1997)",
"ref_id": "BIBREF5"
},
{
"start": 206,
"end": 231,
"text": "(Gardiner and Dras, 2007)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In this paper we investigate whether this result extends to a different corpus statistics method, and in addition we analyse the results with respect to a possible confounding factor discussed in the previous work: the skewness of the sets of near-synonyms. Our results show that the relationship between success in prediction and the nature of the near-synonyms is method-dependent, and that skewness is a more significant factor.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Choosing an appropriate word or phrase from among candidate near-synonyms or paraphrases is a significant language generation problem since even though near-synonyms and paraphrases are close in meaning, they differ in connotation and denotation in ways that may be significant to the desired effect of the generation output: for example, word choice can change a sentence from advice to admonishment. Particular applications that have been cited as having a use for modules which make effective word and phrase choices among closely related options are summarisation and rewriting (Barzilay and Lee, 2003) . Inkpen and Hirst (2006) extended the generation system HALogen (Langkilde and Knight, 1998; Langkilde, 2000) to include such a module.",
"cite_spans": [
{
"start": 582,
"end": 606,
"text": "(Barzilay and Lee, 2003)",
"ref_id": "BIBREF0"
},
{
"start": 609,
"end": 632,
"text": "Inkpen and Hirst (2006)",
"ref_id": "BIBREF10"
},
{
"start": 672,
"end": 700,
"text": "(Langkilde and Knight, 1998;",
"ref_id": "BIBREF12"
},
{
"start": 701,
"end": 717,
"text": "Langkilde, 2000)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We discuss a particular aspect of choice between closely related words and phrases: choice between words when there is any difference in meaning or attitude. Typical examples are frugal and stingy; slender and skinny; and error and blunder.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, as in Gardiner and Dras (2007), we explore whether corpus statistics methods have promise in discriminating between near-synonyms with attitude differences, particularly compared to near-synonyms that do not differ in attitude. In that work, we used the approach of (Edmonds, 1997), the first attempt to distinguish among near-synonyms, which adopts a corpus statistics method. Based on that work, we found that there was a significant difference between attitudinal and non-attitudinal near-synonyms. However, the Edmonds algorithm produced on the whole poor results, only a little above the given baseline, if at all. According to (Inkpen, 2007), the poor results were due to the way the algorithm handled data sparseness; she consequently presented an alternative algorithm with much better results. We also found that attitudinal and non-attitudinal near-synonyms differed significantly in their baselines as a consequence of skewness of synset distribution, complicating analysis.",
"cite_spans": [
{
"start": 21,
"end": 45,
"text": "Gardiner and Dras (2007)",
"ref_id": "BIBREF7"
},
{
"start": 277,
"end": 292,
"text": "(Edmonds, 1997)",
"ref_id": "BIBREF5"
},
{
"start": 643,
"end": 657,
"text": "(Inkpen, 2007)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, then, we develop an algorithm based on that of Inkpen, and use a far larger data set and a methodology suited to large data sets, to see whether this alternative method will support our previous findings. In addition we analyse results with regard to a measure of synset skewness. In Section 2 we outline the near-synonym task description; in Section 3 we present our method based on Inkpen's; in Section 4 we evaluate its effectiveness in comparison with Inkpen's own method; in Section 5 we test our hypothesis, present our results and discuss them; and in Section 6 we conclude.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our experiment tests a system's ability to fill a gap in a sentence from a given set of near-synonyms. This problem was first described by Edmonds (1997). Edmonds describes an experiment that he designed to test whether or not co-occurrence statistics are sufficient to predict which word in a set of near-synonyms fills a lexical gap. He gives this example of asking the system to choose which of error, mistake or oversight fits into the gap in this sentence:",
"cite_spans": [
{
"start": 139,
"end": 153,
"text": "Edmonds (1997)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task Description",
"sec_num": "2"
},
{
"text": "(1) However, such a move also runs the risk of cutting deeply into U.S. economic growth, which is why some economists think it would be a big ___.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Description",
"sec_num": "2"
},
{
"text": "Performance on the task is measured by comparing system performance against real word choices: that is, sentences such as example 1 are drawn from real text, a word is removed, and the system is asked to choose between that word and all of its nearsynonyms as candidates to fill the gap.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Description",
"sec_num": "2"
},
{
"text": "3 An approximation to Inkpen's solution to the near-synonym choice problem",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Description",
"sec_num": "2"
},
{
"text": "We know of two descriptions of algorithms used to choose between near-synonyms based upon context: that described by Edmonds (1997) and that described by Inkpen (2007). In our previous work we used Edmonds' method for discriminating between near-synonyms as a basis for comparing whether near-synonyms that differ in attitude differ in predictability from near-synonyms that do not. The more recent work by Inkpen is a more robust and reliable approach to the same problem, and therefore in this paper we develop a methodology based closely on that of Inkpen, using a different style of training corpus, in order to test whether the difference in performance between near-synonyms that differ in sentiment and those that do not persists with the better-performing method.",
"cite_spans": [
{
"start": 118,
"end": 132,
"text": "Edmonds (1997)",
"ref_id": "BIBREF5"
},
{
"start": 155,
"end": 168,
"text": "Inkpen (2007)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task Description",
"sec_num": "2"
},
{
"text": "Edmonds' and Inkpen's approaches to near-synonym prediction have the same underlying hypothesis: that the choice between near-synonyms can be predicted to an extent from the words immediately surrounding the gap. Returning to example 1, their approaches use words around the gap, e.g. big, to predict which of error, mistake or oversight would be used. They do this using some measure of how often big, and the other words surrounding the gap, are used in contexts where each of error, mistake and oversight is used. Edmonds uses every word in the sentence containing the gap, whereas Inkpen uses a generally smaller window of words surrounding the gap.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Description",
"sec_num": "2"
},
{
"text": "In this section we briefly describe Edmonds' approach to discriminating between near-synonyms in Section 3.1 and describe Inkpen's approach in more detail in Section 3.2. We then describe our adaptation of Inkpen's approach in Section 3.3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Description",
"sec_num": "2"
},
{
"text": "In Edmonds' approach to the word choice problem, the suitability of any candidate word c for a sentence S is approximated by a score, score(c, S), where score(c, S) is the sum of the associations between the candidate c and every other word w in the sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Edmonds' approach",
"sec_num": "3.1"
},
{
"text": "score(c, S) = \u03a3_{w \u2208 S} sig(c, w) (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Edmonds' approach",
"sec_num": "3.1"
},
{
"text": "In Edmonds' original method, which we used in Gardiner and Dras (2007), sig(c, w) is computed using either the t-score of c and w or a second-degree association: a combination of the t-score of c with a word w_0 and the t-score of the same word w_0 with w. Edmonds' t-scores were computed using co-occurrence counts in the 1989 Wall Street Journal, and the performance did not improve greatly over a baseline of choosing the most frequent word in the synset to fill all gaps.",
"cite_spans": [
{
"start": 46,
"end": 70,
"text": "Gardiner and Dras (2007)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Edmonds' approach",
"sec_num": "3.1"
},
{
"text": "In Inkpen's method, the suitability of candidate c for a given gap is approximated slightly differently: the entire sentence is not used to measure the suitability of the word. Instead, a fixed-size window of k words either side of the gap is used. For example, if k = 3, the word missing from the sentence in example 3 is predicted using only the six words shown in example 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inkpen's approach",
"sec_num": "3.2"
},
{
"text": "(3) Visitors to Istanbul often sense a second, ___ layer beneath the city's tangible beauty. (4) sense a second, ___ layer beneath the. Given a text fragment f consisting of 2k words, k words either side of a gap g",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inkpen's approach",
"sec_num": "3.2"
},
{
"text": "(w_1, w_2, . . . , w_k, g, w_{k+1}, . . . , w_{2k}),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inkpen's approach",
"sec_num": "3.2"
},
{
"text": "the suitability s(c, g) of any given candidate word c to fill the gap g is given by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inkpen's approach",
"sec_num": "3.2"
},
{
"text": "s(c, g) = \u03a3_{j=1}^{k} PMI(w_j, c) + \u03a3_{j=k+1}^{2k} PMI(c, w_j) (5) PMI(x, y)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inkpen's approach",
"sec_num": "3.2"
},
{
"text": "is the pointwise mutual information score of two words x and y, and is given by (Church and Hanks, 1991) :",
"cite_spans": [
{
"start": 80,
"end": 104,
"text": "(Church and Hanks, 1991)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inkpen's approach",
"sec_num": "3.2"
},
{
"text": "PMI(x, y) = log_2 [ (C(x, y) \u2022 N) / (C(x) \u2022 C(y)) ] (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inkpen's approach",
"sec_num": "3.2"
},
{
"text": "C(x), C(y) and C(x, y) are estimated using token counts in a corpus: C(x, y) is the number of times that x and y are found together, C(x) is the total number of occurrences of x in the corpus and C(y) the total number of occurrences of y in the corpus. N is the total number of words in the text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inkpen's approach",
"sec_num": "3.2"
},
{
"text": "Inkpen estimated C(x), C(y) and C(x, y) by issuing queries to the Waterloo MultiText System (Clarke and Terra, 2003). She defined C(x, y) as the number of times where x is followed by y within a certain query frame of length q within a corpus, so that, for example, if q = 3, example 7 would count as a co-occurrence of fresh and mango, but example 8 would not: (7) He likes fresh cold mango.",
"cite_spans": [
{
"start": 92,
"end": 116,
"text": "(Clarke and Terra, 2003)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inkpen's approach",
"sec_num": "3.2"
},
{
"text": "(8) I like fresh fruits in general, particularly mango.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inkpen's approach",
"sec_num": "3.2"
},
{
"text": "She also experimented with document counts, where C(x) is the number of documents that x is found in and C(x, y) is the number of documents in which both x and y are found, a method called PMI-IR (Turney, 2001), but found that this method did not perform as well, although the difference was not statistically significant.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inkpen's approach",
"sec_num": "3.2"
},
{
"text": "Inkpen's method outperformed both the baseline and Edmonds' method by 22 and 10 percentage points respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inkpen's approach",
"sec_num": "3.2"
},
{
"text": "Our variation on Inkpen's approach is designed to estimate PMI(x, y), the pointwise mutual information of words x and y, using the Web 1T 5-gram corpus Version 1 (Brants and Franz, 2006) .",
"cite_spans": [
{
"start": 162,
"end": 186,
"text": "(Brants and Franz, 2006)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Our variation of Inkpen's approach",
"sec_num": "3.3"
},
{
"text": "Web 1T contains n-gram frequency counts, up to and including 5-grams, as they occur in a trillion words of World Wide Web text. There is no context information beyond the n-gram boundaries. Examples of a 3-gram and a 5-gram and their respective counts from Web 1T are shown in examples 9 and 10: (9) means official and (count: 41)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our variation of Inkpen's approach",
"sec_num": "3.3"
},
{
"text": "(10) Valley National Park 1948 Art (count: 51)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our variation of Inkpen's approach",
"sec_num": "3.3"
},
{
"text": "These n-gram counts allow us to estimate C(x, y) for a given window width k by summing the Web 1T counts of k-grams in which words x and y occur and x is followed by y.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our variation of Inkpen's approach",
"sec_num": "3.3"
},
{
"text": "Counts are computed using a specially developed version of the Web 1T processing software \"Get 1T\" 1, originally described in Hawker (2007) and detailed in Hawker et al. (2007). The Get 1T software allows n-gram queries of the form in the following examples, where < * > is a wildcard which will match any token in that place in the n-gram. In order to find the number of n-grams with fresh and mango we need to construct three queries:",
"cite_spans": [
{
"start": 129,
"end": 142,
"text": "Hawker (2007)",
"ref_id": "BIBREF9"
},
{
"start": 159,
"end": 179,
"text": "Hawker et. al (2007)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Our variation of Inkpen's approach",
"sec_num": "3.3"
},
{
"text": "(11) < * > fresh mango (12) fresh < * > mango (13) fresh mango < * >",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our variation of Inkpen's approach",
"sec_num": "3.3"
},
{
"text": "However, in order to find fresh and mango within 4-grams we need multiple wildcards as in example 14, for which we added the embedded query hashing functionality described in Hawker et al. (2007): (14) fresh < * > < * > mango",
"cite_spans": [
{
"start": 166,
"end": 186,
"text": "Hawker et. al (2007)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Our variation of Inkpen's approach",
"sec_num": "3.3"
},
{
"text": "Queries are matched case-insensitively, but no stemming takes place, and there is no deeper analysis (such as part of speech matching).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our variation of Inkpen's approach",
"sec_num": "3.3"
},
{
"text": "This gives us the following methodology for a given lexical gap g and a window of k words either side of the gap:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our variation of Inkpen's approach",
"sec_num": "3.3"
},
{
"text": "1. for every candidate near-synonym c:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our variation of Inkpen's approach",
"sec_num": "3.3"
},
{
"text": "(a) for every word w_i in the set of words preceding the gap, w_1, . . . , w_k, calculate PMI(w_i, c) as in equation 6, given counts for C(w_i), C(c) and C(w_i, c) from Web 1T 2 (b) for every word w_j in the set of words following the gap, w_{k+1}, . . . , w_{2k}, calculate PMI(c, w_j) as in equation 6, given counts for C(c), C(w_j) and C(c, w_j) from Web 1T (c) compute the suitability score s(c, g) of candidate c as given by equation 5 2. select the candidate near-synonym with the highest suitability score for the gap, where a single such candidate exists 3. where there is no single candidate with the highest suitability score, select the most frequent candidate for the gap (that is, fall back to the baseline described in Section 3.4) 3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our variation of Inkpen's approach",
"sec_num": "3.3"
},
{
"text": "Since Web 1T contains 5-gram counts, we can use query frame sizes from q = 1 (words x and y must be adjacent, that is, occur in the 2-gram counts) to q = 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our variation of Inkpen's approach",
"sec_num": "3.3"
},
{
"text": "The baseline method that our method is compared to uses the most frequent word from a given synset as the chosen candidate for any gap requiring a member of that synset. Frequency is measured using frequency counts of the combined part of speech and word token in the 1989 Wall Street Journal.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline method",
"sec_num": "3.4"
},
{
"text": "Inkpen's method",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effectiveness of the approximation to",
"sec_num": "4"
},
{
"text": "In this section we compare our approximation of Inkpen's method described in Section 3.3 with her method described in Section 3.2. This will allow us to determine whether our approximation is effective enough to allow us to compare attitudinal and non-attitudinal near-synonyms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effectiveness of the approximation to",
"sec_num": "4"
},
{
"text": "In order to compare the two methods, we use five sets of near-synonyms, also used as test sets by both Edmonds and Inkpen:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test sets",
"sec_num": "4.1"
},
{
"text": "\u2022 the adjectives difficult, hard and tough;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test sets",
"sec_num": "4.1"
},
{
"text": "\u2022 the nouns error, mistake and oversight;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test sets",
"sec_num": "4.1"
},
{
"text": "\u2022 the nouns job, task and duty;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test sets",
"sec_num": "4.1"
},
{
"text": "\u2022 the nouns responsibility, burden, obligation and commitment; and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test sets",
"sec_num": "4.1"
},
{
"text": "\u2022 the nouns material, stuff and substance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test sets",
"sec_num": "4.1"
},
{
"text": "Inkpen compared her method to Edmonds' using these five sets and two more, both sets of verbs, which we have not tested on, as our attitudinal and non-attitudinal data does not included annotated verbs. We are therefore interested in the predictive power of our method compared to Inkpen's and Edmond's on adjectives and nouns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test sets",
"sec_num": "4.1"
},
{
"text": "We performed this experiment, as Edmonds and Inkpen did, using the 1987 Wall Street Journal as a source of test sentences. 4 Wherever one of the words in a test set is found, it is removed from the context in which it occurs to generate a gap for the algorithm to fill.",
"cite_spans": [
{
"start": 123,
"end": 124,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Test contexts",
"sec_num": "4.2"
},
{
"text": "So, for example, when sentence 15 is found in the test data, the word error is removed from it and the system is asked to predict which of error, mistake or oversight fills the gap at 16: (15) . . . his adversary's characterization of that minor sideshow as somehow a colossal error on the order of a World War. . . .",
"cite_spans": [
{
"start": 188,
"end": 192,
"text": "(15)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Test contexts",
"sec_num": "4.2"
},
{
"text": "(16) a colossal on the",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test contexts",
"sec_num": "4.2"
},
{
"text": "Recall from Section 3.2 these two parameters used by Inkpen: k and q.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter settings",
"sec_num": "4.3"
},
{
"text": "Parameter k is the size of the 'window' of context on either side of a lexical gap in the test set: the k words on either side of a gap are used to predict which of the candidate words best fills the gap.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter settings",
"sec_num": "4.3"
},
{
"text": "Parameter q is the query size used when querying the corpus to find out how often words x and y occur together in order to compute the value of C(x, y). In order to be counted as occurring together, x and y must occur within a window of length at most q.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter settings",
"sec_num": "4.3"
},
{
"text": "Inkpen found, using Edmonds' near-synonym set difficult and hard as a development set, that results are best for a small window (k \u2208 {1, 2}) but that the query frame had to be somewhat longer to get the best results. Her results were reported using k = 2 and q = 5, chosen via tuning on the development set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter settings",
"sec_num": "4.3"
},
{
"text": "We have retained the setting k = 2 and explored results where q = 2 and q = 4: because Web 1T contains 5-grams but no higher-order n-grams, we cannot measure the frequency of two words occurring together with any more than three intervening words, so q = 4 is the highest value q can have. Table 1 shows the performance of Edmonds' method and Inkpen's method as given in Inkpen (2007) 5 and our modified method on each of the test sets described in Section 4.1. Note that Inkpen reports different baseline results from us; we have not been able to reproduce her baselines. This may be due to choosing different part of speech tags: we simply used JJ for adjectives and NN for nouns.",
"cite_spans": [
{
"start": 371,
"end": 384,
"text": "Inkpen (2007)",
"ref_id": "BIBREF11"
},
{
"start": 385,
"end": 386,
"text": "5",
"ref_id": null
}
],
"ref_spans": [
{
"start": 290,
"end": 297,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Parameter settings",
"sec_num": "4.3"
},
{
"text": "Inkpen's improvements for the test synsets given in Section 4.1 were between +3.2% and +30.6%. Our performance is roughly comparable, with improvements as high as 31.2%. Further, we tend to improve especially strongly over the baseline where Inkpen also does so: on the two sets error etc. and responsibility etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4.4"
},
{
"text": "The major anomaly when compared to Inkpen's performance is the set job, task and duty, where our method performs very badly compared to both Edmonds' and Inkpen's methods and the baseline (which perform similarly). We also underperform both methods on material, stuff and substance, although not as dramatically.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4.4"
},
{
"text": "Overall, the fact that we tend to improve over Edmonds where Inkpen also does so suggests that our algorithm based on Inkpen's takes advantage of the same aspects as hers to gain improvements over Edmonds, and thus that the method is a good candidate for use in our main experiment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4.4"
},
{
"text": "Having determined in Section 4 that our modified version of Inkpen's method performs as a passable approximation of hers, and in particular that, where her method improved dramatically over the baseline and Edmonds' method, ours improves likewise, we then tested our central hypothesis: that attitudinal synsets respond better to statistical prediction techniques than non-attitudinal synsets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparing attitudinal and non-attitudinal synsets",
"sec_num": "5"
},
{
"text": "In order to test our hypothesis, we use synsets divided into near-synonym sets that differ in attitude and sets that do not. This test set is drawn from our set of annotated attitudinal and non-attitudinal near-synonyms described in Gardiner and Dras (2007). These are WordNet 2.0 (Fellbaum, 1998) [...] Street Journal. The synsets were annotated as attitudinal and non-attitudinal by the authors of this paper. Synsets were chosen where both annotators are certain of their label, and where both annotators have the same label. This results in 60 synsets in total: 8 where the annotators agreed that there was definitely an attitude difference between words in the synset, and 52 where the annotators agreed that there were definitely no attitude differences between the words in the synset. An example of a synset agreed to have attitudinal differences was:",
"cite_spans": [
{
"start": 236,
"end": 260,
"text": "Gardiner and Dras (2007)",
"ref_id": "BIBREF7"
},
{
"start": 284,
"end": 300,
"text": "(Fellbaum, 1998)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Test set",
"sec_num": "5.1"
},
{
"text": "(17)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test set",
"sec_num": "5.1"
},
{
"text": "bad, insecure, risky, high-risk, speculative",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test set",
"sec_num": "5.1"
},
{
"text": "An example of synsets agreed to not have attitudinal differences was:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test set",
"sec_num": "5.1"
},
{
"text": "(18)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test set",
"sec_num": "5.1"
},
{
"text": "sphere, domain, area, orbit, field, arena",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test set",
"sec_num": "5.1"
},
{
"text": "The synsets are not used in their entirety, due to the differences in the number of words in each synset (compare {violence, force} with two members to {arduous, backbreaking, grueling, gruelling, hard, heavy, laborious, punishing, toilsome} with nine, for example). Instead, a certain number n of words are selected from each synset (where n \u2208 {3, 4}) based on the frequency count in the 1989 Wall Street Journal corpus. For example hard, heavy, gruelling and punishing are the four most frequent words in the {arduous, backbreaking, grueling, gruelling, hard, heavy, laborious, punishing, toilsome} synset, so when n = 4 those four words would be selected. When the synset's length is less than or equal to n, for example when n = 4 but the synset is {violence, force}, the entire synset is used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test set",
"sec_num": "5.1"
},
{
"text": "These test sets are referred to as top3 (synsets reduced to 3 or fewer members) and top4 (synsets reduced to 4 or fewer members).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test set",
"sec_num": "5.1"
},
{
"text": "Exactly as in Section 4.2, our lexical gaps and their surrounding contexts are drawn from sentences in the 1987 Wall Street Journal containing one of the words in the test synsets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test contexts",
"sec_num": "5.2"
},
{
"text": "As described in Sections 3.2 and 4.3, there are two parameters that can be varied regarding the context around a lexical gap (k), and the nearness of two words x and y in the corpus in order for them to be considered to occur together (q).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter settings",
"sec_num": "5.3"
},
{
"text": "As per Inkpen's results on her development set, and as in Section 4, we use the setting k = 2 and vary q such that q = 2 on some test runs and q = 4 on others. We cannot test with Inkpen's suggested q = 5, as that would require 6-grams.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter settings",
"sec_num": "5.3"
},
{
"text": "The overall performance of our method on our sets of attitudinal and non-attitudinal near-synonyms is shown in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 111,
"end": 118,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5.4"
},
{
"text": "We did four test runs in total, two each on sets top3 and top4 varying q between q = 2 and q = 4. The baseline result does not depend on q and therefore is the same for both tests of top3 and of top4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5.4"
},
{
"text": "Table 3: Distribution of improvements on baseline for top3, k = 2, q = 2.",
"cite_spans": [],
"ref_spans": [
{
"start": 0,
"end": 7,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Contexts containing a test word",
"sec_num": null
},
{
"text": "As in our previous paper (Gardiner and Dras, 2007) , the baselines behave noticeably differently for attitudinal and non-attitudinal synsets. Calculating the z-statistic as is standard for comparing two proportions (Moore and McCabe, 2003) we find that the differences between the attitudinal and non-attitudinal results for each test are all statistically significant (p < 0.01). Thus, again, it is difficult from the data in Table 2 alone to determine whether the better performance of non-attitudinal synsets is due to the higher baseline performance for those same synsets.",
"cite_spans": [
{
"start": 25,
"end": 50,
"text": "(Gardiner and Dras, 2007)",
"ref_id": "BIBREF7"
},
{
"start": 215,
"end": 239,
"text": "(Moore and McCabe, 2003)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 427,
"end": 434,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Contexts containing a test word",
"sec_num": null
},
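The significance test used above, the standard z-statistic for comparing two proportions (Moore and McCabe, 2003), can be sketched as follows; the counts are hypothetical placeholders, not our actual results:

```python
import math

def two_proportion_z(success1, n1, success2, n2):
    """z-statistic for the difference between two sample proportions,
    using the pooled estimate of the common proportion."""
    p1, p2 = success1 / n1, success2 / n2
    pooled = (success1 + success2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

def two_sided_p(z):
    """Two-sided p-value under the standard normal distribution."""
    return math.erfc(abs(z) / math.sqrt(2))

# Hypothetical counts: correct predictions out of test contexts for
# attitudinal vs. non-attitudinal synsets.
z = two_proportion_z(540, 1000, 660, 1000)
print(z, two_sided_p(z) < 0.01)
```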
{
"text": "There are two major aspects of this result requiring further investigation. The first is that, according to these aggregate numbers, our method performs very similarly to the baseline; this was not anticipated given the results in Section 4, which showed that on a limited set of synsets our method performed well above the baseline, although not as well as Inkpen's original method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contexts containing a test word",
"sec_num": null
},
{
"text": "Secondly, inspection of individual synsets and their performance reveals that this aggregate is not representative of the performance as a whole: it is simply an average of approximately equal numbers of good and bad predictions by our method. Table 3 shows that for one test run (top3, k = 2, q = 2) there were a number of synsets on which our method performed very well, with an improvement of more than 20 percentage points over the baseline, but also a substantial number on which it performed very badly, losing more than 20 percentage points from the baseline.",
"cite_spans": [],
"ref_spans": [
{
"start": 244,
"end": 251,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Contexts containing a test word",
"sec_num": null
},
{
"text": "In our previous work we expressed a suspicion that the success of Edmonds' prediction method might be influenced by the evenness of the distribution of frequencies within a synset. That is, if a synset contains a very dominant member (which will cause the baseline to perform well), then the Edmonds method may perform worse against the baseline than it would for a synset in which the word choices were distributed fairly evenly among the members of the set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contexts containing a test word",
"sec_num": null
},
{
"text": "Given the results of the test runs shown in Table 2 , and the wide distribution of prediction successes shown in Table 3 , we decided to test the hypothesis that the distribution of words in the synsets influences the performance of prediction methods that use context. This is described in Section 5.4.1.",
"cite_spans": [],
"ref_spans": [
{
"start": 44,
"end": 51,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 113,
"end": 120,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Contexts containing a test word",
"sec_num": null
},
{
"text": "In this section, we describe an analysis of the results in Section 5.4 in terms of whether the balance of frequencies among words in the synset contributes to the quality of our prediction result.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy analysis",
"sec_num": "5.4.1"
},
{
"text": "In order to measure a correlation between the balance of frequencies of words and the prediction result, we need a measure of 'balance'. In this case we have chosen information entropy (Shannon, 1948) , the measure of bits of information required to convey a particular result. The entropy of a synset's frequencies here is measured using the proportion of total uses of the synset that each particular word represents. A synset in which frequencies are reasonably evenly distributed has high information entropy, and a synset in which one or more words are very frequent as a proportion of use of that synset as a whole has low entropy. We then carried out multiple regression analysis using the category of the synset (attitudinal or not attitudinal, coded as 1 and 0 for this analysis) and the entropy of the synset's members' frequencies as our two independent variables; this allows us to separate out the two effects of synset skewness and attitudinality. Regression coefficients are shown in Table 4 (statistically significant results, p < 0.05, are marked *). Table 4 shows that in general, performance is negatively correlated with category but positively correlated with entropy, although the correlation with category is not always significant. The positive relationship with entropy confirms our suspicion in Gardiner and Dras (2007) that statistical techniques perform better when the synset does not have a highly dominant member. The negative correlation with category implies that the reverse of our main hypothesis holds: that our statistical method works better for predicting the use of non-attitudinal near-synonyms.",
"cite_spans": [
{
"start": 185,
"end": 200,
"text": "(Shannon, 1948)",
"ref_id": "BIBREF15"
},
{
"start": 1321,
"end": 1345,
"text": "Gardiner and Dras (2007)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 999,
"end": 1006,
"text": "Table 4",
"ref_id": null
},
{
"start": 1068,
"end": 1075,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Entropy analysis",
"sec_num": "5.4.1"
},
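The entropy measure used in this analysis is standard Shannon entropy (Shannon, 1948) over the proportion of the synset's total uses that each member accounts for. A minimal sketch, with illustrative counts rather than real corpus frequencies:

```python
import math

def synset_entropy(counts):
    """Shannon entropy (in bits) of the distribution of uses over a
    synset's members; higher entropy means a more even distribution."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]  # 0 log 0 taken as 0
    return -sum(p * math.log2(p) for p in probs)

# A skewed synset (one dominant member) has low entropy ...
print(synset_entropy([900, 50, 30, 20]))
# ... while an evenly used synset reaches log2(len) bits.
print(synset_entropy([250, 250, 250, 250]))  # -> 2.0
```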
{
"text": "There are two questions that arise from the result that our Inkpen-based method gives a different result from the Edmonds-based one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy analysis",
"sec_num": "5.4.1"
},
{
"text": "First, is our approximation to Inkpen's method inherently faulty or can it be improved in some way? We know from Section 4 that it tends to perform well where her method performs well. An obvious second test is to compare our results to another test described in Inkpen (2007), which used a larger set of near-synonyms and tested predictive power using the British National Corpus as a source of test contexts. Such a test would evaluate our system's performance in genres quite different from newswire text, and allow us to make a further comparison with Inkpen's method.",
"cite_spans": [
{
"start": 263,
"end": 276,
"text": "Inkpen (2007)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy analysis",
"sec_num": "5.4.1"
},
{
"text": "Second, why do we perform significantly better for near-synonyms without attitude differences? One possible explanation that we intend to explore is that attitudinal word choice is predicted by attitude exhibited in a very large context, perhaps an entire document or a section thereof. Sentiment analysis techniques may be able to detect the attitude-bearing parts of a document, and these may serve as more useful features for predicting attitudinal word choice than the immediately surrounding words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy analysis",
"sec_num": "5.4.1"
},
{
"text": "In this paper we have developed a modification to Inkpen's method of making a near-synonym choice that performs reasonably promisingly on a set of her test data; however, when tested on a larger set of near-synonyms, on average it does not perform very differently from the baseline. We have also shown that, contrary to our hypothesis that near-synonyms with attitude differences would be predicted better by statistical methods, with this method the near-synonyms without attitude differences are predicted better when there is a difference in predictive power.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "Ultimately, we plan to develop a system that will acquire and predict usage of attitudinal nearsynonyms, drawing on statistical methods and methods from sentiment analysis. In order to achieve this we will need a comprehensive understanding of why this method's performance was not adequate for the task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "Available at http://get1t.sf.net/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The result of equation 6 is undefined when any of C(x) = 0, C(y) = 0 or C(x, y) = 0 holds, that is, when x or y or at least one n-gram containing x and y cannot be found in the Web 1T counts. For the purpose of computing s(c, g), we define PMI(x, y) = 0 when C(x) = 0, C(y) = 0 or C(x, y) = 0, so that it has no influence on the score s(c, g) given by equation 5. Typically, in this case, all candidates have scored 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
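The zero-count convention in this footnote can be sketched as follows; the count tables here are toy stand-ins for the Web 1T data, and the function names are illustrative:

```python
import math

def pmi(x, y, unigram, pair, total):
    """Pointwise mutual information between x and y, with the convention
    above: PMI is defined as 0 whenever C(x), C(y), or C(x, y) is 0, so
    such a pair contributes nothing to the score s(c, g)."""
    cx, cy, cxy = unigram.get(x, 0), unigram.get(y, 0), pair.get((x, y), 0)
    if cx == 0 or cy == 0 or cxy == 0:
        return 0.0
    # PMI(x, y) = log( P(x, y) / (P(x) P(y)) ), with probabilities
    # estimated as counts over the corpus size.
    return math.log2(cxy * total / (cx * cy))

# Toy counts standing in for Web 1T frequencies.
unigram = {"frugal": 40, "budget": 200}
pair = {("frugal", "budget"): 8}
total = 10_000

print(pmi("frugal", "budget", unigram, pair, total))  # positive association
print(pmi("frugal", "unseen", unigram, pair, total))  # -> 0.0
```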
{
"text": "All references to the Wall Street Journal data used in this paper refer to Charniak et al. (2000). Inkpen actually gives two methods: one using PMI estimates from document counts, and one using PMI estimates from word counts. Here we are discussing her word count method and use those values in our table.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Thank you to: Diana Inkpen for sending us a copy of Inkpen (2007) while it was under review; and Tobias Hawker for providing a copy of his Web 1T processing software, Get 1T, before its public release. This work has been supported by the Australian Research Council under Discovery Project DP0558852.",
"cite_spans": [
{
"start": 52,
"end": 65,
"text": "Inkpen (2007)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Learning to paraphrase: An unsupervised approach using multiplesequence alignment",
"authors": [
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2003,
"venue": "HLT-NAACL 2003: Main Proceedings",
"volume": "",
"issue": "",
"pages": "16--23",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Regina Barzilay and Lillian Lee. 2003. Learning to paraphrase: An unsupervised approach using multiple- sequence alignment. In HLT-NAACL 2003: Main Pro- ceedings, pages 16-23.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Web 1T 5-gram Version 1",
"authors": [
{
"first": "Thorsten",
"middle": [],
"last": "Brants",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Franz",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thorsten Brants and Alex Franz. 2006. Web 1T 5-gram Version 1. http://www.ldc.upenn.edu/Catalog/CatalogEntry.jsp?catalogId=LDC2006T13.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "BLLIP 1987-89 WSJ Corpus Release 1",
"authors": [
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
},
{
"first": "Don",
"middle": [],
"last": "Blaheta",
"suffix": ""
},
{
"first": "Niyu",
"middle": [],
"last": "Ge",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Hale",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eugene Charniak, Don Blaheta, Niyu Ge, Keith Hall, John Hale, and Mark Johnson. 2000. BLLIP 1987-89 WSJ Corpus Release 1. http://www.ldc.upenn.edu/Catalog/CatalogEntry.jsp?catalogId=LDC2000T43.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Word association norms, mutual information, and lexicography",
"authors": [
{
"first": "Kenneth",
"middle": [],
"last": "Church",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Hanks",
"suffix": ""
}
],
"year": 1991,
"venue": "Computational Linguistics",
"volume": "16",
"issue": "1",
"pages": "22--29",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth Church and Patrick Hanks. 1991. Word association norms, mutual information, and lexicography. Computational Linguistics, 16(1):22-29.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Passage retrieval vs. document retrieval for factoid question answering",
"authors": [
{
"first": "L",
"middle": [
"A"
],
"last": "Charles",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Clarke",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Egidio",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Terra",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "427--428",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charles L. A. Clarke and Egidio L. Terra. 2003. Passage retrieval vs. document retrieval for factoid question an- swering. In Proceedings of the 26th Annual Interna- tional ACM SIGIR Conference on Research and De- velopment in Information Retrieval, pages 427-428, Toronto, Canada.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Choosing the word most typical in context using a lexical co-occurrence network",
"authors": [
{
"first": "Philip",
"middle": [],
"last": "Edmonds",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics and the 8th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "507--509",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philip Edmonds. 1997. Choosing the word most typical in context using a lexical co-occurrence network. In Proceedings of the 35th Annual Meeting of the Associ- ation for Computational Linguistics and the 8th Con- ference of the European Chapter of the Association for Computational Linguistics, pages 507-509, July.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "WordNet: An Electronic Lexical Database",
"authors": [],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christiane Fellbaum, editor. 1998. WordNet: An Elec- tronic Lexical Database. The MIT Press, May.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Corpus statistics approaches to discriminating among near-synonyms",
"authors": [
{
"first": "Mary",
"middle": [],
"last": "Gardiner",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dras",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 10th Conference of the Pacific Association for Computational Linguistics (PACLING 2007)",
"volume": "",
"issue": "",
"pages": "31--39",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mary Gardiner and Mark Dras. 2007. Corpus statistics approaches to discriminating among near-synonyms. In Proceedings of the 10th Conference of the Pacific Association for Computational Linguistics (PACLING 2007), pages 31-39, Melbourne, Australia, September.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Practical queries of a massive n-gram database",
"authors": [
{
"first": "Tobias",
"middle": [],
"last": "Hawker",
"suffix": ""
},
{
"first": "Mary",
"middle": [],
"last": "Gardiner",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Bennetts",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Australasian Language Technology Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tobias Hawker, Mary Gardiner, and Andrew Bennetts. 2007. Practical queries of a massive n-gram database. In Proceedings of the Australasian Language Technol- ogy Workshop 2007 (ALTW 2007), Melbourne, Aus- tralia. To appear.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "USYD: WSD and lexical substitution using the Web 1T corpus",
"authors": [
{
"first": "Tobias",
"middle": [],
"last": "Hawker",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of SemEval-2007: the 4th International Workshop on Semantic Evaluations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tobias Hawker. 2007. USYD: WSD and lexical substi- tution using the Web 1T corpus. In Proceedings of SemEval-2007: the 4th International Workshop on Se- mantic Evaluations, Prague, Czech Republic.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Building and using a lexical knowledge-base of near-synonym differences",
"authors": [
{
"first": "Diana",
"middle": [],
"last": "Inkpen",
"suffix": ""
},
{
"first": "Graeme",
"middle": [],
"last": "Hirst",
"suffix": ""
}
],
"year": 2006,
"venue": "Computational Linguistics",
"volume": "32",
"issue": "2",
"pages": "223--262",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diana Inkpen and Graeme Hirst. 2006. Building and using a lexical knowledge-base of near-synonym dif- ferences. Computational Linguistics, 32(2):223-262, June.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A statistical model of nearsynonym choice",
"authors": [
{
"first": "Diana",
"middle": [],
"last": "Inkpen",
"suffix": ""
}
],
"year": 2007,
"venue": "ACM Transactions of Speech and Language Processing",
"volume": "4",
"issue": "1",
"pages": "1--17",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diana Inkpen. 2007. A statistical model of near- synonym choice. ACM Transactions of Speech and Language Processing, 4(1):1-17, January.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The practical value of N-grams in generation",
"authors": [
{
"first": "Irene",
"middle": [],
"last": "Langkilde",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 9th International Natural Language Generation Workshop",
"volume": "",
"issue": "",
"pages": "248--255",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Irene Langkilde and Kevin Knight. 1998. The practical value of N-grams in generation. In Proceedings of the 9th International Natural Language Generation Workshop, pages 248-255, Niagara-on-the-Lake, Canada.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Forest-based statistical sentence generation",
"authors": [
{
"first": "Irene",
"middle": [],
"last": "Langkilde",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 1st Conference of the North American Chapter of the Association for Computational Linguistics and the 6th Conference on Applied Natural Language Processing",
"volume": "",
"issue": "",
"pages": "170--177",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Irene Langkilde. 2000. Forest-based statistical sentence generation. In Proceedings of the 1st Conference of the North American Chapter of the Association for Computational Linguistics and the 6th Conference on Applied Natural Language Processing (NAACL-ANLP 2000), pages 170-177, Seattle, USA.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Introduction to the Practice of Statistics",
"authors": [
{
"first": "David",
"middle": [
"S"
],
"last": "Moore",
"suffix": ""
},
{
"first": "George",
"middle": [
"P"
],
"last": "Mccabe",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David S. Moore and George P. McCabe. 2003. Introduc- tion to the Practice of Statistics. W. H. Freeman and Company, 4 edition.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A mathematical theory of communication",
"authors": [
{
"first": "Claude",
"middle": [
"E"
],
"last": "Shannon",
"suffix": ""
}
],
"year": 1948,
"venue": "Bell System Technical Journal",
"volume": "27",
"issue": "",
"pages": "379--423",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Claude E. Shannon. 1948. A mathematical theory of communication. Bell System Technical Journal, 27:379-423 and 623-656, July and October.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Mining the web for synonyms: PMI-IR versus LSA on TOEFL",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Turney",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the Twelfth European Conference on Machine Learning (ECML 2001)",
"volume": "",
"issue": "",
"pages": "491--502",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Turney. 2001. Mining the web for synonyms: PMI-IR versus LSA on TOEFL. In Proceedings of the Twelfth European Conference on Machine Learning (ECML 2001), pages 491-502, Freiburg, Germany.",
"links": null
}
},
"ref_entries": {
"TABREF2": {
"num": null,
"type_str": "table",
"content": "<table><tr><td>Improvement over</td><td colspan=\"2\">Number of synsets</td><td/></tr><tr><td>baseline</td><td/><td/><td/></tr><tr><td/><td colspan=\"3\">Att. Non-att. Total</td></tr><tr><td>\u2265 +20%</td><td>0</td><td>16</td><td>16</td></tr><tr><td>\u2265 +10% and &lt; +20%</td><td>1</td><td>7</td><td>8</td></tr><tr><td>\u2265 +5% and &lt; +10%</td><td>2</td><td>2</td><td>4</td></tr><tr><td>&gt; -5% and &lt; +5%</td><td>2</td><td>10</td><td>12</td></tr><tr><td>\u2264 -5% and &gt; -10%</td><td>0</td><td>6</td><td>6</td></tr><tr><td>\u2264 -10% and &gt; -20%</td><td>1</td><td>3</td><td>4</td></tr><tr><td>\u2264 -20%</td><td>1</td><td>8</td><td>9</td></tr></table>",
"text": "Performance of the baseline and our method on all test sentences (k = 2)",
"html": null
}
}
}
}