{
"paper_id": "E14-1009",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:39:17.102360Z"
},
"title": "Multi-Granular Aspect Aggregation in Aspect-Based Sentiment Analysis",
"authors": [
{
"first": "John",
"middle": [],
"last": "Pavlopoulos",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Athens University of Economics and Business",
"location": {
"addrLine": "Patission 76",
"postCode": "GR-104 34",
"settlement": "Athens",
"country": "Greece"
}
},
"email": ""
},
{
"first": "Ion",
"middle": [],
"last": "Androutsopoulos",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Athens University of Economics and Business",
"location": {
"addrLine": "Patission 76",
"postCode": "GR-104 34",
"settlement": "Athens",
"country": "Greece"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Aspect-based sentiment analysis estimates the sentiment expressed for each particular aspect (e.g., battery, screen) of an entity (e.g., smartphone). Different words or phrases, however, may be used to refer to the same aspect, and similar aspects may need to be aggregated at coarser or finer granularities to fit the available space or satisfy user preferences. We introduce the problem of aspect aggregation at multiple granularities. We decompose it in two processing phases, to allow previous work on term similarity and hierarchical clustering to be reused. We show that the second phase, where aspects are clustered, is almost a solved problem, whereas further research is needed in the first phase, where semantic similarity measures are employed. We also introduce a novel sense pruning mechanism for WordNet-based similarity measures, which improves their performance in the first phase. Finally, we provide publicly available benchmark datasets.",
"pdf_parse": {
"paper_id": "E14-1009",
"_pdf_hash": "",
"abstract": [
{
"text": "Aspect-based sentiment analysis estimates the sentiment expressed for each particular aspect (e.g., battery, screen) of an entity (e.g., smartphone). Different words or phrases, however, may be used to refer to the same aspect, and similar aspects may need to be aggregated at coarser or finer granularities to fit the available space or satisfy user preferences. We introduce the problem of aspect aggregation at multiple granularities. We decompose it in two processing phases, to allow previous work on term similarity and hierarchical clustering to be reused. We show that the second phase, where aspects are clustered, is almost a solved problem, whereas further research is needed in the first phase, where semantic similarity measures are employed. We also introduce a novel sense pruning mechanism for WordNet-based similarity measures, which improves their performance in the first phase. Finally, we provide publicly available benchmark datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Given a set of texts discussing a particular entity (e.g., reviews of a laptop), aspect-based sentiment analysis (ABSA) attempts to identify the most prominent (e.g., frequently discussed) aspects of the entity (e.g., battery, screen) and the average sentiment (e.g., 1 to 5 stars) for each aspect or group of aspects, as in Fig. 1 . Most ABSA systems perform all or some of the following (Liu, 2012) : subjectivity detection to retain only sentences (or other spans) expressing subjective opinions; aspect extraction to extract (and possibly rank) terms corresponding to aspects (e.g., 'battery'); aspect aggregation to group aspect terms that are nearsynonyms (e.g., 'price', 'cost') or to obtain aspects at a coarser granularity (e.g., 'chicken','steak', and 'fish' may be replaced by 'food' in restaurant reviews); and aspect sentiment score estimation to estimate the average sentiment for each aspect or group of aspects. In this paper, we focus on aspect aggregation, the least studied stage of the four.",
"cite_spans": [
{
"start": 389,
"end": 400,
"text": "(Liu, 2012)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 325,
"end": 331,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Aspect aggregation is needed to avoid reporting separate sentiment scores for aspect terms that are very similar. In Fig. 1 , for example, showing separate lines for 'money', 'price', and 'cost' would be confusing. The extent to which aspect terms should be aggregated, however, also depends on the available space and user preferences. On devices with smaller screens, it may be desirable to aggregate aspect terms that are similar, though not necessarily near-synonyms (e.g., 'design', 'color', 'feeling') to show fewer lines ( Fig. 1 ), but finer aspects may be preferable on larger screens. Users may also wish to adjust the granularity of aspects, e.g., by stretching or narrowing the height of Fig. 1 on a smartphone to view more or fewer lines. Hence, aspect aggregation should be able to produce groups of aspect terms for multiple granularities. We assume that the aggregated aspects are displayed as lists of terms, as in Fig. 1 . We make no effort to order (e.g., by frequency) the terms in each list, nor do we attempt to produce a single (more general) term to describe each aggregated aspect, leaving such tasks for future work.",
"cite_spans": [],
"ref_spans": [
{
"start": 117,
"end": 123,
"text": "Fig. 1",
"ref_id": "FIGREF0"
},
{
"start": 530,
"end": 536,
"text": "Fig. 1",
"ref_id": "FIGREF0"
},
{
"start": 700,
"end": 706,
"text": "Fig. 1",
"ref_id": "FIGREF0"
},
{
"start": 932,
"end": 938,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "ABSA systems usually group synonymous (or near-synonymous) aspect terms (Liu, 2012) . Ag-gregating only synonyms (or near-synonyms), however, does not allow users to select the desirable aspect granularity, and ignores the hierarchical relations between aspect terms. For example, 'pizza' and 'steak' are kinds of 'food' and, hence, the three terms can be aggregated to show fewer, coarser aspects, even though they are not synonyms. Carenini et al. (2005) used a predefined domain-specific taxonomy to hierarchically aggregate aspect terms, but taxonomies of this kind are often not available. By contrast, we use only general-purpose taxonomies (e.g., WordNet), term similarity measures based on general-purpose taxonomies or corpora, and hierarchical clustering.",
"cite_spans": [
{
"start": 72,
"end": 83,
"text": "(Liu, 2012)",
"ref_id": "BIBREF25"
},
{
"start": 434,
"end": 456,
"text": "Carenini et al. (2005)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We define multi-granular aspect aggregation to be the task of partitioning a given set of aspect terms (generated by a previous aspect extraction stage) into k non-overlapping clusters, for multiple values of k. A further constraint is that the clusters have to be consistent for different k values, meaning that if two aspect terms t 1 , t 2 are placed in the same cluster for k = k 1 , then t 1 and t 2 must also be grouped together (in the same cluster) for every k = k 2 with k 2 < k 1 , i.e., for every coarser grouping. For example, if 'waiter' and 'service' are grouped together for k = 5, they must also be grouped together for k = 4, 3, 2 and (trivially) k = 1, to allow the user to feel that selecting a smaller number of aspect groups (narrowing the height of Fig. 1 ) has the effect of zooming out (without aspect terms jumping unexpectedly to other aspect groups), and similarly for zooming in. 1 This requirement is satisfied by using agglomerative hierarchical clustering algorithms (Manning and Sch\u00fctze, 1999; Hastie et al., 2001) , which in our case produce term hierarchies like the ones of Fig. 2 . By using slices (nodes at a particular depth) of the hierarchies that are closer to the root or the leaves, we obtain fewer or more clusters. The vertical dotted lines of Fig. 2 illustrate two slices for k = 4. By contrast, flat clustering algorithms (e.g., k-means) do not satisfy the consistency constraint for different k values.",
"cite_spans": [
{
"start": 998,
"end": 1025,
"text": "(Manning and Sch\u00fctze, 1999;",
"ref_id": "BIBREF26"
},
{
"start": 1026,
"end": 1046,
"text": "Hastie et al., 2001)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 771,
"end": 777,
"text": "Fig. 1",
"ref_id": "FIGREF0"
},
{
"start": 1109,
"end": 1115,
"text": "Fig. 2",
"ref_id": "FIGREF1"
},
{
"start": 1289,
"end": 1295,
"text": "Fig. 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
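The consistency constraint and the hierarchy-slicing idea described above can be sketched with a plain average-linkage agglomerative clusterer. This is a toy illustration, not the authors' code: the function names, the five terms, and the similarity scores below are all invented. Because each coarser clustering is obtained only by merging clusters of the finer one, consistency across k values holds by construction.

```python
# Toy average-linkage agglomerative clustering over a similarity matrix.
# history[n - k] holds the clustering with exactly k clusters.

def agglomerative(terms, sim):
    """sim[a][b] is a similarity in [0, 1]; higher means more similar."""
    clusters = [frozenset([t]) for t in terms]
    history = [list(clusters)]            # clusterings for k = n, n-1, ..., 1
    while len(clusters) > 1:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # average linkage: mean pairwise similarity between clusters
                link = sum(sim[a][b] for a in clusters[i] for b in clusters[j]) / (
                    len(clusters[i]) * len(clusters[j]))
                if best is None or link > best[0]:
                    best = (link, i, j)
        _, i, j = best
        merged = clusters[i] | clusters[j]
        clusters = [c for idx, c in enumerate(clusters) if idx not in (i, j)]
        clusters.append(merged)
        history.append(list(clusters))
    return history

# invented aspect terms and pairwise similarities
terms = ['price', 'cost', 'pizza', 'steak', 'food']
pairs = {('price', 'cost'): 0.9, ('pizza', 'steak'): 0.7,
         ('pizza', 'food'): 0.8, ('steak', 'food'): 0.8}
sim = {a: {b: (1.0 if a == b else pairs.get((a, b), pairs.get((b, a), 0.1)))
           for b in terms} for a in terms}

history = agglomerative(terms, sim)
for clustering in history:                # from k = 5 down to k = 1
    print(sorted(sorted(c) for c in clustering))
```

Slicing the merge history at different depths plays the role of cutting the dendrogram of Fig. 2 at different levels: each entry of `history` is one "slice" with a fixed k.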
{
"text": "Agglomerative clustering algorithms require a measure of the distance between individuals, in our case a measure of how similar two aspect terms are, and a linkage criterion to specify which clusters should be merged to form larger (coarser) clusters. To experiment with different term sim- 1 We also require the clusters to be non-overlapping to make this zooming in and out metaphor clearer to the user. ilarity measures and linkage criteria, we decompose multi-granular aspect aggregation in two processing phases. Phase A fills in a symmetric matrix, like the one of Table 1 , with scores showing the similarity of each pair of input aspect terms; the matrix in effect defines the distance measure to be used by agglomerative clustering. In Phase B, the aspect terms are grouped into k non-overlapping clusters, for varying values of k, given the matrix of Phase A and a linkage criterion; a hierarchy like the ones of Fig. 2 is first formed via agglomerative clustering, and fewer or more clusters (for different values of k) are then obtained by using different slices of the hierarchy, as already discussed. Our two-phase decomposition can also accommodate non-hierarchical clustering algorithms, provided that the consistency constraint is satisfied, but we consider only agglomerative hierarchical clustering in this paper. The decomposition in two phases has three main advantages. Firstly, it allows reusing previous work on term similarity measures (Zhang et al., 2013) , which can be used to fill in the matrix of Phase A. Secondly, the decomposition allows different linkage criteria to be experimentally compared (in Phase B) using the same similarity matrix (of Phase A), i.e., the same distance measure. Thirdly, the decomposition leads to high inter-annotator agreement, as we show experimentally. 
By contrast, in preliminary experiments we found that asking humans to directly evaluate aspect hierarchies produced by hierarchical clustering, or to manually create gold aspect hierarchies led to poor inter-annotator agreement.",
"cite_spans": [
{
"start": 291,
"end": 292,
"text": "1",
"ref_id": null
},
{
"start": 1461,
"end": 1481,
"text": "(Zhang et al., 2013)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [
{
"start": 571,
"end": 578,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 923,
"end": 929,
"text": "Fig. 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We show that existing term similarity measures perform reasonably well in Phase A, especially when combined, but there is a large scope for improvement. We also propose a novel sense pruning method for WordNet-based similarity measures, which leads to significant improvements in Phase A. In Phase B, we experiment with agglomerative clustering using four different linkage criteria, concluding that they all perform equally well and that Phase B is almost a solved problem when the gold similarity matrix of Phase A is used; however, further improvements are needed in the similarity measures of Phase A to produce a sufficiently good similarity matrix. We also make publicly available the datasets of our experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our main contributions are: (i) to the best of our knowledge, we are the first to consider multi-granular aspect aggregation (not just merging near-synonyms) in ABSA without manually crafted domain-specific ontologies; (ii) we propose a two-phase decomposition that allows previous work on term similarity and hierarchical clustering to be reused and evaluated with high interannotator agreement; (iii) we introduce a novel sense pruning mechanism that improves WordNetbased similarity measures; (iv) we provide the first public datasets for multi-granular aspect aggregation; (v) we show that the second phase of our decomposition is almost a solved problem, and that research should focus on the first phase. Although we experiment with customer reviews of products and services, ABSA and the work of this paper in particular are, at least in principle, also applicable to texts expressing opinions about other kinds of entities (e.g., politicians, organizations).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Section 2 below discusses related work. Sections 3 and 4 present our work for Phase A and B, respectively. Section 5 concludes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Most existing approaches to aspect aggregation aim to produce a single, flat partitioning of aspect terms into aspect groups, rather than aspect groups at multiple granularities. The most com-mon approaches (Liu, 2012) are to aggregate only synonyms or near-synonyms, using WordNet (Liu et al., 2005) , statistics from corpora (Chen et al., 2006; Bollegala et al., 2007a; Lin and Wu, 2009) , or semi-supervised learning (Zhai et al., 2010; Zhai et al., 2011) , or to cluster the aspect terms using (latent) topic models (Titov and McDonald, 2008a; Guo et al., 2009; Brody and Elhadad, 2010; Jo and Oh, 2011) . Topic models do not perform better than other methods (Zhai et al., 2010) , and their clusters may overlap. 2 The topic model of Titov et al. (2008b) uses two granularity levels; we consider many more (3-10 levels). Carenini et al. (2005) used a predefined domainspecific taxonomy and similarity measures to aggregate related terms. Yu et al. (2011) used a tailored version of an existing taxonomy. By contrast, we assume no domain-specific taxonomy. Kobayashi et al. (2007) proposed methods to extract aspect terms and relations between them, including hierarchical relations. They extract, however, relations by looking for clues in texts (e.g., particular phrases). By contrast, we employ similarity measures and hierarchical clustering, which allows us to group similar aspect terms even when they do not cooccur in texts. Also, in contrast to Kobayashi et al. (2007) , we respect the consistency constraint discussed in Section 1.",
"cite_spans": [
{
"start": 207,
"end": 218,
"text": "(Liu, 2012)",
"ref_id": "BIBREF25"
},
{
"start": 282,
"end": 300,
"text": "(Liu et al., 2005)",
"ref_id": "BIBREF24"
},
{
"start": 327,
"end": 346,
"text": "(Chen et al., 2006;",
"ref_id": "BIBREF7"
},
{
"start": 347,
"end": 371,
"text": "Bollegala et al., 2007a;",
"ref_id": "BIBREF1"
},
{
"start": 372,
"end": 389,
"text": "Lin and Wu, 2009)",
"ref_id": "BIBREF22"
},
{
"start": 420,
"end": 439,
"text": "(Zhai et al., 2010;",
"ref_id": "BIBREF41"
},
{
"start": 440,
"end": 458,
"text": "Zhai et al., 2011)",
"ref_id": "BIBREF42"
},
{
"start": 520,
"end": 547,
"text": "(Titov and McDonald, 2008a;",
"ref_id": "BIBREF36"
},
{
"start": 548,
"end": 565,
"text": "Guo et al., 2009;",
"ref_id": "BIBREF14"
},
{
"start": 566,
"end": 590,
"text": "Brody and Elhadad, 2010;",
"ref_id": "BIBREF4"
},
{
"start": 591,
"end": 607,
"text": "Jo and Oh, 2011)",
"ref_id": "BIBREF19"
},
{
"start": 664,
"end": 683,
"text": "(Zhai et al., 2010)",
"ref_id": "BIBREF41"
},
{
"start": 739,
"end": 759,
"text": "Titov et al. (2008b)",
"ref_id": "BIBREF37"
},
{
"start": 826,
"end": 848,
"text": "Carenini et al. (2005)",
"ref_id": "BIBREF6"
},
{
"start": 943,
"end": 959,
"text": "Yu et al. (2011)",
"ref_id": "BIBREF39"
},
{
"start": 1061,
"end": 1084,
"text": "Kobayashi et al. (2007)",
"ref_id": "BIBREF21"
},
{
"start": 1458,
"end": 1481,
"text": "Kobayashi et al. (2007)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "A similar task is taxonomy induction. Cimiano and Staab (2005) automatically construct taxonomies from texts via agglomerative clustering, much as in our Phase B, but not in the context of ABSA, and without trying to learn a similarity matrix first. They also label the hierarchy's concepts, a task we do not consider. Klapaftis and Manandhar (2010) show how word sense induction can be combined with agglomerative clustering to obtain more accurate taxonomies, again not in the context of ABSA. Our sense pruning method was influenced by their work, but is much simpler than their word sense induction. Fountain and Lapata (2012) study unsupervised methods to induce concept taxonomies, without considering ABSA.",
"cite_spans": [
{
"start": 319,
"end": 349,
"text": "Klapaftis and Manandhar (2010)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "We now discuss our work for Phase A. Recall that in this phase the input is a set of aspect terms and the goal is to fill in a matrix (Table 1) with scores showing the similarity of each pair of aspect terms.",
"cite_spans": [],
"ref_spans": [
{
"start": 134,
"end": 143,
"text": "(Table 1)",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Phase A",
"sec_num": "3"
},
{
"text": "We used two benchmark datasets that we had previously constructed to evaluate ABSA methods for subjectivity detection, aspect extraction, and aspect score estimation, but not aspect aggregation. We extended them to support aspect aggregation, and we make them publicly available. 3 The two original datasets contain sentences from customer reviews of restaurants and laptops, respectively. The reviews are manually split into sentences, and each sentence is manually annotated as 'subjective' (expressing opinion) or 'objective' (not expressing opinion). The restaurants dataset contains 3,710 English sentences from the restaurant reviews of Ganu et al. (2009) . The laptops dataset contains 3,085 English sentences from 394 customer reviews, collected from sites that host customer reviews. In the experiments of this paper, we use only the 3,057 (out of 3,710) subjective restaurant sentences and the 2,631 (out of 3,085) subjective laptop sentences.",
"cite_spans": [
{
"start": 280,
"end": 281,
"text": "3",
"ref_id": null
},
{
"start": 643,
"end": 661,
"text": "Ganu et al. (2009)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets used in Phase A",
"sec_num": "3.1"
},
{
"text": "For each subjective sentence, our datasets show the words that human annotators marked as aspect terms. For example, in \"The dessert was divine!\" the aspect term is 'dessert', and in \"Really bad waiter.\" it is 'waiter'. Among the 3,057 subjective restaurant sentences, 1,129 contain exactly one aspect term, 829 more than one, and 1,099 no aspect term; a subjective sentence may express an opinion about the restaurant (or laptop) being reviewed without mentioning a specific aspect (e.g., \"Really nice restaurant!\"), which is why no aspect terms are present in some subjective sentences. There are 558 distinct multi-word aspect terms and 431 distinct single-word aspect terms in the subjective restaurant sentences. Among the 2,631 subjective sentences of the laptop reviews, 823 contain exactly one aspect term, 389 more than one, and 1,419 no aspect term. There are 273 distinct multiword aspect terms and 330 distinct single-word aspect terms in the subjective laptop sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets used in Phase A",
"sec_num": "3.1"
},
{
"text": "From each dataset, we selected the 20 (distinct) aspect terms that the human annotators had annotated most frequently, taking annotation frequency to be an indicator of importance; there are only two multi-word aspect terms ('hard drive', 'bat-tery life') among the 20 most frequent ones in the laptops dataset, and none among the 20 most frequent aspect terms of the restaurants dataset. We then formed all the 190 possible pairs of the 20 terms and constructed an empty similarity matrix (Fig. 1) , one for each dataset, which was given to three human judges to fill in (1: strong dissimilarity, 5: strong similarity). 4 For each aspect term, all the subjective sentences mentioning the term were also provided, to help the judges understand how the terms are used in the particular domains (e.g., 'window' and 'Windows' have domain-specific meanings in laptop reviews).",
"cite_spans": [
{
"start": 621,
"end": 622,
"text": "4",
"ref_id": null
}
],
"ref_spans": [
{
"start": 490,
"end": 498,
"text": "(Fig. 1)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Datasets used in Phase A",
"sec_num": "3.1"
},
{
"text": "The Pearson correlation coefficient indicated high inter-annotator agreement (0.81 for restaurants, 0.74 for laptops). We also measured the absolute inter-annotator agreement a(l 1 , l 2 ), defined below, where l 1 , l 2 are lists containing the scores (similarity matrix values) of two judges, N is the length of each list, and v max , v min are the largest and smallest possible scores (5 and 1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets used in Phase A",
"sec_num": "3.1"
},
{
"text": "a(l 1 , l 2 ) = 1 N N i=1 1 \u2212 |l 1 (i) \u2212 l 2 (i)| v max \u2212 v min",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets used in Phase A",
"sec_num": "3.1"
},
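The absolute agreement measure a(l1, l2) defined above translates directly into code. A minimal sketch, with v_min = 1 and v_max = 5 as in the paper's rating scale; the function name and the example score lists are invented:

```python
# Absolute inter-annotator agreement: average, over all matrix cells, of
# 1 - |l1(i) - l2(i)| / (v_max - v_min).

def absolute_agreement(l1, l2, v_min=1, v_max=5):
    assert len(l1) == len(l2)
    n = len(l1)
    return sum(1 - abs(a - b) / (v_max - v_min) for a, b in zip(l1, l2)) / n

# identical score lists agree perfectly; maximally different ones not at all
print(absolute_agreement([5, 1, 3], [5, 1, 3]))  # 1.0
print(absolute_agreement([5], [1]))              # 0.0
```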
{
"text": "The absolute interannotator agreement was also high (0.90 for restaurants, 0.91 for laptops). 5 With both measures, we compute the agreement of each judge with the averaged (for each matrix cell) scores of the other two judges, and we report the mean of the three agreement estimates. Finally, we created the gold similarity matrix of each dataset by placing in each cell the average scores that the three judges had provided for that cell.",
"cite_spans": [
{
"start": 94,
"end": 95,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets used in Phase A",
"sec_num": "3.1"
},
{
"text": "In preliminary experiments, we gave aspect terms to human judges, asking them to group any terms they considered near-synonyms. We then asked the judges to group the aspect terms into fewer, coarser groups by grouping terms that could be viewed as direct hyponyms of the same broader term (e.g., 'pizza' and 'steak' are both kinds of 'food'), or that stood in a hyponym-hypernym relation (e.g., 'pizza' and 'food'). We used the Dice coefficient to measure inter-annotator agreement, and we obtained reasonably good agreement for near-synonyms (0.77 for restaurants, 0.81 for laptops), but poor agreement for the coarser as-pects (0.25 and 0.11). 6 In other preliminary experiments, we asked human judges to rank alternative aspect hierarchies that had been produced by applying agglomerative clustering with different linkage criteria to 20 aspect terms, but we obtained very poor inter-annotator agreement (Pearson score \u22120.83 for restaurants and 0 for laptops).",
"cite_spans": [
{
"start": 646,
"end": 647,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets used in Phase A",
"sec_num": "3.1"
},
{
"text": "We employed five term similarity measures. The first two are WordNet-based (Budanitsky and Hirst, 2006) . The next two combine WordNet with statistics from corpora. The fifth one is a corpusbased distributional similarity measure.",
"cite_spans": [
{
"start": 75,
"end": 103,
"text": "(Budanitsky and Hirst, 2006)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phase A methods",
"sec_num": "3.2"
},
{
"text": "The first measure is Wu and Palmer's (1994) . It is actually a sense similarity measure (a term may have multiple senses). Given two senses s ij , s i j of terms t i , t i , the measure is defined as follows:",
"cite_spans": [
{
"start": 21,
"end": 43,
"text": "Wu and Palmer's (1994)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phase A methods",
"sec_num": "3.2"
},
{
"text": "WP (s ij , s i j ) = 2 \u2022 depth(lcs(s ij , s i j )) depth(s ij ) + depth(s ij )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phase A methods",
"sec_num": "3.2"
},
{
"text": "where lcs(s ij , s i j ) is the least common subsumer, i.e., the most specific common ancestor of the two senses in WordNet, and depth(s) is the depth of sense s in WordNet's hierarchy. Most terms have multiple senses, however, and word sense disambiguation methods (Navigli, 2009) are not yet robust enough. Hence, when given two aspect terms t i , t i , rather than particular senses of the terms, a simplistic greedy approach is to compute the similarities of all the possible pairs of senses s ij , s i j of t i , t i , and take the similarity of t i , t i to be the maximum similarity of the sense pairs (Bollegala et al., 2007b; Zesch and Gurevych, 2010) . We use this greedy approach with all the WordNet-based measures, but we also propose a sense pruning mechanism below, which improves their performance. In all the WordNetbased measures, if a term is not in WordNet, we take its similarity to any other term to be zero. 7 The second measure, PATH (s ij , s i j ), is simply the inverse of the length (plus one) of the shortest path connecting the senses s ij , s i j in WordNet (Zhang et al., 2013) . Again, the greedy approach can be used with terms having multiple senses. 6 The Dice coefficient ranges from 0 to 1. There was a very large number of possible responses the judges could provide and, hence, it would be inappropriate to use Cohen's K.",
"cite_spans": [
{
"start": 266,
"end": 281,
"text": "(Navigli, 2009)",
"ref_id": "BIBREF31"
},
{
"start": 609,
"end": 634,
"text": "(Bollegala et al., 2007b;",
"ref_id": "BIBREF2"
},
{
"start": 635,
"end": 660,
"text": "Zesch and Gurevych, 2010)",
"ref_id": "BIBREF40"
},
{
"start": 931,
"end": 932,
"text": "7",
"ref_id": null
},
{
"start": 1089,
"end": 1109,
"text": "(Zhang et al., 2013)",
"ref_id": "BIBREF43"
},
{
"start": 1186,
"end": 1187,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phase A methods",
"sec_num": "3.2"
},
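The Wu-Palmer formula and the greedy max-over-sense-pairs strategy described above can be illustrated on a toy taxonomy. The parent map below is an invented stand-in for WordNet (a real system would query WordNet, e.g. via NLTK); all names are made up for the example:

```python
# Toy taxonomy: each sense maps to its parent; the root maps to None.
parent = {
    'entity': None,
    'food': 'entity', 'dish': 'food', 'pizza': 'dish', 'steak': 'dish',
    'person': 'entity', 'waiter': 'person',
}

def ancestors(s):
    path = [s]                      # sense itself, then ancestors up to root
    while parent[s] is not None:
        s = parent[s]
        path.append(s)
    return path

def depth(s):
    return len(ancestors(s))        # root has depth 1

def lcs(s1, s2):
    # least common subsumer: lowest ancestor of s2 that also subsumes s1
    anc = set(ancestors(s1))
    for a in ancestors(s2):
        if a in anc:
            return a

def wp(s1, s2):
    # Wu-Palmer: 2 * depth(lcs) / (depth(s1) + depth(s2))
    return 2 * depth(lcs(s1, s2)) / (depth(s1) + depth(s2))

def term_similarity(senses1, senses2):
    # greedy approach: term similarity = best score over all sense pairs
    return max(wp(a, b) for a in senses1 for b in senses2)

print(wp('pizza', 'steak'))                            # lcs is 'dish'
print(term_similarity(['pizza'], ['waiter', 'steak']))
```

Here wp('pizza', 'steak') = 2·3/(4+4) = 0.75, and the greedy strategy picks the 'steak' sense pair over the much weaker 'waiter' pair.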
{
"text": "7 This never happened in the restaurants dataset. In the laptops dataset, it only happened for 'hard drive' and 'battery life'. We use the NLTK implementation of the first four measures (see http://nltk.org/) and our own implementation of the distributional similarity measure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phase A methods",
"sec_num": "3.2"
},
{
"text": "The third measure is Lin's (1998) , defined as:",
"cite_spans": [
{
"start": 21,
"end": 33,
"text": "Lin's (1998)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phase A methods",
"sec_num": "3.2"
},
{
"text": "LIN (s ij , s i j ) = 2 \u2022 ic(lcs(s ij , s i j )) ic(s ij ) + ic(s i j ) ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phase A methods",
"sec_num": "3.2"
},
{
"text": "where s ij , s i j are senses of terms t i , t i , lcs(s ij , s i j ) is the least common subsumer of s ij , s i j in WordNet, and ic(s) = \u2212 log P (s) is the information content of sense s (Pedersen et al., 2004) , estimated from a corpus. When the corpus is not sense-tagged, we follow the common approach of treating each occurrence of a word as an occurrence of all of its senses, when estimating ic(s). 8 We experimented with two variants of Lin's measure, one where the ic(s) scores were estimated from the Brown corpus (Marcus et al., 1993) , and one where they were estimated from the (restaurant or laptop) reviews of our datasets. The fourth measure is Jiang and Conrath's (1997) , defined below. Again, we experimented with two variants of ic(s), as above.",
"cite_spans": [
{
"start": 189,
"end": 212,
"text": "(Pedersen et al., 2004)",
"ref_id": "BIBREF33"
},
{
"start": 407,
"end": 408,
"text": "8",
"ref_id": null
},
{
"start": 525,
"end": 546,
"text": "(Marcus et al., 1993)",
"ref_id": "BIBREF27"
},
{
"start": 662,
"end": 688,
"text": "Jiang and Conrath's (1997)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phase A methods",
"sec_num": "3.2"
},
{
"text": "JCN (s ij , s i j ) = 1 ic(s ij ) + ic(s i j ) \u2212 2 \u2022 lcs(s ij , s i j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phase A methods",
"sec_num": "3.2"
},
{
"text": "For all the above WordNet-based measures, we experimented with a sense pruning mechanism, which discards some of the senses of the aspect terms, before applying the greedy approach. For each aspect term t i , we consider all of its Word-Net senses s ij . For each s ij and each other aspect term t i , we compute (using PATH ) the similarity between s ij and each sense s i j of t i , and we consider the relevance of s ij to t i to be: 9 rel (s ij , t i ) = max",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phase A methods",
"sec_num": "3.2"
},
{
"text": "s i j \u2208 senses(t i ) PATH (s ij , s i j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phase A methods",
"sec_num": "3.2"
},
{
"text": "The relevance of s ij to all of the N other aspect terms t i is taken to be:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phase A methods",
"sec_num": "3.2"
},
{
"text": "rel (s ij ) = 1 N \u2022 i =i rel (s ij , t i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phase A methods",
"sec_num": "3.2"
},
{
"text": "For each aspect term t i , we retain only its senses s ij with the top rel (s ij ) scores, which tends to prune senses that are very irrelevant to the particular domain (e.g., laptops). This sense pruning mechanism is novel, and we show experimentally that it improves the performance of all the WordNet-based similarity measures we examined. We also implemented a distributional similarity measure (Harris, 1968; Pad\u00f3 and Lapata, 2007; Cimiano et al., 2009; Zhang et al., 2013) .",
"cite_spans": [
{
"start": 399,
"end": 413,
"text": "(Harris, 1968;",
"ref_id": "BIBREF16"
},
{
"start": 414,
"end": 436,
"text": "Pad\u00f3 and Lapata, 2007;",
"ref_id": "BIBREF32"
},
{
"start": 437,
"end": 458,
"text": "Cimiano et al., 2009;",
"ref_id": "BIBREF9"
},
{
"start": 459,
"end": 478,
"text": "Zhang et al., 2013)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phase A methods",
"sec_num": "3.2"
},
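The sense pruning mechanism above (score each sense by its best PATH similarity to every other term's senses, average over the other terms, keep the top-scoring senses) can be sketched as follows. The `path_sim` function and the sense inventory are toy stand-ins for WordNet, and `keep=1` is used instead of the paper's top five just to keep the example small; all names are invented:

```python
def rel_to_term(s, other_senses, path_sim):
    # rel(s, t') = max over senses s' of t' of PATH(s, s')
    return max(path_sim(s, sp) for sp in other_senses)

def prune_senses(term, senses_of, path_sim, keep=1):
    # rel(s) = average over the other terms t' of rel(s, t');
    # retain the senses with the highest rel(s) scores
    others = [t for t in senses_of if t != term]
    scored = []
    for s in senses_of[term]:
        score = sum(rel_to_term(s, senses_of[t], path_sim) for t in others) / len(others)
        scored.append((score, s))
    scored.sort(reverse=True)
    return [s for _, s in scored[:keep]]

# toy inventory: 'window' has a building sense and an OS sense
senses_of = {
    'window': ['window#building', 'window#os'],
    'software': ['software#program'],
    'screen': ['screen#display'],
}
toy_path = {
    ('window#os', 'software#program'): 0.5,
    ('window#os', 'screen#display'): 0.4,
    ('window#building', 'software#program'): 0.1,
    ('window#building', 'screen#display'): 0.2,
}
def path_sim(a, b):
    return toy_path.get((a, b), toy_path.get((b, a), 0.05))

# in a laptop-like domain, the OS sense of 'window' survives pruning
print(prune_senses('window', senses_of, path_sim, keep=1))  # ['window#os']
```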
{
"text": "Following Lin and Wu (2009) , for each aspect term t, we create a vector v(t) = PMI (t, w 1 ), . . . , PMI (t, w n ) . The vector components are the Pointwise Mutual Information scores of t and each word w i of a corpus:",
"cite_spans": [
{
"start": 10,
"end": 27,
"text": "Lin and Wu (2009)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phase A methods",
"sec_num": "3.2"
},
{
"text": "PMI (t, w i ) = \u2212 log P (t, w i ) P (t) \u2022 P (w i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phase A methods",
"sec_num": "3.2"
},
{
"text": "We treat P (t, w i ) as the probability of t, w i cooccurring in the same sentence, and we use the (laptop or restaurant) reviews of our datasets as the corpus to estimate the probabilities. The distributional similarity DS (t, t ) of two aspect terms t, t is the cosine similarity of v(t), v(t ). 10 Finally, we tried combinations of the similarity measures: AVG is the average of all five; WN is the average of the first four, which employ Word-Net; and WNDS is the average of WN and DS ; all the scores range in [0, 1]. We also tried regression (e.g., SVR), but there was no improvement.",
"cite_spans": [
{
"start": 298,
"end": 300,
"text": "10",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phase A methods",
"sec_num": "3.2"
},
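A minimal sketch of the distributional measure described above, assuming sentence-level co-occurrence counts as in the paper. The toy corpus and aspect terms are invented; terms that never co-occur are assigned a PMI of 0, a simplifying assumption.

```python
import math
from collections import Counter

def pmi_vectors(sentences, terms):
    """Build v(t) = (PMI(t, w1), ..., PMI(t, wn)) from sentence co-occurrences."""
    n = len(sentences)
    word_df = Counter()               # number of sentences containing each word
    for s in sentences:
        word_df.update(set(s))
    vocab = sorted(word_df)
    pair = Counter()                  # number of sentences containing both t and w
    for s in sentences:
        us = set(s)
        for t in terms:
            if t in us:
                for w in us:
                    pair[(t, w)] += 1
    vecs = {}
    for t in terms:
        v = []
        for w in vocab:
            p_tw = pair[(t, w)] / n
            if p_tw == 0:
                v.append(0.0)         # simplification: no co-occurrence -> PMI 0
            else:
                v.append(math.log(p_tw / ((word_df[t] / n) * (word_df[w] / n))))
        vecs[t] = v
    return vecs

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical review sentences, already tokenized.
sentences = [
    ["the", "battery", "life", "is", "great"],
    ["battery", "lasts", "long"],
    ["the", "screen", "is", "bright"],
    ["screen", "resolution", "is", "great"],
    ["the", "keyboard", "feels", "cheap"],
]
terms = ["battery", "screen", "keyboard"]
vecs = pmi_vectors(sentences, terms)
```

DS(t, t') is then cosine(vecs[t], vecs[t']).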
{
"text": "Each similarity measure was evaluated by computing its Pearson correlation with the scores of the gold similarity matrix. Table 2 shows the results.",
"cite_spans": [],
"ref_spans": [
{
"start": 122,
"end": 129,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Phase A experimental results",
"sec_num": "3.3"
},
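The evaluation step can be sketched as computing the Pearson correlation between a measure's similarity scores and the gold scores over the same term pairs; the toy score lists below are hypothetical.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical gold vs. predicted similarities for four term pairs.
gold = [0.9, 0.1, 0.2, 0.8]
predicted = [0.85, 0.2, 0.25, 0.7]
r = pearson(predicted, gold)
```

A measure scoring close to 1 tracks the gold similarity matrix closely; the paper reports this correlation for each similarity measure.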
{
"text": "Our sense pruning consistently improves all four WordNet-based measures. It does not apply to DS , which is why the DS results are identical with and without pruning. A paired t test indicates that the other differences (with and without pruning) of Table 2 are statistically significant (p < 0.05). We used the senses with the top five rel (s ij ) scores for each aspect term t i during sense pruning. We also experimented with keeping fewer senses, but the results were inferior or there was no improvement.",
"cite_spans": [],
"ref_spans": [
{
"start": 250,
"end": 257,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Phase A experimental results",
"sec_num": "3.3"
},
{
"text": "Lin's measure performed better when information content was estimated on the (much larger, but domain-independent) Brown corpus (LIN @Brown), as opposed to using the (domainspecific) reviews of our datasets (LIN @domain), but we observed no similar consistent pattern for JCN . Given its simplicity, PATH performed remarkably well in the restaurants dataset; it was the best measure (including combinations) without sense pruning, and the best uncombined measure with sense pruning. It performed worse, however, compared to several other measures in the laptops dataset. Similar comments apply to WP , which is among the top-performing uncombined measures in restaurants, both with and without sense pruning, but the worst overall measure in laptops. DS is the best overall measure in laptops when compared to measures without sense pruning, and the third best overall when compared to measures that use sense pruning, but the worst overall in restaurants both with and without pruning. LIN and JCN , which use both WordNet and corpus statistics, have a more balanced performance across the two datasets, but they are not top-performers in any of the two. Combinations of similarity measures seem more stable across domains, as the results of AVG, WN , and WNDS indicate, though experiments with more domains are needed to investigate this issue. WNDS is the best overall method with sense pruning, and among the best three methods without pruning in both datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phase A experimental results",
"sec_num": "3.3"
},
{
"text": "To get a better view of the performance of WNDS with sense pruning, i.e., the best overall measure of Table 2 , we compared it to two state of the art semantic similarity systems. First, we applied the system of Han et al. (2013) , one of the best systems of the recent *Sem 2013 semantic text similarity competition, to our Phase A data. The performance (Pearson correlation with gold similarities) of the same system on the widely used WordSim353 word similarity dataset (Agirre et al., 2009) is 0.73, much higher than the same system's performance on our Phase A data (see Table 3 which suggests that our data are more difficult. 11 We also employed the recent Word2Vec system, which computes continuous vector space representations of words from large corpora and has been reported to improve results in word similarity tasks (Mikolov et al., 2013) . We used the English Wikipedia to compute word vectors with 200 features. 12 The similarity between two aspect terms was taken to be the cosine similarity of their vectors. This system performed better than Han et al.'s with laptops, but not with restaurants. Table 3 shows that WNDS (with sense pruning) performed clearly better than the system of Han et al. and Word2Vec. Table 3 also shows the Pearson correlation of each judge's scores to the gold similarity scores, as an indication of the best achievable results. Although WNDS (with sense pruning) performs reasonably well in both domains, 13 there is large scope for improvement.",
"cite_spans": [
{
"start": 212,
"end": 229,
"text": "Han et al. (2013)",
"ref_id": "BIBREF15"
},
{
"start": 473,
"end": 494,
"text": "(Agirre et al., 2009)",
"ref_id": "BIBREF0"
},
{
"start": 633,
"end": 635,
"text": "11",
"ref_id": null
},
{
"start": 830,
"end": 852,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF28"
},
{
"start": 928,
"end": 930,
"text": "12",
"ref_id": null
},
{
"start": 1203,
"end": 1235,
"text": "Han et al. and Word2Vec. Table 3",
"ref_id": null
}
],
"ref_spans": [
{
"start": 102,
"end": 109,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 576,
"end": 583,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 1114,
"end": 1121,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Phase A experimental results",
"sec_num": "3.3"
},
{
"text": "In Phase B, the aspect terms are to be grouped into k non-overlapping clusters, for varying values of k, given a Phase A similarity matrix. We experimented with both the gold similarity matrix of Phase A and similarity matrices produced by WNDS (with SP), the best Phase A method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phase B",
"sec_num": "4"
},
{
"text": "We experimented with agglomerative clustering and four linkage criteria: single, complete, average, and Ward (Manning and Sch\u00fctze, 1999; Hastie et al., 2001) . Let d(t 1 , t 2 ) be the distance of 11 The system of Han et al. (2013) is available from http://semanticwebarchive.cs.umbc.edu/ SimService/; we use the STS similarity.",
"cite_spans": [
{
"start": 109,
"end": 136,
"text": "(Manning and Sch\u00fctze, 1999;",
"ref_id": "BIBREF26"
},
{
"start": 137,
"end": 157,
"text": "Hastie et al., 2001)",
"ref_id": "BIBREF17"
},
{
"start": 197,
"end": 199,
"text": "11",
"ref_id": null
},
{
"start": 214,
"end": 231,
"text": "Han et al. (2013)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phase B methods",
"sec_num": "4.1"
},
{
"text": "12 Word2Vec is available from https://code. google.com/p/word2vec/. We used the continuous bag of words model with default parameters, the first billion characters of the English Wikipedia, and the preprocessing of http://mattmahoney.net/dc/textdata.html.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phase B methods",
"sec_num": "4.1"
},
{
"text": "13 Recall that the Pearson correlation ranges from \u22121 to 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phase B methods",
"sec_num": "4.1"
},
{
"text": "two individual instances t 1 , t 2 ; in our case, the instances are aspect terms and d(t 1 , t 2 ) is the inverse of the similarity of t 1 , t 2 , defined by the Phase A similarity matrix (gold or produced by WNDS ). Different linkage criteria define differently the distance of two clusters D(C 1 , C 2 ), which affects the choice of clusters that are merged to produce coarser (higher-level) clusters:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phase B methods",
"sec_num": "4.1"
},
{
"text": "D single (C 1 , C 2 ) = min t 1 \u2208C 1 ,t 2 \u2208C 2 d(t 1 , t 2 ) D compl (C 1 , C 2 ) = max t 1 \u2208C 1 ,t 2 \u2208C 2 d(t 1 , t 2 ) D avg (C 1 , C 2 ) = 1 |C 1 ||C 2 | t 1 \u2208C 1 t 2 \u2208C 2 d(t 1 , t 2 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phase B methods",
"sec_num": "4.1"
},
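The three linkage criteria above translate directly into code; this sketch uses a toy one-dimensional distance for illustration.

```python
# The three cluster-distance criteria, exactly as defined in the text:
# d is the instance-level distance function, C1 and C2 are clusters (lists).

def d_single(C1, C2, d):
    """Single linkage: distance of the closest pair across the two clusters."""
    return min(d(t1, t2) for t1 in C1 for t2 in C2)

def d_complete(C1, C2, d):
    """Complete linkage: distance of the farthest pair across the two clusters."""
    return max(d(t1, t2) for t1 in C1 for t2 in C2)

def d_average(C1, C2, d):
    """Average linkage: mean pairwise distance across the two clusters."""
    return sum(d(t1, t2) for t1 in C1 for t2 in C2) / (len(C1) * len(C2))

# Toy example: points on a line, with absolute difference as distance.
dist = lambda a, b: abs(a - b)
C1, C2 = [0, 1], [3, 5]
```

On this example the closest pair is (1, 3), the farthest is (0, 5), and the four pairwise distances average to 3.5, illustrating how complete linkage penalizes spread-out clusters more than single linkage.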
{
"text": "Complete linkage tends to produce more compact clusters, compared to single linkage, with average linkage being in between. Ward minimizes the total in-cluster variance; consult Milligan (1980) for further details. 14",
"cite_spans": [
{
"start": 178,
"end": 193,
"text": "Milligan (1980)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phase B methods",
"sec_num": "4.1"
},
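Footnote 14 mentions the SCIPY implementation of agglomerative clustering with maxclust; a minimal sketch of that setup follows, with a hypothetical distance matrix standing in for the inverse Phase A similarities (the terms and distances are invented).

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Hypothetical symmetric distance matrix over five aspect terms:
# low distance = high Phase A similarity.
terms = ["price", "cost", "value", "screen", "display"]
D = np.array([
    [0.0, 0.1, 0.2, 0.9, 0.9],
    [0.1, 0.0, 0.2, 0.9, 0.9],
    [0.2, 0.2, 0.0, 0.8, 0.8],
    [0.9, 0.9, 0.8, 0.0, 0.1],
    [0.9, 0.9, 0.8, 0.1, 0.0],
])

# Build the hierarchy with average linkage, then cut it with maxclust
# to obtain (approximately) k clusters, here k = 2.
Z = linkage(squareform(D), method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
```

Varying t slices the same hierarchy at different aspect granularities, which is exactly how the paper produces the k clusters of Phase B.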
{
"text": "To evaluate the k clusters produced at each aspect granularity by the different linkage criteria, we used the Silhouette Index (SI ) (Rousseeuw, 1987) , a cluster evaluation measure that considers both inter-and intra-cluster coherence. 15 Given a set of clusters {C 1 , . . . , C k }, each SI (C i ) is defined as:",
"cite_spans": [
{
"start": 133,
"end": 150,
"text": "(Rousseeuw, 1987)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phase B experimental results",
"sec_num": "4.2"
},
{
"text": "SI (C i ) = 1 |C i | \u2022 |C i | j=1 b j \u2212 a j max(b j , a j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phase B experimental results",
"sec_num": "4.2"
},
{
"text": "where a j is the mean distance from the j-th instance of C i to the other instances in C i , and b j is the mean distance from the j-th instance of C i to the instances in the cluster nearest to C i . Then:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phase B experimental results",
"sec_num": "4.2"
},
{
"text": "SI ({C 1 , . . . , C k }) = 1 k \u2022 k i=1 SI (C i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phase B experimental results",
"sec_num": "4.2"
},
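The two formulas above can be sketched as follows; the toy clusters and distance function are hypothetical.

```python
def silhouette_index(clusters, d):
    """SI of a clustering: mean over clusters of the mean per-instance
    silhouette (b_j - a_j) / max(b_j, a_j), as defined in the text."""
    cluster_scores = []
    for i, Ci in enumerate(clusters):
        others = [C for j, C in enumerate(clusters) if j != i]
        total = 0.0
        for x in Ci:
            # a_j: mean distance to the other instances of the same cluster.
            a = (sum(d(x, y) for y in Ci if y != x) / (len(Ci) - 1)
                 if len(Ci) > 1 else 0.0)
            # b_j: mean distance to the instances of the nearest other cluster.
            b = min(sum(d(x, y) for y in C) / len(C) for C in others)
            total += (b - a) / max(b, a)
        cluster_scores.append(total / len(Ci))
    return sum(cluster_scores) / len(clusters)

# Toy clustering: two tight, well-separated clusters of points on a line.
clusters = [[0.0, 0.1], [5.0, 5.1]]
si = silhouette_index(clusters, lambda a, b: abs(a - b))
```

For compact, well-separated clusters the SI approaches 1; values near 0 or below indicate overlapping or misassigned clusters.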
{
"text": "We always use the correct (gold) distances of the instances (terms) when computing the SI scores. As shown in Fig. 3 , no linkage criterion clearly outperforms the others, when the gold matrix of Phase A is used; all four criteria perform reasonably well. Note that the SI ranges from \u22121 to 14 We used the SCIPY implementations of agglomerative clustering with the four criteria (see http://www. scipy.org), relying on maxclust to obtain the slice of the resulting hierarchy that leads to k (or approx. k) clusters. 15 We used the SI implementation of Pedregosa et al. (2011) ; see http://scikit-learn.org/. We also experimented with the Dunn Index (Dunn, 1974) and the Davies-Bouldin Index (1979) , but we obtained similar results. Figure 4 shows that when the similarity matrix of WNDS (with SP) is used, the SI scores deteriorate significantly; again, there is no clear winner among the linkage criteria, but average and Ward seem to be overall better than the others. In a final experiment, we showed clusterings of varying granularities (k values) to four human judges (graduate CS students). The clusterings were produced by two systems: one that used the gold similarity matrix of Phase A and agglomerative clustering with average linkage in Phase B, and one that used the similarity matrix of WNDS (with SP) and again agglomerative clustering with average linkage. We showed all the clusterings to all the judges. Each judge was asked to eval-uate each clustering on a 1-5 scale. We measured the absolute inter-annotator agreement, as in Section 3.1, and found high agreement in all cases (0.93 and 0.83 for the two systems, respectively, in restaurants; 0.85 for both in laptops). 16 Figure 5 shows the average human scores of the two systems for different granularities. 
The judges considered the aspect groups always perfect or near-perfect when the gold similarity matrix of Phase A was used, but they found the aspect groups to be of rather poor quality when the similarity matrix of the best Phase A measure was used. These results, along with those of Fig. 3-4 , show that more effort needs to be devoted to improving the similarity measures of Phase A, whereas Phase B is in effect an almost solved problem, if a good similarity matrix is available.",
"cite_spans": [
{
"start": 291,
"end": 293,
"text": "14",
"ref_id": null
},
{
"start": 516,
"end": 518,
"text": "15",
"ref_id": null
},
{
"start": 552,
"end": 575,
"text": "Pedregosa et al. (2011)",
"ref_id": "BIBREF34"
},
{
"start": 649,
"end": 661,
"text": "(Dunn, 1974)",
"ref_id": "BIBREF11"
},
{
"start": 670,
"end": 697,
"text": "Davies-Bouldin Index (1979)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 110,
"end": 116,
"text": "Fig. 3",
"ref_id": "FIGREF3"
},
{
"start": 733,
"end": 741,
"text": "Figure 4",
"ref_id": "FIGREF5"
},
{
"start": 1693,
"end": 1701,
"text": "Figure 5",
"ref_id": "FIGREF6"
},
{
"start": 2067,
"end": 2075,
"text": "Fig. 3-4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Phase B experimental results",
"sec_num": "4.2"
},
{
"text": "We considered a new, more demanding form of aspect aggregation in ABSA, which aims to aggregate aspects at multiple granularities, as opposed to simply merging near-synonyms, and without assuming that manually crafted domain-specific ontologies are available. We decomposed the problem in two processing phases, which allow previous work on term similarity and hierarchical clustering to be reused and evaluated appropriately with high inter-annotator agreement. We showed that the second phase, where we used agglomerative clustering, is an almost solved problem, whereas further research is needed in the first phrase, where term similarity measures are employed. We also introduced a sense pruning mechanism that significantly improves WordNet-based similarity measures, leading to a measure that outperforms state of the art similarity methods in the first phase of our decomposition. We also made publicly available the datasets of our experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "Topic models are typically also used to perform aspect extraction, apart from aspect aggregation, but simple heuristics (e.g., most frequent nouns) often outperform them in aspect extraction(Liu, 2012;Moghaddam and Ester, 2012).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The datasets are available at http://nlp.cs. aueb.gr/software.html.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The matrix is symmetric; hence, the judges had to fill in only half of it. The guidelines and an annotation tool that were given to the judges are available upon request.5 The Pearson correlation ranges from \u22121 to 1, whereas the absolute inter-annotator agreement ranges from 0 to 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.d.umn.edu/\u02dctpederse/Data/ README-WN-IC-30.txt. We use the default counting.9 We also experimented with other similarity measures when computing rel (sij, t i ), instead of PATH , but there was no significant difference. We use NLTK to tokenize, remove punctuation, and stop-words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We also experimented with Euclidean distance, a normalized PMI(Bouma, 2009), and the Brown corpus, but there was no improvement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The Pearson correlation cannot be computed, as several judges gave the same rating to the first system, for all k.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank G. Batistatos, A. Zosakis, and G. Lampouras for their annotations in Phase A. We thank A. Kosmopoulos, G. Lampouras, P. Malakasiotis, and I. Lourentzou for their annotations in Phase B.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A study on similarity and relatedness using distributional and wordnetbased approaches",
"authors": [
{
"first": "E",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Alfonseca",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Kravalova",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Pa\u015fca",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Soroa",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Annual Conference of NAACL",
"volume": "",
"issue": "",
"pages": "19--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Agirre, E. Alfonseca, K. Hall, J. Kravalova, M. Pa\u015fca, and A. Soroa. 2009. A study on similar- ity and relatedness using distributional and wordnet- based approaches. In Proceedings of the Annual Conference of NAACL, pages 19-27, Boulder, CO, USA.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "An integrated approach to measuring semantic similarity between words using information available on the web",
"authors": [
{
"first": "D",
"middle": [],
"last": "Bollegala",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Matsuo",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Ishizuka",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of HLT-NAACL",
"volume": "",
"issue": "",
"pages": "340--347",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Bollegala, Y. Matsuo, and M. Ishizuka. 2007a. An integrated approach to measuring semantic sim- ilarity between words using information available on the web. In Proceedings of HLT-NAACL, pages 340-347, Rochester, NY, USA.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Measuring semantic similarity between words using web search engines",
"authors": [
{
"first": "D",
"middle": [],
"last": "Bollegala",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Matsuo",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Ishizuka",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 16th International Conference of WWW",
"volume": "766",
"issue": "",
"pages": "757--766",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Bollegala, Y. Matsuo, and M. Ishizuka. 2007b. Measuring semantic similarity between words using web search engines. In Proceedings of the 16th In- ternational Conference of WWW, volume 766, pages 757-766, Banff, Alberta, Canada.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Normalized (pointwise) mutual information in collocation extraction",
"authors": [
{
"first": "G",
"middle": [],
"last": "Bouma",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Biennial Conference of GSCL",
"volume": "",
"issue": "",
"pages": "31--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Bouma. 2009. Normalized (pointwise) mutual in- formation in collocation extraction. Proceedings of the Biennial Conference of GSCL, pages 31-40.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "An unsupervised aspect-sentiment model for online reviews",
"authors": [
{
"first": "S",
"middle": [],
"last": "Brody",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Elhadad",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Annual Conference of NAACL",
"volume": "",
"issue": "",
"pages": "804--812",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Brody and N. Elhadad. 2010. An unsupervised aspect-sentiment model for online reviews. In Pro- ceedings of the Annual Conference of NAACL, pages 804-812, Los Angeles, CA, USA.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Evaluating WordNet-based measures of lexical semantic relatedness",
"authors": [
{
"first": "A",
"middle": [],
"last": "Budanitsky",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Hirst",
"suffix": ""
}
],
"year": 2006,
"venue": "Computational Linguistics",
"volume": "32",
"issue": "1",
"pages": "13--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Budanitsky and G. Hirst. 2006. Evaluating WordNet-based measures of lexical semantic relat- edness. Computational Linguistics, 32(1):13-47.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Extracting knowledge from evaluative text",
"authors": [
{
"first": "G",
"middle": [],
"last": "Carenini",
"suffix": ""
},
{
"first": "R",
"middle": [
"T"
],
"last": "Ng",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Zwart",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 3rd International Conference on Knowledge Capture",
"volume": "",
"issue": "",
"pages": "11--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Carenini, R. T. Ng, and E. Zwart. 2005. Extract- ing knowledge from evaluative text. In Proceedings of the 3rd International Conference on Knowledge Capture, pages 11-18, Banff, Alberta, Canada.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Novel association measures using web search with double checking",
"authors": [
{
"first": "H",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Wei",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 21st International Conference of COLING and the 44th Annual Meeting of ACL",
"volume": "",
"issue": "",
"pages": "1009--1016",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Chen, M. Lin, and Y. Wei. 2006. Novel association measures using web search with double checking. In Proceedings of the 21st International Conference of COLING and the 44th Annual Meeting of ACL, pages 1009-1016, Sydney, Australia.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Learning concept hierarchies from text with a guided hierarchical clustering algorithm",
"authors": [
{
"first": "P",
"middle": [],
"last": "Cimiano",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Staab",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of ICML -Workshop on Learning and Extending Lexical Ontologies with Machine Learning Methods",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Cimiano and S. Staab. 2005. Learning concept hier- archies from text with a guided hierarchical cluster- ing algorithm. In Proceedings of ICML -Workshop on Learning and Extending Lexical Ontologies with Machine Learning Methods, Bonn, Germany.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Ontology learning",
"authors": [
{
"first": "P",
"middle": [],
"last": "Cimiano",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "M\u00e4dche",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Staab",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "V\u00f6lker",
"suffix": ""
}
],
"year": 2009,
"venue": "Handbook on Ontologies",
"volume": "",
"issue": "",
"pages": "245--267",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Cimiano, A. M\u00e4dche, S. Staab, and J. V\u00f6lker. 2009. Ontology learning. In Handbook on Ontologies, pages 245-267. Springer.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A cluster separation measure",
"authors": [
{
"first": "D",
"middle": [
"L"
],
"last": "Davies",
"suffix": ""
},
{
"first": "D",
"middle": [
"W"
],
"last": "Bouldin",
"suffix": ""
}
],
"year": 1979,
"venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence",
"volume": "1",
"issue": "2",
"pages": "224--227",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. L. Davies and D. W. Bouldin. 1979. A cluster sepa- ration measure. IEEE Transactions on Pattern Anal- ysis and Machine Intelligence, 1(2):224-227.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Well-separated clusters and optimal fuzzy partitions",
"authors": [
{
"first": "J",
"middle": [
"C"
],
"last": "Dunn",
"suffix": ""
}
],
"year": 1974,
"venue": "Journal of Cybernetics",
"volume": "4",
"issue": "1",
"pages": "95--104",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. C. Dunn. 1974. Well-separated clusters and optimal fuzzy partitions. Journal of Cybernetics, 4(1):95- 104.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Taxonomy induction using hierarchical random graphs",
"authors": [
{
"first": "T",
"middle": [],
"last": "Fountain",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of NAACL:HLT",
"volume": "",
"issue": "",
"pages": "466--476",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Fountain and M. Lapata. 2012. Taxonomy induction using hierarchical random graphs. In Proceedings of NAACL:HLT, pages 466-476, Montreal, Canada.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Beyond the stars: Improving rating predictions using review text content",
"authors": [
{
"first": "G",
"middle": [],
"last": "Ganu",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Elhadad",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Marian",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 12th International Workshop on the Web and Databases",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Ganu, N. Elhadad, and A. Marian. 2009. Beyond the stars: Improving rating predictions using review text content. In Proceedings of the 12th Interna- tional Workshop on the Web and Databases, Prov- idence, RI, USA.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Product feature categorization with multilevel latent semantic association",
"authors": [
{
"first": "H",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Su",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 18th CIKM",
"volume": "",
"issue": "",
"pages": "1087--1096",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Guo, H. Zhu, Z. Guo, X. Zhang, and Z. Su. 2009. Product feature categorization with multilevel latent semantic association. In Proceedings of the 18th CIKM, pages 1087-1096.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Umbc ebiquity-core: Semantic textual similarity systems",
"authors": [
{
"first": "L",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Kashyap",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Finin",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Mayfield",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Weese",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2nd Joint Conference on Lexical and Computational Semantics",
"volume": "",
"issue": "",
"pages": "44--52",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Han, A. Kashyap, T. Finin, J. Mayfield, and J. Weese. 2013. Umbc ebiquity-core: Semantic tex- tual similarity systems. In Proceedings of the 2nd Joint Conference on Lexical and Computational Se- mantics, pages 44-52, Atlanta, GA, USA.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Mathematical Structures of Language",
"authors": [
{
"first": "Z",
"middle": [],
"last": "Harris",
"suffix": ""
}
],
"year": 1968,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Z. Harris. 1968. Mathematical Structures of Lan- guage. Wiley.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "The Elements of Statistical Learning",
"authors": [
{
"first": "T",
"middle": [],
"last": "Hastie",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Tibshirani",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Friedman",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Hastie, R. Tibshirani, and J. Friedman. 2001. The Elements of Statistical Learning. Springer.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Semantic similarity based on corpus statistics and lexical taxonomy",
"authors": [
{
"first": "J",
"middle": [
"J"
],
"last": "Jiang",
"suffix": ""
},
{
"first": "D",
"middle": [
"W"
],
"last": "Conrath",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of ROCLING",
"volume": "",
"issue": "",
"pages": "19--33",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. J. Jiang and D. W. Conrath. 1997. Semantic similar- ity based on corpus statistics and lexical taxonomy. In Proceedings of ROCLING, pages 19-33, Taiwan, China.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Aspect and sentiment unification model for online review analysis",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Jo",
"suffix": ""
},
{
"first": "A",
"middle": [
"H"
],
"last": "Oh",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 4th International Conference of WSDM",
"volume": "",
"issue": "",
"pages": "815--824",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Jo and A. H. Oh. 2011. Aspect and sentiment unifi- cation model for online review analysis. In Proceed- ings of the 4th International Conference of WSDM, pages 815-824, Hong Kong, China.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Taxonomy learning using word sense induction",
"authors": [
{
"first": "I",
"middle": [
"P"
],
"last": "Klapaftis",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Manandhar",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "82--90",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. P. Klapaftis and S. Manandhar. 2010. Taxonomy learning using word sense induction. In Proceedings of NAACL, pages 82-90, Los Angeles, CA, USA.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Extracting aspect-evaluation and aspect-of relations in opinion mining",
"authors": [
{
"first": "N",
"middle": [],
"last": "Kobayashi",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Inui",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Joint Conference on EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "1065--1074",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N. Kobayashi, K. Inui, and Y. Matsumoto. 2007. Ex- tracting aspect-evaluation and aspect-of relations in opinion mining. In Proceedings of the Joint Confer- ence on EMNLP-CoNLL, pages 1065-1074, Prague, Czech Republic.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Phrase clustering for discriminative learning",
"authors": [
{
"first": "D",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "1030--1038",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Lin and X. Wu. 2009. Phrase clustering for dis- criminative learning. In Proceedings of ACL, pages 1030-1038, Suntec, Singapore. ACL.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "An information-theoretic definition of similarity",
"authors": [
{
"first": "D",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 15th ICML",
"volume": "",
"issue": "",
"pages": "296--304",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Lin. 1998. An information-theoretic definition of similarity. In Proceedings of the 15th ICML, pages 296-304, Madison, WI, USA.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Opinion observer: analyzing and comparing opinions on the web",
"authors": [
{
"first": "B",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Cheng",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 14th International Conference of WWW",
"volume": "",
"issue": "",
"pages": "342--351",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Liu, M. Hu, and J. Cheng. 2005. Opinion observer: analyzing and comparing opinions on the web. In Proceedings of the 14th International Conference of WWW, pages 342-351, Chiba, Japan.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Synthesis Lectures on Human Language Technologies",
"authors": [
{
"first": "B",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Liu. 2012. Sentiment Analysis and Opinion Mining. Synthesis Lectures on Human Language Technolo- gies. Morgan & Claypool.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Foundations of Statistical Natural Language Processing",
"authors": [
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. D. Manning and H. Sch\u00fctze. 1999. Foundations of Statistical Natural Language Processing. MIT Press, Cambridge, MA, USA.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Building a large annotated corpus of english: The penn treebank",
"authors": [
{
"first": "M",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "M",
"middle": [
"A"
],
"last": "Marcinkiewicz",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Santorini",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "313--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. P. Marcus, M. A. Marcinkiewicz, and B. Santorini. 1993. Building a large annotated corpus of en- glish: The penn treebank. Computational Linguis- tics, 19(2):313-330.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "T",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Kai",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Mikolov, C. Kai, G. Corrado, and J. Dean. 2013. Efficient estimation of word representations in vec- tor space. CoRR, abs/1301.3781.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "An examination of the effect of six types of error perturbation on fifteen clustering algorithms",
"authors": [
{
"first": "G",
"middle": [
"W"
],
"last": "Milligan",
"suffix": ""
}
],
"year": 1980,
"venue": "Psychometrika",
"volume": "45",
"issue": "3",
"pages": "325--342",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G.W. Milligan. 1980. An examination of the effect of six types of error perturbation on fifteen clustering algorithms. Psychometrika, 45(3):325-342.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "On the design of lda models for aspect-based opinion mining",
"authors": [
{
"first": "S",
"middle": [],
"last": "Moghaddam",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Ester",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 21st CIKM",
"volume": "",
"issue": "",
"pages": "803--812",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Moghaddam and M. Ester. 2012. On the design of lda models for aspect-based opinion mining. In Pro- ceedings of the 21st CIKM, pages 803-812, Maui, HI, USA.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Word sense disambiguation: A survey",
"authors": [
{
"first": "R",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2009,
"venue": "ACM Computing Surveys",
"volume": "41",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Navigli. 2009. Word sense disambiguation: A sur- vey. ACM Computing Surveys, 41(2):10:1-10:69.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Dependency-based construction of semantic space models",
"authors": [
{
"first": "S",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2007,
"venue": "Computational Linguistics",
"volume": "33",
"issue": "2",
"pages": "161--199",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Pad\u00f3 and M. Lapata. 2007. Dependency-based con- struction of semantic space models. Computational Linguistics, 33(2):161-199.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Wordnet::similarity: measuring the relatedness of concepts",
"authors": [
{
"first": "T",
"middle": [],
"last": "Pedersen",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Patwardhan",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Michelizzi",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of NAACL:HTL -Demonstrations",
"volume": "",
"issue": "",
"pages": "38--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Pedersen, S. Patwardhan, and J. Michelizzi. 2004. Wordnet::similarity: measuring the relatedness of concepts. In Proceedings of NAACL:HTL -Demon- strations, pages 38-41, Boston, MA, USA.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Scikit-learn: Machine learning in python",
"authors": [
{
"first": "F",
"middle": [],
"last": "Pedregosa",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Varoquaux",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Gramfort",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Michel",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Thirion",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Grisel",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Blondel",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Prettenhofer",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Dubourg",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Vanderplas",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Passos",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Cournapeau",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Brucher",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Perrot",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Duchesnay",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2825--2830",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Pretten- hofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Pas- sos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learn- ing in python. Journal of Machine Learning Re- search, 12:2825-2830.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Silhouettes: a graphical aid to the interpretation and validation of cluster analysis",
"authors": [
{
"first": "P",
"middle": [],
"last": "Rousseeuw",
"suffix": ""
}
],
"year": 1987,
"venue": "Journal of Computational and Applied Mathematics",
"volume": "20",
"issue": "1",
"pages": "53--65",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Rousseeuw. 1987. Silhouettes: a graphical aid to the interpretation and validation of cluster analysis. Journal of Computational and Applied Mathemat- ics, 20(1):53-65.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "A joint model of text and aspect ratings for sentiment summarization",
"authors": [
{
"first": "I",
"middle": [],
"last": "Titov",
"suffix": ""
},
{
"first": "R",
"middle": [
"T"
],
"last": "Mcdonald",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 46th Annual Meeting of ACL-HLT",
"volume": "",
"issue": "",
"pages": "308--316",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. Titov and R. T. McDonald. 2008a. A joint model of text and aspect ratings for sentiment summarization. In Proceedings of the 46th Annual Meeting of ACL- HLT, pages 308-316, Columbus, OH, USA.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Modeling online reviews with multi-grain topic models",
"authors": [
{
"first": "I",
"middle": [],
"last": "Titov",
"suffix": ""
},
{
"first": "R",
"middle": [
"T"
],
"last": "Mcdonald",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 17th International Conference of WWW",
"volume": "",
"issue": "",
"pages": "111--120",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. Titov and R. T. McDonald. 2008b. Modeling online reviews with multi-grain topic models. In Proceed- ings of the 17th International Conference of WWW, pages 111-120, Beijing, China.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Verbs semantics and lexical selection",
"authors": [
{
"first": "Z",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the 32nd ACL",
"volume": "",
"issue": "",
"pages": "133--138",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Z. Wu and M. Palmer. 1994. Verbs semantics and lexi- cal selection. In Proceedings of the 32nd ACL, pages 133-138, Las Cruces, NM, USA.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Domain-assisted product aspect hierarchy generation: towards hierarchical organization of unstructured consumer reviews",
"authors": [
{
"first": "J",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Zha",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Chua",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "140--150",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Yu, Z. Zha, M. Wang, K. Wang, and T. Chua. 2011. Domain-assisted product aspect hierarchy genera- tion: towards hierarchical organization of unstruc- tured consumer reviews. In Proceedings of EMNLP, pages 140-150, Edinburgh, UK.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Wisdom of crowds versus wisdom of linguists -measuring the semantic relatedness of words",
"authors": [
{
"first": "T",
"middle": [],
"last": "Zesch",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2010,
"venue": "Natural Language Engineering",
"volume": "16",
"issue": "1",
"pages": "25--59",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Zesch and I. Gurevych. 2010. Wisdom of crowds versus wisdom of linguists -measuring the semantic relatedness of words. Natural Language Engineer- ing, 16(1):25-59.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Grouping product features using semi-supervised learning with soft-constraints",
"authors": [
{
"first": "Z",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Jia",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd International Conference of COLING",
"volume": "",
"issue": "",
"pages": "1272--1280",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Z. Zhai, B. Liu, H. Xu, and P. Jia. 2010. Group- ing product features using semi-supervised learning with soft-constraints. In Proceedings of the 23rd International Conference of COLING, pages 1272- 1280, Beijing, China.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Clustering product features for opinion mining",
"authors": [
{
"first": "Z",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Jia",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 4th International Conference of WSDM",
"volume": "",
"issue": "",
"pages": "347--354",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Z. Zhai, B. Liu, H. Xu, and P. Jia. 2011. Clustering product features for opinion mining. In Proceedings of the 4th International Conference of WSDM, pages 347-354, Hong Kong, China.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Recent advances in methods of lexical semantic relatedness -a survey",
"authors": [
{
"first": "Z",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Gentile",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Ciravegna",
"suffix": ""
}
],
"year": 2013,
"venue": "Natural Language Engineering",
"volume": "",
"issue": "1",
"pages": "1--69",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Z. Zhang, A. Gentile, and F. Ciravegna. 2013. Re- cent advances in methods of lexical semantic relat- edness -a survey. Natural Language Engineering, FirstView(1):1-69.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Aspect groups and scores of an entity.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "Example aspect hierarchies produced by agglomerative hierarchical clustering.f ood f ish sushi dishes wine f",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF3": {
"text": "Silhouette Index (SI) results for Phase B, using the gold similarity matrix of Phase A.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF5": {
"text": "SI results for Phase B, using the WNDS (with SP) similarity matrix of Phase A. 1, with higher values indicating better clustering.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF6": {
"text": "Human evaluation of aspect groups.",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF0": {
"content": "<table/>",
"type_str": "table",
"text": "",
"html": null,
"num": null
},
"TABREF2": {
"content": "<table/>",
"type_str": "table",
"text": "Phase A results (Pearson correlation to gold similarities) with and without sense pruning.",
"html": null,
"num": null
},
"TABREF4": {
"content": "<table><tr><td>: Phase A results (Pearson correlation to</td></tr><tr><td>gold similarities) of WNDS with SP against se-</td></tr><tr><td>mantic similarity systems and human judges.</td></tr></table>",
"type_str": "table",
"text": "",
"html": null,
"num": null
}
}
}
}