{
"paper_id": "D19-1008",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:09:40.947268Z"
},
"title": "Correlations between Word Vector Sets",
"authors": [
{
"first": "Vitalii",
"middle": [],
"last": "Zhelezniak",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "April",
"middle": [],
"last": "Shen",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Similarity measures based purely on word embeddings are comfortably competing with much more sophisticated deep learning and expert-engineered systems on unsupervised semantic textual similarity (STS) tasks. In contrast to commonly used geometric approaches, we treat a single word embedding as e.g. 300 observations from a scalar random variable. Using this paradigm, we first illustrate that similarities derived from elementary pooling operations and classic correlation coefficients yield excellent results on standard STS benchmarks, outperforming many recently proposed methods while being much faster and trivial to implement. Next, we demonstrate how to avoid pooling operations altogether and compare sets of word embeddings directly via correlation operators between reproducing kernel Hilbert spaces. Just like cosine similarity is used to compare individual word vectors, we introduce a novel application of the centered kernel alignment (CKA) as a natural generalisation of squared cosine similarity for sets of word vectors. Likewise, CKA is very easy to implement and enjoys very strong empirical results.",
"pdf_parse": {
"paper_id": "D19-1008",
"_pdf_hash": "",
"abstract": [
{
"text": "Similarity measures based purely on word embeddings are comfortably competing with much more sophisticated deep learning and expert-engineered systems on unsupervised semantic textual similarity (STS) tasks. In contrast to commonly used geometric approaches, we treat a single word embedding as e.g. 300 observations from a scalar random variable. Using this paradigm, we first illustrate that similarities derived from elementary pooling operations and classic correlation coefficients yield excellent results on standard STS benchmarks, outperforming many recently proposed methods while being much faster and trivial to implement. Next, we demonstrate how to avoid pooling operations altogether and compare sets of word embeddings directly via correlation operators between reproducing kernel Hilbert spaces. Just like cosine similarity is used to compare individual word vectors, we introduce a novel application of the centered kernel alignment (CKA) as a natural generalisation of squared cosine similarity for sets of word vectors. Likewise, CKA is very easy to implement and enjoys very strong empirical results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Distributed representations of text have had a massive impact on the natural language processing (NLP), information retrieval (IR), and machine learning (ML) communities, thanks in part to their ability to capture rich notions of semantic similarity. While this work originally began with word embeddings (Bengio et al., 2003; Mikolov et al., 2013a; Pennington et al., 2014; , there is now an ever-increasing number of representations for longer units of text based on simple aggregations of word vectors (Mitchell and Lapata, 2008; De Boom et al., 2016; Arora et al., 2017; Wieting et al., 2016; Wieting and Gimpel, 2018; Zhelezniak et al., 2019b) as well as complex neural architectures (Le and Mikolov, 2014; Kiros et al., 2015; Hill et al., 2016; Conneau et al., 2017; Gan et al., 2017; Tang et al., 2017; Pagliardini et al., 2018; Zhelezniak et al., 2018; Subramanian et al., 2018; Cer et al., 2018; Devlin et al., 2018) .",
"cite_spans": [
{
"start": 305,
"end": 326,
"text": "(Bengio et al., 2003;",
"ref_id": "BIBREF7"
},
{
"start": 327,
"end": 349,
"text": "Mikolov et al., 2013a;",
"ref_id": "BIBREF32"
},
{
"start": 350,
"end": 374,
"text": "Pennington et al., 2014;",
"ref_id": "BIBREF38"
},
{
"start": 505,
"end": 532,
"text": "(Mitchell and Lapata, 2008;",
"ref_id": "BIBREF35"
},
{
"start": 533,
"end": 554,
"text": "De Boom et al., 2016;",
"ref_id": "BIBREF18"
},
{
"start": 555,
"end": 574,
"text": "Arora et al., 2017;",
"ref_id": "BIBREF6"
},
{
"start": 575,
"end": 596,
"text": "Wieting et al., 2016;",
"ref_id": "BIBREF46"
},
{
"start": 597,
"end": 622,
"text": "Wieting and Gimpel, 2018;",
"ref_id": "BIBREF47"
},
{
"start": 623,
"end": 648,
"text": "Zhelezniak et al., 2019b)",
"ref_id": null
},
{
"start": 689,
"end": 711,
"text": "(Le and Mikolov, 2014;",
"ref_id": "BIBREF31"
},
{
"start": 712,
"end": 731,
"text": "Kiros et al., 2015;",
"ref_id": "BIBREF28"
},
{
"start": 732,
"end": 750,
"text": "Hill et al., 2016;",
"ref_id": "BIBREF23"
},
{
"start": 751,
"end": 772,
"text": "Conneau et al., 2017;",
"ref_id": "BIBREF14"
},
{
"start": 773,
"end": 790,
"text": "Gan et al., 2017;",
"ref_id": "BIBREF20"
},
{
"start": 791,
"end": 809,
"text": "Tang et al., 2017;",
"ref_id": "BIBREF43"
},
{
"start": 810,
"end": 835,
"text": "Pagliardini et al., 2018;",
"ref_id": "BIBREF37"
},
{
"start": 836,
"end": 860,
"text": "Zhelezniak et al., 2018;",
"ref_id": "BIBREF50"
},
{
"start": 861,
"end": 886,
"text": "Subramanian et al., 2018;",
"ref_id": "BIBREF42"
},
{
"start": 887,
"end": 904,
"text": "Cer et al., 2018;",
"ref_id": "BIBREF12"
},
{
"start": 905,
"end": 925,
"text": "Devlin et al., 2018)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "By contrast, relatively little effort has been directed towards understanding the similarity measures used to compare these textual embeddings, for which cosine similarity remains a convenient and widespread, yet somewhat arbitrary default, despite some emerging research into the alternatives (Camacho-Collados et al., 2015; De Boom et al., 2015; Santus et al., 2018; Zhelezniak et al., 2019b,a) . Part of the appeal of cosine similarity perhaps lies in the simple geometric interpretation behind it. However, as embeddings are ultimately just arrays of numbers, we are free to take alternative viewpoints other than the geometric ones, if they lead to illuminating insights or strong-performing methods.",
"cite_spans": [
{
"start": 294,
"end": 325,
"text": "(Camacho-Collados et al., 2015;",
"ref_id": "BIBREF10"
},
{
"start": 326,
"end": 347,
"text": "De Boom et al., 2015;",
"ref_id": "BIBREF17"
},
{
"start": 348,
"end": 368,
"text": "Santus et al., 2018;",
"ref_id": "BIBREF41"
},
{
"start": 369,
"end": 396,
"text": "Zhelezniak et al., 2019b,a)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Following Zhelezniak et al. (2019a) , we treat a word embedding not as a geometric vector but as a statistical sample (of e.g. 300 observations) from a scalar random variable, and indeed find insights that are both intriguing and noteworthy. We first illustrate that similarities derived from elementary pooling operations and classic univariate correlation coefficients yield excellent results on standard semantic textual similarity (STS) benchmarks, outperforming many recently proposed methods while being much faster and simpler to implement. This empirically validates the advantages of the statistical perspective on word embeddings over the geometric interpretations. In the process, we provide more evidence that departures from normality, and in particular the presence of outliers, can have severe negative effects on the performance of some correlation coeffi-cients. We show how to overcome these complications, by selecting an outlier-removing pooling operation such as max-pooling, applying a more robust correlation coefficient such as Spearman's \u03c1, or simply clipping (winsorizing) the word vectors.",
"cite_spans": [
{
"start": 10,
"end": 35,
"text": "Zhelezniak et al. (2019a)",
"ref_id": "BIBREF51"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
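The three remedies named above (outlier-removing pooling, rank correlation, and winsorizing) can be sketched in a few lines of NumPy. The vectors below are synthetic stand-ins for 300-dimensional word embeddings with one heavy outlier injected by hand; the rank transform uses a plain double-argsort without tie correction, which is adequate for continuous embedding values:

```python
import numpy as np

def pearson(x, y):
    xc, yc = x - x.mean(), y - y.mean()
    return float(xc @ yc / (np.linalg.norm(xc) * np.linalg.norm(yc)))

def rank(v):
    # double-argsort ranking; no tie correction (continuous embeddings rarely tie)
    r = np.empty(len(v))
    r[np.argsort(v)] = np.arange(len(v))
    return r

def spearman(x, y):
    # Spearman's rho = Pearson correlation of the rank-transformed vectors
    return pearson(rank(x), rank(y))

def winsorize(v, p=0.05):
    # clip the extreme p fraction in each tail to the corresponding quantile
    lo, hi = np.quantile(v, [p, 1 - p])
    return np.clip(v, lo, hi)

rng = np.random.default_rng(0)
x = rng.normal(size=300)             # stand-in for a 300-dim word vector
y = x + 0.5 * rng.normal(size=300)   # a correlated vector
x_out = x.copy()
x_out[0] = 30.0                      # inject a single heavy outlier

r_plain = pearson(x_out, y)                        # deflated by the outlier
r_wins = pearson(winsorize(x_out), winsorize(y))   # largely recovered
rho = spearman(x_out, y)                           # robust by construction
print(r_plain, r_wins, rho)
```

Both robust variants recover most of the underlying correlation that the single outlier destroys for plain Pearson's r.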
{
"text": "Next, we demonstrate how to avoid pooling operations completely and compare sets of word embeddings directly via correlation operators between reproducing kernel Hilbert spaces (RKHS). We introduce a novel application of the kernel alignment (KA) and the centered kernel alignment (CKA) as a natural generalisation of the squared cosine similarity and Pearson correlation for the sets of word embeddings. These multivariate correlation coefficients are very easy to implement and also enjoy very strong empirical results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Several lines of research seek to combine the strength of pretrained word embeddings and the elegance of set-or bag-of-words (BoW) representations. Any method that determines semantic similarity between sentences by comparing the corresponding sets of word embeddings is directly related to our work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Perhaps the most obvious such approaches are based on elementary pooling operations such as average-, max-and min-pooling (Mitchell and Lapata, 2008; De Boom et al., 2015 . While seemingly over-simplistic, numerous studies have confirmed their impressive performance on the downstream tasks (Arora et al., 2017; Wieting et al., 2016; Wieting and Gimpel, 2018; Zhelezniak et al., 2019b) One step further, Zhao and Mao (2017) ; Zhelezniak et al. (2019b) introduce fuzzy bags-of-words (FBoW) where degrees of membership in a fuzzy set are given by the similarities between word embeddings. Zhelezniak et al. (2019b) show a close connection between FBoW and max-pooled word vectors.",
"cite_spans": [
{
"start": 122,
"end": 149,
"text": "(Mitchell and Lapata, 2008;",
"ref_id": "BIBREF35"
},
{
"start": 150,
"end": 170,
"text": "De Boom et al., 2015",
"ref_id": "BIBREF17"
},
{
"start": 291,
"end": 311,
"text": "(Arora et al., 2017;",
"ref_id": "BIBREF6"
},
{
"start": 312,
"end": 333,
"text": "Wieting et al., 2016;",
"ref_id": "BIBREF46"
},
{
"start": 334,
"end": 359,
"text": "Wieting and Gimpel, 2018;",
"ref_id": "BIBREF47"
},
{
"start": 360,
"end": 385,
"text": "Zhelezniak et al., 2019b)",
"ref_id": null
},
{
"start": 404,
"end": 423,
"text": "Zhao and Mao (2017)",
"ref_id": "BIBREF49"
},
{
"start": 587,
"end": 612,
"text": "Zhelezniak et al. (2019b)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Some approaches do not seek to build an explicit representation and instead focus directly on designing a similarity function between sets. Word Mover's Distance (WMD) (Kusner et al., 2015) is an instance of the Earth Mover's Distance (EMD) computed between normalised BoW, with the cost matrix given by Euclidean distances between word embeddings. In the soft cardinality framework of (Jimenez et al., 2010 (Jimenez et al., , 2015 , the contribution of a word to the cardinality of a set depends on its similarities to other words in the same set. Such sets are then compared using an appropriately defined Jaccard index or related measures. DynaMax (Zhelezniak et al., 2019b) uses universe-constrained fuzzy sets designed explicitly for similarity computations.",
"cite_spans": [
{
"start": 168,
"end": 189,
"text": "(Kusner et al., 2015)",
"ref_id": "BIBREF30"
},
{
"start": 386,
"end": 407,
"text": "(Jimenez et al., 2010",
"ref_id": "BIBREF25"
},
{
"start": 408,
"end": 431,
"text": "(Jimenez et al., , 2015",
"ref_id": "BIBREF26"
},
{
"start": 651,
"end": 677,
"text": "(Zhelezniak et al., 2019b)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Approaches that see word embeddings as statistical objects are very closely related to our work. Virtually all of them treat word embeddings as observations from some D-variate parametric family, where D is the embedding dimension. Arora et al. (2016 Arora et al. ( , 2017 introduce a latent discourse model and show the maximum likelihood estimate (MLE) for the discourse vector to be the weighted average of word embeddings in a sentence, where the weights are given by smooth inverse frequencies (SIF). Nikolentzos et al. (2017) ; Torki (2018) treat sets of word embeddings as observations from D-variate Gaussians, and compare such sets with cosine similarity between the parameters (means and covariances) estimated by maximum likelihood. Vargas et al. (2019) measure semantic similarity through penalised likelihood ratio between the joint and factorised models and explore Gaussian and von Mises-Fisher likelihoods.",
"cite_spans": [
{
"start": 232,
"end": 250,
"text": "Arora et al. (2016",
"ref_id": "BIBREF5"
},
{
"start": 251,
"end": 272,
"text": "Arora et al. ( , 2017",
"ref_id": "BIBREF6"
},
{
"start": 506,
"end": 531,
"text": "Nikolentzos et al. (2017)",
"ref_id": "BIBREF36"
},
{
"start": 744,
"end": 764,
"text": "Vargas et al. (2019)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Cosine similarity between covariances is an instance of the RV coefficient and its uncentered version was applied in the context of word embeddings before (Botev et al., 2017) . We arrive at a similar coefficient (but with different centering) as a special case of CKA, which in the general case makes no parametric assumptions about disbtributions whatsoever. In particular our version is suitable for comparing sets containing just one word vector, whereas the method of Nikolentzos et al. (2017); Torki (2018) requires at least two vectors in each set. Very recently, Kornblith et al. (2019) used CKA to compare representations between layers of the same or different neural networks. This is again an instance of treating such representations as observations from a D-variate distribution, where D is the dimension of the hidden layer in question. Our use of CKA is completely different from theirs.",
"cite_spans": [
{
"start": 155,
"end": 175,
"text": "(Botev et al., 2017)",
"ref_id": "BIBREF9"
},
{
"start": 571,
"end": 594,
"text": "Kornblith et al. (2019)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Unlike all of the above approaches, (Zhelezniak et al., 2019a ) see each word embedding itself as D (e.g. 300) observations from some scalar random variable. They cast semantic similarity as correlations between these random variables and study their properties using simple tools from univariate statistics. While they consider correlations between individual word vectors and averaged word vectors, they do not formally explore correlations between word vector sets. We review their framework in Section 3 and then proceed to formalise and generalise it to the case of sets of word embeddings.",
"cite_spans": [
{
"start": 36,
"end": 61,
"text": "(Zhelezniak et al., 2019a",
"ref_id": "BIBREF51"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Suppose we have a word embeddings matrix W \u2208 R N \u00d7D , where N is the number of words in the vocabulary and D is the embedding dimension (usually 300). In other words, each row w (i) of W is a D-dimensional word vector. When applying statistical analysis to these vectors, one might choose to treat each w (i) as an observation from some D-",
"cite_spans": [
{
"start": 305,
"end": 308,
"text": "(i)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background: Correlation Coefficients and Semantic Similarity",
"sec_num": "3"
},
{
"text": "variate distribution P D (E 1 , . . . E D )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background: Correlation Coefficients and Semantic Similarity",
"sec_num": "3"
},
{
"text": "and model it with a Gaussian or a Gaussian Mixture. While such analysis helps in studying the overall geometry of the embedding space (how dimensions correlate and how embeddings cluster), P D is not directly useful for semantic similarity between individual words. For the latter, Zhelezniak et al. 2019aproposed to look at the transpose W T and the corresponding distribution P (W 1 , W 2 , . . . , W N ). Under this perspective, each word vector w (i) is now a sample of D (e.g. 300) observations from a scalar random variable W i . Luckily, in applications we are usually not interested in the full joint distribution but only in the similarity between two words, i.e. the bivariate marginal P (W i , W j ). In practice, we make inferences about this marginal from the paired sample (w (i) , w (j) ) through visualisations (histograms, Q-Q plots, scatter plots, etc.) as well as various statistics. Zhelezniak et al. (2019a) found that for all common models (GloVe, fastText, word2vec) the means across word embeddings are tightly concentrated around zero (relative to their dimensions), thus making the widely used cosine similarity practically equivalent to Pearson correlation. However, while word2vec vectors seem mostly normal, GloVe and fastText vectors are highly non-normal, likely due to the presence of heavy univariate and bivariate outliers (as suggested by visualisations mentioned earlier). Quantitatively, the majority of GloVe and fastText vectors fail the Shapiro-Wilk normality test at sig-nificance level 0.05. Therefore, while Pearson's r (and thus cosine similarity) may be acceptable for word2vec, it is preferable to resort to more robust non-parametic correlation coefficients such as Spearman's \u03c1 or Kendall's \u03c4 as a similarity measure between GloVe and fastText vectors.",
"cite_spans": [
{
"start": 903,
"end": 928,
"text": "Zhelezniak et al. (2019a)",
"ref_id": "BIBREF51"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background: Correlation Coefficients and Semantic Similarity",
"sec_num": "3"
},
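The claimed near-equivalence of cosine similarity and Pearson's r for (approximately) zero-mean vectors is easy to verify numerically. A minimal sketch using synthetic stand-ins for 300-dimensional word vectors, since Pearson's r is exactly cosine similarity after mean-centering:

```python
import numpy as np

def cosine(x, y):
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

def pearson(x, y):
    # Pearson's r is cosine similarity of the mean-centered vectors
    return cosine(x - x.mean(), y - y.mean())

rng = np.random.default_rng(1)
# stand-ins for 300-dim word vectors whose means are close to zero
x = rng.normal(0, 1, 300)
y = 0.7 * x + rng.normal(0, 1, 300)

# nearly identical when the means are close to zero
print(cosine(x, y), pearson(x, y))
# shifting both vectors breaks the equivalence: Pearson is shift-invariant,
# cosine is dominated by the common offset
print(cosine(x + 5, y + 5), pearson(x + 5, y + 5))
```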
{
"text": "Finally, very similar conclusions were shown to hold for sentence representations obtained by word vector averaging, also referred to as meanpooling. In particular, averaged fastText vectors compared with rank correlation coefficients already show impressive results on standard STS tasks, rivaling much more sophisticated systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background: Correlation Coefficients and Semantic Similarity",
"sec_num": "3"
},
{
"text": "We are interested in applying the statistical framework from Section 3 to measure the semantic similarity between two sentences s 1 and s 2 given by the sets (or bags) S 1 and S 2 of word embeddings respectively. To formalise this new setup, we may see each set of word embeddings S = {w (1) , w (2) , . . . , w (k) } as a sample (of e.g. 300 observations) from some theoretical set of scalar random variables R = {W 1 , W 2 , . . . , W k }. In light of the above, our task then lies in finding correlation coefficients corr(R 1 , R 2 ) between R 1 and R 2 and their empirical estimates corr(S 1 , S 2 ) obtained from the paired sample S 1 , S 2 , hoping that such coefficients will serve as a good proxy for semantic similarity. Recall that for singleword sets R 1 = {W i }, R 2 = {W j } the task simply reduces to computing a univariate correlation between word vectors w (i) and w (j) , where the choice of the coefficient (Pearson's r, Spearman's \u03c1, etc.) is made based on the statistics exhibited by the word embeddings matrix. While generalising this to sets of more than one variable is not particularly hard, there are several ways to do so, each with its own advantages and downsides. In the present work, we group these approaches into two broad families: pooling-based and poolingfree correlation coefficients.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correlations between Word Vector Sets",
"sec_num": "4"
},
{
"text": "Pooling-based approaches first reduce a set of random variables to a single scalar random variable W pool = f pool (W 1 , W 2 , . . . , W k ) and then apply univariate correlation coefficients between the pooled variables. In practice this would correspond to pooling word embeddings w (1) , w (2) , . . . , w (k) (along i = 1:k) into one Figure 1 : Normalised histograms of the mean distribution for sentence vectors generated by mean-, max-and min-pooling. Sentences were taken from the entire STS dataset (Agirre et al., 2012 (Agirre et al., , 2013 (Agirre et al., , 2014 (Agirre et al., , 2015 (Agirre et al., , 2016 Cer et al., 2017) , and we utilise three commonly-used word embedding models: GloVe (Pennington et al., 2014) , fastText , and word2vec (Mikolov et al., 2013b,c) .",
"cite_spans": [
{
"start": 508,
"end": 528,
"text": "(Agirre et al., 2012",
"ref_id": "BIBREF3"
},
{
"start": 529,
"end": 551,
"text": "(Agirre et al., , 2013",
"ref_id": "BIBREF4"
},
{
"start": 552,
"end": 574,
"text": "(Agirre et al., , 2014",
"ref_id": "BIBREF1"
},
{
"start": 575,
"end": 597,
"text": "(Agirre et al., , 2015",
"ref_id": "BIBREF0"
},
{
"start": 598,
"end": 620,
"text": "(Agirre et al., , 2016",
"ref_id": "BIBREF2"
},
{
"start": 621,
"end": 638,
"text": "Cer et al., 2017)",
"ref_id": "BIBREF11"
},
{
"start": 705,
"end": 730,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF38"
},
{
"start": 757,
"end": 782,
"text": "(Mikolov et al., 2013b,c)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 339,
"end": 347,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Correlations between Pooled Variables",
"sec_num": "4.1"
},
{
"text": "Require: Word embeddings for the first sentence 2) . . . , x (k) \u2208 R 1\u00d7d Require: Word embeddings for the second sentence y (1) , y (2) . . . , y (l) \u2208 R 1\u00d7d Ensure: Similarity score M S # Max-pooling performed element-wise",
"cite_spans": [
{
"start": 48,
"end": 50,
"text": "2)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 MaxPool-Spearman",
"sec_num": null
},
{
"text": "x \u2190 MAX POOL(x (1) , x (2) . . . , x (k) ) y \u2190 MAX POOL(y (1) , y (2) . . . , y (l) ) MS \u2190 SPEARMANCORRELATION(x, y)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 MaxPool-Spearman",
"sec_num": null
},
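Algorithm 1 translates directly into NumPy. The sketch below assumes each sentence is a matrix with one word vector per row and uses a simple double-argsort rank transform without tie correction (real-valued embeddings rarely tie); the sentences themselves are synthetic stand-ins for real embeddings:

```python
import numpy as np

def rank(v):
    # double-argsort ranking without tie correction
    r = np.empty(len(v))
    r[np.argsort(v)] = np.arange(len(v))
    return r

def spearman(x, y):
    # Spearman's rho = Pearson correlation of the rank-transformed vectors
    rx, ry = rank(x) - rank(x).mean(), rank(y) - rank(y).mean()
    return float(rx @ ry / (np.linalg.norm(rx) * np.linalg.norm(ry)))

def maxpool_spearman(X, Y):
    # Algorithm 1: element-wise max-pool each sentence's word vectors (rows),
    # then compute Spearman correlation between the two pooled d-vectors
    return spearman(X.max(axis=0), Y.max(axis=0))

rng = np.random.default_rng(2)
S1 = rng.normal(size=(5, 300))                 # 5 word vectors, d = 300
S2 = S1[:3] + 0.1 * rng.normal(size=(3, 300))  # sentence overlapping with S1
S3 = rng.normal(size=(4, 300))                 # unrelated sentence

sim_related = maxpool_spearman(S1, S2)
sim_unrelated = maxpool_spearman(S1, S3)
print(sim_related, sim_unrelated)
```

As expected, the overlapping sentence pair scores much higher than the unrelated one.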
{
"text": "fixed vector w pool , followed by computing univariate sample correlations. Certainly, these approaches are empirically attractive: not only are they very simple computationally (e.g. see Algorithm 1) but they also keep us in the realm of univariate statisics, where we have an entire arsenal of effective tools for making inferences about W pool . Unfortunately, it is not always clear a priori what should dictate our choice of the pooling function (though, as we will see shortly, for certain functions some statistical justifications do exist). By far the most common pooling operations for word embedding found in the literature are mean-, max-and min-pooling. It is also very common, with some exceptions, to treat these various pooled representation in a completely identical fashion, e.g. by comparing them all with cosine similarity. Intuitively, however, we suggest that the statistics of W pool must heavily depend on the pooling function f pool and thus each such pooled random variable should be studied in its own right. To illustrate this point, we would like to reveal the very different nature of mean-and max-and minpooled sentence vectors though a practical example.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 MaxPool-Spearman",
"sec_num": null
},
{
"text": "A Practical Analysis Let us begin by examining sentence vectors obtained through mean-pooling. Recall that for common word embedding models, the mean across 300 dimensions of a single word embedding w (i) happens to be close to zero (relative to the dimensions). By the linearity of expectation, we have that",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistics of the Pooled Representations:",
"sec_num": "4.2"
},
{
"text": "E[W mean ] = E k i=1 W i = k i=1 E [W i ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistics of the Pooled Representations:",
"sec_num": "4.2"
},
{
"text": ", and so the mean across w mean will also be close to zero at least for small k. In practice, this seems to hold even for moderate k in naturally occurring sentences, as seen in Figure 1 . Based on this, we expect Pearson correlation and cosine similarity to have almost identical performance on the downstream tasks, which is confirmed in Figure 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 178,
"end": 186,
"text": "Figure 1",
"ref_id": null
},
{
"start": 340,
"end": 348,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Statistics of the Pooled Representations:",
"sec_num": "4.2"
},
{
"text": "On the other hand, intuition tells us that the means of the max-pooled vectors will be shifted , and Kendall's \u03c4 (KND). Plots generated for three pooling methods and the following word embedding models: GloVe (Pennington et al., 2014) , fastText , and word2vec (Mikolov et al., 2013b,c) .",
"cite_spans": [
{
"start": 209,
"end": 234,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF38"
},
{
"start": 261,
"end": 286,
"text": "(Mikolov et al., 2013b,c)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Statistics of the Pooled Representations:",
"sec_num": "4.2"
},
{
"text": "to the right because of the max operation, which we see in Figure 1 . In this case, cosine similarity and Pearson correlation will yield different results and, in fact, Pearson's r considerably outperforms cosine on the downstream tasks ( Figure 2 ). This in turn empirically adds weight to the statistical interpretation (correlation) over its geometrical counterpart (angle between vectors).",
"cite_spans": [],
"ref_spans": [
{
"start": 59,
"end": 67,
"text": "Figure 1",
"ref_id": null
},
{
"start": 239,
"end": 247,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Statistics of the Pooled Representations:",
"sec_num": "4.2"
},
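Both effects described here, the rightward shift of max-pooled means and the resulting divergence of cosine similarity from Pearson's r, can be checked directly. A sketch with synthetic stand-ins for two unrelated sentences (the divergence is largest when the true correlation is low, since cosine is inflated by the shared positive offset):

```python
import numpy as np

def cosine(x, y):
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

def pearson(x, y):
    # Pearson's r = cosine similarity of the mean-centered vectors
    return cosine(x - x.mean(), y - y.mean())

rng = np.random.default_rng(4)
# two unrelated "sentences" of 8 and 6 synthetic word vectors, d = 300
x_max = rng.normal(size=(8, 300)).max(axis=0)
y_max = rng.normal(size=(6, 300)).max(axis=0)

# max-pooling shifts the mean well above zero
print(x_max.mean(), y_max.mean())
# cosine is inflated by the common positive shift; Pearson is not
print(cosine(x_max, y_max), pearson(x_max, y_max))
```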
{
"text": "Recall also that unlike word2vec, GloVe and fastText vectors feature heavy univariate outliers, and the same can be expected to hold for the pooled representations; an example is shown in Figure 3 . In case of mean-pooled vectors, this particular departure from normality can be successfully detected by the Shapiro-Wilk normality test, informing the appropriate choice of the correlation coefficient (Pearson's r or robust rank correlation). By contrast, such procedure cannot be readily applied to max-pooled and min-pooled vectors as by construction they exhibit additional departures from normality, such as positive and negative skew respectively. It is always a good idea to consult visualisations for such vectors, such as the ones in Figure 3 . Interestingly though, we do observe the some noteworthy regularities, which we describe further in Section 5.",
"cite_spans": [],
"ref_spans": [
{
"start": 188,
"end": 196,
"text": "Figure 3",
"ref_id": null
},
{
"start": 742,
"end": 750,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Statistics of the Pooled Representations:",
"sec_num": "4.2"
},
{
"text": "The above example is meant to illustrate that even the simplest pooled random variables show strikingly different statistics depending on the aggregation. While the abundance of various pooling operations may be intimidating, the resulting vectors are always subject to the many tools of univariate statistics. As we hope to have shown, even crude analysis can shed light on the nature of these textual representations, which in turn has notable practical implications, as we will see in Section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistics of the Pooled Representations:",
"sec_num": "4.2"
},
{
"text": "Exactly as before, suppose we have two sentences S 1 = {x (1) , x (2) , . . . , x (k) } and S 2 = {y (1) , y (2) , . . . , y (l) } and the corresponding random vectors X = (X 1 , X 2 , . . . , X k ) and Y = (Y 1 , Y 2 , . . . , Y l ). At this point it is important to emphasise again that we relate each word vector x i to a random variable X i and treat the dimensions of x i as D observations from that variable, and similarly for y i and Y i . In contrast with the pooling-based approaches, our task here is to find a suitable correlation coefficient directly between the random vectors X and Y. We begin by recalling the expression for the basic univariate Pearson's r:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correlations between Random Vectors",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "r XY = E X,Y [(X \u2212 \u00b5 X )(Y \u2212 \u00b5 Y )] \u03c3 X \u03c3 Y ,",
"eq_num": "(1)"
}
],
"section": "Correlations between Random Vectors",
"sec_num": "4.3"
},
{
"text": "Figure 3: Histograms for word embeddings of the word \"cats\" and pooled representations of the embeddings for the words in the sentence \"I like cats because they are very cute animals\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correlations between Random Vectors",
"sec_num": "4.3"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correlations between Random Vectors",
"sec_num": "4.3"
},
{
"text": "\u00b5 X = E[X], \u03c3 X = E [X 2 ] \u2212 \u00b5 2 X ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correlations between Random Vectors",
"sec_num": "4.3"
},
{
"text": "and similarly for \u00b5 Y and \u03c3 Y . The covariance term cov(X, Y ) in the numerator is readily generalised to random vectors by the following crosscovariance operator between reproducing kernel Hilbert spaces (RKHS) F and G",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correlations between Random Vectors",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "C XY = E X,Y [(\u03c6(X) \u2212 \u00b5 X ) \u2297 (\u03c8(Y) \u2212 \u00b5 Y )] ,",
"eq_num": "(2)"
}
],
"section": "Correlations between Random Vectors",
"sec_num": "4.3"
},
{
"text": "where \u2297 denotes the tensor product and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correlations between Random Vectors",
"sec_num": "4.3"
},
{
"text": "\u00b5 X = E X [\u03c6(X)], \u00b5 Y = E Y [\u03c8(Y)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correlations between Random Vectors",
"sec_num": "4.3"
},
{
"text": ". Here \u03c6 and \u03c8 are the feature maps such that \u03c6(x), \u03c6(x ) F = K(x, x ) and \u03c8(y), \u03c8(y ) G = L(y, y ), where K and L are the kernels associated with RKHS F and G respectively. Note that if \u03c6 and \u03c8 are the identity maps, the cross-covariance operator (2) simply becomes the cross-covariance matrix Gretton et al. (2005a) define the Hilbert-Schmidt independence criterion (HSIC) to be the squared Hilbert-Schmidt norm ||C XY || 2 HS of (2) and derive an expression for it in terms of kernels K and L",
"cite_spans": [
{
"start": 295,
"end": 317,
"text": "Gretton et al. (2005a)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Correlations between Random Vectors",
"sec_num": "4.3"
},
{
"text": "C XY = E X,Y (X \u2212 \u00b5 X ) (Y \u2212 \u00b5 Y ) T .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correlations between Random Vectors",
"sec_num": "4.3"
},
{
"text": "HSIC(X, Y, K, L) = E X,X ,Y,Y K(X, X )L(Y, Y ) +E X,X K(X, X ) E Y,Y L(Y, Y ) \u22122E X,Y E X K(X, X ) E Y L(Y, Y ) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correlations between Random Vectors",
"sec_num": "4.3"
},
{
"text": "They also show the empirical estimate of it to be",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correlations between Random Vectors",
"sec_num": "4.3"
},
{
"text": "HSIC(K, L) = (D \u2212 1)^{\u22122} Tr(KHLH), (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correlations between Random Vectors",
"sec_num": "4.3"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correlations between Random Vectors",
"sec_num": "4.3"
},
{
"text": "H = I \u2212 (1/D) 11",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correlations between Random Vectors",
"sec_num": "4.3"
},
{
"text": "^T is the centering matrix and K = K(X^{(i)}, X^{(j)}), L = L(Y^{(i)}, Y^{(j)}), i, j = 1:D are the kernel (Gram) matrices of observations. Crucially, the kernel evaluations for K take place between",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correlations between Random Vectors",
"sec_num": "4.3"
},
{
"text": "X^{(i)} = (x_i^{(1)}, x_i^{(2)}, . . . , x_i^{(k)}) and X^{(j)} = (x_j^{(1)}, x_j^{(2)}, . . . , x_j^{(k)})",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correlations between Random Vectors",
"sec_num": "4.3"
},
{
"text": "and not between the individual word embeddings x^{(i)} and x^{(j)}, and similarly for L. Thus, both K and L are square matrices of dimension D \u00d7 D. Indeed, for (3) to make sense, the dimensions of K and L must match. The matching dimension in our case is the word embedding dimension D, while the numbers of words k and l in the two sentences may vary. This is in line with our formalism, which models word vectors as random variables and their dimensions as observations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correlations between Random Vectors",
"sec_num": "4.3"
},
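To make the estimator concrete, here is a minimal NumPy sketch (our own illustrative code, not the authors' released implementation; `hsic` and `linear_grams` are hypothetical names) that builds the D \u00d7 D Gram matrices over the dimension-rows of two sets of word embeddings and evaluates (3) with a linear kernel:

```python
import numpy as np

def hsic(K, L):
    # Empirical HSIC (3): (D - 1)^(-2) Tr(K H L H), with H the centering matrix.
    D = K.shape[0]
    H = np.eye(D) - np.ones((D, D)) / D
    return np.trace(K @ H @ L @ H) / (D - 1) ** 2

def linear_grams(X, Y):
    # X is (k, D): k word vectors of dimension D. The observations are the
    # D dimension-rows X^(i) = X.T[i], so both Gram matrices are D x D.
    Xd, Yd = X.T, Y.T
    return Xd @ Xd.T, Yd @ Yd.T

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 300))   # sentence 1: 5 words, 300-dim embeddings
Y = rng.normal(size=(7, 300))   # sentence 2: 7 words
K, L = linear_grams(X, Y)
print(hsic(K, L))
```

Note that the two sentences may have different numbers of words (5 and 7 here); the Gram matrices still match in dimension because the kernel compares dimension-rows, not words.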
{
"text": "Finally, the Centered Kernel Alignment (CKA) (Cortes et al., 2012) is simply defined as",
"cite_spans": [
{
"start": 45,
"end": 66,
"text": "(Cortes et al., 2012)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Correlations between Random Vectors",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "CKA(K, L) = HSIC(K, L) / sqrt(HSIC(K, K) HSIC(L, L)).",
"eq_num": "(4)"
}
],
"section": "Correlations between Random Vectors",
"sec_num": "4.3"
},
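As an illustration (our own NumPy sketch with hypothetical names, using the linear kernel for simplicity), CKA can be computed by normalising the empirical HSIC (3), much as cosine similarity normalises a dot product:

```python
import numpy as np

def hsic(K, L):
    # Empirical HSIC (3) with centering matrix H.
    D = K.shape[0]
    H = np.eye(D) - np.ones((D, D)) / D
    return np.trace(K @ H @ L @ H) / (D - 1) ** 2

def cka(X, Y):
    # Linear-kernel CKA (4) between two sets of word vectors (rows are words);
    # the D x D Gram matrices are taken over the dimension-rows, as in the text.
    K, L = X.T @ X, Y.T @ Y
    return hsic(K, L) / np.sqrt(hsic(K, K) * hsic(L, L))

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 300))
Y = rng.normal(size=(7, 300))
print(cka(X, Y))
```

By construction cka(X, X) = 1, mirroring how cosine similarity of a vector with itself is 1.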
{
"text": "We see now that CKA not only generalises the squared Pearson correlation to the multivariate case, it also allows it to operate in high-dimensional feature spaces, as commonly done in the kernel literature. This is useful because, under certain conditions (when K and L are characteristic kernels), HSIC can detect any existing dependence with high probability as the sample size increases (Gretton et al., 2005b). One can also consider the Uncentered Kernel Alignment (or simply KA) (Cristianini et al., 2002), which can then be seen as a similar generalisation, but of the univariate cosine similarity. To the best of our knowledge, KA and CKA in general have never been applied before to measure semantic similarity between sets of word embeddings; this work therefore seeks to introduce them as natural generalisations of squared Pearson's r and cosine similarity for such sets.",
"cite_spans": [
{
"start": 400,
"end": 423,
"text": "(Gretton et al., 2005b)",
"ref_id": "BIBREF22"
},
{
"start": 495,
"end": 521,
"text": "(Cristianini et al., 2002)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Correlations between Random Vectors",
"sec_num": "4.3"
},
{
"text": "We now empirically demonstrate the power of the methods and statistical analysis presented in Section 4, through a set of evaluations on the Semantic Textual Similarity (STS) task series 2012-2016 (Agirre et al., 2012, 2013, 2014, 2015, 2016; Cer et al., 2017). For methods involving pretrained word embeddings, we use fastText trained on Common Crawl (600B tokens), as previous evaluations have indicated that fastText vectors have uniformly the best performance on these tasks out of commonly used pretrained unsupervised word vectors (Conneau et al., 2017; Perone et al., 2018; Zhelezniak et al., 2019a,b). We provide experiments and significance analysis for additional word vectors in the Appendix. The success metric for the STS tasks is the Pearson correlation between the sentence similarity scores provided by human annotators and the scores generated by a candidate algorithm. Note that the dataset for the STS13 SMT subtask is no longer publicly available, so the mean Pearson correlation for STS13 reported in our experiments has been re-calculated accordingly. The code for our experiments builds on the SentEval toolkit (Conneau and Kiela, 2018) and is available on GitHub 1 . We first conduct a set of experiments to validate the observations of Sections 4.1 and 4.2 regarding the performance of cosine similarity and various univariate correlation coefficients when applied to pooled word vectors. These results are depicted in Figure 2, from which we make the following observations.",
"cite_spans": [
{
"start": 199,
"end": 219,
"text": "(Agirre et al., 2012",
"ref_id": "BIBREF3"
},
{
"start": 220,
"end": 242,
"text": "(Agirre et al., , 2013",
"ref_id": "BIBREF4"
},
{
"start": 243,
"end": 265,
"text": "(Agirre et al., , 2014",
"ref_id": "BIBREF1"
},
{
"start": 266,
"end": 288,
"text": "(Agirre et al., , 2015",
"ref_id": "BIBREF0"
},
{
"start": 289,
"end": 311,
"text": "(Agirre et al., , 2016",
"ref_id": "BIBREF2"
},
{
"start": 312,
"end": 329,
"text": "Cer et al., 2017)",
"ref_id": "BIBREF11"
},
{
"start": 606,
"end": 628,
"text": "(Conneau et al., 2017;",
"ref_id": "BIBREF14"
},
{
"start": 629,
"end": 649,
"text": "Perone et al., 2018;",
"ref_id": "BIBREF39"
},
{
"start": 650,
"end": 677,
"text": "Zhelezniak et al., 2019a,b)",
"ref_id": null
},
{
"start": 1203,
"end": 1228,
"text": "(Conneau and Kiela, 2018)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 1513,
"end": 1521,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "First, max- and min-pooled vectors consistently outperform mean-pooled vectors when all three representations are compared with Pearson correlation. We hypothesise that this is in part because max- and min-pooling remove the outliers (to which Pearson's r is very sensitive) from at least one tail of the distribution, whereas mean-pooled vectors have outliers in both tails. This outlier-removing property, however, cannot be taken as the sole explanation behind the excellent performance of max-pooled vectors, as max-pooling still tends to outperform mean-pooling when both are compared with correlations that are robust to outliers, as well as on word vectors that have very few outliers to begin with (e.g. word2vec).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
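The pooling-based similarities evaluated here reduce to a one-line recipe: pool along the word axis, then apply a univariate correlation coefficient. The following is a minimal NumPy sketch (our own illustrative code with hypothetical names, not the evaluation scripts; Spearman's \u03c1 is computed via ranks, ignoring ties):

```python
import numpy as np

def _rank(v):
    # Ranks of v; tie handling is omitted, as embedding values are continuous.
    return np.argsort(np.argsort(v))

def pooled_similarity(X, Y, pool=np.max, ranked=True):
    # Pool each sentence's (k, D) word-vector matrix down to one D-vector,
    # then apply a univariate correlation: Spearman's rho when ranked=True,
    # Pearson's r otherwise.
    x, y = pool(X, axis=0), pool(Y, axis=0)
    if ranked:
        x, y = _rank(x), _rank(y)
    return np.corrcoef(x, y)[0, 1]

rng = np.random.default_rng(1)
X = rng.normal(size=(4, 300))
Y = X + 0.1 * rng.normal(size=(4, 300))   # a lightly perturbed "paraphrase"
print(pooled_similarity(X, Y))                              # max-pool + Spearman
print(pooled_similarity(X, Y, pool=np.mean, ranked=False))  # mean-pool + Pearson
```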
{
"text": "In addition, the strong performance of rank correlation coefficients (Spearman's \u03c1 and Kendall's \u03c4) comes solely from their robustness to outliers, as clipping (winsorizing) the top and bottom 5% of the values and then proceeding with Pearson's r closes the gap almost completely. Consistently, on vectors with few outliers (word2vec), Pearson's r achieves the same performance as the rank correlations even without winsorization. However, unlike outliers, the positive (negative) skew of max- (min-) pooled vectors does not seem to hurt Pearson's r on STS tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
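The winsorization step described above is equally simple: clip both tails at the 5% quantiles and proceed with Pearson's r. A small NumPy sketch (our own hypothetical code, not the evaluation scripts):

```python
import numpy as np

def winsorize(v, frac=0.05):
    # Clip the bottom and top `frac` of values to the corresponding quantiles.
    lo, hi = np.quantile(v, [frac, 1 - frac])
    return np.clip(v, lo, hi)

def winsorized_pearson(x, y, frac=0.05):
    return np.corrcoef(winsorize(x, frac), winsorize(y, frac))[0, 1]

rng = np.random.default_rng(2)
x = rng.normal(size=300)
y = x + 0.3 * rng.normal(size=300)
x_out = x.copy()
x_out[0] = 60.0  # a single extreme coordinate depresses plain Pearson's r,
                 # while the winsorized version is barely affected
print(np.corrcoef(x_out, y)[0, 1], winsorized_pearson(x_out, y))
```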
{
"text": "Next, we conduct evaluations of the methods proposed in this work alongside other deep learning and set-based similarity measures for STS from the literature. The methods we compare are as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "\u2022 Deep representation approaches: BoW with ELMo embeddings (Peters et al., 2018), Skip-Thought (Kiros et al., 2015), InferSent (Conneau et al., 2017), Universal Sentence Encoder (both DAN and Transformer variants) (Cer et al., 2018), STN multitask embeddings (Subramanian et al., 2018), and BERT 12- and 24-layer models (Devlin et al., 2018).",
"cite_spans": [
{
"start": 59,
"end": 80,
"text": "(Peters et al., 2018)",
"ref_id": "BIBREF40"
},
{
"start": 96,
"end": 116,
"text": "(Kiros et al., 2015)",
"ref_id": "BIBREF28"
},
{
"start": 129,
"end": 151,
"text": "(Conneau et al., 2017)",
"ref_id": "BIBREF14"
},
{
"start": 206,
"end": 224,
"text": "(Cer et al., 2018)",
"ref_id": "BIBREF12"
},
{
"start": 252,
"end": 278,
"text": "(Subramanian et al., 2018)",
"ref_id": "BIBREF42"
},
{
"start": 312,
"end": 333,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "\u2022 Set-based similarity measures: Word Mover's Distance (WMD) (Kusner et al., 2015), soft-cardinality with Jaccard coefficient (Jimenez et al., 2012), DynaMax with Jaccard (Zhelezniak et al., 2019b), mean- and max-pooled word vectors with cosine similarity (COS), and mean-pooled word vectors with Spearman correlation (SPR) (Zhelezniak et al., 2019a).",
"cite_spans": [
{
"start": 61,
"end": 82,
"text": "(Kusner et al., 2015)",
"ref_id": "BIBREF30"
},
{
"start": 127,
"end": 149,
"text": "(Jimenez et al., 2012)",
"ref_id": "BIBREF24"
},
{
"start": 173,
"end": 199,
"text": "(Zhelezniak et al., 2019b)",
"ref_id": null
},
{
"start": 325,
"end": 350,
"text": "(Zhelezniak et al., 2019a",
"ref_id": "BIBREF51"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "\u2022 Proposed set-based approaches: max-pooled word vectors with Spearman correlation, CKA with linear kernel (also known as the RV coefficient), CKA with Gaussian kernel (median estimation for \u03c3^2), and CKA with distance kernel (distance correlation).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
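For the Gaussian-kernel variant, \u03c3^2 is commonly estimated via the median heuristic (the median of the pairwise squared distances). The sketch below (our own hypothetical code, assuming that heuristic) builds the corresponding Gram matrix over the dimension-rows:

```python
import numpy as np

def gaussian_gram(X):
    # Gram matrix for CKA with a Gaussian kernel, using the median heuristic
    # for sigma^2. X is (k, D): k word vectors; the kernel is evaluated
    # between the D dimension-rows, as in Section 4.3.
    Z = X.T                                                # (D, k) observations
    sq = np.sum((Z[:, None, :] - Z[None, :, :]) ** 2, axis=-1)
    sigma2 = np.median(sq[sq > 0])                         # median heuristic
    return np.exp(-sq / (2 * sigma2))                      # D x D Gram matrix

rng = np.random.default_rng(3)
K = gaussian_gram(rng.normal(size=(5, 300)))
print(K.shape)  # (300, 300)
```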
{
"text": "Note that for BERT we evaluated all pooling strategies available in bert-as-service (Xiao, 2018) applied to either the last or second-to-last layer and report results for the best-performing combination, which was mean-pooling on the last layer for both model sizes. Our results are presented in Table 1. We can clearly see that deep learning-based methods do not shine on STS tasks, while simple compositions of word vectors can perform extremely well, especially when an appropriate correlation coefficient is used as the similarity measure. Indeed, the performance of max-pooled vectors with Spearman correlation approaches or exceeds that of more expensive or offline methods like that of Arora et al. (2017), which performs PCA computations on the entire test set. Additionally, while multivariate correlation methods such as CKA are more computationally expensive than pooling-based approaches (see Table 2), they can provide a performance boost on some tasks, making the cost worthwhile depending on the application. Finally, we conducted an exploratory error analysis and found that many errors are due to well-known inherent weaknesses of word embeddings. For example, the proposed approaches heavily overestimate similarity when two sentences contain antonyms or when one sentence is the negation of the other. We illustrate these and other cases in the Appendix.",
"cite_spans": [
{
"start": 84,
"end": 96,
"text": "(Xiao, 2018)",
"ref_id": "BIBREF48"
},
{
"start": 719,
"end": 738,
"text": "Arora et al. (2017)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 297,
"end": 304,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 936,
"end": 943,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Approach and time complexity: MaxPool+SPR, O(nd + d log d); CKA, O(nd^2 + d^2); DynaMax, O(n^2 d); SoftCard, O(n^2 d); WMD, O(n^3 log n \u00b7 d); WMD (relaxed), O(n^2 d).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "In this work we investigate the application of statistical correlation coefficients to sets of word vectors as a method for computing semantic textual similarity (STS). This can be done either by pooling these word vectors and computing univariate correlations between the resulting representations, or by applying multivariate correlation coefficients to the sets of vectors directly. We provide further empirical evidence that outliers in word vector distributions disrupt the performance of set-based similarity metrics, as previously shown (Zhelezniak et al., 2019a). We also show working methods for solving or avoiding this issue through vector pooling operations, robust correlations, or winsorization. In addition, we find that pooling operations in conjunction with univariate correlation coefficients yield some of the strongest results on downstream STS tasks, while being computationally much more efficient than competing set-based methods. Our findings are supported by a combination of statistical analysis, practical examples and visualisations, and empirical evaluation on standard benchmark datasets.",
"cite_spans": [
{
"start": 539,
"end": 565,
"text": "(Zhelezniak et al., 2019a)",
"ref_id": "BIBREF51"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Both proposed families of approaches serve as strong baselines for future research into STS, as well as useful algorithms for the practitioner, being efficient and simple to implement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "We believe our findings speak to the efficacy of the statistical perspective on word embeddings, which we hope will encourage others to explore further implications of not only this particular framework, but also completely novel interpretations of textual representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "https://github.com/Babylonpartners/corrsim",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank the three anonymous reviewers for their useful feedback and suggestions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Semeval-2015 task 2: Semantic textual similarity, english, spanish and pilot on interpretability",
"authors": [
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Carmen",
"middle": [],
"last": "Banea",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Aitor",
"middle": [],
"last": "Gonzalez-Agirre",
"suffix": ""
},
{
"first": "Weiwei",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Inigo",
"middle": [],
"last": "Lopez-Gazpio",
"suffix": ""
},
{
"first": "Montse",
"middle": [],
"last": "Maritxalar",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "German",
"middle": [],
"last": "Rigau",
"suffix": ""
},
{
"first": "Larraitz",
"middle": [],
"last": "Uria",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 9th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "252--263",
"other_ids": {
"DOI": [
"10.18653/v1/S15-2045"
]
},
"num": null,
"urls": [],
"raw_text": "Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Inigo Lopez-Gazpio, Montse Maritxalar, Rada Mihalcea, German Rigau, Larraitz Uria, and Janyce Wiebe. 2015. Semeval-2015 task 2: Semantic tex- tual similarity, english, spanish and pilot on inter- pretability. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 252-263. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Semeval-2014 task 10: Multilingual semantic textual similarity",
"authors": [
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Carmen",
"middle": [],
"last": "Banea",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Aitor",
"middle": [],
"last": "Gonzalez-Agirre",
"suffix": ""
},
{
"first": "Weiwei",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "German",
"middle": [],
"last": "Rigau",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 8th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "81--91",
"other_ids": {
"DOI": [
"10.3115/v1/S14-2010"
]
},
"num": null,
"urls": [],
"raw_text": "Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2014. Semeval-2014 task 10: Multilingual semantic textual similarity. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 81-91. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Semeval-2016 task 1: Semantic textual similarity, monolingual and cross-lingual evaluation",
"authors": [
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Carmen",
"middle": [],
"last": "Banea",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Aitor",
"middle": [],
"last": "Gonzalez-Agirre",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "German",
"middle": [],
"last": "Rigau",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)",
"volume": "",
"issue": "",
"pages": "497--511",
"other_ids": {
"DOI": [
"10.18653/v1/S16-1081"
]
},
"num": null,
"urls": [],
"raw_text": "Eneko Agirre, Carmen Banea, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2016. Semeval-2016 task 1: Semantic textual similarity, monolingual and cross-lingual evaluation. In Proceedings of the 10th International Workshop on Semantic Evalua- tion (SemEval-2016), pages 497-511. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Semeval-2012 task 6: A pilot on semantic textual similarity",
"authors": [
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Aitor",
"middle": [],
"last": "Gonzalez-Agirre",
"suffix": ""
}
],
"year": 2012,
"venue": "*SEM 2012: The First Joint Conference on Lexical and Computational Semantics",
"volume": "1",
"issue": "",
"pages": "385--393",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eneko Agirre, Daniel Cer, Mona Diab, and Aitor Gonzalez-Agirre. 2012. Semeval-2012 task 6: A pi- lot on semantic textual similarity. In *SEM 2012: The First Joint Conference on Lexical and Compu- tational Semantics -Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012), pages 385- 393. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "*sem 2013 shared task: Semantic textual similarity",
"authors": [
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Aitor",
"middle": [],
"last": "Gonzalez-Agirre",
"suffix": ""
},
{
"first": "Weiwei",
"middle": [],
"last": "Guo",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity",
"volume": "1",
"issue": "",
"pages": "32--43",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eneko Agirre, Daniel Cer, Mona Diab, Aitor Gonzalez- Agirre, and Weiwei Guo. 2013. *sem 2013 shared task: Semantic textual similarity. In Second Joint Conference on Lexical and Computational Seman- tics (*SEM), Volume 1: Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity, pages 32-43. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A latent variable model approach to pmi-based word embeddings",
"authors": [
{
"first": "Sanjeev",
"middle": [],
"last": "Arora",
"suffix": ""
},
{
"first": "Yuanzhi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yingyu",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Tengyu",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Andrej",
"middle": [],
"last": "Risteski",
"suffix": ""
}
],
"year": 2016,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "4",
"issue": "",
"pages": "385--399",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, and Andrej Risteski. 2016. A latent variable model approach to pmi-based word embeddings. Transac- tions of the Association for Computational Linguis- tics, 4:385-399.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A Simple but Tough-to-Beat Baseline for Sentence Embeddings",
"authors": [
{
"first": "Sanjeev",
"middle": [],
"last": "Arora",
"suffix": ""
},
{
"first": "Yingyu",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Tengyu",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2017,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2017. A Simple but Tough-to-Beat Baseline for Sentence Embeddings. International Conference on Learning Representations.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A neural probabilistic language model",
"authors": [
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Rjean",
"middle": [],
"last": "Ducharme",
"suffix": ""
},
{
"first": "Pascal",
"middle": [],
"last": "Vincent",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Jauvin",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Machine Learning Research",
"volume": "3",
"issue": "",
"pages": "1137--1155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshua Bengio, Rjean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic lan- guage model. Journal of Machine Learning Re- search, 3:1137-1155.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "135--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics, 5:135-146.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Word importance-based similarity of documents metric (wisdm): Fast and scalable document similarity metric for analysis of scientific documents",
"authors": [
{
"first": "Viktor",
"middle": [],
"last": "Botev",
"suffix": ""
},
{
"first": "Kaloyan",
"middle": [],
"last": "Marinov",
"suffix": ""
},
{
"first": "Florian",
"middle": [],
"last": "Sch\u00e4fer",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 6th International Workshop on Mining Scientific Publications",
"volume": "",
"issue": "",
"pages": "17--23",
"other_ids": {
"DOI": [
"10.1145/3127526.3127530"
]
},
"num": null,
"urls": [],
"raw_text": "Viktor Botev, Kaloyan Marinov, and Florian Sch\u00e4fer. 2017. Word importance-based similarity of doc- uments metric (wisdm): Fast and scalable docu- ment similarity metric for analysis of scientific doc- uments. In Proceedings of the 6th International Workshop on Mining Scientific Publications, WOSP 2017, pages 17-23, New York, NY, USA. ACM.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Nasari: a novel approach to a semantically-aware representation of items",
"authors": [
{
"first": "Jos\u00e9",
"middle": [],
"last": "Camacho-Collados",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Taher Pilehvar",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "567--577",
"other_ids": {
"DOI": [
"10.3115/v1/N15-1059"
]
},
"num": null,
"urls": [],
"raw_text": "Jos\u00e9 Camacho-Collados, Mohammad Taher Pilehvar, and Roberto Navigli. 2015. Nasari: a novel ap- proach to a semantically-aware representation of items. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, pages 567-577. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Semeval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Inigo",
"middle": [],
"last": "Lopez-Gazpio",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)",
"volume": "",
"issue": "",
"pages": "1--14",
"other_ids": {
"DOI": [
"10.18653/v1/S17-2001"
]
},
"num": null,
"urls": [],
"raw_text": "Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez- Gazpio, and Lucia Specia. 2017. Semeval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of the 11th International Workshop on Semantic Eval- uation (SemEval-2017), pages 1-14. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Universal sentence encoder for english",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Yinfei",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Sheng-Yi",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Hua",
"suffix": ""
},
{
"first": "Nicole",
"middle": [],
"last": "Limtiaco",
"suffix": ""
},
{
"first": "Rhomni",
"middle": [],
"last": "St",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "John",
"suffix": ""
},
{
"first": "Mario",
"middle": [],
"last": "Constant",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Guajardo-Cespedes",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Tar",
"suffix": ""
},
{
"first": "Ray",
"middle": [],
"last": "Strope",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kurzweil",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "169--174",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Brian Strope, and Ray Kurzweil. 2018. Universal sentence encoder for english. In Proceedings of the 2018 Conference on Empirical Methods in Natu- ral Language Processing: System Demonstrations, pages 169-174. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Senteval: An evaluation toolkit for universal sentence representations",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018). European Language Resource Association",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau and Douwe Kiela. 2018. Senteval: An evaluation toolkit for universal sentence representa- tions. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018). European Language Resource Asso- ciation.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Supervised learning of universal sentence representations from natural language inference data",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Lo\u00efc",
"middle": [],
"last": "Barrault",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "670--680",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1070"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo\u00efc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Nat- ural Language Processing, pages 670-680. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Algorithms for learning kernels based on centered alignment",
"authors": [
{
"first": "Corinna",
"middle": [],
"last": "Cortes",
"suffix": ""
},
{
"first": "Mehryar",
"middle": [],
"last": "Mohri",
"suffix": ""
},
{
"first": "Afshin",
"middle": [],
"last": "Rostamizadeh",
"suffix": ""
}
],
"year": 2012,
"venue": "Journal of Machine Learning Research",
"volume": "13",
"issue": "",
"pages": "795--828",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Corinna Cortes, Mehryar Mohri, and Afshin Ros- tamizadeh. 2012. Algorithms for learning kernels based on centered alignment. Journal of Machine Learning Research, 13(Mar):795-828.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "On kernel-target alignment",
"authors": [
{
"first": "Nello",
"middle": [],
"last": "Cristianini",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Shawe-Taylor",
"suffix": ""
},
{
"first": "Andre",
"middle": [],
"last": "Elisseeff",
"suffix": ""
},
{
"first": "Jaz S",
"middle": [],
"last": "Kandola",
"suffix": ""
}
],
"year": 2002,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "367--373",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nello Cristianini, John Shawe-Taylor, Andre Elisseeff, and Jaz S Kandola. 2002. On kernel-target align- ment. In Advances in neural information processing systems, pages 367-373.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Learning semantic similarity for very short texts",
"authors": [
{
"first": "C",
"middle": [
"De"
],
"last": "Boom",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Van Canneyt",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Bohez",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Demeester",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Dhoedt",
"suffix": ""
}
],
"year": 2015,
"venue": "2015 IEEE International Conference on Data Mining Workshop (ICDMW)",
"volume": "",
"issue": "",
"pages": "1229--1234",
"other_ids": {
"DOI": [
"10.1109/ICDMW.2015.86"
]
},
"num": null,
"urls": [],
"raw_text": "C. De Boom, S. Van Canneyt, S. Bohez, T. Demeester, and B. Dhoedt. 2015. Learning semantic similar- ity for very short texts. In 2015 IEEE International Conference on Data Mining Workshop (ICDMW), pages 1229-1234.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Representation learning for very short texts using weighted word embedding aggregation",
"authors": [
{
"first": "Cedric",
"middle": [
"De"
],
"last": "Boom",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Van Canneyt",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Demeester",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Dhoedt",
"suffix": ""
}
],
"year": 2016,
"venue": "Pattern Recogn. Lett",
"volume": "80",
"issue": "",
"pages": "150--156",
"other_ids": {
"DOI": [
"10.1016/j.patrec.2016.06.012"
]
},
"num": null,
"urls": [],
"raw_text": "Cedric De Boom, Steven Van Canneyt, Thomas De- meester, and Bart Dhoedt. 2016. Representation learning for very short texts using weighted word embedding aggregation. Pattern Recogn. Lett., 80(C):150-156.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Learning generic sentence representations using convolutional neural networks",
"authors": [
{
"first": "Zhe",
"middle": [],
"last": "Gan",
"suffix": ""
},
{
"first": "Yunchen",
"middle": [],
"last": "Pu",
"suffix": ""
},
{
"first": "Ricardo",
"middle": [],
"last": "Henao",
"suffix": ""
},
{
"first": "Chunyuan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Carin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2390--2400",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1254"
]
},
"num": null,
"urls": [],
"raw_text": "Zhe Gan, Yunchen Pu, Ricardo Henao, Chunyuan Li, Xiaodong He, and Lawrence Carin. 2017. Learning generic sentence representations using convolutional neural networks. In Proceedings of the 2017 Con- ference on Empirical Methods in Natural Language Processing, pages 2390-2400. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Measuring statistical dependence with hilbert-schmidt norms",
"authors": [
{
"first": "Arthur",
"middle": [],
"last": "Gretton",
"suffix": ""
},
{
"first": "Olivier",
"middle": [],
"last": "Bousquet",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Smola",
"suffix": ""
},
{
"first": "Bernhard",
"middle": [],
"last": "Sch\u00f6lkopf",
"suffix": ""
}
],
"year": 2005,
"venue": "International conference on algorithmic learning theory",
"volume": "",
"issue": "",
"pages": "63--77",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arthur Gretton, Olivier Bousquet, Alex Smola, and Bernhard Sch\u00f6lkopf. 2005a. Measuring statistical dependence with hilbert-schmidt norms. In Inter- national conference on algorithmic learning theory, pages 63-77. Springer.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Kernel methods for measuring independence",
"authors": [
{
"first": "Arthur",
"middle": [],
"last": "Gretton",
"suffix": ""
},
{
"first": "Ralf",
"middle": [],
"last": "Herbrich",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Smola",
"suffix": ""
},
{
"first": "Olivier",
"middle": [],
"last": "Bousquet",
"suffix": ""
},
{
"first": "Bernhard",
"middle": [],
"last": "Sch\u00f6lkopf",
"suffix": ""
}
],
"year": 2005,
"venue": "Journal of Machine Learning Research",
"volume": "6",
"issue": "",
"pages": "2075--2129",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arthur Gretton, Ralf Herbrich, Alexander Smola, Olivier Bousquet, and Bernhard Sch\u00f6lkopf. 2005b. Kernel methods for measuring independence. Jour- nal of Machine Learning Research, 6(Dec):2075- 2129.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Learning distributed representations of sentences from unlabelled data",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1367--1377",
"other_ids": {
"DOI": [
"10.18653/v1/N16-1162"
]
},
"num": null,
"urls": [],
"raw_text": "Felix Hill, Kyunghyun Cho, and Anna Korhonen. 2016. Learning distributed representations of sentences from unlabelled data. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1367-1377. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Soft cardinality: A parameterized similarity function for text comparison",
"authors": [
{
"first": "Sergio",
"middle": [],
"last": "Jimenez",
"suffix": ""
},
{
"first": "Claudia",
"middle": [],
"last": "Becerra",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Gelbukh",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Sixth International Workshop on Semantic Evaluation, SemEval '12",
"volume": "1",
"issue": "",
"pages": "449--453",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sergio Jimenez, Claudia Becerra, and Alexander Gel- bukh. 2012. Soft cardinality: A parameterized sim- ilarity function for text comparison. In Proceedings of the First Joint Conference on Lexical and Com- putational Semantics -Volume 1: Proceedings of the Main Conference and the Shared Task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation, SemEval '12, pages 449- 453, Stroudsburg, PA, USA. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Text comparison using soft cardinality",
"authors": [
{
"first": "Sergio",
"middle": [],
"last": "Jimenez",
"suffix": ""
},
{
"first": "Fabio",
"middle": [],
"last": "Gonzalez",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Gelbukh",
"suffix": ""
}
],
"year": 2010,
"venue": "String Processing and Information Retrieval",
"volume": "",
"issue": "",
"pages": "297--302",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sergio Jimenez, Fabio Gonzalez, and Alexander Gel- bukh. 2010. Text comparison using soft cardinal- ity. In String Processing and Information Retrieval, pages 297-302, Berlin, Heidelberg. Springer Berlin Heidelberg.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Soft cardinality in semantic text processing: Experience of the SemEval international competitions",
"authors": [
{
"first": "Sergio",
"middle": [],
"last": "Jimenez",
"suffix": ""
},
{
"first": "Fabio",
"middle": [
"A"
],
"last": "Gonzalez",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Gelbukh",
"suffix": ""
}
],
"year": 2015,
"venue": "Polibits",
"volume": "51",
"issue": "",
"pages": "63--72",
"other_ids": {
"DOI": [
"10.17562/pb-51-9"
]
},
"num": null,
"urls": [],
"raw_text": "Sergio Jimenez, Fabio A. Gonzalez, and Alexander Gelbukh. 2015. Soft cardinality in semantic text processing: Experience of the SemEval international competitions. Polibits, 51:63-72.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Bag of tricks for efficient text classification",
"authors": [
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter",
"volume": "2",
"issue": "",
"pages": "427--431",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for efficient text classification. In Proceedings of the 15th Con- ference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Pa- pers, pages 427-431. Association for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Skip-Thought Vectors",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Kiros",
"suffix": ""
},
{
"first": "Yukun",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [
"R"
],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zemel",
"suffix": ""
},
{
"first": "Raquel",
"middle": [],
"last": "Urtasun",
"suffix": ""
},
{
"first": "Antonio",
"middle": [],
"last": "Torralba",
"suffix": ""
},
{
"first": "Sanja",
"middle": [],
"last": "Fidler",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "3294--3302",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-Thought Vectors. In Advances in Neural Information Processing Sys- tems, pages 3294-3302.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Similarity of neural network representations revisited",
"authors": [
{
"first": "Simon",
"middle": [],
"last": "Kornblith",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Norouzi",
"suffix": ""
},
{
"first": "Honglak",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2019,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simon Kornblith, Mohammad Norouzi, Honglak Lee, and Geoffrey Hinton. 2019. Similarity of neural net- work representations revisited. In ICML.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "From word embeddings to document distances",
"authors": [
{
"first": "Matt",
"middle": [
"J"
],
"last": "Kusner",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [
"I"
],
"last": "Kolkin",
"suffix": ""
},
{
"first": "Kilian",
"middle": [
"Q"
],
"last": "Weinberger",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 32Nd International Conference on International Conference on Machine Learning",
"volume": "37",
"issue": "",
"pages": "957--966",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matt J. Kusner, Yu Sun, Nicholas I. Kolkin, and Kil- ian Q. Weinberger. 2015. From word embeddings to document distances. In Proceedings of the 32Nd In- ternational Conference on International Conference on Machine Learning, volume 37 of ICML'15, pages 957-966. JMLR.org.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Distributed Representations of Sentences and Documents",
"authors": [
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2014,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "1188--1196",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quoc Le and Tomas Mikolov. 2014. Distributed Rep- resentations of Sentences and Documents. In In- ternational Conference on Machine Learning, pages 1188-1196.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Efficient Estimation of Word Representations in Vector Space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1301.3781"
]
},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- frey Dean. 2013a. Efficient Estimation of Word Representations in Vector Space. arXiv preprint arXiv:1301.3781.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013b. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in Neural Information Processing Systems, pages 3111-3119.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Linguistic Regularities in Continuous Space Word Representations",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Yih",
"middle": [],
"last": "Wen-Tau",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Zweig",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "746--751",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013c. Linguistic Regularities in Continuous Space Word Representations. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 746-751.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Vector-based models of semantic composition",
"authors": [
{
"first": "Jeff",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL-08: HLT",
"volume": "",
"issue": "",
"pages": "236--244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeff Mitchell and Mirella Lapata. 2008. Vector-based models of semantic composition. In Proceedings of ACL-08: HLT, pages 236-244. Association for Computational Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Multivariate Gaussian document representation from word embeddings for text categorization",
"authors": [
{
"first": "Giannis",
"middle": [],
"last": "Nikolentzos",
"suffix": ""
},
{
"first": "Polykarpos",
"middle": [],
"last": "Meladianos",
"suffix": ""
},
{
"first": "Francois",
"middle": [],
"last": "Rousseau",
"suffix": ""
},
{
"first": "Yannis",
"middle": [],
"last": "Stavrakas",
"suffix": ""
},
{
"first": "Michalis",
"middle": [],
"last": "Vazirgiannis",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "450--455",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Giannis Nikolentzos, Polykarpos Meladianos, Francois Rousseau, Yannis Stavrakas, and Michalis Vazir- giannis. 2017. Multivariate Gaussian document rep- resentation from word embeddings for text catego- rization. In Proceedings of the 15th Conference of the European Chapter of the Association for Compu- tational Linguistics: Volume 2, Short Papers, pages 450-455, Valencia, Spain. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Unsupervised learning of sentence embeddings using compositional n-gram features",
"authors": [
{
"first": "Matteo",
"middle": [],
"last": "Pagliardini",
"suffix": ""
},
{
"first": "Prakhar",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Jaggi",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "528--540",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1049"
]
},
"num": null,
"urls": [],
"raw_text": "Matteo Pagliardini, Prakhar Gupta, and Martin Jaggi. 2018. Unsupervised learning of sentence embed- dings using compositional n-gram features. In Pro- ceedings of the 2018 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, Vol- ume 1 (Long Papers), pages 528-540. Association for Computational Linguistics.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1162"
]
},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Evaluation of sentence embeddings in downstream and linguistic probing tasks",
"authors": [
{
"first": "Roberto",
"middle": [],
"last": "Christian S Perone",
"suffix": ""
},
{
"first": "Thomas S",
"middle": [],
"last": "Silveira",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Paula",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1806.06259"
]
},
"num": null,
"urls": [],
"raw_text": "Christian S Perone, Roberto Silveira, and Thomas S Paula. 2018. Evaluation of sentence embeddings in downstream and linguistic probing tasks. arXiv preprint arXiv:1806.06259.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2227--2237",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1202"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word repre- sentations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227- 2237. Association for Computational Linguistics.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "A rank-based similarity metric for word embeddings",
"authors": [
{
"first": "Enrico",
"middle": [],
"last": "Santus",
"suffix": ""
},
{
"first": "Hongmin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Emmanuele",
"middle": [],
"last": "Chersoni",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "552--557",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Enrico Santus, Hongmin Wang, Emmanuele Chersoni, and Yue Zhang. 2018. A rank-based similarity met- ric for word embeddings. In Proceedings of the 56th Annual Meeting of the Association for Compu- tational Linguistics (Volume 2: Short Papers), pages 552-557. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Learning general purpose distributed sentence representations via large scale multi-task learning",
"authors": [
{
"first": "Sandeep",
"middle": [],
"last": "Subramanian",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Trischler",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"J"
],
"last": "Pal",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sandeep Subramanian, Adam Trischler, Yoshua Ben- gio, and Christopher J Pal. 2018. Learning gen- eral purpose distributed sentence representations via large scale multi-task learning. In International Conference on Learning Representations.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Exploring asymmetric encoder-decoder structure for contextbased sentence representation learning",
"authors": [
{
"first": "Shuai",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Hailin",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Chen",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Zhaowen",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Virginia",
"middle": [
"R"
],
"last": "De Sa",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shuai Tang, Hailin Jin, Chen Fang, Zhaowen Wang, and Virginia R. de Sa. 2017. Exploring asymmetric encoder-decoder structure for context- based sentence representation learning. CoRR, abs/1710.10380.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "A document descriptor using covariance of word vectors",
"authors": [
{
"first": "Marwan",
"middle": [],
"last": "Torki",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "527--532",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marwan Torki. 2018. A document descriptor using covariance of word vectors. In Proceedings of the 56th Annual Meeting of the Association for Com- putational Linguistics (Volume 2: Short Papers), pages 527-532, Melbourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Model comparison for semantic grouping",
"authors": [
{
"first": "Francisco",
"middle": [],
"last": "Vargas",
"suffix": ""
},
{
"first": "Kamen",
"middle": [],
"last": "Brestnichki",
"suffix": ""
},
{
"first": "Nils",
"middle": [],
"last": "Hammerla",
"suffix": ""
}
],
"year": 2019,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Francisco Vargas, Kamen Brestnichki, and Nils Ham- merla. 2019. Model comparison for semantic group- ing. In ICML.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Towards Universal Paraphrastic Sentence Embeddings",
"authors": [
{
"first": "John",
"middle": [],
"last": "Wieting",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "Livescu",
"suffix": ""
}
],
"year": 2016,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2016. Towards Universal Paraphrastic Sen- tence Embeddings. In International Conference on Learning Representations.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Paranmt-50m: Pushing the limits of paraphrastic sentence embeddings with millions of machine translations",
"authors": [
{
"first": "John",
"middle": [],
"last": "Wieting",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "451--462",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Wieting and Kevin Gimpel. 2018. Paranmt-50m: Pushing the limits of paraphrastic sentence embed- dings with millions of machine translations. In Pro- ceedings of the 56th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), pages 451-462. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "bert-as-service",
"authors": [
{
"first": "Han",
"middle": [],
"last": "Xiao",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Han Xiao. 2018. bert-as-service. https:// github.com/hanxiao/bert-as-service.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Fuzzy bag-of-words model for document representation",
"authors": [
{
"first": "Rui",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Kezhi",
"middle": [],
"last": "Mao",
"suffix": ""
}
],
"year": 2017,
"venue": "IEEE Transactions on Fuzzy Systems",
"volume": "",
"issue": "",
"pages": "1--1",
"other_ids": {
"DOI": [
"10.1109/tfuzz.2017.2690222"
]
},
"num": null,
"urls": [],
"raw_text": "Rui Zhao and Kezhi Mao. 2017. Fuzzy bag-of-words model for document representation. IEEE Transac- tions on Fuzzy Systems, pages 1-1.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Decoding Decoders: Finding Optimal Representation Spaces for Unsupervised Similarity Tasks",
"authors": [
{
"first": "Vitalii",
"middle": [],
"last": "Zhelezniak",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Busbridge",
"suffix": ""
},
{
"first": "April",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"L"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Nils",
"middle": [
"Y"
],
"last": "Hammerla",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vitalii Zhelezniak, Dan Busbridge, April Shen, Samuel L. Smith, and Nils Y. Hammerla. 2018. De- coding Decoders: Finding Optimal Representation Spaces for Unsupervised Similarity Tasks. CoRR, abs/1805.03435.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Correlation coefficients and semantic textual similarity",
"authors": [
{
"first": "Vitalii",
"middle": [],
"last": "Zhelezniak",
"suffix": ""
},
{
"first": "Aleksandar",
"middle": [],
"last": "Savkov",
"suffix": ""
},
{
"first": "April",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Nils",
"middle": [
"Y"
],
"last": "Hammerla",
"suffix": ""
}
],
"year": 2019,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vitalii Zhelezniak, Aleksandar Savkov, April Shen, and Nils Y. Hammerla. 2019a. Correlation coefficients and semantic textual similarity. In NAACL-HLT.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Don't settle for average, go for the max: Fuzzy sets and max-pooled word vectors",
"authors": [
{
"first": "",
"middle": [],
"last": "Hammerla",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hammerla. 2019b. Don't settle for average, go for the max: Fuzzy sets and max-pooled word vectors. In International Conference on Learning Represen- tations.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Bar plots of Pearson correlation on STS tasks between human scores and the following set-based similarity metrics: Cosine similarity (COS), Pearson's r (PRS), Winsorized Pearson's r (WPRS), Spearman's \u03c1 (SPR)",
"num": null,
"uris": null,
"type_str": "figure"
},
"TABREF1": {
"text": "Mean Pearson correlation on STS tasks for Deep Learning and Set-based methods using fastText.",
"type_str": "table",
"content": "<table><tr><td>Methods proposed in this work are denoted with .</td></tr><tr><td>Values in bold indicate best results per task. Previous</td></tr><tr><td>results are taken from Perone et al. (2018), Subrama-</td></tr><tr><td>nian et al. (2018) and Zhelezniak et al. (2019b,a). \u2020 in-</td></tr><tr><td>dicates the only STS13 result (to our knowledge) that</td></tr><tr><td>includes the SMT subtask.</td></tr></table>",
"num": null,
"html": null
},
"TABREF2": {
"text": "Computational complexity of some of the setbased STS methods discussed in this paper. Here n is the sentence length and d is the dimensionality of the word embeddings.",
"type_str": "table",
"content": "<table/>",
"num": null,
"html": null
}
}
}
}