{
"paper_id": "E17-1016",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:50:53.642356Z"
},
"title": "Evaluation by Association: A Systematic Study of Quantitative Word Association Evaluation",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": "",
"affiliation": {
"laboratory": "Language Technology Lab, DTAL",
"institution": "University of Cambridge",
"location": {}
},
"email": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": "",
"affiliation": {},
"email": "dkiela@fb.com"
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": "",
"affiliation": {
"laboratory": "Language Technology Lab, DTAL",
"institution": "University of Cambridge",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Recent work on evaluating representation learning architectures in NLP has established a need for evaluation protocols based on subconscious cognitive measures rather than manually tailored intrinsic similarity and relatedness tasks. In this work, we propose a novel evaluation framework that enables large-scale evaluation of such architectures in the free word association (WA) task, which is firmly grounded in cognitive theories of human semantic representation. This evaluation is facilitated by the existence of large manually constructed repositories of word association data. In this paper, we (1) present a detailed analysis of the new quantitative WA evaluation protocol, (2) suggest new evaluation metrics for the WA task inspired by its direct analogy with information retrieval problems, (3) evaluate various state-of-the-art representation models on this task, and (4) discuss the relationship between WA and prior evaluations of semantic representation with well-known similarity and relatedness evaluation sets. We have made the WA evaluation toolkit publicly available.",
"pdf_parse": {
"paper_id": "E17-1016",
"_pdf_hash": "",
"abstract": [
{
"text": "Recent work on evaluating representation learning architectures in NLP has established a need for evaluation protocols based on subconscious cognitive measures rather than manually tailored intrinsic similarity and relatedness tasks. In this work, we propose a novel evaluation framework that enables large-scale evaluation of such architectures in the free word association (WA) task, which is firmly grounded in cognitive theories of human semantic representation. This evaluation is facilitated by the existence of large manually constructed repositories of word association data. In this paper, we (1) present a detailed analysis of the new quantitative WA evaluation protocol, (2) suggest new evaluation metrics for the WA task inspired by its direct analogy with information retrieval problems, (3) evaluate various state-of-the-art representation models on this task, and (4) discuss the relationship between WA and prior evaluations of semantic representation with well-known similarity and relatedness evaluation sets. We have made the WA evaluation toolkit publicly available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The quality of word representations in semantic models is often measured using intrinsic evaluations that capture particular types of relationships (typically semantic similarity and relatedness) between word pairs (Finkelstein et al., 2002; Schnabel et al., 2015; Tsvetkov et al., 2015, inter alia) .",
"cite_spans": [
{
"start": 215,
"end": 241,
"text": "(Finkelstein et al., 2002;",
"ref_id": "BIBREF21"
},
{
"start": 242,
"end": 264,
"text": "Schnabel et al., 2015;",
"ref_id": "BIBREF64"
},
{
"start": 265,
"end": 299,
"text": "Tsvetkov et al., 2015, inter alia)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Whereas the notions of semantic similarity and relatedness constitute key concepts in such evaluations, they are in fact vaguely defined (Batchkarov et al., 2016; Ettinger and Linzen, 2016) . The construction of ground truth evaluation sets that reflect these relations, such as SimLex-999 , SimVerb-3500 , MEN (Bruni et al., 2014) or Rare Words (Luong et al., 2013) , relies on manually constructed guidelines that trigger subjective human interpretation of the task at hand. This in turn introduces inter-annotator variability (Batchkarov et al., 2016) and does not account for the fact that human similarity judgements are asymmetric by nature (Tversky, 1977) .",
"cite_spans": [
{
"start": 137,
"end": 162,
"text": "(Batchkarov et al., 2016;",
"ref_id": "BIBREF3"
},
{
"start": 163,
"end": 189,
"text": "Ettinger and Linzen, 2016)",
"ref_id": "BIBREF17"
},
{
"start": 311,
"end": 331,
"text": "(Bruni et al., 2014)",
"ref_id": "BIBREF5"
},
{
"start": 346,
"end": 366,
"text": "(Luong et al., 2013)",
"ref_id": "BIBREF42"
},
{
"start": 529,
"end": 554,
"text": "(Batchkarov et al., 2016)",
"ref_id": "BIBREF3"
},
{
"start": 647,
"end": 662,
"text": "(Tversky, 1977)",
"ref_id": "BIBREF75"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "What is more, given that humans perform linguistic comparisons between concepts on a subconscious level (Kutas and Federmeier, 2011) , it is at least debatable whether current similarity/relatedness evaluation sets fully capture the implicit relational structure underlying human language representation and understanding.",
"cite_spans": [
{
"start": 104,
"end": 132,
"text": "(Kutas and Federmeier, 2011)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As evidenced by recent workshops on evaluation of semantic representations 1 , the community appears to recognise that current evaluation methods are inadequate. To fill in this gap, recent work has proposed using subconscious cognitive measures of semantic connection instead, as a proxy for measuring the ability of statistical models to tackle various problems in human language understanding (Ettinger and Linzen, 2016; S\u00f8gaard, 2016; Mandera et al., 2017) .",
"cite_spans": [
{
"start": 396,
"end": 423,
"text": "(Ettinger and Linzen, 2016;",
"ref_id": "BIBREF17"
},
{
"start": 424,
"end": 438,
"text": "S\u00f8gaard, 2016;",
"ref_id": "BIBREF68"
},
{
"start": 439,
"end": 460,
"text": "Mandera et al., 2017)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Motivated by these insights, this work proposes an evaluation framework based on the word association (WA) task, firmly rooted in and described by the psychology literature, e.g., Nelson et al. (2000) and 2 . Word associations, provided as simple (cue, response) concept pairs, are naturally asymmetric: they tend to be given as a repository of ranked lists of concepts col-lected as responses (i.e., assocations) given a target cue/query concept. The ranking of the response list is based on the WA strength between the cue and each generated response. WAs are directly tied to language use and the memory systems that support online linguistic processing (Till et al., 1988; Nelson et al., 1998) .",
"cite_spans": [
{
"start": 180,
"end": 200,
"text": "Nelson et al. (2000)",
"ref_id": "BIBREF55"
},
{
"start": 657,
"end": 676,
"text": "(Till et al., 1988;",
"ref_id": "BIBREF73"
},
{
"start": 677,
"end": 697,
"text": "Nelson et al., 1998)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We build our WA evaluation framework around a large repository of the University of South Florida (USF) association norms (Nelson et al., 2000; . After post-processing, the repository contains~5K queries, and~70,000 (cue, response) pairs, making it one of the largest semantic evaluation databases available (by contrast, the largest word pair scoring data sets in NLP, SimVerb and MEN, contain 3,500 and 3,000 word pairs respectively). This new resource enables comprehensive quantitative studies of WA and may be used to guide the future development of representation learning architectures.",
"cite_spans": [
{
"start": 122,
"end": 143,
"text": "(Nelson et al., 2000;",
"ref_id": "BIBREF55"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While parts of the USF data set have been used for evaluation in NLP before (Michelbacher et al., 2007; Silberer and Lapata, 2012; , inter alia), we conduct the first full study regarding the evaluation on the quantitative WA task. We compare a wide variety of different semantic representation models, discuss various evaluation metrics and analyse the links between word association and semantic similarity and relatedness. In summary, the main contributions of this paper are as follows: 3 (C1) We present an end-to-end evaluation framework for the WA task, and provide new evaluation metrics and detailed guidelines for evaluating semantic models on the WA task. (C2) We conduct a systematic study and comparison of current state-of-the-art representation learning architectures on the WA task. (C3) We present a systematic quantitative analysis of the connections between the models' performance on the subconscious WA task and their performance on benchmarking similarity and relatedness evaluation sets. measures (most notably semantic priming) and semantic relations encountered in vector space models (VSMs) (McDonald and Brew, 2004; Jones et al., 2006; Pad\u00f3 and Lapata, 2007; Herdagdelen et al., 2009) , suggesting that some of the implicit relation structure in the human brain is already reflected in current statistical models of meaning.",
"cite_spans": [
{
"start": 76,
"end": 103,
"text": "(Michelbacher et al., 2007;",
"ref_id": "BIBREF50"
},
{
"start": 104,
"end": 130,
"text": "Silberer and Lapata, 2012;",
"ref_id": "BIBREF67"
},
{
"start": 1117,
"end": 1142,
"text": "(McDonald and Brew, 2004;",
"ref_id": "BIBREF47"
},
{
"start": 1143,
"end": 1162,
"text": "Jones et al., 2006;",
"ref_id": "BIBREF32"
},
{
"start": 1163,
"end": 1185,
"text": "Pad\u00f3 and Lapata, 2007;",
"ref_id": "BIBREF57"
},
{
"start": 1186,
"end": 1211,
"text": "Herdagdelen et al., 2009)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "These findings encouraged Ettinger and Linzen (2016) to propose a preliminary evaluation framework based on semantic priming experiments (Meyer and Schvaneveldt, 1971 ). 4 They demonstrate the feasibility of such an evaluation using a subconscious language processing task. They use the online database of the Semantic Priming Project (SPP), which compiles priming data for over 6,000 word pairs.",
"cite_spans": [
{
"start": 26,
"end": 52,
"text": "Ettinger and Linzen (2016)",
"ref_id": "BIBREF17"
},
{
"start": 137,
"end": 166,
"text": "(Meyer and Schvaneveldt, 1971",
"ref_id": "BIBREF49"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Here, we go one step further and demonstrate that another subconscious language processing task, with much more available data, can also be used to evaluate representations. We construct an evaluation framework based on the USF free word association (WA) norms quantifying the strength of association between cue and response concepts for more than 70,000 concept pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Word Association WA has been a long-standing research topic in cognitive psychology, as evidenced by the following statement (Deese, 1966) :",
"cite_spans": [
{
"start": 125,
"end": 138,
"text": "(Deese, 1966)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Are there any more fascinating data in psychology than tables of association? (Deese, 1966) Word association still remains one of the fundamental questions in cognitive psychology, as emphasised by e.g. :",
"cite_spans": [
{
"start": 78,
"end": 91,
"text": "(Deese, 1966)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Association has been part of the theoretical armory of cognitive psychologists since Thomas Hobbes used the notion to account for the structure of our \"trayne of thoughts\" in 1651.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "These insights illustrate how WA can provide a useful benchmark for evaluating models of human semantic representation. WA norms are commonly used in constructing memory experiments (Dennis and Humphreys, 2001; Steyvers and Malmberg, 2003) , and statistics derived from them have been shown to be important in predicting cued recall 4 Semantic priming measures a response time with a human subject performing a simple language task (e.g., classifying strings into words vs. non-words). It was shown that human subjects are able to solve the task more quickly if the word to which they are responding is preceded by a semantically related word. The magnitude of the speed-up can be taken as the strength of relation between the two concepts. and recognition (Nelson et al., 1998) , and false memories (Roediger et al., 2001 ). 5",
"cite_spans": [
{
"start": 182,
"end": 210,
"text": "(Dennis and Humphreys, 2001;",
"ref_id": "BIBREF16"
},
{
"start": 211,
"end": 239,
"text": "Steyvers and Malmberg, 2003)",
"ref_id": "BIBREF71"
},
{
"start": 333,
"end": 334,
"text": "4",
"ref_id": null
},
{
"start": 757,
"end": 778,
"text": "(Nelson et al., 1998)",
"ref_id": null
},
{
"start": 800,
"end": 822,
"text": "(Roediger et al., 2001",
"ref_id": "BIBREF63"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "WA Evaluation Set: USF The USF norms data set (hereafter USF) is the largest database of free word association collected for English . It was generated by presenting human subjects with one of 5, 000 cue concepts and asking them to write the first word coming to their mind that is associated with that concept. Each cue concept was normed by at least 100 participants, resulting in a set of associates (or responses) for each cue, for a total of \u223c72,000 (cue, response) pairs. A sample of the USF data is presented in Tab. 1. The data are accessible online. 6 For each such pair, the proportion of participants that produced the response w r when presented with cue word w c can be used as a proxy for the strength of association between the two words (FSG in Tab. 1). BSG denotes the backward association strength, when the roles of a cue and a response are reversed, shows that the WA relation is inherently asymmetrical.",
"cite_spans": [
{
"start": 559,
"end": 560,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "5 From another viewpoint, the WA evaluation aims to answer a different question than a typical intrinsic evaluation on data sets such as SimLex-999, MEN, WordSim-353, or SimVerb-3500. The goal of the latter is to assess the quality of learned text representations as a proxy towards downstream NLP tasks. The goal of the former is to assess the capability of representation learning and NLP architectures to help in advancing our understanding and modeling of human cognitive processes (occurring on a sub-conscious level), while at the same time it could still be used as a proxy evaluation in NLP.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "6 http://w3.usf.edu/FreeAssociation/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Terminology W c = {w c 1 , . . . , w c i , . . . , w c |W C | } denotes a set of |W c | cue or normed words (more generally, concepts) in the evaluation set. For each cue word w c i , the data set contains a ranked list of concepts or responses R i sorted according to the strength of forward association, from cue to response (i.e., the FSG field in Tab. 1). The list R i contains entries of the format w r,j : fsg i,j , where w r,j is the j th most associated concept in the ranked list, and fsg i,j is the accompanying strength of forward association between cue w c i and response w r,j . Let R g i refer to the ground truth ranked list for w c i , which contains only responses where fsg i,j > 0 in the USF data, and R s i to the ranked list retrieved by an automatic system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Protocol",
"sec_num": "3"
},
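The ground-truth structure described in the Terminology paragraph can be sketched in a few lines. This is an illustrative reading of the notation only; the toy (cue, response, fsg) triples below are invented, not actual USF values.

```python
# Build R_g^i lists: for each cue, keep responses with fsg > 0,
# sorted by descending forward association strength (the FSG field).

def build_ground_truth(pairs):
    """pairs: iterable of (cue, response, fsg) tuples, as in the USF data."""
    ground_truth = {}
    for cue, response, fsg in pairs:
        if fsg > 0:  # R_g^i keeps only responses with fsg_{i,j} > 0
            ground_truth.setdefault(cue, []).append((response, fsg))
    for cue in ground_truth:
        # rank each list by descending association strength
        ground_truth[cue].sort(key=lambda rf: rf[1], reverse=True)
    return ground_truth

# toy illustration (invented numbers)
usf_like = [("lunch", "dinner", 0.30), ("lunch", "food", 0.25),
            ("lunch", "box", 0.05), ("lunch", "tuna", 0.0)]
gt = build_ground_truth(usf_like)
```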
{
"text": "The vocabulary or search space from which responses for all cues are drawn is labeled V r . Note that V r may also contain words from W c and that V r may contain words that do not occur in any of the ground truth lists R g i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Protocol",
"sec_num": "3"
},
{
"text": "Why Evaluate on Word Association? A standard evaluation protocol with word pair scoring evaluation sets such as SimLex-999 or MEN is to compute Spearman's \u03c1 correlations between the ranking obtained by an automatic system and the ground truth ranking. This protocol, however, is not directly applicable to the USF test data. First, the evaluated relation of WA is asymmetric, and the pairs (X, Y ) and (Y, X) may differ dramatically in their WA scores (see the difference in FSG and BSG values from Tab. 1). Second, instead of one global list of pairs, the data comprises a series of ranked lists conditioned on the cue/normed word w c (see Tab. 1 again). Finally, unlike with SimLex-999 or MEN scores where it is difficult to interpret \"what a similarity/relatedness of 7.69 exactly means\" (Batchkarov et al., 2016; Avraham and Goldberg, 2016) , the USF FSG scores have a direct meaningful interpretation (i.e., F SG = #P/#G).",
"cite_spans": [
{
"start": 791,
"end": 816,
"text": "(Batchkarov et al., 2016;",
"ref_id": "BIBREF3"
},
{
"start": 817,
"end": 844,
"text": "Avraham and Goldberg, 2016)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Protocol",
"sec_num": "3"
},
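The direct interpretation FSG = #P/#G mentioned above is simple to compute: the forward strength is the number of participants who produced a given response (#P) divided by the number of participants normed on the cue (#G). The counts in the example are invented.

```python
# FSG = #P / #G: the directly interpretable USF association strength.

def fsg(num_producing_response, num_participants):
    """Proportion of participants producing response w_r for cue w_c."""
    return num_producing_response / num_participants

# hypothetical example: 43 of 143 participants answered "dinner" to "lunch"
strength = fsg(43, 143)
```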
{
"text": "To fully capture all aspects of the ground truth USF data set, an evaluation protocol should ideally be based not only on response rankings, but also on the actual scores, i.e., the association strength. In this paper, we propose and investigate two different families of evaluation metrics on the USF data: Sect. 3.1 discusses rank correlation evaluation metrics inspired by recent work on the evaluation of vector space models in distributional semantics (Bruni et al., 2014; , inter alia). Sect. 3.2 draws inspiration from research on evaluation in information retrieval (IR). We show that the problem of evaluating USF association lists may be naturally framed as an ad-hoc IR task (Manning et al., 2008) . This enables the application of standard IR evaluation methodology.",
"cite_spans": [
{
"start": 457,
"end": 477,
"text": "(Bruni et al., 2014;",
"ref_id": "BIBREF5"
},
{
"start": 686,
"end": 708,
"text": "(Manning et al., 2008)",
"ref_id": "BIBREF46"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Protocol",
"sec_num": "3"
},
{
"text": "Averaged Standard Spearman's Correlation The first protocol, labeled \u03c1-std, first computes the standard Spearman's \u03c1 correlation between R g i and R s i . The system list R s i is pruned so that it contains only those items that also occur in R g i . The two lists are then correlated to obtain the score \u03c1 i for cue w c i . Following that, the correlation scores are averaged. First, we apply the Fisher z-transformation (Fisher, 1915) and then average over the transformed scores:",
"cite_spans": [
{
"start": 422,
"end": 436,
"text": "(Fisher, 1915)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Rank Correlation Evaluation",
"sec_num": "3.1"
},
{
"text": "zi = 1 2 ln 1 + \u03c1i 1 \u2212 \u03c1i = arctanh(\u03c1i) (1) zavg = |W c | i=1 zi (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rank Correlation Evaluation",
"sec_num": "3.1"
},
{
"text": "The final output score is obtained by applying the inverse z-transformation on z avg :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rank Correlation Evaluation",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03c1avg = tanh(zavg)",
"eq_num": "(3)"
}
],
"section": "Rank Correlation Evaluation",
"sec_num": "3.1"
},
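The \u03c1-std protocol of Eqs. (1)-(3) can be sketched as follows. To stay dependency-free, the sketch includes a simple rank-based Spearman (assuming no ties); in practice a library routine would be used, and the per-cue lists here are invented.

```python
import math

def spearman(xs, ys):
    """Spearman's rho for tie-free score lists (toy implementation)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = math.sqrt(sum((a - mx) ** 2 for a in rx) *
                    sum((b - my) ** 2 for b in ry))
    return num / den

def rho_std(per_cue_rhos):
    # Eq. (1): z_i = arctanh(rho_i); Eq. (2): average the z_i;
    # Eq. (3): transform back with tanh.
    z_avg = sum(math.atanh(r) for r in per_cue_rhos) / len(per_cue_rhos)
    return math.tanh(z_avg)
```

Averaging in z-space rather than averaging the correlations directly reduces the bias introduced by the skewed sampling distribution of \u03c1.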
{
"text": "Averaged Weighted Spearman's Correlation The previous protocol treats all ranks equally, despite the fact that the system should be rewarded more for getting the strongest responses correct (and penalised when failing to do so). Therefore, we also experiment with weighted rank correlation measures, which weigh the distance between two ranks, and assign more importance to higher ranks (i.e., in our setting, to stronger associates). Several weighted correlation metrics have been proposed (Blest, 2000; Pinto da Costa and Soares, 2005; Dancelli et al., 2013; Pinto da Costa, 2015) . We show results with the weighted Spearman's correlation (further labelled \u03c1-w) from Pinto da Costa (2015). 7 Let us denote Q 1 = [Q 1,1 , Q 1,2 , . . . , Q 1,n ] and Q 1 = [Q 2,1 , Q 2,2 , . . . , Q 2,n ] two vectors of ranks obtained on a sample of size n. The weighted rank correlation \u03c1 between the vectors is computed as:",
"cite_spans": [
{
"start": 491,
"end": 504,
"text": "(Blest, 2000;",
"ref_id": "BIBREF4"
},
{
"start": 505,
"end": 537,
"text": "Pinto da Costa and Soares, 2005;",
"ref_id": "BIBREF60"
},
{
"start": 538,
"end": 560,
"text": "Dancelli et al., 2013;",
"ref_id": "BIBREF10"
},
{
"start": 561,
"end": 582,
"text": "Pinto da Costa, 2015)",
"ref_id": "BIBREF61"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Rank Correlation Evaluation",
"sec_num": "3.1"
},
{
"text": "1\u2212 6 n i=1 (Q1,i \u2212 Q2,i)((n \u2212 Q1,i + 1) + (n \u2212 Q2,i + 1)) n 4 + n 3 \u2212 n 2 \u2212 n (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rank Correlation Evaluation",
"sec_num": "3.1"
},
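Eq. (4) can be written directly in code. A sketch under the assumption that ranks are 1..n with no ties; the rank vectors in the test are invented.

```python
# Weighted Spearman (Pinto da Costa, 2015): disagreements at the top
# ranks are weighted more heavily via the (n - Q + 1) terms.

def weighted_spearman(q1, q2):
    """q1, q2: rank vectors over 1..n (no ties)."""
    n = len(q1)
    num = 6 * sum((a - b) ** 2 * ((n - a + 1) + (n - b + 1))
                  for a, b in zip(q1, q2))
    return 1 - num / (n ** 4 + n ** 3 - n ** 2 - n)
```

Identical rankings give 1 and fully reversed rankings give -1, matching the behaviour of the unweighted coefficient at the extremes.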
{
"text": "We refer the interested reader to the relevant literature (Pinto da Costa, 2015) for further details, theoretical implications and property proofs related to Eq. (4). \u03c1 i scores for all cue words W c are then obtained using Eq. 4, and the averaged score \u03c1 avg is computed as before, see Eq.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rank Correlation Evaluation",
"sec_num": "3.1"
},
{
"text": "(1)-Eq. (3). While the two metrics are intuitive and capture the ability of models to correctly rank (a subset of) associates/responses, note that they have deficiencies. They only evaluate the rankings of words occurring in R g i , which effectively reduces the search space V r to the small subset {w 1 , . . . , w |R g i | } \u2282 V r . This effectively means that the final score simply ignores incorrect responses that are ranked highly by a system but that do not occur in R g i . It also does not take into account the actual strength of association.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rank Correlation Evaluation",
"sec_num": "3.1"
},
{
"text": "Intuition Another set of evaluation metrics is inspired by the resemblance of the USF data structure to the typical output of ad-hoc IR systems (Manning et al., 2008; Pound et al., 2010) . That is, each cue word w c can be thought of as an input query issued against some target concept collection V r , where the goal of our association retrieval system is to rank items from the target collection according to their relevance (i.e., their association strength) to the issued query. The output of the system is the ranked list R s i of length |V r |, with ground truth relevance assessments provided in R g i . MRR and MAP The first two metrics assume non-weighted or binary relevance: the retrieved response is either relevant to the issued cue (labeled 1) or it is non-relevant (0). We assume that all responses found in the ground truth lists R g i where f sg i,j > t are relevant responses, where t is a threshold. 8 We label this reduced set of relevant responses RR g i . The most lenient evaluation metric is Mean Reciprocal Rank (MRR) (Voorhees, 1999; Craswell, 2009) . The reciprocal rank of a query response is the multiplicative inverse of the rank of the first relevant answer, and the final score is then averaged over all |W c | queries/cues. More formally:",
"cite_spans": [
{
"start": 144,
"end": 166,
"text": "(Manning et al., 2008;",
"ref_id": "BIBREF46"
},
{
"start": 167,
"end": 186,
"text": "Pound et al., 2010)",
"ref_id": "BIBREF62"
},
{
"start": 920,
"end": 921,
"text": "8",
"ref_id": null
},
{
"start": 1044,
"end": 1060,
"text": "(Voorhees, 1999;",
"ref_id": "BIBREF76"
},
{
"start": 1061,
"end": 1076,
"text": "Craswell, 2009)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "IR-Inspired Evaluation",
"sec_num": "3.2"
},
{
"text": "M RR(W c ) = 1 |W c | |W c | i=1 1 ranki (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IR-Inspired Evaluation",
"sec_num": "3.2"
},
{
"text": "where rank i is the rank position of the first relevant response (i.e., the first response found in the set RR g i ) for the cue word w c i . Since MRR cannot assess multiple correct answers and their ranking in the retrieved list, an alternative metric is Mean Average Precision (MAP):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IR-Inspired Evaluation",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "M AP (W c ) = 1 |W c | |W c | i=1 AP (w c i ) (6) AP (w c i ) = N k=1 P k \u2022 irel k |RR g i |",
"eq_num": "(7)"
}
],
"section": "IR-Inspired Evaluation",
"sec_num": "3.2"
},
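The binary-relevance metrics of Eqs. (5)-(7) can be sketched as below: MRR uses only the rank of the first relevant response, while AP averages precision at each relevant hit over |RR_g^i|, so non-retrieved relevant responses implicitly score 0. The ranked lists and relevance sets in the test are invented.

```python
# MRR: mean over cues of 1/rank of the first relevant response (Eq. 5).
def mrr(ranked_lists, relevant_sets):
    total = 0.0
    for ranked, relevant in zip(ranked_lists, relevant_sets):
        for pos, resp in enumerate(ranked, start=1):
            if resp in relevant:
                total += 1.0 / pos  # reciprocal rank of first relevant hit
                break
    return total / len(ranked_lists)

# AP: precision at each relevant hit, averaged over all relevant
# responses, including the non-retrieved ones (Eq. 7).
def average_precision(ranked, relevant):
    hits, score = 0, 0.0
    for pos, resp in enumerate(ranked, start=1):
        if resp in relevant:  # irel_k gates the contribution of P_k
            hits += 1
            score += hits / pos  # precision at cut-off pos
    return score / len(relevant)

# MAP: mean of AP over all cues (Eq. 6).
def mean_ap(ranked_lists, relevant_sets):
    return sum(average_precision(r, s)
               for r, s in zip(ranked_lists, relevant_sets)) / len(ranked_lists)
```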
{
"text": "Here, AP (w c i ) denotes Average Precision for query/cue w c i , N \u2264 |V r | denotes the number of responses retrieved by the system. P k is the precision at cut-off k in the list, and irel k is an indicator function which 'turns on' only if the response at rank k is the relevant response (i.e., present in RR g i ). The average is computed over all relevant responses, and the non-retrieved relevant responses from V r get a precision score of 0. N << |V r | is typically used (e.g., standard values are N = 100 or N = 1000) to reduce the execution time of the evaluation procedure, since it is expected that a good retrieval system should obtain a majority of relevant responses in the first N responses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IR-Inspired Evaluation",
"sec_num": "3.2"
},
{
"text": "Compared to measures from Sect. 3.1, MRR and MAP are better estimators of the model's ability to capture word association, as they operate over the entire search space V r for each cue word. This effectively means that systems get rewarded if they are able to consistently rank relevant responses higher than non-relevant responses. However, these metrics still rely on binary non-weighted relevance judgements, and are therefore unable to reward models that rank highly relevant responses (i.e., strongly associated responses, see Tab. 1) higher than weakly relevant responses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IR-Inspired Evaluation",
"sec_num": "3.2"
},
{
"text": "NDCG@k In other words, the most expressive evaluation metric should be able to distinguish that cue-response pairs such as (lunch, dinner) and (lunch, food) should be ranked higher than weakly associated pairs such as (lunch, box) or (lunch, sandwich). In addition, the metric should still reward models that rank relevant responses higher than non-relevant ones.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IR-Inspired Evaluation",
"sec_num": "3.2"
},
{
"text": "An IR metric which takes all these aspects into account is Discounted Cumulative Gain (DCG) (J\u00e4rvelin and Kek\u00e4l\u00e4inen, 2002) . DCG operates with weighted relevance values: in the USF scenario, these are forward association strengths, i.e., scores fsg i,j . The main idea behind using DCG is that highly relevant responses appearing lower in a ranked list should be penalised. The penalty is implemented by reducing the weighted relevance value logarithmically proportional to the position of the particular response. We opt for a more recent variant of DCG which puts more emphasis on retrieving relevant responses (Burges et al., 2005) . DCG@k, the DCG score accumulated at a particular rank position k is computed as follows:",
"cite_spans": [
{
"start": 92,
"end": 123,
"text": "(J\u00e4rvelin and Kek\u00e4l\u00e4inen, 2002)",
"ref_id": "BIBREF31"
},
{
"start": 614,
"end": 635,
"text": "(Burges et al., 2005)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "IR-Inspired Evaluation",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "DCG@k = k i=1 2 wrel i \u2212 1 log 2 (i + 1)",
"eq_num": "(8)"
}
],
"section": "IR-Inspired Evaluation",
"sec_num": "3.2"
},
{
"text": "wrel i is the graded relevance of the response at rank i given by the ground truth data, i.e., fsg i,j if the cue-response pair occurs in R g i , or 0 otherwise. To make results comparable across different queries, a normalised variant of DCG is typically used. First, all relevant responses are sorted by their graded relevance value, producing the maximum possible DCG at each position k. The score of the ideal ranking at rank k is called Ideal DCG (IDCG@k). NDCG@k for a single query is then:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IR-Inspired Evaluation",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "N DCG@k = DCG@k IDCG@k",
"eq_num": "(9)"
}
],
"section": "IR-Inspired Evaluation",
"sec_num": "3.2"
},
{
"text": "Finally, the mean NDCG@k is produced for the entire collection W c by averaging over all single NDCG@k values. In all experiments we rely on a standard choice for k: NDCG@100, while similar trends are observed with NDCG@10.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IR-Inspired Evaluation",
"sec_num": "3.2"
},
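Eqs. (8)-(9) can be sketched as follows, with the graded relevance wrel_i taken from the fsg scores (0 for responses absent from R_g^i). The toy relevance values in the test are invented.

```python
import math

def dcg_at_k(graded, k):
    """Eq. (8): graded[i] is wrel of the response ranked at position i+1."""
    return sum((2 ** rel - 1) / math.log2(i + 2)  # log2(rank + 1) discount
               for i, rel in enumerate(graded[:k]))

def ndcg_at_k(graded, k):
    """Eq. (9): normalise by the DCG of the ideal (sorted) ranking."""
    ideal = sorted(graded, reverse=True)  # ideal ordering gives IDCG@k
    idcg = dcg_at_k(ideal, k)
    return dcg_at_k(graded, k) / idcg if idcg > 0 else 0.0
```

A system that returns the responses already sorted by association strength scores 1.0; demoting a strong associate below a weak one lowers the score, which is exactly the sensitivity MRR and MAP lack.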
{
"text": "LDA-Based Approach First, we evaluate an approach based on latent topic modeling, rooted in the psychology literature (Steyvers et al., 2004; . 9 The following quantitative model of word association has been proposed :",
"cite_spans": [
{
"start": 118,
"end": 141,
"text": "(Steyvers et al., 2004;",
"ref_id": "BIBREF72"
},
{
"start": 144,
"end": 145,
"text": "9",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup and Models",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (w r |w c ) = M i=1 P (w r |toi)P (toi|w c )",
"eq_num": "(10)"
}
],
"section": "Experimental Setup and Models",
"sec_num": "4"
},
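The LDA-assoc scoring of Eq. (10) marginalises over the M latent topics: a candidate response is scored by summing P(w_r | to_i) P(to_i | w_c) across topics. A minimal sketch; the tiny topic distributions and cue below are invented, whereas a real run would use the per-topic word distributions of a trained LDA model.

```python
def lda_assoc(p_word_given_topic, p_topic_given_cue, response):
    """Eq. (10): P(w_r | w_c) = sum_i P(w_r | to_i) * P(to_i | w_c).

    p_word_given_topic: list of M dicts, one per topic (word -> prob);
    p_topic_given_cue: list of M probabilities P(to_i | w_c)."""
    return sum(p_wt.get(response, 0.0) * p_tc
               for p_wt, p_tc in zip(p_word_given_topic, p_topic_given_cue))

# toy example with M = 2 topics and a hypothetical cue "lunch"
topics = [{"dinner": 0.4, "food": 0.3}, {"box": 0.2, "food": 0.1}]
p_topic = [0.7, 0.3]  # hypothetical P(to_i | cue)
score = lda_assoc(topics, p_topic, "food")  # 0.3*0.7 + 0.1*0.3
```

Ranking every word in V_r by this score yields the system list R_s^i for the cue.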
{
"text": "where w c is a cue word, w r \u2208 V r any concept from the search space, and to i is the i th latent topic from the set of M topics induced from the corpus data (using LDA). We label this model LDA-assoc. The probability scores P (w r |to i ) select words that are highly descriptive for each particular topic. P (to i |w c ) scores are computed as in prior work, by assuming topic independence and applying Bayes' rule on the LDA output per-topic word distributions P (\u2022|to i ) Vuli\u0107 and Moens, 2013) . 10 We train LDA with 1,000 topics using suggested parameters .",
"cite_spans": [
{
"start": 476,
"end": 498,
"text": "Vuli\u0107 and Moens, 2013)",
"ref_id": "BIBREF77"
},
{
"start": 501,
"end": 503,
"text": "10",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup and Models",
"sec_num": "4"
},
{
"text": "We evaluate the best performing reduced count-based model from . We label this model count-ppmi-500d. 11 For a more detailed description of the model's training data and setup we refer the reader to the original work and supplementary material.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Count-Based Models",
"sec_num": null
},
{
"text": "Vector Space Models We also compare the performance of prominent representation models on the WA USF task. We include: (1) unsupervised models that learn from distributional information in text, including Glove (Pennington et al., 2014) with d = 50 and d = 300 dimensions (glove-6B-50d and glove-6B-300d), the 300-dimensional skip-gram negative-sampling (SGNS) vectors (Mikolov et al., 2013) with various contexts (bow = bag-of-words; deps = dependency contexts) as in (Levy and Goldberg, 2014) and (Schwartz et al., 2015) (sgns-pw-bow-w2, sgns-pw-bow-w5, sgns-pw-deps, sgns-8b-bow-w2), and the symmetric-pattern based vectors by Schwartz et al. (2015) (sympat-500d); (2) models that rely on hand-crafted linguistic resources or curated knowledge bases. Here, we use vectors fine-tuned to a paraphrase database (paragram-25d, 10 The generative model closely resembles the actual process in the human brain ) - when we generate responses, we first tend to associate the cue word with a related semantic/cognitive concept, i.e., a latent topic (the factor P(to_i|w_c)), and then, after establishing the concept, we output a list of words that we consider the most prominent/descriptive for that concept (words with high scores in the factor P(w_r|to_i)).",
"cite_spans": [
{
"start": 368,
"end": 390,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF51"
},
{
"start": 468,
"end": 493,
"text": "(Levy and Goldberg, 2014)",
"ref_id": "BIBREF41"
},
{
"start": 498,
"end": 520,
"text": "(Schwartz et al., 2015",
"ref_id": "BIBREF66"
},
{
"start": 825,
"end": 827,
"text": "10",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Count-Based Models",
"sec_num": null
},
{
"text": "11 We have also experimented with simple count-based asymmetric association measures proposed by Michelbacher et al. (2007) , estimated using the same corpus as the countppmi-500d model. We do not report the results with these measures, as they show a very poor performance when compared to all other models in our comparison.",
"cite_spans": [
{
"start": 97,
"end": 123,
"text": "Michelbacher et al. (2007)",
"ref_id": "BIBREF50"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Count-Based Models",
"sec_num": null
},
{
"text": "paragram-300d, (Wieting et al., 2015) ) further refined using linguistic constraints (paragram+cf-300d, (Mrk\u0161i\u0107 et al., 2016) ). USF Data Processing and Parameters Only USF pairs where both words are single-word expressions were retained; the rest were discarded. This yields 4,992 single-word queries in total. The total number of finally retained USF pairs is \u2248 70,000. Note that this evaluation set is an order of magnitude larger than current benchmark word pair scoring datasets such as MEN (3,000 word pairs in total), SimVerb-3500 (3,500), SimLex-999 (999), and Rare Words (2,034), and thus allows for a truly comprehensive evaluation of quantitative WA models. Only responses generated by at least 3 human subjects in each list of responses are taken as relevant in all experiments (see Foot. 7 in Sect. 3.2); all other (cue, response) pairs and pairs not present in the USF data are considered non-relevant. 12",
"cite_spans": [
{
"start": 15,
"end": 37,
"text": "(Wieting et al., 2015)",
"ref_id": "BIBREF79"
},
{
"start": 104,
"end": 125,
"text": "(Mrk\u0161i\u0107 et al., 2016)",
"ref_id": "BIBREF52"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Count-Based Models",
"sec_num": null
},
{
"text": "Computational complexity is not an issue for standard semantic benchmarks such as SimLex-999 or MEN: these data sets require only N_gt similarity computations in total, where N_gt is the number of word pairs in each benchmark (999 or 3,000). However, complexity plays a major role in the USF evaluation: the system has to compute |W_c| \u2022 |V_r| similarity scores, where |W_c| \u2248 5,000, and |V_r| is large for large vocabularies (typically covering > 100K words). In addition, each list of |V_r| scores has to be sorted according to the WA strength: this means that the complexity is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exp. I: Making the Evaluation Tractable",
"sec_num": null
},
{
"text": "O(|W_c| \u2022 (|V_r| + |V_r| log |V_r|)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exp. I: Making the Evaluation Tractable",
"sec_num": null
},
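{
"text": "To make these magnitudes concrete (a back-of-the-envelope estimate using the paper's own figures): with |W_c| \u2248 5,000 cues and |V_r| = 100,000 candidates, the evaluation requires on the order of 5 \u00d7 10^8 similarity computations before sorting; restricting V_r to the 10,070 USF words cuts this to roughly 5 \u00d7 10^7, i.e., about a tenfold reduction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exp. I: Making the Evaluation Tractable",
"sec_num": null
},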
{
"text": "Since this is prohibitively expensive, our solution is to restrict the search space V_r only to words (both cues and responses) occurring in USF: |V_r| = 10,070. 13 Besides the gains in evaluation efficiency, when using the USF vocabulary all models operate over exactly the same search space: Table 3: Results on the USF WA task using different evaluation metrics proposed in Sect. 3. V_r = USF for all models. The best results per column are in bold, second best in italic.",
"cite_spans": [],
"ref_spans": [
{
"start": 296,
"end": 303,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Exp. I: Making the Evaluation Tractable",
"sec_num": null
},
{
"text": "V_r = 100K V_r = USF",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exp. I: Making the Evaluation Tractable",
"sec_num": null
},
{
"text": "therefore, their results are directly comparable as the data coverage bias should be largely mitigated. To fully support this choice, we perform a simple experiment using a subset of models from Sect. 4. In the first evaluation, V_r contains the most frequent 100K words for all models, where frequency was computed on their respective training data. In the second evaluation, V_r contains only the USF vocabulary words. The results with IR-style metrics are shown in Tab. 2, and similar trends are observed with Spearman's \u03c1 correlations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exp. I: Making the Evaluation Tractable",
"sec_num": null
},
{
"text": "The results support several conclusions. (i) Coverage over cue words is very high for all models (the model with the lowest coverage from Tab. 2 has a coverage of 98.2%). This, along with the shared search space (the USF vocabulary), indicates a fair comparison of different models. (ii) Different IR metrics produce consistent model rankings, with a slight variation in the middle of the rankings. Interestingly, the best scoring model is Glove, a model which uses document-level co-occurrence, which steers it towards learning topical similarity. On the other hand, the worst performing model relies on dependency-based contexts, which better capture functional similarity (Levy and Goldberg, 2014) and outperform other context choices in word similarity tasks on SimLex and SimVerb (Melamud et al., 2016; . (iii) Most importantly, the reduction of V_r again yields consistent rankings with all metrics, which are also fairly consistent with the rankings obtained with the ten times larger 100K search space. Therefore, in all further experiments we use the USF vocabulary as our search space.",
"cite_spans": [
{
"start": 671,
"end": 696,
"text": "(Levy and Goldberg, 2014)",
"ref_id": "BIBREF41"
},
{
"start": 781,
"end": 803,
"text": "(Melamud et al., 2016;",
"ref_id": "BIBREF48"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Exp. I: Making the Evaluation Tractable",
"sec_num": null
},
{
"text": "Exp. II: Results on USF WA Next, we evaluate all models from Sect. 4 on the WA task. The results with different metrics are summarised in Tab. 3. The results suggest that all proposed evaluation metrics indeed reflect the ability of different models to capture WA. We observe strong correlations of the models' rankings with all five metrics (Tab. 4). \u03c1-w is a slightly more conservative metric than \u03c1-std on average, but it does not affect model rankings at all (see also Tab. 4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exp. I: Making the Evaluation Tractable",
"sec_num": null
},
{
"text": "Further, the LDA-based WA model is largely outperformed by VSM-based approaches. As expected, similar VSMs with more dimensions are more expressive and score higher (e.g., note the scores with glove and paragram models). Additionally, models trained on larger corpora are also able to improve the overall results (e.g., note the scores with sgns trained on the Polyglot Wikipedia (PW, 2B tokens) vs. the 8B word2vec corpus). The paragram models, although specialised for similarity tasks, are unable to match unsupervised VSMs that train on running text (e.g., paragram+cf-300d obtains a SimLex score of 0.74 compared to 0.46 with sgns-8b-bow-w2, yet it scores lower on the WA task).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exp. I: Making the Evaluation Tractable",
"sec_num": null
},
{
"text": "Two models using bilingual training (biskip-256d and bicca-512d) seem unable to match the Table 4: Spearman's \u03c1 correlations between different evaluation protocols for vector space models divided into (a) Association, (b) Similarity, and (c) Relatedness. The correlation scores are based on the rankings of all the evaluated models (see Sect. 4.1) in each experiment. The lower-left part of the table (below the main diagonal, in lighter gray) reports standard Spearman's \u03c1-std correlations between different model rankings, while \u03c1-w is reported in the upper-right part (in darker gray). We report model rankings based on the 5 different metrics introduced for the WA USF evaluation. Model rankings for Similarity and Relatedness experiments are according to the \u03c1-std correlation on the respective ground truth data sets.",
"cite_spans": [],
"ref_spans": [
{
"start": 90,
"end": 97,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Exp. I: Making the Evaluation Tractable",
"sec_num": null
},
{
"text": "best performing monolingual models: however, we plan to further analyse the influence of bilingual information in the WA task in future work. Finally, a comparison of sgns-pw-* models (where the only varied parameter is the context used in training) reveals that (i) larger windows improve WA scores (we test this phenomenon further in Exp. III), (ii) sgns-pw-deps, which captures functional similarity through dependency-based contexts, yields lower WA scores, while it improves on SimLex-999 compared to the other two models. This insight leads us to further investigate this phenomenon in Exp. IV.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exp. I: Making the Evaluation Tractable",
"sec_num": null
},
{
"text": "In the next experiment, we analysed the effect of the window size on models' ability to capture similarity, relatedness, and association. We train the sgns-pw-bow model (d = 300) with varying window sizes in the interval [1, 30]. The results on similarity (SimLex-999), relatedness (MEN), and WA benchmarks (USF) are presented in Fig. 1(a)-1(b). It is clear that using larger windows deteriorates the performance on SimLex-999 as the focus of the model is shifted from functional to topical similarity. This shift has been detected in prior work on vector space models . However, we also observe a similar trend with MEN scores, although the opposite effect was expected, which questions the ability of MEN to accurately evaluate relatedness. The opposite effect is, however, visible with the WA evaluation, where it is evident that larger windows (leading to topical similarity) lead to better WA estimates. This also provides the first hint that WA and semantic similarity capture two completely distinct semantic phenomena.",
"cite_spans": [
{
"start": 221,
"end": 224,
"text": "[1,",
"ref_id": null
},
{
"start": 225,
"end": 228,
"text": "30]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 331,
"end": 345,
"text": "Fig. 1(a)-1(b)",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Exp. III: Window Size",
"sec_num": null
},
{
"text": "Exp. IV: WA vs. Similarity vs. Relatedness We delve deeper into this conjecture by computing correlations between model rankings on the WA task and two prominent similarity and relatedness data sets. The results from Tab. 4 indicate the following. First, semantic relatedness and similarity are correlated although they clearly refer to two distinct semantic phenomena as emphasised in prior work . The correlations between different metrics proposed for the WA task are very high (e.g., the lowest correlation score among any of the two is \u03c1 = 0.921). Second, WA and similarity capture very distinct relations (this is evident from low, even negative \u03c1 correlation scores). Third, WA and relatedness are strongly correlated, 14 but the correlation is not as high as expected, given that the two are often considered equivalent, e.g., (Kiela et al., 2015) . Future work should investigate whether the difference originates from inadequate evaluation data and protocols (see Fig. 1(a)-1(b) again), or whether the difference is fundamental.",
"cite_spans": [
{
"start": 835,
"end": 855,
"text": "(Kiela et al., 2015)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [
{
"start": 974,
"end": 983,
"text": "Fig. 1(a)",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Exp. III: Window Size",
"sec_num": null
},
{
"text": "We have proposed and released a new end-to-end evaluation framework for the task of free word association (WA). We have also provided new evaluation metrics inspired by research in IR, and guidelines for evaluating semantic representation models on the quantitative WA task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "Besides serving as a gold standard in NLP, the comprehensive WA evaluation resource and accompanying evaluation protocol should enable the development of data-driven automatic systems that can capture the notion of word association, and further analysis on how humans perceive (types of) semantic relatedness and similarity (Spence and Owens, 1990; Maki and Buchanan, 2008; De Deyne et al., 2013) . These systems, as discussed in this paper, may additionally facilitate research in cognitive psychology pertaining to human semantic representation and memory.",
"cite_spans": [
{
"start": 324,
"end": 348,
"text": "(Spence and Owens, 1990;",
"ref_id": "BIBREF69"
},
{
"start": 349,
"end": 373,
"text": "Maki and Buchanan, 2008;",
"ref_id": "BIBREF44"
},
{
"start": 374,
"end": 396,
"text": "De Deyne et al., 2013)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "In future work, we plan to test the portability of the evaluation protocol and apply it to other repositories of word association data in English (De Deyne et al., 2016) , as well as in other languages, using existing WA tables in, e.g., German (Schulte im Walde et al., 2008) , Dutch (De Deyne and Storms, 2008; Brysbaert et al., 2014) , Italian (Guida and Lenci, 2007) , Japanese (Joyce, 2005) , or Cantonese (Kwong, 2013). 15 In another line of future work, we will experiment with other \"cognitively plausible\" evaluation data such as N400 (Kutas and Federmeier, 2011; , and will analyse the similarities and differences between WA and other such \"cognitive\" evaluation protocols, such as the one relying on semantic priming (SPP) (Hutchison et al., 2013; Ettinger and Linzen, 2016) .",
"cite_spans": [
{
"start": 146,
"end": 169,
"text": "(De Deyne et al., 2016)",
"ref_id": "BIBREF14"
},
{
"start": 257,
"end": 276,
"text": "Walde et al., 2008)",
"ref_id": "BIBREF65"
},
{
"start": 285,
"end": 312,
"text": "(De Deyne and Storms, 2008;",
"ref_id": "BIBREF12"
},
{
"start": 313,
"end": 336,
"text": "Brysbaert et al., 2014)",
"ref_id": "BIBREF6"
},
{
"start": 347,
"end": 370,
"text": "(Guida and Lenci, 2007)",
"ref_id": "BIBREF26"
},
{
"start": 382,
"end": 395,
"text": "(Joyce, 2005)",
"ref_id": "BIBREF33"
},
{
"start": 426,
"end": 428,
"text": "15",
"ref_id": null
},
{
"start": 544,
"end": 572,
"text": "(Kutas and Federmeier, 2011;",
"ref_id": "BIBREF37"
},
{
"start": 730,
"end": 754,
"text": "(Hutchison et al., 2013;",
"ref_id": "BIBREF30"
},
{
"start": 755,
"end": 781,
"text": "Ettinger and Linzen, 2016)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "All evaluation scripts and detailed guidelines related to this work are freely available at: github.com/cambridgeltl/wa-eval/ corpora. 19 This model was used as a baseline in (Schwartz et al., 2015): sgns-8b-bow-w2.",
"cite_spans": [
{
"start": 135,
"end": 137,
"text": "19",
"ref_id": null
},
{
"start": 175,
"end": 198,
"text": "(Schwartz et al., 2015)",
"ref_id": "BIBREF66"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "(3) A template-based approach to vector space modeling introduced by Schwartz et al. (2015) . Vectors are trained based on co-occurrence of words in symmetric patterns (Davidov and Rappoport, 2006) . We use pre-trained dense vectors (d = 500) trained on the 8B corpus available online: 20 sympat-500d.",
"cite_spans": [
{
"start": 69,
"end": 91,
"text": "Schwartz et al. (2015)",
"ref_id": "BIBREF66"
},
{
"start": 168,
"end": 197,
"text": "(Davidov and Rappoport, 2006)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "(4) Models that use additional linguistic repositories to build semantically specialised, improved word vectors. Wieting et al. (2015) use the Paraphrase Database (PPDB) (Ganitkevitch et al., 2013) to learn word vectors which emphasise paraphrasability. They do this by fine-tuning, also known as retro-fitting , SGNS vectors using an objective function designed to incorporate the PPDB semantic similarity constraints. We test two variants of the Paragram model (d = 25 and d = 300) available online: 21 paragram-25d and paragram-300d.",
"cite_spans": [
{
"start": 112,
"end": 133,
"text": "Wieting et al. (2015)",
"ref_id": "BIBREF79"
},
{
"start": 169,
"end": 196,
"text": "(Ganitkevitch et al., 2013)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "Another variant of the fine-tuning procedure called counter-fitting (CF) was recently proposed by Mrk\u0161i\u0107 et al. (2016) . The model further improves the Paragram vectors by injecting antonymy constraints from PPDB v2.0 (Pavlick et al., 2015) into the final vector space (d = 300). We label this model paragram+cf-300d. 22 (5) Two multilingual pre-trained embedding models, aiming to test whether multilingual supervision can help in capturing word association in the same way it helps in semantic similarity tasks. We use the pre-trained vectors of (Luong et al., 2015) (biskip-256d), which rely on word-aligned parallel data, 23 and the CCA-based vectors of Faruqui and Dyer (2014) (bicca-512d), which require readily available translation lexicons. 24 As bilingual representations are not the main focus of this work, we refer the reader to the literature for further training details.",
"cite_spans": [
{
"start": 98,
"end": 118,
"text": "Mrk\u0161i\u0107 et al. (2016)",
"ref_id": "BIBREF52"
},
{
"start": 218,
"end": 240,
"text": "(Pavlick et al., 2015)",
"ref_id": "BIBREF58"
},
{
"start": 317,
"end": 319,
"text": "22",
"ref_id": null
},
{
"start": 614,
"end": 616,
"text": "23",
"ref_id": null
},
{
"start": 733,
"end": 735,
"text": "24",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "E.g. RepEval, https://sites.google.com/site/repevalacl16/ 2 The WA task is a free-association task, in which participants are asked to produce the first word that comes into their head in response to a cue or query word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We also experimented with other weighted variants, but detected similar trends in reported model rankings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In our experiments, we impose a simple heuristic and take responses as relevant if they were generated by at least 3 different human subjects in the USF experiments. This heuristic reduces the noise in human answers and provides a more coherent set of responses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "also experimented with LSA (Landauer and Dumais, 1997) and found that their LDA-based approach consistently outperformed LSA-based approaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For efficiency reasons with IR metrics, we evaluate results only over the top N = 1000 retrieved responses for each cue. 13 Prior work shows that the USF data represents a good range of distinct semantic phenomena, which suggests that the USF vocabulary represents a balanced sample of the English vocabulary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Although it may seem slightly counter-intuitive, research in statistics has shown that transitivity between correlation coefficients does not hold in general (Langford et al., 2001; Castro Sotos et al., 2009). Therefore, the observed behaviour is possible: Relatedness indeed correlates both with Association and with Similarity, while at the same time we do not observe any correlation between Association and Similarity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "See also https://smallworldofwords.org/ for the project aiming to develop WA tables using crowdsourcing in more languages (e.g., Vietnamese, Spanish, French).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://clic.cimec.unitn.it/composes/semantic-vectors.html 17 http://nlp.stanford.edu/projects/glove/ 18 https://levyomer.wordpress.com/publications/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "code.google.com/p/word2vec/source/browse/trunk/demo-train-big-model-v1.sh 20 http://homes.cs.washington.edu/\u223croysch/papers/sp_embeddings/sp_embeddings.html 21 http://ttic.uchicago.edu/\u223cwieting/ 22 https://github.com/nmrksic/counter-fitting 23 http://stanford.edu/\u223clmthang/bivec/ 24 http://www.manaalfaruqui.com/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work is supported by ERC Consolidator Grant LEXICAL (no 648909). The authors are grateful to the anonymous reviewers for their helpful comments and suggestions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "We evaluate a suite of pre-trained vector space models readily accessible online. We note that these models typically use different training data and other additional resources, and have a varying coverage of the English lexicon, but the evaluation score still reveals their ability to effectively capture word association. As mentioned in the paper, we have aimed at making the comparison fair by evaluating all models using the USF vocabulary as the search space for each model in our comparison. (0) We evaluate a traditional count-based representation model which uses positive PMI weighting and SVD dimensionality reduction. This is the best performing reduced count-based model from . The model was trained on the concatenated ukWaC, English Wikipedia and British National Corpus with the window size 2, and the dimensionality after SVD is set to d = 500. Vectors were obtained online. 16 We label this model count-ppmi-500d. (1) Two sets of Glove vectors (Pennington et al., 2014) were used (d = 50 and d = 300) trained on the 6B corpus of concatenated Wikipedia and GigaWord: 17 glove-6B-50d and glove-6B-300d. (2) Pre-trained vectors obtained using skip-gram with negative sampling (SGNS) (Mikolov et al., 2013) . We use SGNS vectors from (Levy and Goldberg, 2014) : sgns-pw-bow-w2 and sgns-pw-bow-w5 denote vectors trained with bag-of-words (BOW) contexts on the Polyglot Wikipedia (PW) (Al-Rfou et al., 2013) with window sizes 2 and 5, respectively; sgns-pw-deps denotes vectors trained with dependency-based contexts. All vectors are 300-dimensional. 18 For more details, including the preprocessing procedure and the specification of the dependency parser used, we refer the reader to the original work. We evaluate another SGNS-BOW model trained on a large 8B corpus with the window size 2 and d = 500 to measure the potential gains stemming from the use of larger training corpora.",
"cite_spans": [
{
"start": 892,
"end": 894,
"text": "16",
"ref_id": null
},
{
"start": 961,
"end": 986,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF59"
},
{
"start": 1195,
"end": 1217,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF51"
},
{
"start": 1245,
"end": 1270,
"text": "(Levy and Goldberg, 2014)",
"ref_id": "BIBREF41"
},
{
"start": 1559,
"end": 1561,
"text": "18",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Vector Space Models",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Polyglot: Distributed word representations for multilingual NLP",
"authors": [
{
"first": "Rami",
"middle": [],
"last": "Al-Rfou",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "Perozzi",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Skiena",
"suffix": ""
}
],
"year": 2013,
"venue": "CoNLL",
"volume": "",
"issue": "",
"pages": "183--192",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. 2013. Polyglot: Distributed word representations for multilingual NLP. In CoNLL, pages 183-192.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Improving reliability of word similarity evaluation by redesigning annotation task and performance measure",
"authors": [
{
"first": "Oded",
"middle": [],
"last": "Avraham",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2016,
"venue": "REPEVAL",
"volume": "",
"issue": "",
"pages": "106--110",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oded Avraham and Yoav Goldberg. 2016. Improving reliability of word similarity evaluation by redesign- ing annotation task and performance measure. In REPEVAL, pages 106-110.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Don't count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Georgiana",
"middle": [],
"last": "Dinu",
"suffix": ""
},
{
"first": "Germ\u00e1n",
"middle": [],
"last": "Kruszewski",
"suffix": ""
}
],
"year": 2014,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "238--247",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Baroni, Georgiana Dinu, and Germ\u00e1n Kruszewski. 2014. Don't count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors. In ACL, pages 238-247.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A critique of word similarity as a method for evaluating distributional semantic models",
"authors": [
{
"first": "Miroslav",
"middle": [],
"last": "Batchkarov",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Kober",
"suffix": ""
},
{
"first": "Jeremy",
"middle": [],
"last": "Reffin",
"suffix": ""
},
{
"first": "Julie",
"middle": [],
"last": "Weeds",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Weir",
"suffix": ""
}
],
"year": 2016,
"venue": "REPEVAL",
"volume": "",
"issue": "",
"pages": "7--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miroslav Batchkarov, Thomas Kober, Jeremy Reffin, Julie Weeds, and David Weir. 2016. A critique of word similarity as a method for evaluating distribu- tional semantic models. In REPEVAL, pages 7-12.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Theory & methods: Rank correlation -an alternative measure",
"authors": [
{
"first": "C",
"middle": [],
"last": "David",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Blest",
"suffix": ""
}
],
"year": 2000,
"venue": "Australian & New Zealand Journal of Statistics",
"volume": "42",
"issue": "1",
"pages": "101--111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David C. Blest. 2000. Theory & methods: Rank cor- relation -an alternative measure. Australian & New Zealand Journal of Statistics, 42(1):101-111.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Multimodal distributional semantics",
"authors": [
{
"first": "Elia",
"middle": [],
"last": "Bruni",
"suffix": ""
},
{
"first": "Nam-Khanh",
"middle": [],
"last": "Tran",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Artificial Intelligence Research",
"volume": "49",
"issue": "",
"pages": "1--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elia Bruni, Nam-Khanh Tran, and Marco Baroni. 2014. Multimodal distributional semantics. Journal of Ar- tificial Intelligence Research, 49:1-47.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Norms of age of acquisition and concreteness for 30,000 Dutch words",
"authors": [
{
"first": "Marc",
"middle": [],
"last": "Brysbaert",
"suffix": ""
},
{
"first": "Micha\u00ebl",
"middle": [],
"last": "Stevens",
"suffix": ""
}
],
"year": 2014,
"venue": "Acta psychologica",
"volume": "150",
"issue": "",
"pages": "80--84",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marc Brysbaert, Micha\u00ebl Stevens, Simon De Deyne, Wouter Voorspoels, and Gert Storms. 2014. Norms of age of acquisition and concreteness for 30,000 Dutch words. Acta psychologica, 150:80-84.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Learning to rank using gradient descent",
"authors": [
{
"first": "J",
"middle": [
"C"
],
"last": "Christopher",
"suffix": ""
},
{
"first": "Tal",
"middle": [],
"last": "Burges",
"suffix": ""
},
{
"first": "Erin",
"middle": [],
"last": "Shaked",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Renshaw",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Lazier",
"suffix": ""
},
{
"first": "Nicole",
"middle": [],
"last": "Deeds",
"suffix": ""
},
{
"first": "Gregory",
"middle": [
"N"
],
"last": "Hamilton",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hullender",
"suffix": ""
}
],
"year": 2005,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "89--96",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher J. C. Burges, Tal Shaked, Erin Renshaw, Ari Lazier, Matt Deeds, Nicole Hamilton, and Gre- gory N. Hullender. 2005. Learning to rank using gradient descent. In ICML, pages 89-96.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The transitivity misconception of Pearsons correlation coefficient",
"authors": [
{
"first": "Ana",
"middle": [
"Elisa"
],
"last": "",
"suffix": ""
},
{
"first": "Castro",
"middle": [],
"last": "Sotos",
"suffix": ""
},
{
"first": "Stijn",
"middle": [],
"last": "Vanhoof",
"suffix": ""
}
],
"year": 2009,
"venue": "Statistics Education Research Journal",
"volume": "8",
"issue": "2",
"pages": "33--55",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ana Elisa Castro Sotos, Stijn Vanhoof, Wim Van Den Noortgate, and Patrick Onghena. 2009. The transitivity misconception of Pearsons correlation coefficient. Statistics Education Research Journal, 8(2):33-55.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Mean reciprocal rank",
"authors": [
{
"first": "Nick",
"middle": [],
"last": "Craswell",
"suffix": ""
}
],
"year": 2009,
"venue": "Encyclopedia of Database Systems",
"volume": "",
"issue": "",
"pages": "1703",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nick Craswell. 2009. Mean reciprocal rank. In Ency- clopedia of Database Systems, pages 1703-1703.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "On two classes of weighted rank correlation measures deriving from the Spearman's \u03c1",
"authors": [
{
"first": "Livia",
"middle": [],
"last": "Dancelli",
"suffix": ""
},
{
"first": "Marica",
"middle": [],
"last": "Manisera",
"suffix": ""
},
{
"first": "Marika",
"middle": [],
"last": "Vezzoli",
"suffix": ""
}
],
"year": 2013,
"venue": "Statistical Models for Data Analysis",
"volume": "",
"issue": "",
"pages": "107--114",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Livia Dancelli, Marica Manisera, and Marika Vezzoli. 2013. On two classes of weighted rank correlation measures deriving from the Spearman's \u03c1. Statisti- cal Models for Data Analysis, pages 107-114.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Efficient unsupervised discovery of word categories using symmetric patterns and high frequency words",
"authors": [
{
"first": "Dmitry",
"middle": [],
"last": "Davidov",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Rappoport",
"suffix": ""
}
],
"year": 2006,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "297--304",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dmitry Davidov and Ari Rappoport. 2006. Efficient unsupervised discovery of word categories using symmetric patterns and high frequency words. In ACL, pages 297-304.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Word associations: Norms for 1,424 Dutch words in a continuous task",
"authors": [
{
"first": "Simon",
"middle": [],
"last": "De Deyne",
"suffix": ""
},
{
"first": "Gert",
"middle": [],
"last": "Storms",
"suffix": ""
}
],
"year": 2008,
"venue": "Behavior Research Methods",
"volume": "40",
"issue": "1",
"pages": "198--205",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simon De Deyne and Gert Storms. 2008. Word asso- ciations: Norms for 1,424 Dutch words in a contin- uous task. Behavior Research Methods, 40(1):198- 205.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Better explanations of lexical and semantic cognition using networks derived from continued rather than single-word associations",
"authors": [
{
"first": "Simon",
"middle": [],
"last": "De Deyne",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"J"
],
"last": "Navarro",
"suffix": ""
},
{
"first": "Gert",
"middle": [],
"last": "Storms",
"suffix": ""
}
],
"year": 2013,
"venue": "Behavior Research Methods",
"volume": "45",
"issue": "2",
"pages": "480--498",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simon De Deyne, Daniel J. Navarro, and Gert Storms. 2013. Better explanations of lexical and semantic cognition using networks derived from continued rather than single-word associations. Behavior Re- search Methods, 45(2):480-498.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Predicting human similarity judgments with distributional models: The value of word associations",
"authors": [
{
"first": "Simon",
"middle": [],
"last": "De Deyne",
"suffix": ""
},
{
"first": "Amy",
"middle": [],
"last": "Perfors",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"J"
],
"last": "Navarro",
"suffix": ""
}
],
"year": 2016,
"venue": "COLING",
"volume": "",
"issue": "",
"pages": "1861--1870",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simon De Deyne, Amy Perfors, and Daniel J. Navarro. 2016. Predicting human similarity judgments with distributional models: The value of word associa- tions. In COLING, pages 1861-1870.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The Structure of Associations in Language and Thought",
"authors": [
{
"first": "James",
"middle": [],
"last": "Deese",
"suffix": ""
}
],
"year": 1966,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Deese. 1966. The Structure of Associations in Language and Thought.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A context noise model of episodic word recognition",
"authors": [
{
"first": "Simon",
"middle": [],
"last": "Dennis",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"S"
],
"last": "Humphreys",
"suffix": ""
}
],
"year": 2001,
"venue": "Psychological Review",
"volume": "108",
"issue": "2",
"pages": "452--478",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simon Dennis and Michael S. Humphreys. 2001. A context noise model of episodic word recognition. Psychological Review, 108(2):452-478.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Evaluating vector space models using human semantic priming results",
"authors": [
{
"first": "Allyson",
"middle": [],
"last": "Ettinger",
"suffix": ""
},
{
"first": "Tal",
"middle": [],
"last": "Linzen",
"suffix": ""
}
],
"year": 2016,
"venue": "REPEVAL",
"volume": "",
"issue": "",
"pages": "72--77",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Allyson Ettinger and Tal Linzen. 2016. Evaluating vector space models using human semantic priming results. In REPEVAL, pages 72-77.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Modeling N400 amplitude using vector space models of word representation",
"authors": [
{
"first": "Allyson",
"middle": [],
"last": "Ettinger",
"suffix": ""
},
{
"first": "Naomi",
"middle": [
"H"
],
"last": "Feldman",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Phillips",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Annual Meeting of the Cognitive Science Society",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Allyson Ettinger, Naomi H. Feldman, Philip Resnik, and Colin Phillips. 2016. Modeling N400 ampli- tude using vector space models of word representa- tion. In Proceedings of the Annual Meeting of the Cognitive Science Society.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Improving vector space word representations using multilingual correlation",
"authors": [
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2014,
"venue": "EACL",
"volume": "",
"issue": "",
"pages": "462--471",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manaal Faruqui and Chris Dyer. 2014. Improving vector space word representations using multilingual correlation. In EACL, pages 462-471.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Retrofitting word vectors to semantic lexicons",
"authors": [
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Jesse",
"middle": [],
"last": "Dodge",
"suffix": ""
},
{
"first": "Sujay",
"middle": [],
"last": "Kumar Jauhar",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2015,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "1606--1615",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manaal Faruqui, Jesse Dodge, Sujay Kumar Jauhar, Chris Dyer, Eduard Hovy, and Noah A. Smith. 2015. Retrofitting word vectors to semantic lexicons. In NAACL-HLT, pages 1606-1615.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Placing search in context: The concept revisited",
"authors": [
{
"first": "Lev",
"middle": [],
"last": "Finkelstein",
"suffix": ""
},
{
"first": "Evgeniy",
"middle": [],
"last": "Gabrilovich",
"suffix": ""
},
{
"first": "Yossi",
"middle": [],
"last": "Matias",
"suffix": ""
},
{
"first": "Ehud",
"middle": [],
"last": "Rivlin",
"suffix": ""
},
{
"first": "Zach",
"middle": [],
"last": "Solan",
"suffix": ""
},
{
"first": "Gadi",
"middle": [],
"last": "Wolfman",
"suffix": ""
},
{
"first": "Eytan",
"middle": [],
"last": "Ruppin",
"suffix": ""
}
],
"year": 2002,
"venue": "ACM Transactions on Information Systems",
"volume": "20",
"issue": "1",
"pages": "116--131",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Ey- tan Ruppin. 2002. Placing search in context: The concept revisited. ACM Transactions on Informa- tion Systems, 20(1):116-131.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Frequency distribution of the values of the correlation coefficient in samples from an indefinitely large population",
"authors": [
{
"first": "Ronald",
"middle": [
"A"
],
"last": "Fisher",
"suffix": ""
}
],
"year": 1915,
"venue": "Biometrika",
"volume": "10",
"issue": "4",
"pages": "507--521",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronald A. Fisher. 1915. Frequency distribution of the values of the correlation coefficient in samples from an indefinitely large population. Biometrika, 10(4):507-521.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "PPDB: The paraphrase database",
"authors": [
{
"first": "Juri",
"middle": [],
"last": "Ganitkevitch",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2013,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "758--764",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. PPDB: The paraphrase database. In NAACL-HLT, pages 758-764.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "SimVerb-3500: A largescale evaluation set of verb similarity",
"authors": [
{
"first": "Daniela",
"middle": [],
"last": "Gerz",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2016,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniela Gerz, Ivan Vuli\u0107, Felix Hill, Roi Reichart, and Anna Korhonen. 2016. SimVerb-3500: A large- scale evaluation set of verb similarity. In EMNLP.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Topics in semantic representation",
"authors": [
{
"first": "Thomas",
"middle": [
"L"
],
"last": "Griffiths",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steyvers",
"suffix": ""
},
{
"first": "Joshua",
"middle": [
"B"
],
"last": "Tenenbaum",
"suffix": ""
}
],
"year": 2007,
"venue": "Psychological Review",
"volume": "114",
"issue": "2",
"pages": "211--244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas L. Griffiths, Mark Steyvers, and Joshua B. Tenenbaum. 2007. Topics in semantic representa- tion. Psychological Review, 114(2):211-244.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Semantic properties of word associations to Italian verbs",
"authors": [
{
"first": "Annamaria",
"middle": [],
"last": "Guida",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Lenci",
"suffix": ""
}
],
"year": 2007,
"venue": "Italian Journal of Linguistics",
"volume": "19",
"issue": "2",
"pages": "293--326",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annamaria Guida and Alessandro Lenci. 2007. Se- mantic properties of word associations to Italian verbs. Italian Journal of Linguistics, 19(2):293- 326.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Measuring semantic relatedness with vector space models and random walks",
"authors": [
{
"first": "Ama\u00e7",
"middle": [],
"last": "Herdagdelen",
"suffix": ""
},
{
"first": "Katrin",
"middle": [],
"last": "Erk",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2009 Workshop on Graph-based Methods for NLP",
"volume": "",
"issue": "",
"pages": "50--53",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ama\u00e7 Herdagdelen, Katrin Erk, and Marco Baroni. 2009. Measuring semantic relatedness with vector space models and random walks. In Proceedings of the 2009 Workshop on Graph-based Methods for NLP, pages 50-53.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Learning abstract concept embeddings from multi-modal data: Since you probably can't see what I mean",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2014,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "255--265",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felix Hill and Anna Korhonen. 2014. Learning ab- stract concept embeddings from multi-modal data: Since you probably can't see what I mean. In EMNLP, pages 255-265.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "SimLex-999: Evaluating semantic models with (genuine) similarity estimation",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2015,
"venue": "Computational Linguistics",
"volume": "41",
"issue": "4",
"pages": "665--695",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felix Hill, Roi Reichart, and Anna Korhonen. 2015. SimLex-999: Evaluating semantic models with (gen- uine) similarity estimation. Computational Linguis- tics, 41(4):665-695.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "The semantic priming project",
"authors": [
{
"first": "Keith",
"middle": [
"A"
],
"last": "Hutchison",
"suffix": ""
},
{
"first": "David",
"middle": [
"A"
],
"last": "Balota",
"suffix": ""
},
{
"first": "James",
"middle": [
"H"
],
"last": "Neely",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"J"
],
"last": "Cortese",
"suffix": ""
},
{
"first": "Emily",
"middle": [
"R"
],
"last": "Cohen-Shikora",
"suffix": ""
},
{
"first": "Chi-Shing",
"middle": [],
"last": "Tse",
"suffix": ""
},
{
"first": "Melvin",
"middle": [
"J"
],
"last": "Yap",
"suffix": ""
},
{
"first": "Jesse",
"middle": [
"J"
],
"last": "Bengson",
"suffix": ""
},
{
"first": "Dale",
"middle": [],
"last": "Niemeyer",
"suffix": ""
},
{
"first": "Erin",
"middle": [],
"last": "Buchanan",
"suffix": ""
}
],
"year": 2013,
"venue": "Behavior Research Methods",
"volume": "45",
"issue": "4",
"pages": "1099--1114",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Keith A. Hutchison, David A. Balota, James H. Neely, Michael J. Cortese, Emily R. Cohen-Shikora, Chi- Shing Tse, Melvin J. Yap, Jesse J. Bengson, Dale Niemeyer, and Erin Buchanan. 2013. The seman- tic priming project. Behavior Research Methods, 45(4):1099-1114.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Cumulated gain-based evaluation of IR techniques",
"authors": [
{
"first": "Kalervo",
"middle": [],
"last": "J\u00e4rvelin",
"suffix": ""
},
{
"first": "Jaana",
"middle": [],
"last": "Kek\u00e4l\u00e4inen",
"suffix": ""
}
],
"year": 2002,
"venue": "ACM Transactions on Information Systems",
"volume": "20",
"issue": "4",
"pages": "422--446",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kalervo J\u00e4rvelin and Jaana Kek\u00e4l\u00e4inen. 2002. Cumu- lated gain-based evaluation of IR techniques. ACM Transactions on Information Systems, 20(4):422- 446.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "High-dimensional semantic space accounts of priming",
"authors": [
{
"first": "Michael",
"middle": [
"N"
],
"last": "Jones",
"suffix": ""
},
{
"first": "Walter",
"middle": [],
"last": "Kintsch",
"suffix": ""
},
{
"first": "Douglas",
"middle": [
"J",
"K"
],
"last": "Mewhort",
"suffix": ""
}
],
"year": 2006,
"venue": "Journal of Memory and Language",
"volume": "55",
"issue": "4",
"pages": "534--552",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael N. Jones, Walter Kintsch, and Douglas J.K. Mewhort. 2006. High-dimensional semantic space accounts of priming. Journal of Memory and Lan- guage, 55(4):534-552.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Constructing a large-scale database of Japanese word associations",
"authors": [
{
"first": "Terry",
"middle": [],
"last": "Joyce",
"suffix": ""
}
],
"year": 2005,
"venue": "Corpus Studies on Japanese Kanji (Glottometrics)",
"volume": "10",
"issue": "",
"pages": "82--98",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Terry Joyce. 2005. Constructing a large-scale database of Japanese word associations. Corpus Studies on Japanese Kanji (Glottometrics 10), pages 82-98.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "A systematic study of semantic vector space model parameters",
"authors": [
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2nd Workshop on Continuous Vector Space Models and their Compositionality",
"volume": "",
"issue": "",
"pages": "21--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Douwe Kiela and Stephen Clark. 2014. A system- atic study of semantic vector space model parame- ters. In Proceedings of the 2nd Workshop on Contin- uous Vector Space Models and their Compositional- ity, pages 21-30.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Improving multi-modal representations using image dispersion: Why less is sometimes more",
"authors": [
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2014,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "835--841",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Douwe Kiela, Felix Hill, Anna Korhonen, and Stephen Clark. 2014. Improving multi-modal representa- tions using image dispersion: Why less is sometimes more. In ACL, pages 835-841.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Specializing word embeddings for similarity or relatedness",
"authors": [
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2015,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "2044--2048",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Douwe Kiela, Felix Hill, and Stephen Clark. 2015. Specializing word embeddings for similarity or re- latedness. In EMNLP, pages 2044-2048.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Thirty years and counting: Finding meaning in the N400 component of the event related brain potential (ERP)",
"authors": [
{
"first": "Marta",
"middle": [],
"last": "Kutas",
"suffix": ""
},
{
"first": "Kara",
"middle": [
"D"
],
"last": "Federmeier",
"suffix": ""
}
],
"year": 2011,
"venue": "Annual Review of Psychology",
"volume": "62",
"issue": "",
"pages": "621--647",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marta Kutas and Kara D. Federmeier. 2011. Thirty years and counting: Finding meaning in the N400 component of the event related brain potential (ERP). Annual Review of Psychology, 62:621-647.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Exploring the Chinese mental lexicon with word association norms",
"authors": [
{
"first": "Oi",
"middle": [
"Yee"
],
"last": "Kwong",
"suffix": ""
}
],
"year": 2013,
"venue": "PACLIC",
"volume": "",
"issue": "",
"pages": "153--162",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oi Yee Kwong. 2013. Exploring the Chinese mental lexicon with word association norms. In PACLIC, pages 153-162.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Solutions to Plato's problem: The Latent Semantic Analysis theory of acquisition, induction, and representation of knowledge",
"authors": [
{
"first": "Thomas",
"middle": [
"K"
],
"last": "Landauer",
"suffix": ""
},
{
"first": "Susan",
"middle": [
"T"
],
"last": "Dumais",
"suffix": ""
}
],
"year": 1997,
"venue": "Psychological Review",
"volume": "104",
"issue": "2",
"pages": "211--240",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas K. Landauer and Susan T. Dumais. 1997. Solutions to Plato's problem: The Latent Semantic Analysis theory of acquisition, induction, and rep- resentation of knowledge. Psychological Review, 104(2):211-240.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Is the property of being positively correlated transitive?",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Langford",
"suffix": ""
},
{
"first": "Neil",
"middle": [],
"last": "Schwertman",
"suffix": ""
},
{
"first": "Margaret",
"middle": [],
"last": "Owens",
"suffix": ""
}
],
"year": 2001,
"venue": "The American Statistician",
"volume": "55",
"issue": "4",
"pages": "322--325",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Langford, Neil Schwertman, and Margaret Owens. 2001. Is the property of being positively correlated transitive? The American Statistician, 55(4):322- 325.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Dependencybased word embeddings",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2014,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "302--308",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omer Levy and Yoav Goldberg. 2014. Dependency- based word embeddings. In ACL, pages 302-308.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Better word representations with recursive neural networks for morphology",
"authors": [
{
"first": "Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2013,
"venue": "CoNLL",
"volume": "",
"issue": "",
"pages": "104--113",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thang Luong, Richard Socher, and Christopher Man- ning. 2013. Better word representations with re- cursive neural networks for morphology. In CoNLL, pages 104-113.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Bilingual word representations with monolingual quality in mind",
"authors": [
{
"first": "Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "151--159",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thang Luong, Hieu Pham, and Christopher D. Man- ning. 2015. Bilingual word representations with monolingual quality in mind. In Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing, pages 151-159.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Latent structure in measures of associative, semantic, and thematic knowledge",
"authors": [
{
"first": "William",
"middle": [
"S"
],
"last": "Maki",
"suffix": ""
},
{
"first": "Erin",
"middle": [],
"last": "Buchanan",
"suffix": ""
}
],
"year": 2008,
"venue": "Psychonomic Bulletin & Review",
"volume": "15",
"issue": "3",
"pages": "598--603",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William S. Maki and Erin Buchanan. 2008. Latent structure in measures of associative, semantic, and thematic knowledge. Psychonomic Bulletin & Re- view, 15(3):598-603.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Explaining human performance in psycholinguistic tasks with models of semantic similarity based on prediction and counting: A review and empirical validation",
"authors": [
{
"first": "Pawe\u0142",
"middle": [],
"last": "Mandera",
"suffix": ""
},
{
"first": "Emmanuel",
"middle": [],
"last": "Keuleers",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Brysbaert",
"suffix": ""
}
],
"year": 2017,
"venue": "Journal of Memory and Language",
"volume": "92",
"issue": "",
"pages": "57--78",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pawe\u0142 Mandera, Emmanuel Keuleers, and Marc Brys- baert. 2017. Explaining human performance in psy- cholinguistic tasks with models of semantic similar- ity based on prediction and counting: A review and empirical validation. Journal of Memory and Lan- guage, 92:57-78.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Evaluation in information retrieval",
"authors": [
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Prabhakar",
"middle": [],
"last": "Raghavan",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2008,
"venue": "Introduction to Information Retrieval",
"volume": "",
"issue": "",
"pages": "151--175",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D. Manning, Prabhakar Raghavan, and Hinrich Sch\u00fctze. 2008. Evaluation in informa- tion retrieval. Introduction to Information Retrieval, pages 151-175.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "A distributional model of semantic context effects in lexical processing",
"authors": [
{
"first": "Scott",
"middle": [],
"last": "McDonald",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brew",
"suffix": ""
}
],
"year": 2004,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "17--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Scott McDonald and Chris Brew. 2004. A distribu- tional model of semantic context effects in lexical processing. In ACL, pages 17-24.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "The role of context types and dimensionality in learning word embeddings",
"authors": [
{
"first": "Oren",
"middle": [],
"last": "Melamud",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "McClosky",
"suffix": ""
},
{
"first": "Siddharth",
"middle": [],
"last": "Patwardhan",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
}
],
"year": 2016,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "1030--1040",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oren Melamud, David McClosky, Siddharth Patward- han, and Mohit Bansal. 2016. The role of context types and dimensionality in learning word embed- dings. In NAACL-HLT, pages 1030-1040.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Facilitation in recognizing pairs of words: Evidence of a dependence between retrieval operations",
"authors": [
{
"first": "David",
"middle": [
"E"
],
"last": "Meyer",
"suffix": ""
},
{
"first": "Roger",
"middle": [
"W"
],
"last": "Schvaneveldt",
"suffix": ""
}
],
"year": 1971,
"venue": "Journal of Experimental Psychology",
"volume": "90",
"issue": "2",
"pages": "227--234",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David E. Meyer and Roger W. Schvaneveldt. 1971. Fa- cilitation in recognizing pairs of words: Evidence of a dependence between retrieval operations. Journal of Experimental Psychology, 90(2):227-234.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Asymmetric association measures",
"authors": [
{
"first": "Lukas",
"middle": [],
"last": "Michelbacher",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Evert",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2007,
"venue": "RANLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lukas Michelbacher, Stefan Evert, and Hinrich Sch\u00fctze. 2007. Asymmetric association measures. In RANLP.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Gregory",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013. Distributed repre- sentations of words and phrases and their composi- tionality. In NIPS, pages 3111-3119.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Counter-fitting word vectors to linguistic constraints",
"authors": [
{
"first": "Nikola",
"middle": [],
"last": "Mrk\u0161i\u0107",
"suffix": ""
},
{
"first": "Diarmuid",
"middle": [],
"last": "\u00d3 S\u00e9aghdha",
"suffix": ""
},
{
"first": "Blaise",
"middle": [],
"last": "Thomson",
"suffix": ""
},
{
"first": "Milica",
"middle": [],
"last": "Ga\u0161i\u0107",
"suffix": ""
},
{
"first": "Lina",
"middle": [
"Maria"
],
"last": "Rojas-Barahona",
"suffix": ""
},
{
"first": "Pei-Hao",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Vandyke",
"suffix": ""
},
{
"first": "Tsung-Hsien",
"middle": [],
"last": "Wen",
"suffix": ""
},
{
"first": "Steve",
"middle": [
"J"
],
"last": "Young",
"suffix": ""
}
],
"year": 2016,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "142--148",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikola Mrk\u0161i\u0107, Diarmuid \u00d3 S\u00e9aghdha, Blaise Thom- son, Milica Ga\u0161i\u0107, Lina Maria Rojas-Barahona, Pei- Hao Su, David Vandyke, Tsung-Hsien Wen, and Steve J. Young. 2016. Counter-fitting word vec- tors to linguistic constraints. In NAACL-HLT, pages 142-148.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Interpreting the influence of implicitly activated memories on recall and recognition",
"authors": [
{
"first": "",
"middle": [],
"last": "Gee",
"suffix": ""
},
{
"first": "Gerson",
"middle": [
"A"
],
"last": "Janczura",
"suffix": ""
}
],
"year": 1998,
"venue": "Psychological review",
"volume": "105",
"issue": "2",
"pages": "299--324",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gee, and Gerson A. Janczura. 1998. Interpreting the influence of implicitly activated memories on recall and recognition. Psychological review, 105(2):299- 324.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "What is free association and what does it measure?",
"authors": [
{
"first": "Douglas",
"middle": [
"L"
],
"last": "Nelson",
"suffix": ""
},
{
"first": "Cathy",
"middle": [
"L"
],
"last": "McEvoy",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Dennis",
"suffix": ""
}
],
"year": 2000,
"venue": "Memory and Cognition",
"volume": "28",
"issue": "",
"pages": "887--899",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Douglas L. Nelson, Cathy L. McEvoy, and Simon Den- nis. 2000. What is free association and what does it measure? Memory and Cognition, 28(6):887-899.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "The University of South Florida free association, rhyme, and word fragment norms",
"authors": [
{
"first": "Douglas",
"middle": [
"L"
],
"last": "Nelson",
"suffix": ""
},
{
"first": "Cathy",
"middle": [
"L"
],
"last": "McEvoy",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"A"
],
"last": "Schreiber",
"suffix": ""
}
],
"year": 2004,
"venue": "Behavior Research Methods, Instruments, & Computers",
"volume": "36",
"issue": "3",
"pages": "402--407",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Douglas L. Nelson, Cathy L. McEvoy, and Thomas A. Schreiber. 2004. The University of South Florida free association, rhyme, and word fragment norms. Behavior Research Methods, Instruments, & Computers, 36(3):402-407.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "Dependency-based construction of semantic space models",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2007,
"venue": "Computational Linguistics",
"volume": "33",
"issue": "2",
"pages": "161--199",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Pad\u00f3 and Mirella Lapata. 2007. Dependency-based construction of semantic space models. Computational Linguistics, 33(2):161-199.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "PPDB 2.0: Better paraphrase ranking, fine-grained entailment relations, word embeddings, and style classification",
"authors": [
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
},
{
"first": "Pushpendre",
"middle": [],
"last": "Rastogi",
"suffix": ""
},
{
"first": "Juri",
"middle": [],
"last": "Ganitkevitch",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2015,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "425--430",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ellie Pavlick, Pushpendre Rastogi, Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2015. PPDB 2.0: Better paraphrase ranking, fine-grained entailment relations, word embeddings, and style classification. In ACL, pages 425-430.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "GloVe: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In EMNLP, pages 1532-1543.",
"links": null
},
"BIBREF60": {
"ref_id": "b60",
"title": "A weighted rank measure of correlation",
"authors": [
{
"first": "Joaquim",
"middle": [],
"last": "Pinto da Costa",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Soares",
"suffix": ""
}
],
"year": 2005,
"venue": "Australian & New Zealand Journal of Statistics",
"volume": "47",
"issue": "4",
"pages": "515--529",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joaquim Pinto da Costa and Carlos Soares. 2005. A weighted rank measure of correlation. Australian & New Zealand Journal of Statistics, 47(4):515-529.",
"links": null
},
"BIBREF61": {
"ref_id": "b61",
"title": "Rankings and Preferences: New Results in Weighted Correlation and Weighted Principal Component Analysis with Applications",
"authors": [
{
"first": "Joaquim",
"middle": [],
"last": "Pinto da Costa",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joaquim Pinto da Costa. 2015. Rankings and Preferences: New Results in Weighted Correlation and Weighted Principal Component Analysis with Applications.",
"links": null
},
"BIBREF62": {
"ref_id": "b62",
"title": "Ad-hoc object retrieval in the Web of data",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pound",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Mika",
"suffix": ""
},
{
"first": "Hugo",
"middle": [],
"last": "Zaragoza",
"suffix": ""
}
],
"year": 2010,
"venue": "WWW",
"volume": "",
"issue": "",
"pages": "771--780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pound, Peter Mika, and Hugo Zaragoza. 2010. Ad-hoc object retrieval in the Web of data. In WWW, pages 771-780.",
"links": null
},
"BIBREF63": {
"ref_id": "b63",
"title": "Factors that determine false recall: A multiple regression analysis",
"authors": [
{
"first": "Henry",
"middle": [
"L"
],
"last": "Roediger",
"suffix": ""
},
{
"first": "Jason",
"middle": [
"M"
],
"last": "Watson",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [
"B"
],
"last": "McDermott",
"suffix": ""
},
{
"first": "David",
"middle": [
"A"
],
"last": "Gallo",
"suffix": ""
}
],
"year": 2001,
"venue": "Psychonomic Bulletin & Review",
"volume": "8",
"issue": "3",
"pages": "385--407",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Henry L. Roediger, Jason M. Watson, Kathleen B. McDermott, and David A. Gallo. 2001. Factors that determine false recall: A multiple regression analysis. Psychonomic Bulletin & Review, 8(3):385-407.",
"links": null
},
"BIBREF64": {
"ref_id": "b64",
"title": "Evaluation methods for unsupervised word embeddings",
"authors": [
{
"first": "Tobias",
"middle": [],
"last": "Schnabel",
"suffix": ""
},
{
"first": "Igor",
"middle": [],
"last": "Labutov",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mimno",
"suffix": ""
},
{
"first": "Thorsten",
"middle": [],
"last": "Joachims",
"suffix": ""
}
],
"year": 2015,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tobias Schnabel, Igor Labutov, David Mimno, and Thorsten Joachims. 2015. Evaluation methods for unsupervised word embeddings. In EMNLP.",
"links": null
},
"BIBREF65": {
"ref_id": "b65",
"title": "An empirical characterisation of response types in German association norms",
"authors": [
{
"first": "Sabine",
"middle": [],
"last": "Schulte Im Walde",
"suffix": ""
},
{
"first": "Alissa",
"middle": [],
"last": "Melinger",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Weber",
"suffix": ""
}
],
"year": 2008,
"venue": "Research on Language and Computation",
"volume": "6",
"issue": "2",
"pages": "205--238",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sabine Schulte im Walde, Alissa Melinger, Michael Roth, and Andrea Weber. 2008. An empirical characterisation of response types in German association norms. Research on Language and Computation, 6(2):205-238.",
"links": null
},
"BIBREF66": {
"ref_id": "b66",
"title": "Symmetric pattern based word embeddings for improved word similarity prediction",
"authors": [
{
"first": "Roy",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Rappoport",
"suffix": ""
}
],
"year": 2015,
"venue": "CoNLL",
"volume": "",
"issue": "",
"pages": "258--267",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roy Schwartz, Roi Reichart, and Ari Rappoport. 2015. Symmetric pattern based word embeddings for improved word similarity prediction. In CoNLL, pages 258-267.",
"links": null
},
"BIBREF67": {
"ref_id": "b67",
"title": "Grounded models of semantic representation",
"authors": [
{
"first": "Carina",
"middle": [],
"last": "Silberer",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2012,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "1423--1433",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carina Silberer and Mirella Lapata. 2012. Grounded models of semantic representation. In EMNLP, pages 1423-1433.",
"links": null
},
"BIBREF68": {
"ref_id": "b68",
"title": "Evaluating word embeddings with fMRI and eye-trackings",
"authors": [
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2016,
"venue": "REPEVAL",
"volume": "",
"issue": "",
"pages": "116--121",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anders S\u00f8gaard. 2016. Evaluating word embeddings with fMRI and eye-trackings. In REPEVAL, pages 116-121.",
"links": null
},
"BIBREF69": {
"ref_id": "b69",
"title": "Lexical co-occurrence and association strength",
"authors": [
{
"first": "Donald",
"middle": [
"P"
],
"last": "Spence",
"suffix": ""
},
{
"first": "Kimberly",
"middle": [
"C"
],
"last": "Owens",
"suffix": ""
}
],
"year": 1990,
"venue": "Journal of Psycholinguistic Research",
"volume": "19",
"issue": "5",
"pages": "317--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Donald P. Spence and Kimberly C. Owens. 1990. Lexical co-occurrence and association strength. Journal of Psycholinguistic Research, 19(5):317-330.",
"links": null
},
"BIBREF70": {
"ref_id": "b70",
"title": "Probabilistic topic models",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Steyvers",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Griffiths",
"suffix": ""
}
],
"year": 2007,
"venue": "Handbook of Latent Semantic Analysis",
"volume": "427",
"issue": "7",
"pages": "424--440",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Steyvers and Tom Griffiths. 2007. Probabilistic topic models. Handbook of Latent Semantic Analysis, 427(7):424-440.",
"links": null
},
"BIBREF71": {
"ref_id": "b71",
"title": "The effect of normative context variability on recognition memory",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Steyvers",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [
"J"
],
"last": "Malmberg",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Experimental Psychology: Learning, Memory, and Cognition",
"volume": "29",
"issue": "5",
"pages": "760--766",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Steyvers and Kenneth J. Malmberg. 2003. The effect of normative context variability on recognition memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 29(5):760-766.",
"links": null
},
"BIBREF72": {
"ref_id": "b72",
"title": "Word association spaces for predicting semantic similarity effects in episodic memory",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Steyvers",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"M"
],
"last": "Shiffrin",
"suffix": ""
},
{
"first": "Douglas",
"middle": [
"L"
],
"last": "Nelson",
"suffix": ""
}
],
"year": 2004,
"venue": "Experimental Cognitive Psychology and its Applications",
"volume": "",
"issue": "",
"pages": "237--249",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Steyvers, Richard M. Shiffrin, and Douglas L. Nelson. 2004. Word association spaces for predicting semantic similarity effects in episodic memory. Experimental Cognitive Psychology and its Applications, pages 237-249.",
"links": null
},
"BIBREF73": {
"ref_id": "b73",
"title": "Time course of priming for associate and inference words in a discourse context",
"authors": [
{
"first": "Robert",
"middle": [
"E"
],
"last": "Till",
"suffix": ""
},
{
"first": "Ernest",
"middle": [
"F"
],
"last": "Mross",
"suffix": ""
},
{
"first": "Walter",
"middle": [],
"last": "Kintsch",
"suffix": ""
}
],
"year": 1988,
"venue": "Memory & Cognition",
"volume": "16",
"issue": "4",
"pages": "283--298",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert E. Till, Ernest F. Mross, and Walter Kintsch. 1988. Time course of priming for associate and inference words in a discourse context. Memory & Cognition, 16(4):283-298.",
"links": null
},
"BIBREF74": {
"ref_id": "b74",
"title": "Evaluation of word vector representations by subspace alignment",
"authors": [
{
"first": "Yulia",
"middle": [],
"last": "Tsvetkov",
"suffix": ""
},
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Wang",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2015,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "2049--2054",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yulia Tsvetkov, Manaal Faruqui, Wang Ling, Guillaume Lample, and Chris Dyer. 2015. Evaluation of word vector representations by subspace alignment. In EMNLP, pages 2049-2054.",
"links": null
},
"BIBREF75": {
"ref_id": "b75",
"title": "Features of similarity",
"authors": [
{
"first": "Amos",
"middle": [],
"last": "Tversky",
"suffix": ""
}
],
"year": 1977,
"venue": "Psychological Review",
"volume": "84",
"issue": "4",
"pages": "327",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amos Tversky. 1977. Features of similarity. Psycho- logical Review, 84(4):327.",
"links": null
},
"BIBREF76": {
"ref_id": "b76",
"title": "The TREC-8 question answering track report",
"authors": [
{
"first": "Ellen",
"middle": [
"M"
],
"last": "Voorhees",
"suffix": ""
}
],
"year": 1999,
"venue": "TREC",
"volume": "",
"issue": "",
"pages": "77--82",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ellen M. Voorhees. 1999. The TREC-8 question answering track report. In TREC, pages 77-82.",
"links": null
},
"BIBREF77": {
"ref_id": "b77",
"title": "Crosslingual semantic similarity of words as the similarity of their semantic word responses",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Marie-Francine",
"middle": [],
"last": "Moens",
"suffix": ""
}
],
"year": 2013,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "106--116",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan Vuli\u0107 and Marie-Francine Moens. 2013. Crosslingual semantic similarity of words as the similarity of their semantic word responses. In NAACL-HLT, pages 106-116.",
"links": null
},
"BIBREF78": {
"ref_id": "b78",
"title": "HyperLex: A large-scale evaluation of graded lexical entailment",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Daniela",
"middle": [],
"last": "Gerz",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2016,
"venue": "CoRR",
"volume": "abs/1608.02117",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan Vuli\u0107, Daniela Gerz, Douwe Kiela, Felix Hill, and Anna Korhonen. 2016. HyperLex: A large-scale evaluation of graded lexical entailment. CoRR, abs/1608.02117.",
"links": null
},
"BIBREF79": {
"ref_id": "b79",
"title": "From paraphrase database to compositional paraphrase model and back",
"authors": [
{
"first": "John",
"middle": [],
"last": "Wieting",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "Livescu",
"suffix": ""
}
],
"year": 2015,
"venue": "Transactions of the ACL",
"volume": "3",
"issue": "",
"pages": "345--358",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2015. From paraphrase database to compositional paraphrase model and back. Transactions of the ACL, 3:345-358.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"text": "Motivation: Association and USF Implicit Cognitive Measures: Means of Semantic Evaluation? Several studies have shown clear correspondence between implicit cognitive",
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"num": null,
"text": "); (3) Multilingual embedding models from Luong et al. (2015) (biskip-256d) and Faruqui and Dyer (2014) (bicca-512d). More detailed descriptions of all VSM models are available in the listed papers and supplementary material attached to this work.",
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"num": null,
"text": "Influence of the window size on the ability of vector space models to capture Similarity (evaluated on SimLex-999), Relatedness (MEN), and Association (USF) (a) Spearman's \u03c1-std correlations on all three data sets; (b) Behaviour of other evaluation metrics used in the USF evaluation. All tested models are SGNS, d = 300, and the only varied hyper-parameter is the window size.",
"type_str": "figure"
},
"TABREF1": {
"content": "<table><tr><td>: Example (cue, response) pairs of free word</td></tr><tr><td>association from the USF data set. #G stands for</td></tr><tr><td>the number of participants serving in the group</td></tr><tr><td>norming the word, while #P denotes the number</td></tr><tr><td>participants producing a particular response.</td></tr></table>",
"text": "",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF3": {
"content": "<table><tr><td>Model</td><td>\u03c1-std</td><td>\u03c1-w</td><td>MRR</td><td>MAP</td><td>NDCG</td></tr><tr><td>LDA-assoc</td><td>0.230</td><td>0.221</td><td>0.153</td><td>0.048</td><td>0.128</td></tr><tr><td>count-ppmi-500d</td><td>0.255</td><td>0.249</td><td>0.294</td><td>0.094</td><td>0.226</td></tr><tr><td>glove-6B-50d</td><td>0.280</td><td>0.277</td><td>0.318</td><td>0.105</td><td>0.249</td></tr><tr><td>glove-6B-300d</td><td>0.337</td><td>0.339</td><td>0.473</td><td>0.183</td><td>0.380</td></tr><tr><td>sgns-pw-bow-w2</td><td>0.263</td><td>0.259</td><td>0.315</td><td>0.098</td><td>0.226</td></tr><tr><td>sgns-pw-bow-w5</td><td>0.283</td><td>0.280</td><td>0.372</td><td>0.122</td><td>0.278</td></tr><tr><td>sgns-pw-deps</td><td>0.240</td><td>0.234</td><td>0.281</td><td>0.081</td><td>0.187</td></tr><tr><td>sgns-8b-bow-w2</td><td>0.322</td><td>0.324</td><td>0.452</td><td>0.169</td><td>0.358</td></tr><tr><td>sympat-500d</td><td>0.194</td><td>0.189</td><td>0.221</td><td>0.069</td><td>0.180</td></tr><tr><td>paragram-25d</td><td>0.222</td><td>0.217</td><td>0.309</td><td>0.092</td><td>0.198</td></tr><tr><td>paragram-300d</td><td>0.302</td><td>0.298</td><td>0.388</td><td>0.138</td><td>0.300</td></tr><tr><td>paragram+cf-300d</td><td>0.265</td><td>0.268</td><td>0.372</td><td>0.067</td><td>0.179</td></tr><tr><td>biskip-256d</td><td>0.255</td><td>0.253</td><td>0.283</td><td>0.091</td><td>0.212</td></tr><tr><td>bicca-512d</td><td>0.311</td><td>0.310</td><td>0.371</td><td>0.132</td><td>0.303</td></tr></table>",
"text": "The effects of reducing the search space V r to speed up the evaluation process. The numbers in parentheses are relative rankings of each model (1-8) according to the particular evaluation metric. The numbers in square brackets report the coverage of each model (the total number of USF queries is 4992).",
"html": null,
"num": null,
"type_str": "table"
}
}
}
}