{
"paper_id": "L16-1022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:03:56.103076Z"
},
"title": "Exploitation of Co-reference in Distributional Semantics",
"authors": [
{
"first": "Dominik",
"middle": [],
"last": "Schlechtweg",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Stuttgart",
"location": {
"settlement": "Stuttgart",
"country": "Germany"
}
},
"email": "dominik.schlechtweg@ling.uni-stuttgart.de"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The aim of distributional semantics is to model the similarity of the meaning of words via the words they occur with. Thereby, it relies on the distributional hypothesis implying that similar words have similar contexts. Deducing meaning from the distribution of words is interesting as it can be done automatically on large amounts of freely available raw text. It is because of this convenience that most current state-of-the-art-models of distributional semantics operate on raw text, although there have been successful attempts to integrate other kinds of-e.g., syntactic-information to improve distributional semantic models. In contrast, less attention has been paid to semantic information in the research community. One reason for this is that the extraction of semantic information from raw text is a complex, elaborate matter and in great parts not yet satisfyingly solved. Recently, however, there have been successful attempts to integrate a certain kind of semantic information, i.e., co-reference. Two basically different kinds of information contributed by co-reference with respect to the distribution of words will be identified. We will then focus on one of these and examine its general potential to improve distributional semantic models as well as certain more specific hypotheses.",
"pdf_parse": {
"paper_id": "L16-1022",
"_pdf_hash": "",
"abstract": [
{
"text": "The aim of distributional semantics is to model the similarity of the meaning of words via the words they occur with. Thereby, it relies on the distributional hypothesis implying that similar words have similar contexts. Deducing meaning from the distribution of words is interesting as it can be done automatically on large amounts of freely available raw text. It is because of this convenience that most current state-of-the-art-models of distributional semantics operate on raw text, although there have been successful attempts to integrate other kinds of-e.g., syntactic-information to improve distributional semantic models. In contrast, less attention has been paid to semantic information in the research community. One reason for this is that the extraction of semantic information from raw text is a complex, elaborate matter and in great parts not yet satisfyingly solved. Recently, however, there have been successful attempts to integrate a certain kind of semantic information, i.e., co-reference. Two basically different kinds of information contributed by co-reference with respect to the distribution of words will be identified. We will then focus on one of these and examine its general potential to improve distributional semantic models as well as certain more specific hypotheses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The original aim of distributional semantics is to model the similarity of the meaning-the semantics-of words. The basic assumption underlying this approach is that the semantic similarity of two words is a function of their contexts. That is, in other words, the meaning of words can be inferred from the frequencies of the words they immediately occur with and this can happen in such a way that the degree of similarity of those meanings can be measured. Semantic similarity is a key concept in the modeling of language and thus in computational linguistics. It is crucial in a variety of linguistic applications influencing our everyday life such as search engines. In distributional semantics we represent the meaning of a word by a vector. This vector is an abstraction over the contexts in which we find the particular word in question. Here lies the crux of the matter: known algorithms of distributional semantics consider only those contexts as relevant to the meaning of a target word which are found as contexts of the word-the particular combination of letters, the string-in question. However, there are other, particularly definable, contexts which encode some of the meaning of a target word. Consider the following text example:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "When Cesar took on the case of {{Fella}, {an adorable Jack Russell/Italian Greyhound mix}}, {the little dog}'s antics were about to get {his} owner slapped with an eviction notice. At the apartment complex where {Fella} resided, {he} barked nonstop the entire time {his} adoptive mom was at work, ceasing only once she came home at night. 1 Building up a vector representation for the meaning of dog, a standard algorithm of distributional semantics would browse through this text snippet, find the word dog just once, include the information from the immediate context (whose size would be defined previously) of this instance of dog into the vector representation and proceed. However, in the above it seems as though the contexts surrounding the noun phrases that refer to the same referent as the noun phrase which includes the word dog-such as Fella, his and he-are equally suited for contributing to the vector representation of the meaning of dog. Actually, the entire passage above is about a dog. Now, if we want distributional models to use this information, we must make it explicit on the distributional level. Consider the following paragraph where we try to make coreference information, which is implicit in the above text snippet, explicit:",
"cite_spans": [
{
"start": 339,
"end": 340,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "When Cesar took on the case of {Fella / an adorable... / the little dog / his / he}, {Fella / an adorable... / the little dog / his / he}'s antics were about to get {Fella / an adorable... / the little dog / his / he} owner slapped with an eviction notice. At the apartment complex where {Fella / an adorable... / the little dog / his / he} resided, {Fella / an adorable... / the little dog / his / he} barked nonstop the entire time {Fella / an adorable... / the little dog / his / he} adoptive mom was at work, ceasing only once she came home at night.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "We may explicate it by finding co-referent noun phrases and telling the model at every spot a referent is picked up, e.g., by he, by which words it is elsewhere referred to (in all of the co-referent phrases), e.g., by Fella and the little dog. These alternative words used to refer to the same entity can then be put in the context of the current word, he. By this, the model gets access to the previously latent information, to the latent contexts. Recently, there has already been an attempt to integrate co-reference information into models of distributional semantics (Sch\u00fctze and Adel, 2014 ). Yet, the authors use a different kind of information than the one presented above. While we use orthogonal co-reference information, i.e., the contexts of co-referent mentions, Sch\u00fctze and Adel use linear co-reference information, i.e., which mentions are actually co-referent, and consider co-referent mentions as mutual contexts. We will compare standard distributional semantic models to models incorporating the above-described distributionally disguised co-reference information. While additional information gained through new contexts-what we will call here Orthogonal Context Enrichment (OCE)-should help the models in general, it could be particularly helpful for models trained on small data sets, since here the relative enrichment is higher than with bigger training set. The same rationale also applies for rare words: if there are not enough contexts for a word in a training set, then additional contexts gained through OCE may have a stronger impact on the learning of the meaning of the word than for a very frequent word. Also, OCE is expected to have a particular impact on learning the meaning of nouns, because co-reference is a relation holding between noun phrases, and especially proper names, which are very frequent in co-reference chains. 
Apart from the general impact OCE may have on distributional semantic models, there are certain applications where we may imagine a particular benefit. OCE is, presumably, most helpful when raw text training data is limited, since here we cannot simply gather new contexts by scaling up the amount of training data. Hence, for tasks where we need to build many vectors from many small data points (instead of building one vector from large amounts of training data) we expect a particular benefit from OCE, since for every data point training data is limited. Such tasks occur, e.g., in word sense disambiguation, information retrieval, named-entity recognition or classification tasks such as spam detection. This makes OCE a widely applicable mechanism.",
"cite_spans": [
{
"start": 573,
"end": 596,
"text": "(Sch\u00fctze and Adel, 2014",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
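To make the notion of latent, "orthogonal" contexts concrete, the following stdlib-only Python sketch collects the context windows around every mention of a co-reference chain rather than only around the literal target word. The input representation (a token list plus mention positions) and all names are our own illustration, not the paper's implementation.

```python
def orthogonal_contexts(tokens, mention_positions, window=2):
    """Collect context words around every co-referent mention, so the
    contexts of 'Fella', 'he', 'his', etc. all contribute to a single
    referent's representation instead of only the literal target word's."""
    contexts = []
    for pos in mention_positions:
        lo, hi = max(0, pos - window), min(len(tokens), pos + window + 1)
        contexts.extend(tokens[i] for i in range(lo, hi) if i != pos)
    return contexts

tokens = "Fella chases a squirrel since he wants to eat it".split()
# positions of the mentions in the Fella-chain: "Fella" (0) and "he" (5)
print(orthogonal_contexts(tokens, [0, 5], window=1))
# ['chases', 'since', 'wants']
```

A standard model with the same window would only have seen `['chases']`; the contexts of the pronoun are the previously latent information.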
{
"text": "In order to give distributional semantic models access to the above-mentioned latent contexts we first need a coreference resolution for a corpus. For this we build on the Annotated English Gigaword v.5 corpus (Gigaword) annotated with syntactic and discourse structure and providing a co-reference resolution with quality of \"current state of the art\" (Napoles et al., 2012) . We work with a subpart of around 50% of the size of the whole corpus encompassing approximately 100 million sentences and 1.9 billion tokens after preprocessing. We then build a computational algorithm headSub replacing pronouns in the corpus with the head noun of the representative (most informative) element inside its co-reference chain, i.e., if the referent of a noun phrase n in the corpus is re-referred to via a pronoun p, then headSub replaces p with the syntactic head of n. By this we aim at enriching the corpus with more distributional information, as described above. 2 As an example the reader may consider the piece of text in (1).",
"cite_spans": [
{
"start": 353,
"end": 375,
"text": "(Napoles et al., 2012)",
"ref_id": "BIBREF9"
},
{
"start": 961,
"end": 962,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Procedure",
"sec_num": "2."
},
{
"text": "(1) Fella chases a squirrel, since he wants to eat it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Procedure",
"sec_num": "2."
},
{
"text": "The co-reference resolution for (1) shall be {(F ella, he), (a squirrel, it)} with the first elements of the chains being the representative element respectively. Now headSub will produce the following output text for (1):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Procedure",
"sec_num": "2."
},
{
"text": "(2) fella chases a squirrel since fella wants to eat squirrel",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Procedure",
"sec_num": "2."
},
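The substitution step that turns (1) into (2) can be sketched in a few lines of Python. This is a simplified illustration under our own assumptions (pronoun positions and representative heads are given as plain dictionaries), not the paper's actual implementation, which operates on the parsed Gigaword annotation.

```python
def head_sub(tokens, pronoun_chains, representative_heads):
    """Replace each pronominal mention by the head noun of the
    representative element of its co-reference chain; lowercase
    everything, matching the paper's example output (2)."""
    out = []
    for i, tok in enumerate(tokens):
        if i in pronoun_chains:  # token is a pronoun inside a chain
            out.append(representative_heads[pronoun_chains[i]].lower())
        else:
            out.append(tok.lower())
    return out

# Example (1): "Fella chases a squirrel, since he wants to eat it."
tokens = "Fella chases a squirrel since he wants to eat it".split()
pronoun_chains = {5: 0, 9: 1}          # "he" -> Fella-chain, "it" -> squirrel-chain
representative_heads = {0: "Fella", 1: "squirrel"}
print(" ".join(head_sub(tokens, pronoun_chains, representative_heads)))
# fella chases a squirrel since fella wants to eat squirrel
```

Restricting `representative_heads` to chains whose head is a proper noun would yield the headSub PN variant described below.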
{
"text": "We also experiment with different versions of headSub, e.g., we only insert proper nouns (headSub PN ) or lexical nouns (headSub Non -PN ). After this, we train a distributional semantic model on the enriched text (corresponding to (2) in the example above) and for comparison also on the original raw text (corresponding to (1)). For training we use the Skip-gram model from the word2vec toolkit (Mikolov et al., 2013a; Mikolov et al., 2013b) , which was found to be superior to standard count models (Baroni et al., 2014) . Since the relative performance of the different models was found to vary with the variation of these training parameters we vary the maximal window size considered as the context of a token and the minimum count of words considered during training in order to get a broader picture of the impact of OCE. Finally, the resulting vector spaces will be evaluated with respect to their capturing of semantic (attributional) similarity.",
"cite_spans": [
{
"start": 397,
"end": 420,
"text": "(Mikolov et al., 2013a;",
"ref_id": "BIBREF7"
},
{
"start": 421,
"end": 443,
"text": "Mikolov et al., 2013b)",
"ref_id": "BIBREF8"
},
{
"start": 502,
"end": 523,
"text": "(Baroni et al., 2014)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Procedure",
"sec_num": "2."
},
{
"text": "We evaluate the quality of the vector spaces on a variety of data sets using two basically different ways of evaluation: (i) human similarity judgments of word pairs and (ii) analogies. The word similarity judgments consist of pairs of words rated by human informants for their semantic similarity. Evaluation here means determining the Spearman's rank correlation coefficient between all similarity judgments for word pairs inside a test set and the cosine similarities of the respective word vectors. We will evaluate the vector spaces on a variety of human similarity judgment test sets including standard benchmarks such as WordSim353 (Finkelstein et al., 2002; Agirre et al., 2009 ) (which will allow us to distinguish between similarity and relatedness) 3 , SimLex-999 (Hill et al., 2014 ) (which will allow us to distinguish between different parts of speech, i.e., nouns, adjectives and verbs) and MEN (Bruni et al., 2014) plus a data set containing mainly rare words, which we will call Rare (Luong et al., 2013) , and a data set containing many proper nouns, which we will call MTurk (Radinsky et al., 2011) . These test sets are found to have a very different constitution concerning the classes of words they contain. While some contain mainly nouns, others contain mainly adjectives or verbs. Also, the use of proper nouns strongly varies; some do not even contain any proper nouns. As this study also indicates, this varying constitution of test sets may lead to very different results testing a model on them.",
"cite_spans": [
{
"start": 639,
"end": 665,
"text": "(Finkelstein et al., 2002;",
"ref_id": "BIBREF3"
},
{
"start": 666,
"end": 685,
"text": "Agirre et al., 2009",
"ref_id": "BIBREF0"
},
{
"start": 775,
"end": 793,
"text": "(Hill et al., 2014",
"ref_id": "BIBREF4"
},
{
"start": 910,
"end": 930,
"text": "(Bruni et al., 2014)",
"ref_id": "BIBREF2"
},
{
"start": 1001,
"end": 1021,
"text": "(Luong et al., 2013)",
"ref_id": "BIBREF6"
},
{
"start": 1094,
"end": 1117,
"text": "(Radinsky et al., 2011)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3."
},
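The similarity-judgment evaluation reduces to: score each pair by the cosine of its word vectors, then correlate those scores with the human ratings via Spearman's rank correlation. A stdlib-only Python sketch (tie-aware average ranks; the toy vectors and ratings are invented for illustration):

```python
from math import sqrt

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def _ranks(xs):
    # average ranks; tied values share the mean of their positions
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        for k in range(i, j + 1):
            r[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return r

def spearman(xs, ys):
    # Pearson correlation of the two rank vectors
    rx, ry = _ranks(xs), _ranks(ys)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sqrt(sum((a - mx) ** 2 for a in rx))
    sy = sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)

# Toy example: three word pairs, human ratings vs. model cosines
vectors = {"dog": [1.0, 0.2], "cat": [0.9, 0.3], "car": [0.1, 1.0], "bus": [0.3, 0.8]}
pairs = [("dog", "cat"), ("car", "bus"), ("dog", "car")]
human = [9.0, 8.5, 1.0]
model = [cosine(vectors[a], vectors[b]) for a, b in pairs]
print(round(spearman(human, model), 2))
```

In practice one would use a vetted implementation (e.g., scipy.stats.spearmanr); the sketch only shows what the reported correlation measures.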
{
"text": "The second evaluation procedure follows the idea that there are different kinds of similarities between words (Mikolov et al., 2013a) . Evaluation here consists of checking whether a respective vector space captures certain relations via comparing the distances of word pairs with the same relation. For example, the distance between great and greatest should be the same as between smart and smartest. Analogies provide a convenient way of evaluation, since it is easy to construe new sets testing very different relations involving many different kinds of words. We exploit this fact by construing three new analogy test sets which we provide as an additional resource to this study: one set consisting of word2vec's analogy questions (Mikolov et al., 2013a) reconstructed with rare words and two sets for proper nouns, one based on words and the other based on phrases (Dominik Schlechtweg, 2016). These new resources will help us in measuring the quality of a vector space with respect to its ability to capture the similarity properties of rare words and proper nouns, on which we assumed OCE to have a particular impact. Additionally, we evaluate the vector spaces on word2vec's analogy questions.",
"cite_spans": [
{
"start": 110,
"end": 133,
"text": "(Mikolov et al., 2013a)",
"ref_id": "BIBREF7"
},
{
"start": 737,
"end": 760,
"text": "(Mikolov et al., 2013a)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3."
},
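The analogy check itself is conventionally done with the vector-offset method of Mikolov et al. (2013a): for a question a : b :: c : ?, compute b − a + c and return the vocabulary word (excluding the question words) whose vector is closest by cosine. A toy Python sketch with made-up 2-d vectors:

```python
from math import sqrt

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def answer_analogy(a, b, c, space):
    """a : b :: c : ?  via the vector offset b - a + c."""
    target = [vb - va + vc for va, vb, vc in zip(space[a], space[b], space[c])]
    candidates = (w for w in space if w not in (a, b, c))
    return max(candidates, key=lambda w: cosine(target, space[w]))

# Made-up toy space in which the "-est" relation is roughly the offset [0, 1]
space = {
    "great":    [1.0, 0.0],
    "greatest": [1.0, 1.0],
    "smart":    [2.0, 0.0],
    "smartest": [2.0, 1.0],
    "smarter":  [2.0, 0.5],
}
print(answer_analogy("great", "greatest", "smart", space))
# smartest
```

A question counts as correct exactly when the predicted word matches the held-out fourth word, which is how the accuracy values reported below are computed.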
{
"text": "The test set for rare words-exemplified in Table 1 -was construed on the basis of word2vec's analogy questions. We first excluded the (Common) Capital city task and the Man-Woman (family) task from consideration, since the former necessarily contains frequent words, while for the latter we could not imagine enough rare words and did not find a way to search for it in a corpus. For other tasks this was possible. For the City-in-state task we combined English names of Chinese districts with names of their capitals (instead of American states and their capitals in the original file). For the remaining tasks we scanned approximately 1.3 million sentences from the Annotated English Gigaword v.5 corpus. For the Currency task, for instance, we searched for words with the NER-tag MONEY, or for the Adjective to adverb task we searched for words with the respective POStag. Then we worked manually through the rarest words of the respective category (those with frequency below 10) and selected words that seemed well-suited because they were of the specific type required for the task. The selected item, say ghastly, was then combined with the related element according to the target task; for the comparative task this would be ghastlier. This pair was then, in turn, combined with all of the pairs from the same relation in the word2vec questions-words file, and vice versa. By this procedure we always combine one pair which was extracted by us with one pair from the questions-words file, i.e., one rare pair with a more frequent one. In this way we want to avoid the \"adding up\" of the rareness of the word pairs which otherwise may lead to an extreme drop of performance on the tasks. Moreover, this leads to the effect that we get a large number of questions. In this way we get a total of 29,150 questions, nearly 10,000 more than in the questions-words file. 
4 We provide this data set as an additional resource to this paper, since it might be a complementary utility to the word2vec questions-words file for further research. ",
"cite_spans": [
{
"start": 1872,
"end": 1873,
"text": "4",
"ref_id": null
}
],
"ref_spans": [
{
"start": 43,
"end": 50,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Rare Words",
"sec_num": "3.1."
},
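The cross-combination step, as we read the description above, pairs every extracted rare pair with every same-relation pair from the questions-words file, in both orders. A short sketch (the example pairs are ours; the exact combination scheme is our reading of the text):

```python
from itertools import product

def cross_questions(rare_pairs, frequent_pairs):
    """Build analogy questions a : b :: c : d by combining each rare
    pair with each frequent pair of the same relation, in both orders,
    so that rareness never 'adds up' within a single question."""
    questions = []
    for (a, b), (c, d) in product(rare_pairs, frequent_pairs):
        questions.append((a, b, c, d))   # rare : rare :: frequent : frequent
        questions.append((c, d, a, b))   # frequent : frequent :: rare : rare
    return questions

rare = [("ghastly", "ghastlier")]
frequent = [("great", "greater"), ("smart", "smarter")]
qs = cross_questions(rare, frequent)
print(len(qs))  # 1 rare pair x 2 frequent pairs x 2 orders = 4 questions
```

This multiplicative pairing is what makes the resulting set (29,150 questions) larger than the original questions-words file.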
{
"text": "In order to evaluate the models specifically with respect to the similarity properties of proper nouns we chose four relations involving proper nouns: the leader-country relation and the person-sex relation shall measure similarity properties of names of humans, while the relations buildingcity and river-country shall measure similarity properties of names of things. The word pairs contain mostly names of well-known entities, such as former or present state leaders, nations, presently famous or historically important people, buildings, cities and rivers. The structure of the questionswords-proper-nouns file containing a total of 2,746 questions is depicted in Table 2 . The questions-phrases-propernouns file based on phrases has the same structure. Both files are provided as an additional resource to this paper. 5 ",
"cite_spans": [],
"ref_spans": [
{
"start": 668,
"end": 675,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Proper Nouns",
"sec_num": "3.2."
},
{
"text": "A preliminary study showed varying results across the different tasks we evaluated on. Yet, on analogies one OCEmodel, i.e., that is trained on an enriched text, outperformed the other OCE-models on most evaluation tasks and-at least on analogies-also the baseline model, trained on the original raw text. Surprisingly, this was headSub PN , inserting only proper nouns for pronouns, which we initially did not expect to contribute with so much distributional information. 6 For this, headSub PN was chosen for a deeper analysis. This is not to say that the other OCEmodels (headSub and headSub Non -PN ) are not expected to contribute to the learning of the meaning of certain kinds of words. But, with the present means at hand, focusing on one model which showed the clearest results seemed to be the best option. The reader may note, however, that by this restriction and also by the specific restrictions resulting from the mechanism of headSub and its derivatives we presumably only exploit a small share of the full potential of OCE.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Preliminary Study",
"sec_num": "4.1."
},
{
"text": "The reader may consider Table 3 for the results of training skip-gram on the output of headSub PN and raw text (the baseline) respectively with varying training parameters. The models were trained 10 times each. In every iteration the performance on the different test sets was computed at the end of the training period. Here we present the average performance over all iterations. An independent twosample t-test was performed for each set of training parameters between the results of the models trained on the output of headSub PN and the results of the baseline models in order to assure statistical significance of the results. 7 The resulting p-value is given for each test set. Performance values are marked boldly where for a certain combination of training parameters the models trained on one of the two texts (headSub PN or baseline) outperformed the models trained on the other and the difference is statistically significant, i.e., the p-value is below 0.01.",
"cite_spans": [
{
"start": 634,
"end": 635,
"text": "7",
"ref_id": null
}
],
"ref_spans": [
{
"start": 24,
"end": 31,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Deeper Analysis: Inserting Proper Nouns",
"sec_num": "4.2."
},
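The significance test over the 10 runs per configuration can be sketched as follows. The paper says only "independent two-sample t-test", so whether a pooled-variance or unequal-variance form was used is unspecified; this stdlib-only Python sketch computes Welch's unequal-variance statistic, and the p-value would then be read from a t distribution with the returned degrees of freedom (e.g., via scipy.stats, omitted here). The per-run scores are invented.

```python
from math import sqrt
from statistics import mean, variance

def welch_t(xs, ys):
    """Welch's two-sample t statistic and Welch-Satterthwaite
    degrees of freedom (unequal variances assumed)."""
    nx, ny = len(xs), len(ys)
    vx, vy = variance(xs), variance(ys)   # sample variances
    se2 = vx / nx + vy / ny               # squared standard error of the mean difference
    t = (mean(xs) - mean(ys)) / sqrt(se2)
    df = se2 ** 2 / ((vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1))
    return t, df

# Invented scores for 10 training runs of each model on one test set
head_sub_pn_runs = [61.2, 61.5, 61.1, 61.4, 61.3, 61.6, 61.2, 61.5, 61.4, 61.3]
baseline_runs    = [60.1, 60.4, 60.2, 60.3, 60.5, 60.2, 60.1, 60.4, 60.3, 60.2]
t, df = welch_t(head_sub_pn_runs, baseline_runs)
print(t > 0, round(df))
```

Averaging over runs and testing per parameter setting, as done here, guards against attributing noise from word2vec's random initialization to the enrichment.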
{
"text": "Generally, we do not find very strong performance differences. The strongest are around 2%. Yet, the first striking observation considering Table 3 is the different performance of the models on similarity judgments and analogies: while the models trained on raw text (baselinemodels) have significant advantages on many similarity judgment test sets, the situation looks the other way round on analogies where the models trained on the output of headSub PN (headSub PN -models) have significant advantages. Why is that?",
"cite_spans": [],
"ref_spans": [
{
"start": 140,
"end": 147,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "General Observations",
"sec_num": "4.2.1."
},
{
"text": "We may find the reason for the different performances on the two evaluation methodologies in the distinction between similarity and relatedness. While certain similarity judgment test sets particularly aim at measuring similarity and not relatedness (SimLex-999) or distinguish between similarity and relatedness (WordSim353), the analogy test sets-at least in the particular form at hand here-seem to be more suited to measure relatedness than similarity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity vs. Relatedness",
"sec_num": "4.2.2."
},
{
"text": "Two words are related if they \"are associated but not [necessarily] actually similar (Freud, psychology)\" (Hill et al., 2014) or if they \"are connected by broader semantic relations\" (Bruni et al., 2014) . The latter is exactly the way in which the analogy test sets were construed, i.e., words with the same relation are checked for equal distances in vector space. (The reader may note that these relations have very different natures.) Hence, we can expect the analogy test sets used here to be a better measure for relatedness rather than the more narrow notion of similarity. This is also supported by the performance of the models on WordSim353. Though not statistically significant yet, we observe advantages of the baseline-models on the similarity measure, while we observe advantages of headSub PNmodels on the relatedness measure. If the analogy test sets used here are more suited to measure relatedness and the similarity judgment test sets rather measure similarity, the different performances on these two methodologies are explained by the advantages of the headSub PN -models in capturing relatedness. 8",
"cite_spans": [
{
"start": 106,
"end": 125,
"text": "(Hill et al., 2014)",
"ref_id": "BIBREF4"
},
{
"start": 183,
"end": 203,
"text": "(Bruni et al., 2014)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity vs. Relatedness",
"sec_num": "4.2.2."
},
{
"text": "The advantage of headSub PN -models for one set of parameters on the noun subset of SimLex-999 indicates thatunder certain circumstances-OCE might be helpful for learning the meaning of nouns. As we already mentioned above, this would not be surprising in general, since co-reference-and thus the mechanism of headSub PNmainly involves insertion of nouns. However, for the particular model used here, i.e., headSub PN , this effect is indeed surprising, since it only inserts proper nouns, but these are not part of the SimLex-999 noun-subset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Nouns",
"sec_num": "4.2.3."
},
{
"text": "The results for verbs on similarity judgments are clear: with all training parameter sets we have significant advantages of the baseline-models. On analogies, though, for tasks involving verbs such as Past tense or Present participle the baseline is significantly outperformed for different parameter sets and also on rare words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Verbs",
"sec_num": "4.2.4."
},
{
"text": "While for adjectives we have no significant differences on similarity judgments, we do find significant differences on the Adjective to adverb task for different parameter sets and for frequent as well as for rare words in favor of the headSub PN -models. However, for the Nationality adjective task the baseline still significantly outperforms the OCEmodel on one set of training parameters. Table 3 : Performance of skip-gram model trained on output of headSub PN and raw text (the baseline) with varying training parameters. For analogies accuracy values are given, while for similarity judgments we give the Spearman's rank correlation coefficient (multiplied by 100 for better comparison with the accuracy values). Tasks where models show highly different coverage of the data are excluded (marked by \"d. c.\"). The person-sex task was subsequently excluded because errors in the test set led to biased results (marked by \"err.\").",
"cite_spans": [],
"ref_spans": [
{
"start": 393,
"end": 400,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Adjectives",
"sec_num": "4.2.5."
},
{
"text": "words (Word2vec rare) the baseline is significantly outperformed for one parameter set, while the performance for the other parameter sets confirms this tendency. The overall advantage of the headSub PN -models on Word2vec rare is comparable to the advantage on the original word2vec questions. A particular improvement for rare words with OCE can thus not be confirmed here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adjectives",
"sec_num": "4.2.5."
},
{
"text": "As we use an OCE-model inserting only proper nouns we would expect a particular effect for proper nouns. Also, because animate entities are more likely to be re-referred to we would expect a stronger effect for proper names of human entities. There are indeed significant advantages of the headSub PN -models when it comes to test tasks involving proper names of human entities. 9 This is indicated here by their performance on leader-country: the baseline is significantly outperformed for all training parameters. Further, for most of the tasks in the word2vec analogy set involving proper nouns, such as Common capital city and All capital cities the baseline is outperformed significantly for one set of training parameters, which is supported by the performances with the other parameters. Also, for similarity judgments these observations are confirmed: on MTurk, containing a comparably high number of proper nouns, the baseline is outperformed significantly for one parameter set, where this tendency is confirmed for the other sets. Further, for the subset of proper nouns from MTurk there is-though not clearly significant-a tendency towards advantages of the headSub PN -models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proper Nouns",
"sec_num": "4.2.7."
},
{
"text": "The only task containing pronouns is Man-Woman. Since headSub PN deletes many pronouns we may expect a particular effect here. This is indeed the case. For one set of training parameters the baseline-models significantly outperform the headSub PN -models, which is also supported by the performances on the other parameter sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pronouns",
"sec_num": "4.2.8."
},
{
"text": "In the above we found that one particular way of carrying out OCE, i.e., replacing pronouns with the syntactic head of the proper nouns they are co-referent with, de facto improves the performance of distributional models on a wide range of analogy tasks and on certain test sets of similarity judgments. We find the clearest results for proper nouns (more specifically, for proper names of human entities), which is what we expected, since only proper nouns were inserted by headSub PN . Yet the findings indicate that OCE may also have the potential to improve the learning of the semantic similarity (or perhaps relatedness) properties of nouns in general, of adjectives, and of verbs. However, we also found significant disadvantages of the OCE-model used here, especially on test sets of similarity judgments. The initial hypothesis that OCE helps in learning the meaning of rare words could not be confirmed. Whether the different performances on analogies and similarity judgments are indeed due to the distinction between relatedness and similarity has to be examined more deeply. The reader may, however, note that a major downside of OCE is its reliance on co-reference resolution, which makes it a computationally costly, supervised and language-dependent approach in contrast to standard models of distributional semantics. It is also strongly dependent on the quality of co-reference resolution, which is at best around 60% (F1) for present co-reference resolution algorithms (Lee et al., 2011). In the end, the results obtained above also have to be checked, not only because they vary across tasks, but also because the operation carried out by headSub and its derivatives may trigger certain side effects that may in turn influence the performance of the resulting vector space models. 
10 In order to exclude these factors, and in order to exploit the full potential of OCE, co-reference information shall be integrated directly into the training process of a standard count model. That is, we will refrain from first integrating co-reference information into raw text and then performing training. Instead, we will build a standard count model of distributional semantics that is sensitive to co-reference information by directly accessing the contexts of co-referent mentions when encountering a mention which is part of a co-reference chain. Further, a qualitative analysis of the resulting vector spaces has to be carried out in order to explain how the different performances caused by OCE come about. 11 This also raises the question of whether we can regard OCE as yielding just more data of the same kind as the linear distribution of words, or whether we may gather new, otherwise possibly rare, kinds of information. Also, we may evaluate the effect of OCE on smaller data sets. Beyond that, the best way to make orthogonal information distributionally explicit has to be examined, i.e., we have to find out which sets of words to replace, and which to insert, yield the best results for a certain task; recall that the OCE-model presented here is restricted in many ways and presumably only exploits a small share of the potential OCE has. Finally, OCE should be evaluated directly in relevant applications such as information retrieval, classification tasks or sense disambiguation. 10 Some of these side effects are window effects. By substituting one or more tokens we may \"push out\" or \"pull in\" other tokens from the training window. Pushing out may happen, for instance, when we insert more than one token in place of a single token. Pulling in may happen when a substitution was carried out in the context of a token considered during training but the inserted word was deleted during training, for instance by subsampling or because of the minimum word count.",
"cite_spans": [
{
"start": 1496,
"end": 1514,
"text": "(Lee et al., 2011)",
"ref_id": "BIBREF5"
},
{
"start": 1819,
"end": 1821,
"text": "10",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5."
},
{
"text": "11 If we assume that the orthogonal distribution of words (i.e., their distribution in the contexts of co-referent words) is generally (on average) the same as their linear distribution, then the performance improvements are explainable simply by the fact that we gain more data (more contexts), which conveys no information that could not, in principle, be gained by considering more linear contexts. Of course, the rarer the word, the more linear contexts we would have to consider (on average) in order to find the information we search for. Whether this assumption is indeed valid has to be examined in the future. Only if we assumed that certain words or constructions tended to co-occur with pronouns rather than with co-referent richer descriptions could we say that there is a new, complementary type of information gained through OCE which explains differences in performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5."
},
{
"text": "Cesar Millan. Cesar's Way. <http://www.cesarsway.com/dogbehavior/barking/What-Your-Dogs-Bark-is-Telling-You>. Last checked on March 10, 2016.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The reader may note that a similar idea was already applied for sentiment analysis in (Pontiveros, 2012). 3 Note that we will exclude the word pairs containing proper nouns from WordSim353, since we do not want effects concerning proper nouns to interfere with effects concerning the distinction between similarity and relatedness.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Note that by allowing two rare word pairs in the same question we could further increase this number without any additional effort. 5 The reader may note that there is a bias towards male entities in the files. However, in an updated version this bias shall be eliminated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "It is not yet clear what the reason for this effect is. The fact that the quality of co-reference resolution is better for proper nouns than for other kinds of words may have an influence. 7 A normal distribution of the results was assumed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "However, it is not clear why the baseline has clear advantages on MEN, which the authors explicitly claim to measure relatedness. Note, however, that this does not mean that the test set does not measure similarity at all. Rather, it means that it measures both, since relatedness covers similarity. Thus, a possible explanation would be that there is still a focus on genuine similarity judgments in this test set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The results on the person-sex task had to be excluded because they were biased due to errors in the test set. Yet, in previous experiments certain OCE-models consistently outperformed baseline models on this task, in particular with respect to feminine entities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "I thank Sandra Herrmann, Sascha Schlechtweg, Stefanie Eckmann, Tatjana Schlechtweg and Veronika Vasileva for intensive last-minute help.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": "6."
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A study on similarity and relatedness using distributional and wordnet-based approaches",
"authors": [
{
"first": "E",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Alfonseca",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Kravalova",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Pa\u015fca",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Soroa",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, NAACL '09",
"volume": "",
"issue": "",
"pages": "19--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Agirre, E., Alfonseca, E., Hall, K., Kravalova, J., Pa\u015fca, M., and Soroa, A. (2009). A study on similarity and relatedness using distributional and wordnet-based ap- proaches. In Proceedings of Human Language Technolo- gies: The 2009 Annual Conference of the North Ameri- can Chapter of the Association for Computational Lin- guistics, NAACL '09, pages 19-27, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Don't count, predict! A systematic comparison of contextcounting vs. context-predicting semantic vectors",
"authors": [
{
"first": "M",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Dinu",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Kruszewski",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "238--247",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Baroni, M., Dinu, G., and Kruszewski, G. (2014). Don't count, predict! A systematic comparison of context- counting vs. context-predicting semantic vectors. In Pro- ceedings of the 52nd Annual Meeting of the Associa- tion for Computational Linguistics, pages 238-247, Bal- timore, Maryland, USA.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Multimodal distributional semantics",
"authors": [
{
"first": "E",
"middle": [],
"last": "Bruni",
"suffix": ""
},
{
"first": "N",
"middle": [
"K"
],
"last": "Tran",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Artificial Intelligence Research",
"volume": "49",
"issue": "",
"pages": "1--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bruni, E., Tran, N. K., and Baroni, M. (2014). Multi- modal distributional semantics. Journal of Artificial In- telligence Research, 49:1-47.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Placing search in context: The concept revisited",
"authors": [
{
"first": "L",
"middle": [],
"last": "Finkelstein",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Gabrilovich",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Matias",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Rivlin",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Solan",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Wolfman",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Ruppin",
"suffix": ""
}
],
"year": 2002,
"venue": "ACM Transactions on Information Systems",
"volume": "20",
"issue": "1",
"pages": "116--131",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Finkelstein, L., Gabrilovich, E., Matias, Y., Rivlin, E., Solan, Z., Wolfman, G., and Ruppin, E. (2002). Placing search in context: The concept revisited. ACM Transac- tions on Information Systems, 20(1):116-131.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Simlex-999: Evaluating semantic models with (genuine) similarity estimation",
"authors": [
{
"first": "F",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hill, F., Reichart, R., and Korhonen, A. (2014). Simlex- 999: Evaluating semantic models with (genuine) simi- larity estimation. CoRR, abs/1408.3456.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Stanford's multipass sieve coreference resolution system at the conll-2011 shared task",
"authors": [
{
"first": "H",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Peirsman",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Chambers",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task",
"volume": "",
"issue": "",
"pages": "28--34",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lee, H., Peirsman, Y., Chang, A., Chambers, N., Sur- deanu, M., and Jurafsky, D. (2011). Stanford's multi- pass sieve coreference resolution system at the conll- 2011 shared task. In Proceedings of the Fifteenth Con- ference on Computational Natural Language Learning: Shared Task, pages 28-34, Portland, Oregon, USA. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Better word representations with recursive neural networks for morphology",
"authors": [
{
"first": "M.-T",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luong, M.-T., Socher, R., and Manning, C. D. (2013). Bet- ter word representations with recursive neural networks for morphology. In CoNLL, Sofia, Bulgaria.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "T",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of Workshop at ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikolov, T., Chen, K., Corrado, G., and Dean, J. (2013a). Efficient estimation of word representations in vector space. In Proceedings of Workshop at ICLR.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "T",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikolov, T., Sutskever, I., Chen, K., Corrado, G., and Dean, J. (2013b). Distributed representations of words and phrases and their compositionality. In Proceedings of NIPS.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Annotated gigaword",
"authors": [
{
"first": "C",
"middle": [],
"last": "Napoles",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Gormley",
"suffix": ""
},
{
"first": "B",
"middle": [
"V"
],
"last": "Durme",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Webscale Knowledge Extraction",
"volume": "",
"issue": "",
"pages": "95--100",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Napoles, C., Gormley, M., and Durme, B. V. (2012). An- notated gigaword. In Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Web- scale Knowledge Extraction, pages 95-100.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Opinion mining from a large corpora of natural language reviews",
"authors": [
{
"first": "B",
"middle": [
"B F"
],
"last": "Pontiveros",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pontiveros, B. B. F. (2012). Opinion mining from a large corpora of natural language reviews. Master's thesis, LSI, UPC.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A word at a time: computing word relatedness using temporal semantic analysis",
"authors": [
{
"first": "K",
"middle": [],
"last": "Radinsky",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Agichtein",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Gabrilovich",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Markovitch",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 20th International Conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Radinsky, K., Agichtein, E., Gabrilovich, E., and Markovitch, S. (2011). A word at a time: comput- ing word relatedness using temporal semantic analysis. In Proceedings of the 20th International Conference on World Wide Web, WWW 2011, Hyderabad, India.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Using mined coreference chains as a resource for a semantic task",
"authors": [
{
"first": "H",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Adel",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1447--1452",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sch\u00fctze, H. and Adel, H. (2014). Using mined coreference chains as a resource for a semantic task. In Proceedings of the 2014 Conference on Empirical Methods in Natu- ral Language Processing (EMNLP), pages 1447-1452, Doha, Qatar. ACL.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Analogy questions involving rare words and proper nouns for evaluation of vector space models of distributional semantics",
"authors": [
{
"first": "Dominik",
"middle": [],
"last": "Schlechtweg",
"suffix": ""
}
],
"year": 2016,
"venue": "Dominik Schlechtweg",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dominik Schlechtweg. (2016). Analogy questions involv- ing rare words and proper nouns for evaluation of vec- tor space models of distributional semantics. Dominik Schlechtweg, distributed via ELRA, 1.0.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null,
"text": "Structure of the questions-words-rare file: word2vec's analogy questions reconstructed with rare words."
},
"TABREF3": {
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null,
"text": "Structure of the questions-words-proper-nouns file."
},
"TABREF4": {
"content": "<table><tr><td rowspan=\"2\">Test set</td><td colspan=\"3\">min 50 , window 5</td><td colspan=\"3\">min 25 , window 5</td><td colspan=\"3\">min 50 , window 3</td></tr><tr><td>headSub PN</td><td>baseline</td><td>p-value</td><td>headSub PN</td><td>baseline</td><td>p-value</td><td>headSub PN</td><td>baseline</td><td>p-value</td></tr><tr><td>SIMILARITY JUDGMENTS</td><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>WordSim353</td><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>similarity</td><td>70.47</td><td>71.20</td><td>0.012</td><td>70.79</td><td>71.30</td><td>0.020</td><td>71.89</td><td>72.24</td><td>0.159</td></tr><tr><td>relatedness</td><td>59.65</td><td>59.17</td><td>0.044</td><td>59.55</td><td>59.19</td><td>0.104</td><td>59.32</td><td>58.73</td><td>0.272</td></tr><tr><td>SimLex-999</td><td>41.43</td><td>42.14</td><td>&lt; 0.01</td><td>41.66</td><td>42.28</td><td>&lt; 0.01</td><td>43.17</td><td>43.40</td><td>0.036</td></tr><tr><td>nouns</td><td>43.04</td><td>43.08</td><td>0.705</td><td>43.39</td><td>43.48</td><td>0.362</td><td>44.17</td><td>43.89</td><td>&lt; 0.01</td></tr><tr><td>adjectives</td><td>57.62</td><td>58.21</td><td>0.021</td><td>57.87</td><td>58.44</td><td>0.057</td><td>59.21</td><td>59.02</td><td>0.592</td></tr><tr><td>verbs</td><td>26.91</td><td>29.59</td><td>&lt; 0.01</td><td>27.02</td><td>29.12</td><td>&lt; 0.01</td><td>30.31</td><td>32.06</td><td>&lt; 0.01</td></tr><tr><td>MEN</td><td>72.28</td><td>72.60</td><td>&lt; 0.01</td><td>72.36</td><td>72.68</td><td>&lt; 0.01</td><td>72.89</td><td>73.08</td><td>&lt; 0.01</td></tr><tr><td>Rare</td><td>51.87</td><td>51.70</td><td>0.158</td><td>48.82</td><td>48.78</td><td>0.754</td><td>52.03</td><td>52.03</td><td>0.954</td></tr><tr><td>MTurk</td><td>68.74</td><td>68.71</td><td>0.896</td><td>68.73</td><td>68.13</td><td>0.060</td><td>68.22</td><td>67.46</td><td>&lt; 0.01</td></tr><tr><td>proper nouns</td><td>69.23</td><td>69.23</td><td>0.994</td><td>68.73</td><td>67.38</td><td>0.065</td><td>69.76</td><td>68.11</td><td>0.011</td></tr><tr><td>ANALOGIES</td><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>Word2vec</td><td>67.39</td><td>67.04</td><td>&lt; 0.01</td><td>66.49</td><td>66.11</td><td>&lt; 0.01</td><td>68.14</td><td>67.98</td><td>0.225</td></tr><tr><td>Common capital city</td><td>90.61</td><td>89.64</td><td>0.043</td><td>89.64</td><td>88.22</td><td>&lt; 0.01</td><td>90.97</td><td>90.55</td><td>0.224</td></tr><tr><td>All capital cities</td><td>90.16</td><td>89.89</td><td>0.213</td><td>88.80</td><td>87.97</td><td>&lt; 0.01</td><td>89.96</td><td>89.85</td><td>0.561</td></tr><tr><td>Currency</td><td>15.75</td><td>16.10</td><td>0.172</td><td>d. c.</td><td>d. c.</td><td>-</td><td>17.64</td><td>17.79</td><td>0.746</td></tr><tr><td>City-in-state</td><td>57.13</td><td>56.55</td><td>0.05</td><td>55.81</td><td>55.91</td><td>0.748</td><td>58.85</td><td>59.31</td><td>0.165</td></tr><tr><td>Man-Woman</td><td>72.93</td><td>73.95</td><td>0.097</td><td>71.13</td><td>73.44</td><td>&lt; 0.01</td><td>73.60</td><td>74.72</td><td>0.127</td></tr><tr><td>Adjective to adverb</td><td>25.39</td><td>24.13</td><td>&lt; 0.01</td><td>24.33</td><td>23.11</td><td>&lt; 0.01</td><td>22.94</td><td>21.98</td><td>0.014</td></tr><tr><td>Opposite</td><td>34.32</td><td>34.40</td><td>0.883</td><td>33.65</td><td>33.13</td><td>0.185</td><td>35.64</td><td>35.69</td><td>0.925</td></tr><tr><td>Comparative</td><td>86.59</td><td>86.79</td><td>0.559</td><td>86.34</td><td>85.92</td><td>0.174</td><td>88.00</td><td>87.76</td><td>0.428</td></tr><tr><td>Superlative</td><td>57.18</td><td>56.48</td><td>0.261</td><td>56.00</td><td>54.16</td><td>0.023</td><td>59.42</td><td>58.49</td><td>0.340</td></tr><tr><td>Present Participle</td><td>61.73</td><td>61.46</td><td>0.598</td><td>63.10</td><td>61.35</td><td>&lt; 0.01</td><td>62.40</td><td>62.56</td><td>0.775</td></tr><tr><td>Nationality adjective</td><td>87.86</td><td>87.78</td><td>0.573</td><td>87.81</td><td>88.27</td><td>&lt; 0.01</td><td>89.17</td><td>89.27</td><td>0.599</td></tr><tr><td>Past tense</td><td>62.92</td><td>62.00</td><td>0.036</td><td>62.96</td><td>61.66</td><td>&lt; 0.01</td><td>64.00</td><td>64.74</td><td>0.030</td></tr><tr><td>Plural nouns</td><td>68.53</td><td>68.46</td><td>0.872</td><td>66.98</td><td>66.78</td><td>0.718</td><td>69.11</td><td>69.07</td><td>0.925</td></tr><tr><td>Plural verbs</td><td>49.31</td><td>48.37</td><td>0.136</td><td>48.77</td><td>47.70</td><td>0.022</td><td>50.07</td><td>49.41</td><td>0.120</td></tr><tr><td>Word2vec rare</td><td>36.96</td><td>36.77</td><td>0.217</td><td>35.14</td><td>34.54</td><td>&lt; 0.01</td><td>37.69</td><td>37.41</td><td>0.054</td></tr><tr><td>Capital city</td><td>76.49</td><td>76.06</td><td>0.368</td><td>75.01</td><td>74.03</td><td>0.303</td><td>75.56</td><td>75.83</td><td>0.481</td></tr><tr><td>Currency</td><td>12.63</td><td>12.87</td><td>0.279</td><td>d. c.</td><td>d. c.</td><td>-</td><td>14.75</td><td>14.71</td><td>0.895</td></tr><tr><td>Chinese city-in-state</td><td>98.00</td><td>97.70</td><td>0.254</td><td>98.28</td><td>98.12</td><td>0.614</td><td>96.12</td><td>95.50</td><td>0.253</td></tr><tr><td>Adjective to adverb</td><td>18.02</td><td>17.33</td><td>&lt; 0.01</td><td>15.32</td><td>15.17</td><td>0.478</td><td>16.73</td><td>16.80</td><td>0.770</td></tr><tr><td>Opposite</td><td>13.50</td><td>13.66</td><td>0.722</td><td>11.76</td><td>11.49</td><td>0.282</td><td>13.94</td><td>13.40</td><td>0.189</td></tr><tr><td>Comparative</td><td>65.24</td><td>65.45</td><td>0.801</td><td>65.41</td><td>65.38</td><td>0.969</td><td>68.32</td><td>69.50</td><td>0.104</td></tr><tr><td>Superlative</td><td>41.49</td><td>42.20</td><td>0.562</td><td>42.43</td><td>41.37</td><td>0.426</td><td>48.88</td><td>47.38</td><td>0.115</td></tr><tr><td>Present Participle</td><td>42.62</td><td>42.60</td><td>0.972</td><td>41.21</td><td>40.22</td><td>0.018</td><td>43.43</td><td>43.71</td><td>0.519</td></tr><tr><td>Nationality adjective</td><td>75.86</td><td>75.87</td><td>0.991</td><td>77.10</td><td>75.27</td><td>0.013</td><td>79.34</td><td>77.93</td><td>0.029</td></tr><tr><td>Past tense</td><td>37.37</td><td>36.68</td><td>0.075</td><td>36.51</td><td>34.70</td><td>&lt; 0.01</td><td>38.34</td><td>37.31</td><td>0.012</td></tr><tr><td>Plural nouns</td><td>41.10</td><td>41.56</td><td>0.372</td><td>38.59</td><td>38.29</td><td>0.528</td><td>41.82</td><td>41.29</td><td>0.144</td></tr><tr><td>Plural verbs</td><td>30.04</td><td>29.66</td><td>0.295</td><td>28.96</td><td>29.01</td><td>0.942</td><td>31.19</td><td>31.05</td><td>0.737</td></tr><tr><td>Proper nouns</td><td>24.11</td><td>23.31</td><td>0.053</td><td>22.60</td><td>22.38</td><td>0.426</td><td>25.72</td><td>24.75</td><td>0.061</td></tr><tr><td>leader-country</td><td>22.65</td><td>20.99</td><td>&lt; 0.01</td><td>22.21</td><td>20.33</td><td>&lt; 0.01</td><td>26.14</td><td>24.30</td><td>&lt; 0.01</td></tr><tr><td>person-sex</td><td>err.</td><td>err.</td><td>-</td><td>err.</td><td>err.</td><td>-</td><td>err.</td><td>err.</td><td>-</td></tr><tr><td>building-city</td><td>30.69</td><td>30.97</td><td>0.824</td><td>27.64</td><td>28.61</td><td>0.392</td><td>29.58</td><td>30.70</td><td>0.272</td></tr><tr><td>river-country</td><td>23.68</td><td>23.74</td><td>0.945</td><td>21.52</td><td>22.91</td><td>0.044</td><td>23.57</td><td>23.08</td><td>0.546</td></tr></table>",
"type_str": "table",
"html": null,
"num": null,
"text": "Results of headSub PN and the baseline, with p-values, for the settings min 50 / window 5, min 25 / window 5, and min 50 / window 3."
}
}
}
}