{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:29:12.492827Z"
},
"title": "Eliciting explicit knowledge from domain experts in direct intrinsic evaluation of word embeddings for specialized domains",
"authors": [
{
"first": "Goya",
"middle": [],
"last": "Van Boven",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Utrecht University",
"location": {}
},
"email": "j.g.vanboven@students.uu.nl"
},
{
"first": "Jelke",
"middle": [],
"last": "Bloem",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Amsterdam",
"location": {}
},
"email": "j.bloem@uva.nl"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We evaluate the use of direct intrinsic word embedding evaluation tasks for specialized language. Our case study is philosophical text: human expert judgements on the relatedness of philosophical terms are elicited using a synonym detection task and a coherence task. Uniquely for our task, experts must rely on explicit knowledge and cannot use their linguistic intuition, which may differ from that of the philosopher. We find that inter-rater agreement rates are similar to those of more conventional semantic annotation tasks, suggesting that these tasks can be used to evaluate word embeddings of text types for which implicit knowledge may not suffice.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "We evaluate the use of direct intrinsic word embedding evaluation tasks for specialized language. Our case study is philosophical text: human expert judgements on the relatedness of philosophical terms are elicited using a synonym detection task and a coherence task. Uniquely for our task, experts must rely on explicit knowledge and cannot use their linguistic intuition, which may differ from that of the philosopher. We find that inter-rater agreement rates are similar to those of more conventional semantic annotation tasks, suggesting that these tasks can be used to evaluate word embeddings of text types for which implicit knowledge may not suffice.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Philosophical research often relies on the close reading of texts, which is a slow and precise process, allowing for the analysis of a few texts only. Supporting philosophical research with distributional semantic (DS) models (Bengio et al., 2003; Turney and Pantel, 2010; Erk, 2012; Mikolov et al., 2013) has been proposed as a way to speed up the process (van Wierst et al., 2016; Ginammi et al., in press; Herbelot et al., 2012) , and could increase the number of analysed texts, decreasing reliance on a canon of popular texts (cf. addressing the great unread, Cohen, 1999) . However, we cannot evaluate semantic models of philosophical text using a general English gold standard, as philosophical concepts often have a very specific meaning. For example, the term abduction, usually meaning a kidnapping, denotes a specific type of inference in philosophy (Douven, 2017) . Therefore, models must be evaluated in a domain-specific way.",
"cite_spans": [
{
"start": 226,
"end": 247,
"text": "(Bengio et al., 2003;",
"ref_id": "BIBREF1"
},
{
"start": 248,
"end": 272,
"text": "Turney and Pantel, 2010;",
"ref_id": "BIBREF30"
},
{
"start": 273,
"end": 283,
"text": "Erk, 2012;",
"ref_id": "BIBREF12"
},
{
"start": 284,
"end": 305,
"text": "Mikolov et al., 2013)",
"ref_id": "BIBREF19"
},
{
"start": 357,
"end": 382,
"text": "(van Wierst et al., 2016;",
"ref_id": "BIBREF32"
},
{
"start": 383,
"end": 408,
"text": "Ginammi et al., in press;",
"ref_id": null
},
{
"start": 409,
"end": 431,
"text": "Herbelot et al., 2012)",
"ref_id": "BIBREF17"
},
{
"start": 565,
"end": 577,
"text": "Cohen, 1999)",
"ref_id": "BIBREF6"
},
{
"start": 861,
"end": 875,
"text": "(Douven, 2017)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The critical difference between the general case and the philosophy case is the following. It is easy to find native speakers of e.g. British English who have a good intuition of the meaning of its terms in general use, and the relations between them. This yields e.g. the SimLex-999 word similarity dataset (Hill et al., 2015) , covering frequent words and their typical senses. More difficult is finding 'native speakers' who have an intuition of the meaning of the terms used by a particular philosopher. The only candidate would be that philosopher themselves, and even then, the meaning of some of the terms used is the result of explicit analysis and definition rather than implicit language knowledge of the philosopher. Uncommon terms with highly specific meanings are explicitly defined and debated, leading them to differ between philosophers or even within the works of a single philosopher. Any accurate evaluation or annotation would require expert knowledge, and methods that can incorporate explicit knowledge, rather than judgements based on implicit knowledge of a standard language or jargon by one of its speakers.",
"cite_spans": [
{
"start": 308,
"end": 327,
"text": "(Hill et al., 2015)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We test two direct evaluation methods for DS models described by Schnabel et al. (2015) on our case study, the works of Willard V. O. Quine, a 20th century American philosopher. Instead of native English speaking crowdworkers, we selected expert participants who have studied this philosopher extensively. We aim to test whether these methods produce reliable results when participants need to use explicit rather than implicit knowledge, and consider the methods to be successful if inter-rater agreement matches that of other semantic evaluations. More broadly, our methodological findings apply to evaluation of DS models for specialized domains, language for specific purposes (LSP), historical language varieties or other language (varieties) for which no native annotators are available.",
"cite_spans": [
{
"start": 65,
"end": 87,
"text": "Schnabel et al. (2015)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Most intrinsic evaluations compare word embedding similarities (e.g. in terms of cosine distance) to premade datasets of human similarity or relatedness judgements. Sets of words are created and evaluated on semantic relations by participants, and the similarity between the assessments and an embedding space is used as a measure of performance. In specific domains, examples of such datasets of term ratings can be found for identifier names in source code (Wainakh et al., 2019) , in the medical domain (Pakhomov et al., 2010, 2011; Pedersen et al., 2007) and in geosciences both for English (Padarian and Fuentes, 2019) and Portuguese (Gomes et al., 2021) . The last two studies compare domain-specific embeddings to general domain embeddings, and both find that the former perform better. A problem of these indirect datasets is that only naturally occurring, often high-frequency terms without spelling variations are evaluated, while DS models include many more variations (Batchkarov et al., 2016) .",
"cite_spans": [
{
"start": 459,
"end": 481,
"text": "(Wainakh et al., 2019)",
"ref_id": "BIBREF31"
},
{
"start": 506,
"end": 528,
"text": "(Pakhomov et al., 2010",
"ref_id": "BIBREF23"
},
{
"start": 529,
"end": 553,
"text": "(Pakhomov et al., , 2011",
"ref_id": "BIBREF24"
},
{
"start": 554,
"end": 576,
"text": "Pedersen et al., 2007)",
"ref_id": "BIBREF25"
},
{
"start": 613,
"end": 641,
"text": "(Padarian and Fuentes, 2019)",
"ref_id": "BIBREF22"
},
{
"start": 646,
"end": 677,
"text": "Portuguese (Gomes et al., 2021)",
"ref_id": null
},
{
"start": 1002,
"end": 1027,
"text": "(Batchkarov et al., 2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Direct intrinsic evaluation methods, where participants respond directly to the output of models, can be categorized as absolute and comparative intrinsic evaluation (Schnabel et al., 2015) . The former evaluates embeddings individually and compares their final scores, while in the latter participants directly express their preferences between models. To the best of our knowledge, the only example of a domain-specific direct human evaluation is Dynomant et al. (2019), who evaluate French embeddings of health care terms by having two medical doctors rate the relevance of the first five nearest neighbours of target terms from models trained on in-domain text.",
"cite_spans": [
{
"start": 166,
"end": 189,
"text": "(Schnabel et al., 2015)",
"ref_id": "BIBREF28"
},
{
"start": 449,
"end": 471,
"text": "Dynomant et al. (2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "In the philosophical domain, some evaluations have been conducted with other methods, sometimes incorporating expert explicit knowledge, but none are direct. In each of these studies, the work of Quine is utilized as data. Firstly, Bloem et al. (2019) propose a method of evaluating word embedding quality by measuring model consistency, without making use of expert knowledge. Secondly, Oortwijn et al. (2021a) construct a conceptual network which serves as a ground truth of expert knowledge. They compare the similarity of embeddings for target philosophical terms to their position in the manually created network. Here, the conceptual relatedness between terms is restricted to the property of sharing hypernyms, and only terms that were predefined in the ground truth can be considered for evaluation. Betti et al. (2020) introduce a more elaborate ground truth that is concept-focused, including more types of conceptual relations and including irrelevant as well as relevant terms for better evaluation of model precision. Still, evaluation remains restricted to terms in the ground truth. Only by using direct evaluation methods can we attempt to evaluate all model output.",
"cite_spans": [
{
"start": 230,
"end": 249,
"text": "Bloem et al. (2019)",
"ref_id": "BIBREF3"
},
{
"start": 802,
"end": 821,
"text": "Betti et al. (2020)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "We perform a synonym detection task and a coherence task. In these tasks, participants are asked to judge model-generated candidate terms that semantic models deem closest to a target term. In the synonym detection task, participants select the most similar word to target term t out of a set of options: the k-nearest neighbours of the target term in each model that is being compared. In the coherence task, the participant selects a semantic outlier in a set of words, where one of the words is not close to t in the model. We refer to Schnabel et al. (2015) for details and a comparison to other tasks for general semantic evaluation. Our participant instructions are based on Schnabel et al., who use the instructions of the WordSim-353 dataset (Finkelstein et al., 2001) . However, as this study focuses on explicit knowledge, several adjustments are needed.",
"cite_spans": [
{
"start": 749,
"end": 774,
"text": "(Finkelstein et al., 2001",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task description",
"sec_num": "3"
},
{
"text": "Although explicit knowledge is easier to verbalize than implicit knowledge, it involves controlled rather than automatic processing (Bowles, 2011; Ellis, 2004, 2005), so our version of the task might take longer. Yet in order to retain the required focus, the test should not take too long. We therefore conduct a pilot study in which response times are measured to estimate task durations, and we adapt the size of the main study accordingly.",
"cite_spans": [
{
"start": 132,
"end": 146,
"text": "(Bowles, 2011;",
"ref_id": "BIBREF4"
},
{
"start": 147,
"end": 158,
"text": "Ellis, 2004",
"ref_id": null
},
{
"start": 159,
"end": 172,
"text": "Ellis, , 2005",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task description",
"sec_num": "3"
},
{
"text": "The original task instructions do not define similarity, while other studies define it as co-hyponymy (Turney and Pantel, 2010) or synonymy (Hill et al., 2015) . According to Batchkarov et al. (2016), defining similarity is difficult, as it depends on the context and downstream application in which the terms are used. We keep a consistent context, both training and evaluating in the domain of a particular philosopher, although the concern of capturing the multidimensional concept of similarity in a single number is valid in this context too. Gladkova and Drozd (2016) claim that participants are likely to prefer synonyms when asked to select the most similar word. In this study we are looking to find any relationship present, rather than a specific one, and expect the experts to explicitly consider this, so we ask for relatedness. Gladkova and Drozd further argue that when asked for relatedness, participants must choose between the various relations present, a choice that can be subjective or random and might reflect other factors such as \"frequency, speed of association, and possibly the order of presentation of words\". The first two factors are alleviated in this study, as the participants must take Quine's definitions of words into account rather than their own. This forces participants to think their answers through, which should reduce the association effects typical of fast-paced online studies. To account for effects of order of presentation, we randomize the order of the options. In our instructions, we define relatedness as synonymy, meronymy, hyponymy, and co-hyponymy, and provide examples. Participants are also allowed to base their judgements on other types of relations.",
"cite_spans": [
{
"start": 140,
"end": 159,
"text": "(Hill et al., 2015)",
"ref_id": "BIBREF18"
},
{
"start": 175,
"end": 199,
"text": "Batchkarov et al. (2016)",
"ref_id": "BIBREF0"
},
{
"start": 547,
"end": 572,
"text": "Gladkova and Drozd (2016)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task description",
"sec_num": "3"
},
{
"text": "After the experiment we present a post-test survey, querying the types of relation participants based their judgements on, as well as task difficulty. Furthermore, we change the option I don't know the meaning of one (or several) of the words in the synonym detection task to None of these words is even remotely related, and also include a similar option in the coherence task, namely No coherent group can be formed from these words. This is done to avoid random selection of words when there are no meaningful relations, making the responses more accurate. As we aim to gather explicit knowledge, participants are allowed to look up relevant information on the presented words. For reproducibility, our instructions (and results) are included in the supplementary materials. 1 As the tasks require participants to be experts on the work of Quine, the number of possible participants is limited. Although participants are philosophers trained to work precisely and make consistent judgements, subjectivity can be a risk, as participants must choose the relation they deem most important while lacking context. We use inter-rater agreement to evaluate this. We report joint probability of agreement (percentage of agreement), as we have added the none options to avoid chance agreement. As joint probabilities cannot be compared across studies, we also report Cohen's \u03ba.",
"cite_spans": [
{
"start": 774,
"end": 775,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task description",
"sec_num": "3"
},
{
"text": "All experiments 2 are conducted on the survey platform Qualtrics 3 . Participants are asked to execute the experiments in a silent environment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task description",
"sec_num": "3"
},
{
"text": "1 To be found at https://github.com/gvanboven/direct-intrinsic-evaluation 2 Experiments were approved by The Ethics Committee of the Faculty of Humanities of the University of Amsterdam. 3 www.qualtrics.com",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task description",
"sec_num": "3"
},
{
"text": "We make use of the QUINE corpus (v0.5, Betti et al., 2020) , which includes 228 philosophical books, articles and bundles Quine authored, consisting of 2.15 million tokens in total. As target terms for evaluation, we use Bloem et al.'s (2019) test set for the book Word & Object (Quine, 1960) , one of Quine's most influential works. It consists of 55 terms that were selected from the book's index. We used 10 of these terms in the pilot study, 25 in the synonym detection task, 14 in the coherence task and 6 in both experiments. 4 One Quine expert participated in the pilot study. The pilot study consists of short versions of the two tasks, both testing five target terms. In the synonym detection task, each target term has six candidate related terms from the models, between which the participant should choose. Each term is tested three times, with candidates of differing similarity from the model (nearest neighbour ranks k \u2208 {1, 5, 50}). The pilot coherence (outlier) task has ten questions. The average response time was 109.5s for the synonym detection task and 42.1s for the coherence task. Because for the first task this was higher than anticipated, we reduced the number of ranks to two and divided the task across two separate surveys.",
"cite_spans": [
{
"start": 39,
"end": 58,
"text": "Betti et al., 2020)",
"ref_id": "BIBREF2"
},
{
"start": 221,
"end": 242,
"text": "Bloem et al.'s (2019)",
"ref_id": null
},
{
"start": 283,
"end": 296,
"text": "(Quine, 1960)",
"ref_id": "BIBREF27"
},
{
"start": 536,
"end": 537,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Case study for philosophy",
"sec_num": "4"
},
{
"text": "Three experts on the work of Quine, including the participant of the pilot study, participated in this experiment. They all hold a Master's degree in philosophy and have studied the philosopher extensively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment 1: Synonym detection task",
"sec_num": "4.1"
},
{
"text": "This task includes 31 target words, which are all tested at two ranks k, with k \u2208 {1, 10}, resulting in 62 questions. Of the 45 test set terms not used in the pilot study, we took the fifteen highest-frequency terms (n > 275) and the sixteen lowest (n < 84). The experiment was conducted through two surveys, each consisting of 31 questions and lasting around 50 minutes, with a break halfway. 5 The data from one of the participants was excluded, as the participant indicated that the test was too difficult and that their expertise on the work of Quine did not suffice. Moreover, the response times of this participant were much lower than those of the other participants. For this experiment, the overall inter-rater agreement was 58.1%, with \u03ba = 0.492.",
"cite_spans": [
{
"start": 391,
"end": 392,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment 1: Synonym detection task",
"sec_num": "4.1"
},
{
"text": "Two of the participants from the previous experiments also participated in this study. 20 target words are used: the 14 test set terms not used in the pilot or Exp. 1, and the 3 highest and 3 lowest frequency terms from Exp. 1. We divide these into eleven low-frequency words (n < 142) and nine high-frequency words (n > 187). Using 3 DS models, this results in 60 questions; the test takes approximately 40 minutes, with a break. The inter-rater agreement was 56.7%, with \u03ba = 0.345.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment 2: Coherence task",
"sec_num": "4.2"
},
{
"text": "To assess whether the method was successful, we discuss some reliability metrics and examine disagreement examples. First of all, the fact that the data from one participant had to be excluded confirms the high standard of expertise required for participating in our version of the tasks. The results might have differed had there been more or different participants. However, other studies on expert explicit knowledge also conduct tasks with two (Dynomant et al., 2019) or three (Padarian and Fuentes, 2019; Gomes et al., 2021) participants.",
"cite_spans": [
{
"start": 447,
"end": 470,
"text": "(Dynomant et al., 2019)",
"ref_id": "BIBREF9"
},
{
"start": 480,
"end": 508,
"text": "(Padarian and Fuentes, 2019;",
"ref_id": "BIBREF22"
},
{
"start": 509,
"end": 528,
"text": "Gomes et al., 2021)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5"
},
{
"text": "Inter-rater agreement scores for the two tasks were 58.1% (\u03ba = 0.492) and 56.7% (\u03ba = 0.345), indicating moderate and fair agreement, respectively. Batchkarov et al. (2016) found the average inter-rater agreement of two raters of the WordSim-353 (Finkelstein et al., 2001 ) and MEN (Bruni et al., 2014) datasets to lie between \u03ba = 0.21 and \u03ba = 0.62. Thus, agreement scores in this study are not lower than those of commonly used similarity datasets, despite participants having to agree on another person's semantics and the inclusion of a None option.",
"cite_spans": [
{
"start": 132,
"end": 156,
"text": "Batchkarov et al. (2016)",
"ref_id": "BIBREF0"
},
{
"start": 218,
"end": 255,
"text": "WordSim-353 (Finkelstein et al., 2001",
"ref_id": null
},
{
"start": 266,
"end": 286,
"text": "(Bruni et al., 2014)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5"
},
{
"text": "Both experiments yield lower inter-rater agreement for the None option than for the other choices, shown in Figure 1(a) and (b). Response times were also higher for the None option in both tasks (Table 1) , suggesting this choice is more difficult. Most disagreement thus concerned the presence of a semantic relationship; if the annotators agreed there was one, they mostly preferred the same relation. This suggests a None option increases annotation quality in general. In the coherence task, there was more agreement on low-frequency than high-frequency words, which may be due to their lower ambiguity. According to the post-test survey, participants mostly based their judgements on sharing the same super term. Relationships that were used without being listed in the instructions were antonymy, forming a technical bigram term together, having the same stem, and being used in the same context by Quine. We see this reflected when examining some examples of disagreement. In Table 2 , we see disagreement on the related term for adjectives because both chosen terms have a relation to this target term, but through two different relations. We see agreement for information, as collateral information is a meaningful bigram in Quine's thought experiment on radical translation. In Table 3 we see disagreement on the ambiguity outlier. While believe has a tenuous relation to ambiguity, participant 2 may have considered this relation too tenuous and chose none. One expert stated that the unclear boundaries of the none option were the reason for many none disagreements. The sense datum disagreement was guessed to be over a rare non-mathematical sense of divisibility that one participant remembered but the other might not have.",
"cite_spans": [],
"ref_spans": [
{
"start": 109,
"end": 120,
"text": "Figure 1(a)",
"ref_id": "FIGREF0"
},
{
"start": 196,
"end": 205,
"text": "(Table 1",
"ref_id": null
},
{
"start": 979,
"end": 986,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 1286,
"end": 1293,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5"
},
{
"text": "Table 3 : Example of disagreement and agreement in the coherence task. Bolded/marked terms were chosen by participants to be outliers, underlined terms were model outliers with lower word embedding similarity.",
"cite_spans": [],
"ref_spans": [
{
"start": 0,
"end": 7,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "In the post-test survey, participants commented that it was sometimes difficult to select the most related word, as different relations were present and selecting the most important one is partly a matter of preference. Such ambiguity is prevalent in any semantic annotation task in which context is unspecified, and in other language annotation tasks in which no explicit choice is made in the guidelines among possible competing valid interpretations (Plank et al., 2014) . As noted by Sommerauer et al. (2020), justified disagreement is possible, though detecting it requires meta-annotation and this is in itself a difficult task. However, it might yield additional insights, i.e. that certain DS models might prioritize certain relation types in their nearest neighbours, and that these are equally valid because the experts disagreed on them. Disagreement can also be caused by poorly specified tasks and insufficient conceptual alignment among annotators, especially when the goal is creating a ground truth (Oortwijn et al., 2021b) or otherwise annotating for a specific theory or interpretation.",
"cite_spans": [
{
"start": 453,
"end": 473,
"text": "(Plank et al., 2014)",
"ref_id": "BIBREF26"
},
{
"start": 1015,
"end": 1039,
"text": "(Oortwijn et al., 2021b)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "In future experiments, more specific instructions on when to consider a relation to be relevant, or guidelines on prioritizing certain relations over others, can reduce the difficulty of the task. Our expert participants used many semantic relation types in their interpretation with no clear hierarchy among them. However, applying this to DS model evaluation may require more insight into what exactly the geometric relationships between embeddings in a DS model capture. It may also be interesting for philosophers to make use of models trained to represent particular relations, such as antonymy (Dou et al., 2018) . With more specific instructions explicitly directing participants to prioritize or ignore specific relations, our evaluation approach can be adapted to evaluate such models, and we expect higher agreement in this type of task. In other cases different interpretations can be desirable, e.g. where there is no hierarchy of relations and a model should capture relatedness in a broad sense. For this purpose, we should consider allowing multiple answers: while a forced choice helps to elicit implicit knowledge, explicit knowledge may not always support a categorical decision, though this adds the complication of deciding when an option is relevant enough, similar to the none option.",
"cite_spans": [
{
"start": 600,
"end": 618,
"text": "(Dou et al., 2018)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "Our results show that absolute and comparative intrinsic evaluation tasks can be used to elicit agreement on semantic relatedness between word embeddings even when the target language variety is highly specific. By instructing domain experts to perform the evaluation task using explicit expert knowledge rather than implicit knowledge, inter-rater agreement rates similar to those of other semantic annotation tasks can be reached. Due to the inherent lack of context in evaluating type-based, non-contextual word embeddings, participants struggled with the generality of the task. Based on our analysis and post-test survey, we expect more specific guidelines on word relatedness to increase the reliability of the annotators' judgements, while limiting their generalizability. The addition of a None option seemed particularly beneficial for obtaining more reliable annotations based on explicit knowledge. We expect these findings to apply in the context of other domains for which no 'native' annotators are available: for example, language for specific purposes (LSP), historical language varieties or idiolects. In future work, the absolute and comparative intrinsic evaluation tasks we have described can be used to compare the quality of the representations of different word embedding models on these specialized language varieties.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "4 Listed in the supplemental materials, with frequencies. 5 Example surveys and raw results for each participant are included in the supplementary materials.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We are grateful to Yvette Oortwijn, Thijs Ossenkoppele and Arianna Betti for their input as Quine domain experts. This research was supported by VICI grant e-Ideas (277-20-007), financed by the Dutch Research Council (NWO).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A critique of word similarity as a method for evaluating distributional semantic models",
"authors": [
{
"first": "Miroslav",
"middle": [],
"last": "Batchkarov",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Kober",
"suffix": ""
},
{
"first": "Jeremy",
"middle": [],
"last": "Reffin",
"suffix": ""
},
{
"first": "Julie",
"middle": [],
"last": "Weeds",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Weir",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP",
"volume": "",
"issue": "",
"pages": "7--12",
"other_ids": {
"DOI": [
"10.18653/v1/W16-2502"
]
},
"num": null,
"urls": [],
"raw_text": "Miroslav Batchkarov, Thomas Kober, Jeremy Reffin, Julie Weeds, and David Weir. 2016. A critique of word similarity as a method for evaluating distributional semantic models. In Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP, pages 7-12.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A neural probabilistic language model",
"authors": [
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "R\u00e9jean",
"middle": [],
"last": "Ducharme",
"suffix": ""
},
{
"first": "Pascal",
"middle": [],
"last": "Vincent",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Jauvin",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of machine learning research",
"volume": "3",
"issue": "",
"pages": "1137--1155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshua Bengio, R\u00e9jean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. Journal of machine learning research, 3(Feb):1137-1155.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Expert concept-modeling ground truth construction for word embeddings evaluation in concept-focused domains",
"authors": [
{
"first": "Arianna",
"middle": [],
"last": "Betti",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Reynaert",
"suffix": ""
},
{
"first": "Thijs",
"middle": [],
"last": "Ossenkoppele",
"suffix": ""
},
{
"first": "Yvette",
"middle": [],
"last": "Oortwijn",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Salway",
"suffix": ""
},
{
"first": "Jelke",
"middle": [],
"last": "Bloem",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "6690--6702",
"other_ids": {
"DOI": [
"10.18653/v1/2020.coling-main.586"
]
},
"num": null,
"urls": [],
"raw_text": "Arianna Betti, Martin Reynaert, Thijs Ossenkoppele, Yvette Oortwijn, Andrew Salway, and Jelke Bloem. 2020. Expert concept-modeling ground truth construction for word embeddings evaluation in concept-focused domains. In Proceedings of the 28th International Conference on Computational Linguistics, pages 6690-6702.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Evaluating the consistency of word embeddings from small data",
"authors": [
{
"first": "Jelke",
"middle": [],
"last": "Bloem",
"suffix": ""
},
{
"first": "Antske",
"middle": [],
"last": "Fokkens",
"suffix": ""
},
{
"first": "Aur\u00e9lie",
"middle": [],
"last": "Herbelot",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the International Conference on Recent Advances in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "132--141",
"other_ids": {
"DOI": [
"10.26615/978-954-452-056-4_016"
]
},
"num": null,
"urls": [],
"raw_text": "Jelke Bloem, Antske Fokkens, and Aur\u00e9lie Herbelot. 2019. Evaluating the consistency of word embed- dings from small data. In Proceedings of the In- ternational Conference on Recent Advances in Natu- ral Language Processing (RANLP 2019), pages 132- 141.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Measuring implicit and explicit linguistic knowledge: What can heritage language learners contribute? Studies in second language acquisition",
"authors": [
{
"first": "A",
"middle": [],
"last": "Melissa",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bowles",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "247--271",
"other_ids": {
"DOI": [
"10.1017/S0272263110000756"
]
},
"num": null,
"urls": [],
"raw_text": "Melissa A Bowles. 2011. Measuring implicit and ex- plicit linguistic knowledge: What can heritage lan- guage learners contribute? Studies in second lan- guage acquisition, pages 247-271.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Multimodal distributional semantics",
"authors": [
{
"first": "Elia",
"middle": [],
"last": "Bruni",
"suffix": ""
},
{
"first": "Nam-Khanh",
"middle": [],
"last": "Tran",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Artificial Intelligence Research",
"volume": "49",
"issue": "",
"pages": "1--47",
"other_ids": {
"DOI": [
"10.1613/jair.4135"
]
},
"num": null,
"urls": [],
"raw_text": "Elia Bruni, Nam-Khanh Tran, and Marco Baroni. 2014. Multimodal distributional semantics. Journal of Ar- tificial Intelligence Research, 49:1-47.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The sentimental education of the novel",
"authors": [
{
"first": "Margaret",
"middle": [],
"last": "Cohen",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Margaret Cohen. 1999. The sentimental education of the novel. Princeton University Press.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Improving word embeddings for antonym detection using thesauri and SentiWordNet",
"authors": [
{
"first": "Zehao",
"middle": [],
"last": "Dou",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Xiaojun",
"middle": [],
"last": "Wan",
"suffix": ""
}
],
"year": 2018,
"venue": "CCF International Conference on Natural Language Processing and Chinese Computing",
"volume": "",
"issue": "",
"pages": "67--79",
"other_ids": {
"DOI": [
"10.1007/978-3-319-99501-4_6"
]
},
"num": null,
"urls": [],
"raw_text": "Zehao Dou, Wei Wei, and Xiaojun Wan. 2018. Improv- ing word embeddings for antonym detection using thesauri and SentiWordNet. In CCF International Conference on Natural Language Processing and Chinese Computing, pages 67-79. Springer.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The Stanford Encyclopedia of Philosophy",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Igor Douven. 2017. Abduction. In Edward N. Zalta, editor, The Stanford Encyclopedia of Philosophy, summer 2017 edition. Metaphysics Research Lab, Stanford University.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Word embedding for the French natural language in health care: comparative study",
"authors": [
{
"first": "Emeric",
"middle": [],
"last": "Dynomant",
"suffix": ""
},
{
"first": "Romain",
"middle": [],
"last": "Lelong",
"suffix": ""
},
{
"first": "Badisse",
"middle": [],
"last": "Dahamna",
"suffix": ""
},
{
"first": "Cl\u00e9ment",
"middle": [],
"last": "Massonnaud",
"suffix": ""
},
{
"first": "Ga\u00e9tan",
"middle": [],
"last": "Kerdelhu\u00e9",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Grosjean",
"suffix": ""
},
{
"first": "St\u00e9phane",
"middle": [],
"last": "Canu",
"suffix": ""
},
{
"first": "Stefan",
"middle": [
"J"
],
"last": "Darmoni",
"suffix": ""
}
],
"year": 2019,
"venue": "JMIR medical informatics",
"volume": "7",
"issue": "3",
"pages": "",
"other_ids": {
"DOI": [
"10.2196/12310"
]
},
"num": null,
"urls": [],
"raw_text": "Emeric Dynomant, Romain Lelong, Badisse Dahamna, Cl\u00e9ment Massonnaud, Ga\u00e9tan Kerdelhu\u00e9, Julien Grosjean, St\u00e9phane Canu, and Stefan J Darmoni. 2019. Word embedding for the French natural lan- guage in health care: comparative study. JMIR med- ical informatics, 7(3):e12310.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The definition and measurement of L2 explicit knowledge",
"authors": [],
"year": 2004,
"venue": "Language learning",
"volume": "54",
"issue": "2",
"pages": "227--275",
"other_ids": {
"DOI": [
"10.1111/j.1467-9922.2004.00255.x"
]
},
"num": null,
"urls": [],
"raw_text": "Rod Ellis. 2004. The definition and measurement of L2 explicit knowledge. Language learning, 54(2):227- 275.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Measuring implicit and explicit knowledge of a second language: A psychometric study. Studies in second language acquisition",
"authors": [],
"year": 2005,
"venue": "",
"volume": "27",
"issue": "",
"pages": "141--172",
"other_ids": {
"DOI": [
"10.1017/S0272263105050096"
]
},
"num": null,
"urls": [],
"raw_text": "Rod Ellis. 2005. Measuring implicit and explicit knowledge of a second language: A psychomet- ric study. Studies in second language acquisition, 27(2):141-172.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Vector space models of word meaning and phrase meaning: A survey",
"authors": [
{
"first": "Katrin",
"middle": [],
"last": "Erk",
"suffix": ""
}
],
"year": 2012,
"venue": "Language and Linguistics Compass",
"volume": "6",
"issue": "10",
"pages": "635--653",
"other_ids": {
"DOI": [
"10.1002/lnco.362"
]
},
"num": null,
"urls": [],
"raw_text": "Katrin Erk. 2012. Vector space models of word mean- ing and phrase meaning: A survey. Language and Linguistics Compass, 6(10):635-653.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Placing search in context: The concept revisited",
"authors": [
{
"first": "Lev",
"middle": [],
"last": "Finkelstein",
"suffix": ""
},
{
"first": "Evgeniy",
"middle": [],
"last": "Gabrilovich",
"suffix": ""
},
{
"first": "Yossi",
"middle": [],
"last": "Matias",
"suffix": ""
},
{
"first": "Ehud",
"middle": [],
"last": "Rivlin",
"suffix": ""
},
{
"first": "Zach",
"middle": [],
"last": "Solan",
"suffix": ""
},
{
"first": "Gadi",
"middle": [],
"last": "Wolfman",
"suffix": ""
},
{
"first": "Eytan",
"middle": [],
"last": "Ruppin",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 10th international conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "406--414",
"other_ids": {
"DOI": [
"10.1145/503104.503110"
]
},
"num": null,
"urls": [],
"raw_text": "Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Ey- tan Ruppin. 2001. Placing search in context: The concept revisited. In Proceedings of the 10th inter- national conference on World Wide Web, pages 406- 414.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Jelke Bloem, and Arianna Betti. in press. Bolzano, Kant, and the traditional theory of concepts: A computational investigation",
"authors": [
{
"first": "Annapaola",
"middle": [],
"last": "Ginammi",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Koopman",
"suffix": ""
},
{
"first": "Shenghui",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": null,
"venue": "The Dynamics of Science: Computational Frontiers in History and Philosophy of Science",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annapaola Ginammi, Rob Koopman, Shenghui Wang, Jelke Bloem, and Arianna Betti. in press. Bolzano, Kant, and the traditional theory of concepts: A com- putational investigation. In The Dynamics of Sci- ence: Computational Frontiers in History and Phi- losophy of Science. Pittsburgh University Press.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Intrinsic evaluations of word embeddings: What can we do better?",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Gladkova",
"suffix": ""
},
{
"first": "Aleksandr",
"middle": [],
"last": "Drozd",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP",
"volume": "",
"issue": "",
"pages": "36--42",
"other_ids": {
"DOI": [
"10.18653/v1/W16-2507"
]
},
"num": null,
"urls": [],
"raw_text": "Anna Gladkova and Aleksandr Drozd. 2016. Intrinsic evaluations of word embeddings: What can we do better? In Proceedings of the 1st Workshop on Eval- uating Vector-Space Representations for NLP, pages 36-42.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Silvia Moraes, and Alexandre Gon\u00e7alves Evsukoff. 2021. Portuguese word embeddings for the oil and gas industry: Development and evaluation",
"authors": [
{
"first": "",
"middle": [],
"last": "Diogo Da Silva Magalh\u00e3es",
"suffix": ""
},
{
"first": "F\u00e1bio",
"middle": [],
"last": "Gomes",
"suffix": ""
},
{
"first": "Bernardo",
"middle": [],
"last": "Corr\u00eaa Cordeiro",
"suffix": ""
},
{
"first": "Nikolas Lacerda",
"middle": [],
"last": "Scapini Consoli",
"suffix": ""
},
{
"first": "Viviane",
"middle": [],
"last": "Santos",
"suffix": ""
},
{
"first": "Renata",
"middle": [],
"last": "Pereira Moreira",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Vieira",
"suffix": ""
}
],
"year": null,
"venue": "Computers in Industry",
"volume": "124",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1016/j.compind.2020.103347Get"
]
},
"num": null,
"urls": [],
"raw_text": "Diogo da Silva Magalh\u00e3es Gomes, F\u00e1bio Corr\u00eaa Cordeiro, Bernardo Scapini Consoli, Nikolas Lac- erda Santos, Viviane Pereira Moreira, Renata Vieira, Silvia Moraes, and Alexandre Gon\u00e7alves Evsukoff. 2021. Portuguese word embeddings for the oil and gas industry: Development and evaluation. Comput- ers in Industry, 124:103347.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Distributional techniques for philosophical enquiry",
"authors": [
{
"first": "Aur\u00e9lie",
"middle": [],
"last": "Herbelot",
"suffix": ""
},
{
"first": "Eva",
"middle": [
"Von"
],
"last": "Redecker",
"suffix": ""
},
{
"first": "Johanna",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 6th Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities",
"volume": "",
"issue": "",
"pages": "45--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aur\u00e9lie Herbelot, Eva Von Redecker, and Johanna M\u00fcller. 2012. Distributional techniques for philo- sophical enquiry. In Proceedings of the 6th Work- shop on Language Technology for Cultural Heritage, Social Sciences, and Humanities, pages 45-54. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Simlex-999: Evaluating semantic models with (genuine) similarity estimation",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2015,
"venue": "Computational Linguistics",
"volume": "41",
"issue": "4",
"pages": "665--695",
"other_ids": {
"DOI": [
"10.1162/COLI_a_00237"
]
},
"num": null,
"urls": [],
"raw_text": "Felix Hill, Roi Reichart, and Anna Korhonen. 2015. Simlex-999: Evaluating semantic models with (gen- uine) similarity estimation. Computational Linguis- tics, 41(4):665-695.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "CoRR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representa- tions in vector space. CoRR, abs/1301.3781.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Challenging distributional models with a conceptual network of philosophical terms",
"authors": [
{
"first": "Yvette",
"middle": [],
"last": "Oortwijn",
"suffix": ""
},
{
"first": "Jelke",
"middle": [],
"last": "Bloem",
"suffix": ""
},
{
"first": "Pia",
"middle": [],
"last": "Sommerauer",
"suffix": ""
},
{
"first": "Francois",
"middle": [],
"last": "Meyer",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Antske",
"middle": [],
"last": "Fokkens",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yvette Oortwijn, Jelke Bloem, Pia Sommerauer, Fran- cois Meyer, Wei Zhou, and Antske Fokkens. 2021a. Challenging distributional models with a conceptual network of philosophical terms. In Proceedings of the 2021 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT). In press.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Interrater disagreement resolution: A systematic procedure to reach consensus in annotation tasks",
"authors": [
{
"first": "Yvette",
"middle": [],
"last": "Oortwijn",
"suffix": ""
},
{
"first": "Thijs",
"middle": [],
"last": "Ossenkoppele",
"suffix": ""
},
{
"first": "Arianna",
"middle": [],
"last": "Betti",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the First Workshop on Human Evaluation of NLP Systems (HumEval)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yvette Oortwijn, Thijs Ossenkoppele, and Arianna Betti. 2021b. Interrater disagreement resolution: A systematic procedure to reach consensus in annota- tion tasks. In Proceedings of the First Workshop on Human Evaluation of NLP Systems (HumEval).",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Word embeddings for application in geosciences: development, evaluation, and examples of soil-related concepts",
"authors": [
{
"first": "Jos\u00e9",
"middle": [],
"last": "Padarian",
"suffix": ""
},
{
"first": "Ignacio",
"middle": [],
"last": "Fuentes",
"suffix": ""
}
],
"year": 2019,
"venue": "Soil",
"volume": "5",
"issue": "2",
"pages": "177--187",
"other_ids": {
"DOI": [
"10.5194/soil-5-177-2019"
]
},
"num": null,
"urls": [],
"raw_text": "Jos\u00e9 Padarian and Ignacio Fuentes. 2019. Word embed- dings for application in geosciences: development, evaluation, and examples of soil-related concepts. Soil, 5(2):177-187.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Semantic similarity and relatedness between clinical terms: an experimental study",
"authors": [
{
"first": "Serguei",
"middle": [],
"last": "Pakhomov",
"suffix": ""
},
{
"first": "Bridget",
"middle": [],
"last": "Mcinnes",
"suffix": ""
},
{
"first": "Terrence",
"middle": [],
"last": "Adam",
"suffix": ""
},
{
"first": "Ying",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Pedersen",
"suffix": ""
},
{
"first": "Genevieve",
"middle": [
"B"
],
"last": "Melton",
"suffix": ""
}
],
"year": 2010,
"venue": "AMIA annual symposium proceedings",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Serguei Pakhomov, Bridget McInnes, Terrence Adam, Ying Liu, Ted Pedersen, and Genevieve B Melton. 2010. Semantic similarity and relatedness between clinical terms: an experimental study. In AMIA an- nual symposium proceedings, page 572. American Medical Informatics Association.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Towards a framework for developing semantic relatedness reference standards",
"authors": [
{
"first": "V",
"middle": [
"S"
],
"last": "Serguei",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Pakhomov",
"suffix": ""
},
{
"first": "Bridget",
"middle": [],
"last": "Pedersen",
"suffix": ""
},
{
"first": "Genevieve",
"middle": [
"B"
],
"last": "Mcinnes",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Melton",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"G"
],
"last": "Ruggieri",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Chute",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of biomedical informatics",
"volume": "44",
"issue": "2",
"pages": "251--265",
"other_ids": {
"DOI": [
"10.1016/j.jbi.2010.10.004"
]
},
"num": null,
"urls": [],
"raw_text": "Serguei VS Pakhomov, Ted Pedersen, Bridget McInnes, Genevieve B Melton, Alexander Ruggieri, and Christopher G Chute. 2011. Towards a frame- work for developing semantic relatedness refer- ence standards. Journal of biomedical informatics, 44(2):251-265.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Measures of semantic similarity and relatedness in the biomedical domain",
"authors": [
{
"first": "Ted",
"middle": [],
"last": "Pedersen",
"suffix": ""
},
{
"first": "V",
"middle": [
"S"
],
"last": "Serguei",
"suffix": ""
},
{
"first": "Siddharth",
"middle": [],
"last": "Pakhomov",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"G"
],
"last": "Patwardhan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Chute",
"suffix": ""
}
],
"year": 2007,
"venue": "Journal of biomedical informatics",
"volume": "40",
"issue": "3",
"pages": "288--299",
"other_ids": {
"DOI": [
"10.1016/j.jbi.2006.06.004"
]
},
"num": null,
"urls": [],
"raw_text": "Ted Pedersen, Serguei VS Pakhomov, Siddharth Pat- wardhan, and Christopher G Chute. 2007. Mea- sures of semantic similarity and relatedness in the biomedical domain. Journal of biomedical informat- ics, 40(3):288-299.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Linguistically debatable or just plain wrong?",
"authors": [
{
"first": "Barbara",
"middle": [],
"last": "Plank",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "507--511",
"other_ids": {
"DOI": [
"10.3115/v1/P14-2083"
]
},
"num": null,
"urls": [],
"raw_text": "Barbara Plank, Dirk Hovy, and Anders S\u00f8gaard. 2014. Linguistically debatable or just plain wrong? In Pro- ceedings of the 52nd Annual Meeting of the Associa- tion for Computational Linguistics (Volume 2: Short Papers), pages 507-511.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Word and object",
"authors": [
{
"first": "Willard",
"middle": [],
"last": "Van Orman Quine",
"suffix": ""
}
],
"year": 1960,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Willard Van Orman Quine. 1960. Word and object. MIT Press.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Evaluation methods for unsupervised word embeddings",
"authors": [
{
"first": "Tobias",
"middle": [],
"last": "Schnabel",
"suffix": ""
},
{
"first": "Igor",
"middle": [],
"last": "Labutov",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mimno",
"suffix": ""
},
{
"first": "Thorsten",
"middle": [],
"last": "Joachims",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "298--307",
"other_ids": {
"DOI": [
"10.18653/v1/D15-1036"
]
},
"num": null,
"urls": [],
"raw_text": "Tobias Schnabel, Igor Labutov, David Mimno, and Thorsten Joachims. 2015. Evaluation methods for unsupervised word embeddings. In Proceedings of the 2015 conference on Empirical Methods in Natu- ral Language Processing, pages 298-307.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Would you describe a leopard as yellow? Evaluating crowd-annotations with justified and informative disagreement",
"authors": [
{
"first": "Pia",
"middle": [],
"last": "Sommerauer",
"suffix": ""
},
{
"first": "Antske",
"middle": [],
"last": "Fokkens",
"suffix": ""
},
{
"first": "Piek",
"middle": [],
"last": "Vossen",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4798--4809",
"other_ids": {
"DOI": [
"10.18653/v1/2020.coling-main.422"
]
},
"num": null,
"urls": [],
"raw_text": "Pia Sommerauer, Antske Fokkens, and Piek Vossen. 2020. Would you describe a leopard as yellow? Eval- uating crowd-annotations with justified and informa- tive disagreement. In Proceedings of the 28th Inter- national Conference on Computational Linguistics, pages 4798-4809.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "From frequency to meaning: Vector space models of semantics",
"authors": [
{
"first": "D",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Turney",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Pantel",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of artificial intelligence research",
"volume": "37",
"issue": "",
"pages": "141--188",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter D Turney and Patrick Pantel. 2010. From fre- quency to meaning: Vector space models of se- mantics. Journal of artificial intelligence research, 37:141-188.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Evaluating semantic representations of source code",
"authors": [
{
"first": "Yaza",
"middle": [],
"last": "Wainakh",
"suffix": ""
},
{
"first": "Moiz",
"middle": [],
"last": "Rauf",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Pradel",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yaza Wainakh, Moiz Rauf, and Michael Pradel. 2019. Evaluating semantic representations of source code. CoRR, abs/1910.05177.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Phil@Scale: Computational Methods within Philosophy",
"authors": [
{
"first": "Pauline",
"middle": [],
"last": "Van Wierst",
"suffix": ""
},
{
"first": "Sanne",
"middle": [],
"last": "Vrijenhoek",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Schlobach",
"suffix": ""
},
{
"first": "Arianna",
"middle": [],
"last": "Betti",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Third Conference on Digital Humanities in Luxembourg with a Special Focus on Reading Historical Sources in the Digital Age. CEUR Workshop Proceedings, CEUR-WS.org",
"volume": "1681",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pauline van Wierst, Sanne Vrijenhoek, Stefan Schlobach, and Arianna Betti. 2016. Phil@Scale: Computational Methods within Philosophy. In Proceedings of the Third Conference on Digital Humanities in Luxembourg with a Special Focus on Reading Historical Sources in the Digital Age. CEUR Workshop Proceedings, CEUR-WS.org, volume 1681, Aachen.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"text": "Inter-rater agreement in different conditions of (a) the synonym detection task and (b) the coherence task",
"num": null
},
"TABREF2": {
"type_str": "table",
"text": "Example of disagreement and agreement in the synonym detection task. To be read vertically, with target terms in italics. Bolded/marked model terms were chosen by participants to be related to the target term.",
"content": "<table><tr><td colspan=\"2\">ambiguity objects</td><td>sense datum</td></tr><tr><td>parts</td><td>object</td><td>prediction 1</td></tr><tr><td colspan=\"3\">phoneme physical construction</td></tr><tr><td>believe 1</td><td colspan=\"2\">them 12 divisibility 2</td></tr><tr><td>None 2</td><td/><td/></tr></table>",
"html": null,
"num": null
}
}
}
}