{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:48:52.094235Z"
},
"title": "Word Sense Disambiguation with Transformer Models",
"authors": [
{
"first": "Pierre-Yves",
"middle": [],
"last": "Vandenbussche",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Elsevier Labs",
"location": {
"addrLine": "Radarweg 29",
"postCode": "1043 NX",
"settlement": "Amsterdam",
"country": "Netherlands"
}
},
"email": ""
},
{
"first": "Tony",
"middle": [],
"last": "Scerri",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Elsevier",
"location": {
"addrLine": "Labs 1 Appold Street",
"postCode": "EC2A 2UT",
"settlement": "London",
"country": "UK"
}
},
"email": ""
},
{
"first": "Ron",
"middle": [],
"last": "Daniel",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Elsevier Labs",
"location": {
"addrLine": "230 Park Avenue",
"postCode": "10169",
"settlement": "New York",
"region": "NY",
"country": "USA"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we tackle the task of Word Sense Disambiguation (WSD). We present our system submitted to the Word-in-Context Target Sense Verification challenge, part of the SemDeep workshop at IJCAI 2020 (Breit et al., 2020). That challenge asks participants to predict if a specific mention of a word in a text matches a pre-defined sense. Our approach uses pre-trained transformer models such as BERT that are fine-tuned on the task using different architecture strategies. Our model achieves the best accuracy and precision on Subtask 1-make use of definitions for deciding whether the target word in context corresponds to the given sense or not. We believe the strategies we explored in the context of this challenge can be useful to other Natural Language Processing tasks.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we tackle the task of Word Sense Disambiguation (WSD). We present our system submitted to the Word-in-Context Target Sense Verification challenge, part of the SemDeep workshop at IJCAI 2020 (Breit et al., 2020). That challenge asks participants to predict if a specific mention of a word in a text matches a pre-defined sense. Our approach uses pre-trained transformer models such as BERT that are fine-tuned on the task using different architecture strategies. Our model achieves the best accuracy and precision on Subtask 1-make use of definitions for deciding whether the target word in context corresponds to the given sense or not. We believe the strategies we explored in the context of this challenge can be useful to other Natural Language Processing tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Word Sense Disambiguation (WSD) is a fundamental and long-standing problem in Natural Language Processing (NLP) (Navigli, 2009) . It aims at clearly identifying which specific sense of a word is being used in a text. As illustrated in Table 1 , in the sentence I spent my spring holidays in Morocco., the word spring is used in the sense of the season of growth, and not in other senses involving coils of metal, sources of water, the act of jumping, etc.",
"cite_spans": [
{
"start": 112,
"end": 127,
"text": "(Navigli, 2009)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 235,
"end": 242,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The Word-in-Context Target Sense Verification challenge (WiC-TSV) (Breit et al., 2020) structures WSD tasks in particular ways in order to make the competition feasible. In Subtask 1, the system is provided with a sentence, also known as the context, the target word, and a definition also known as word sense. The system is to decide if the use of the target word matches the sense given by the definition. Note that Table 1 contains a Hypernym column. In Subtask 2 system is to decide if the use of the target in the context is a hyponym of the given hypernym. In Subtask 3 the system can use both the sentence and the hypernym in making the decision.",
"cite_spans": [
{
"start": 66,
"end": 86,
"text": "(Breit et al., 2020)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 418,
"end": 425,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The dataset provided with the WiC-TSV challenge has relatively few sense annotated examples (< 4, 000) and with a single target sense per word. This makes pre-trained Transformer models well suited for the task since the small amount of data would limit the learning ability of a typical supervised model trained from scratch.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Thanks to the recent advances made in language models such as BERT (Devlin et al., 2018) or XL-Net (Yang et al., 2019) trained on large corpora, neural language models have established the stateof-the-art in many NLP tasks. Their ability to capture context-sensitive semantic information from text would seem to make them particularly well suited for this challenge. In this paper, we explore different fine-tuning architecture strategies to answer the challenge. Beyond the results of our system, our main contribution comes from the intuition and implementation around this set of strategies that can be applied to other NLP tasks.",
"cite_spans": [
{
"start": 67,
"end": 88,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF2"
},
{
"start": 99,
"end": 118,
"text": "(Yang et al., 2019)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The Word-in-Context Target Sense Verification dataset consists of more than 3800 rows. As shown in Table 1 , each row contains a target word, a context sentence containing the target, and both hypernym(s) and a definition giving a sense of the term. There are both positive and negative examples, the dataset provides a label to distinguish them. Table 2 shows some statistics about the training, dev, and test splits within the dataset. Note the substantial differences between the test set and the training and dev sets. The longer length of the context sentences and definitions in the test set may have an impact on a model trained solely on the given training and dev sets. This is a known ",
"cite_spans": [],
"ref_spans": [
{
"start": 99,
"end": 106,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 347,
"end": 354,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Data Analysis",
"sec_num": "2"
},
{
"text": "Word Sense Disambiguation is a long-standing task in NLP because of its difficulty and subtlety. One way the WiC-TSV challenge has simplified the problem is by reducing it to a binary yes/no decision over a single sense for a single pre-identified target. This is in contrast to most prior work that provides a pre-defined sense inventory, typically WordNet, and requires the system to both identify the terms and find the best matching sense from the inventory. WordNet provides extremely finegrained senses which have been shown to be difficult for humans to accurately select (Hovy et al., 2006) . Coupled with this is the task of even selecting the term in the presence of multi-word expressions and negations.",
"cite_spans": [
{
"start": 579,
"end": 598,
"text": "(Hovy et al., 2006)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System Description and Related Work",
"sec_num": "3"
},
{
"text": "Since the introduction of the transformer selfattention-based neural architecture and its ability to capture complex linguistic knowledge (Vaswani et al., 2017) , their use in resolving WSD has received considerable attention (Loureiro et al., 2020) . A common approach consists in fine-tuning a single pre-trained transformer model to the WSD downstream task. The pre-trained model is provided with the task-specific inputs and further trained for several epochs with the task's objective and negative examples of the objective.",
"cite_spans": [
{
"start": 138,
"end": 160,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF9"
},
{
"start": 226,
"end": 249,
"text": "(Loureiro et al., 2020)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System Description and Related Work",
"sec_num": "3"
},
{
"text": "Our system is inspired from the work of Huang et al. (2019) where the WSD task can be seen as a binary classification problem. The system is given the target word in context (input sentence) and one . This configuration was originally used to predict whether two sentences follow each other in a text. But the learning power of the transformer architecture lets us learn this new task by simply changing the meaning of the fields in the input data while keeping the structure the same. We add a fully connected layer on top of the transformer model's layers with classification function to predict whether the target word in context matches the definition. This approach is particularly well suited for weak supervision and can generalise to word/sense pairs not previously seen in the training set. This overcomes the limitation of multi-class objective models, e.g. (Vial et al., 2019) that use a predefined sense inventory (as described above) and can't generalise to unseen word/sense pairs. An illustration of our system is provided in Figure 1 .",
"cite_spans": [
{
"start": 40,
"end": 59,
"text": "Huang et al. (2019)",
"ref_id": "BIBREF4"
},
{
"start": 868,
"end": 887,
"text": "(Vial et al., 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 1041,
"end": 1049,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "System Description and Related Work",
"sec_num": "3"
},
{
"text": "The system described in the previous section was adapted in several ways as we tested alternatives. We first considered different transformer models, such as BERT v. XLNet. We then concentrated our efforts on one transformer model, BERT-baseuncased, and performed other experiments to improve performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "4"
},
{
"text": "All experiments were run five times with differ- ent random seeds. We report the mean and standard deviation of the system's performance on the metrics of accuracy and F1. We believe this is more informative than a single 'best' number. All models in these experiments are trained on the training set and evaluated on the dev set. In addition to the experiments whose results are reported here, we tried a variety of other things such as pooling methods (layers, ops), a Siamese network with shared encoders for two input sentences, and alternative loss calculations. None of them gave better results in the time available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "4"
},
{
"text": "We compared the following pre-trained transformer models from the HuggingFace transformers library: XLNet (Yang et al., 2019) , BERT (Devlin et al., 2018) , and derived models including RoBERTa (Liu et al., 2019) or DistilBERT (Sanh et al., 2019) .",
"cite_spans": [
{
"start": 106,
"end": 125,
"text": "(Yang et al., 2019)",
"ref_id": "BIBREF11"
},
{
"start": 133,
"end": 154,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF2"
},
{
"start": 194,
"end": 212,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF5"
},
{
"start": 227,
"end": 246,
"text": "(Sanh et al., 2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Alternative Transformer Models",
"sec_num": "4.1"
},
{
"text": "Following standard practice, those pretrained models were used as feature detectors for a finetuning pass using the fully-connected head layer. The results for those models are given in Table 3 . The BERT-base-uncased model performed the best so it was the basis for further experiments described in the next section.",
"cite_spans": [],
"ref_spans": [
{
"start": 186,
"end": 193,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Alternative Transformer Models",
"sec_num": "4.1"
},
{
"text": "It is worth mentioning that no attempt was made to perform a hyperparameter optimization for each model. Instead, a single set of hyperparameters was used for all the models being compared. Table 4 : Influence of strategies on model performance. We note in bold those that had a positive impact on the performance",
"cite_spans": [],
"ref_spans": [
{
"start": 190,
"end": 197,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Alternative Transformer Models",
"sec_num": "4.1"
},
{
"text": "Having selected the BERT-base-uncased pretrained transformer model, and staying with a single set of hyperparameters (learning rate = 5e \u22125 and 3 training epochs), there are still many different strategies that could be used to try to improve performance. The individual strategies are discussed below. The results for all the strategies are presented in Table 4 4",
"cite_spans": [],
"ref_spans": [
{
"start": 355,
"end": 362,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Alternative BERT Strategies",
"sec_num": "4.2"
},
{
"text": "We wondered if the context of the target word was sufficient for the model to predict whether the definition is correct. By masking the target word from the input sentence, we test the ability of the model to learn solely from the contextual words. We hoped this might improve its generalisation. Masking led to a small decrease in performance. This small delta indicates that the non-target words in the context have strong influence on the model's prediction of the correct sense.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": ".2.1 Masking the target word",
"sec_num": null
},
{
"text": "We wondered about the impact of taking the opposite tack and calling out the target word. As illustrated in Figure 1 , some transformer models make use of token type ids (segment token indices) to indicate the first and second portion of the inputs.",
"cite_spans": [],
"ref_spans": [
{
"start": 108,
"end": 116,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Emphasising the word of interest",
"sec_num": "4.2.2"
},
{
"text": "We set the token(s) type of the target word in the input sentence to match that of the definition. Applying this strategy leads to a slight improvement in accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Emphasising the word of interest",
"sec_num": "4.2.2"
},
{
"text": "The community has developed several common ways to select the input for the head binary classification layer. We compare the performance using the dedicated [CLS] token vector v. mean/maxpooling methods applied to the sequence hidden states of selected layers of the transformer model. Applying mean-pooling to the last layer gave the best accuracy and F1 of the configurations tested.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CLS vs. pooling over token sequence",
"sec_num": "4.2.3"
},
{
"text": "Another strategy centers on whether, and how, to update the pre-trained model parameters during fine-tuning, in addition to the training of the newly initialized fully connected head layer. Updating the pre-trained model would allow it to specialize on our downstream task but might lead to \"catastrophic forgetting\" where we destroy the benefit of the pre-trained model. One strategy the community has evolved (Bevilacqua and Navigli, 2020) first freezes the transformer model's parameters for several epochs while the head layer receives the updates. Later the pre-trained parameters are unfrozen and updated too. This strategy provides some improvements in accuracy and F1.",
"cite_spans": [
{
"start": 411,
"end": 441,
"text": "(Bevilacqua and Navigli, 2020)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Weight Training vs. Freeze-then-Thaw",
"sec_num": "4.2.4"
},
{
"text": "Due to the small size of the training dataset, we experimented with data augmentation techniques while using only the data provided for the challenge. For each word in context/target sense pair, we generated:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data augmentation",
"sec_num": "4.2.5"
},
{
"text": "\u2022 one positive example by replacing the target word with a random hypernym, if any exist.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data augmentation",
"sec_num": "4.2.5"
},
{
"text": "\u2022 one negative example by associating the target word to a random definition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data augmentation",
"sec_num": "4.2.5"
},
{
"text": "This strategy triples the size of the training dataset. This strategy gave the greatest improvement (3.6%) of all those tested. Further work could test the effect of more negative examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data augmentation",
"sec_num": "4.2.5"
},
{
"text": "For the WiC-TSV challenge's Subtask 3 , the system can use the additional information of hypernyms of the target word. We simply concatenate the hypernyms to the definition. This strategy leads Model Acc. Prec. Recall F1 Baseline (BERT) .753 .717 .849 .777",
"cite_spans": [
{
"start": 230,
"end": 236,
"text": "(BERT)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Using Hypernyms (Subtask 3)",
"sec_num": "4.2.6"
},
{
"text": ".775 .804 .736 .769",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Run1",
"sec_num": null
},
{
"text": ".778 .819 .722 .768 Table 5: Model's Results on the Subtask 1 of the WiC-TSV challenge to a slight performance improvement, presumably because the hypernym indirectly emphasizes the intended sense of the target word.",
"cite_spans": [],
"ref_spans": [
{
"start": 20,
"end": 86,
"text": "Table 5: Model's Results on the Subtask 1 of the WiC-TSV challenge",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Run2",
"sec_num": null
},
{
"text": "The challenge allowed each participant to submit two results per task. However there was no clear winner from the strategies above; most led to a minimal improvement with a substantial standard deviation. We therefore selected our system for submitted results by a grid search over common hyperparameter values including the strategies mentioned previously. We use the train set for training and dev set to measure the performance of each model in the grid search. We chose accuracy as the main evaluation metric. For Subtask 1 we opted for the following parameters:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Challenge Submission",
"sec_num": "5"
},
{
"text": "\u2022 Run1: BERT-base-uncased model trained for 3 epochs using the augmented dataset, with a learning rate of 7e \u22126 and emphasising the word of interest. Other parameters include: max sequence length of 256; train batch size of 32.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Challenge Submission",
"sec_num": "5"
},
{
"text": "\u2022 Run2: we kept the parameters from the previous run, updating the learning rate to 1e \u22125 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Challenge Submission",
"sec_num": "5"
},
{
"text": "The results on the private test set of the Subtask 1 are presented in Table 5 . The Run2 of our system demonstrated a 3.3% accuracy and 14.2% precision improvements compared to the baseline. For Subtask 3 we arrived at the following parameters:",
"cite_spans": [],
"ref_spans": [
{
"start": 70,
"end": 77,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Challenge Submission",
"sec_num": "5"
},
{
"text": "\u2022 Run1: BERT-base-uncased model trained for 3 epochs using the original dataset, with a learning rate of 1e \u22125 . Other parameters include: max sequence length of 256; train batch size of 32.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Challenge Submission",
"sec_num": "5"
},
{
"text": "\u2022 Run2: we kept the parameters from the previous run, extending the number of training epochs to 5. Figure 2 : Influence of training data size on model performance. We used the augmented dataset to reach a proportion of 3. Parameters from Subtask 1 Run2 were used for this comparison.",
"cite_spans": [],
"ref_spans": [
{
"start": 100,
"end": 108,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Challenge Submission",
"sec_num": "5"
},
{
"text": "The results on the private test set of the SubTask 3 are presented in Table 6 . Compared to using the sentence and definition alone, our naive approach to handling hypernyms hurt performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 70,
"end": 77,
"text": "Table 6",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Challenge Submission",
"sec_num": "5"
},
{
"text": "We applied transformer models to tackle a Word Sense Disambiguation challenge. As in much of the current NLP research, pre-trained transformer models demonstrated a good ability to learn from few examples with high accuracy. Using different architecture modifications, and in particular the use of the token type id to flag the word of interest along with automatically augmented data, our system demonstrated the best accuracy and precision in the competition and third-best F1. There is still a noticeable gap to human performance on this dataset (85.3 acc.), but the level of effort required to create these kinds of systems is easily within reach of small groups or individuals. Despite the test set having a very different distribution than the training/development sets, our system demonstrated similar performance on both the development and test sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Future Work",
"sec_num": "6"
},
{
"text": "An analysis of the errors produced by our best performing model on the dev set (Subtask 1, Run2) is presented in Table 7 . It shows a mix of obvious errors and more ambiguous ones where it has been difficult for the model to draw conclusions from the limited context provided by the sentence. For instance, the short sentence it's my go could very well correspond to the associated definition a usually brief attempt of the target word go.",
"cite_spans": [],
"ref_spans": [
{
"start": 113,
"end": 120,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion and Future Work",
"sec_num": "6"
},
{
"text": "As motivated by the construction of an augmented dataset, we believe that increasing the size of the training dataset would probably lead to improved performance, even without system changes. To test this hypothesis we measured the performance of our best model with increasing fractions of the training data. The results in Figure 2 show improvement as the fraction of the training dataset grows.",
"cite_spans": [],
"ref_spans": [
{
"start": 325,
"end": 333,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion and Future Work",
"sec_num": "6"
},
{
"text": "As a counterbalance to the positive note above, we must note that this challenge set up WSD as a binary classification problem. This is a considerable simplification from the more general sense inventory approach. Further work will be needed to obtain similar accuracy in that regime.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Future Work",
"sec_num": "6"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Breaking through the 80% glass ceiling: Raising the state of the art in word sense disambiguation by incorporating knowledge graph information",
"authors": [
{
"first": "Michele",
"middle": [],
"last": "Bevilacqua",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2854--2864",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michele Bevilacqua and Roberto Navigli. 2020. Break- ing through the 80% glass ceiling: Raising the state of the art in word sense disambiguation by incor- porating knowledge graph information. In Proceed- ings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2854-2864.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Wic-tsv: An evaluation benchmark for target sense verification of words in context",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Breit",
"suffix": ""
},
{
"first": "Artem",
"middle": [],
"last": "Revenko",
"suffix": ""
},
{
"first": "Kiamehr",
"middle": [],
"last": "Rezaee",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Taher Pilehvar",
"suffix": ""
},
{
"first": "Jose",
"middle": [],
"last": "Camacho-Collados",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.15016"
]
},
"num": null,
"urls": [],
"raw_text": "Anna Breit, Artem Revenko, Kiamehr Rezaee, Moham- mad Taher Pilehvar, and Jose Camacho-Collados. 2020. Wic-tsv: An evaluation benchmark for tar- get sense verification of words in context. arXiv preprint, arXiv:2004.15016.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Ontonotes: The 90 In HLT-NAACL",
"authors": [
{
"first": "E",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Weischedel",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Hovy, M. Marcus, Martha Palmer, L. Ramshaw, and R. Weischedel. 2006. Ontonotes: The 90 In HLT- NAACL.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Glossbert: Bert for word sense disambiguation with gloss knowledge",
"authors": [
{
"first": "Luyao",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Chi",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Xipeng",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1908.07245"
]
},
"num": null,
"urls": [],
"raw_text": "Luyao Huang, Chi Sun, Xipeng Qiu, and Xuanjing Huang. 2019. Glossbert: Bert for word sense dis- ambiguation with gloss knowledge. arXiv preprint arXiv:1908.07245.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Language models and word sense disambiguation: An overview and analysis",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Loureiro",
"suffix": ""
},
{
"first": "Kiamehr",
"middle": [],
"last": "Rezaee",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Taher Pilehvar",
"suffix": ""
},
{
"first": "Jose",
"middle": [],
"last": "Camacho-Collados",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2008.11608"
]
},
"num": null,
"urls": [],
"raw_text": "Daniel Loureiro, Kiamehr Rezaee, Mohammad Taher Pilehvar, and Jose Camacho-Collados. 2020. Lan- guage models and word sense disambiguation: An overview and analysis. arXiv preprint arXiv:2008.11608.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Word sense disambiguation: A survey",
"authors": [
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2009,
"venue": "ACM computing surveys (CSUR)",
"volume": "41",
"issue": "",
"pages": "1--69",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roberto Navigli. 2009. Word sense disambiguation: A survey. ACM computing surveys (CSUR), 41(2):1- 69.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.01108"
]
},
"num": null,
"urls": [],
"raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Sense vocabulary compression through the semantic knowledge of wordnet for neural word sense disambiguation",
"authors": [
{
"first": "Lo\u00efc",
"middle": [],
"last": "Vial",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Lecouteux",
"suffix": ""
},
{
"first": "Didier",
"middle": [],
"last": "Schwab",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1905.05677"
]
},
"num": null,
"urls": [],
"raw_text": "Lo\u00efc Vial, Benjamin Lecouteux, and Didier Schwab. 2019. Sense vocabulary compression through the se- mantic knowledge of wordnet for neural word sense disambiguation. arXiv preprint arXiv:1905.05677.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Xlnet: Generalized autoregressive pretraining for language understanding",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Russ",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5753--5763",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in neural in- formation processing systems, pages 5753-5763.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "System overview sense of the word separated by a special token ([SEP])"
},
"TABREF1": {
"type_str": "table",
"html": null,
"num": null,
"text": "Examples of training data.",
"content": "<table><tr><td>issue whose roots are explained in the dataset au-</td></tr><tr><td>thor's paper (Breit et al., 2020). The training and</td></tr><tr><td>development sets come from WordNet and Wik-</td></tr><tr><td>tionary while the test set incorporates both gen-</td></tr><tr><td>eral purpose sources WordNet and Wiktionary, and</td></tr><tr><td>domain-specific examples from Cocktails, Medical</td></tr><tr><td>Subjects and Computer Science. The difference in</td></tr><tr><td>the distributions of the test set from the training</td></tr><tr><td>and dev sets, the short length of the definitions and</td></tr><tr><td>hypernyms, and the relatively small number of ex-</td></tr><tr><td>amples all combine to provide a good challenge for</td></tr><tr><td>the language models.</td></tr></table>"
},
"TABREF3": {
"type_str": "table",
"html": null,
"num": null,
"text": "Dataset statistics. Values with \u00b1 are mean and SD .030 .666 \u00b1 .020 DistilBERT-base-uncased .612 \u00b1 .017 .665 \u00b1 .017 RoBERTa-base .635 \u00b1 .074 .717 \u00b1 .030 BERT-base-uncased .723 \u00b1 .023 .751 \u00b1 .023",
"content": "<table><tr><td>Model</td><td>Accuracy</td><td>F1</td></tr><tr><td>XLNet-base-cased</td><td>.522 \u00b1</td><td/></tr></table>"
},
"TABREF4": {
"type_str": "table",
"html": null,
"num": null,
"text": "",
"content": "<table/>"
},
"TABREF7": {
"type_str": "table",
"html": null,
"num": null,
"text": "Model's Results on the Subtask 3 of the WiC-TSV challenge",
"content": "<table><tr><td>Metric value</td><td>0.6 0.65 0.7 0.75</td><td>F1 Accuracy</td></tr><tr><td/><td>0 . 2 5 0 . 5 0 . 7 5 1</td><td>3</td></tr><tr><td/><td colspan=\"2\">Percentage of training data</td></tr></table>"
}
}
}
}