{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:58:02.577625Z" }, "title": "ChiSquareX at TextGraphs 2020 Shared Task: Leveraging Pre-trained Language Models for Explanation Regeneration *", "authors": [ { "first": "Aditya", "middle": [ "Girish" ], "last": "Pawate", "suffix": "", "affiliation": {}, "email": "pawate@gmail.com" }, { "first": "Devansh", "middle": [], "last": "Chandak", "suffix": "", "affiliation": {}, "email": "dchandak99@gmail.com" }, { "first": "Varun", "middle": [], "last": "Madhavan", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this work, we describe the system developed by a group of undergraduates from the Indian Institutes of Technology, for the Shared Task at TextGraphs-14 on Multi-Hop Inference Explanation Regeneration (Jansen and Ustalov, 2020). The shared task required participants to develop methods to reconstruct gold explanations for elementary science questions from the WorldTree Corpus (Xie et al., 2020). Although our research was not funded by any organization and all the models were trained on freely available tools like Google Colab which restricted our computational capabilities, we have managed to achieve noteworthy results placing ourselves in the 4th place with a MAP score of 0.4902 1 in the evaluation leaderboard and 0.5062 MAP score on the post-evaluation-phase leaderboard using RoBERTa. 
We incorporated some of the methods proposed in the previous edition, TextGraphs-13 (Chia et al., 2019), which proved to be very effective, improved upon them, and built a model on top of them using powerful state-of-the-art pre-trained language models like RoBERTa (", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "In this work, we describe the system developed by a group of undergraduates from the Indian Institutes of Technology for the Shared Task at TextGraphs-14 on Multi-Hop Inference Explanation Regeneration (Jansen and Ustalov, 2020). The shared task required participants to develop methods to reconstruct gold explanations for elementary science questions from the WorldTree Corpus (Xie et al., 2020). Although our research was not funded by any organization and all models were trained on freely available tools like Google Colab, which restricted our computational capabilities, we achieved noteworthy results, placing 4th with a MAP score of 0.4902 1 on the evaluation leaderboard and a MAP score of 0.5062 on the post-evaluation-phase leaderboard using RoBERTa. We incorporated some of the methods proposed in the previous edition, TextGraphs-13 (Chia et al., 2019), which proved to be very effective, improved upon them, and built a model on top of them using powerful state-of-the-art pre-trained language models like RoBERTa (", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The Shared Task is aimed at Multi-hop Inference for Explanation Regeneration. 
Participants are required to develop new methods, and improve existing ones, to reconstruct gold explanations for the WorldTree Corpus (Xie et al., 2020) of elementary science questions, their answers, and explanations.", "cite_spans": [ { "start": 206, "end": 224, "text": "(Xie et al., 2020)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Question: Which of the following is an example of an organism taking in nutrients? (A) a dog burying a bone (B) a girl eating an apple (C) an insect crawling on a leaf (D) a boy planting tomatoes Answer: (B) a girl eating an apple Gold Explanation Facts: 1) A girl means a human girl: Grounding 2) Humans are living organisms: Grounding 3) Eating is when an organism takes in nutrients in the form of food: Central 4) Fruits are kinds of foods: Grounding 5) An apple is a kind of fruit: Grounding Irrelevant Explanation Facts: 1) Some flowers become fruits. 2) Fruit contains seeds. 3) Living things live in their habitat. 4) Consumers eat other organisms. This example illustrates the task: systems need to perform multi-hop inference to combine diverse information and identify the relevant explanation sentences required to answer the specific question. The task provides a new and more challenging corpus of 9029 explanations and a set of gold explanations for each question and correct answer pair. Table 3 (Dataset Comparison) contrasts the two editions: questions, 1680 (TG 2019) vs. 4367 (TG 2020); explanations, 4950 vs. 9029; tables, 62 vs. 81.", "cite_spans": [], "ref_spans": [ { "start": 1020, "end": 1110, "text": "Table 3", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The dataset is the WorldTree Corpus V2.1 (Xie et al., 2020) of Explanation Graphs and Inference Patterns supporting Multi-hop Inference (February 2020 snapshot). 
It is a newer version of the dataset used in TextGraphs 2019 (Jansen and Ustalov, 2019). The comparison between the two datasets is shown in Table 3.", "cite_spans": [ { "start": 41, "end": 59, "text": "(Xie et al., 2020)", "ref_id": "BIBREF11" }, { "start": 228, "end": 254, "text": "(Jansen and Ustalov, 2019)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 309, "end": 316, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Dataset", "sec_num": "2" }, { "text": "The problem statement requires participants to build a system that, given a question and its answer choices, can identify the sentences that explain the answer to the question. This is challenging because the corpus contains many irrelevant sentences whose lexical and semantic overlap with the question is as significant as that of the correct ones (Fried et al., 2015). A more classical graph-theoretic approach using the semantic overlap of explanations and questions leads to the problem of semantic drift (Jansen, 2018). Classic graph methods were also attempted in (Kwon et al., 2018), where the challenge of semantic drift in multi-hop inference was analyzed and the effectiveness of information extraction methods was demonstrated. Approaching the question as a language generation task is not effective either: the current state-of-the-art models (Du\u0161ek et al., 2020) are not capable of generating the exact explanations required by this task. The task can, however, naturally be cast as a sentence ranking problem, in which the relevant facts must be ranked above all other facts present in the corpus. The evaluation metric used for the task is the widely used and robust mean average precision (MAP) metric. We describe a few initial experiments in Section 4.1, followed by the pre-processing methods we incorporated in Section 4.2. We then discuss our models in Sections 4.3 through 4.6. 
Finally, we present our results and discussion in Section 5, followed by the conclusion and acknowledgments.", "cite_spans": [ { "start": 376, "end": 396, "text": "(Fried et al., 2015)", "ref_id": "BIBREF3" }, { "start": 552, "end": 566, "text": "(Jansen, 2018)", "ref_id": "BIBREF6" }, { "start": 614, "end": 633, "text": "(Kwon et al., 2018)", "ref_id": "BIBREF7" }, { "start": 903, "end": 923, "text": "(Du\u0161ek et al., 2020)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Problem Review", "sec_num": "3" }, { "text": "We used the pure textual form of each explanation, question, and correct answer, rather than the semi-structured form given in the column-oriented files provided with the dataset. Initially, we reduced the original question text, which included all the answer choices, by removing the incorrect answers; this alone improved performance, similar to what was seen in the previous edition of the task. With the TFIDF baseline and this basic pre-processing, we obtained a MAP score of 0.3065 on the hidden test set. Taking this as the starting point, we built a SentenceBERT model that converted all questions and explanations into contextual embedding vectors and ranked the explanations in descending order of cosine similarity of the embedded vectors. We observed a drop in performance to a MAP score of 0.2427 on the test set, worse than the simple TFIDF ranker. On further inspection, we realized that the semantic overlap between the question and the irrelevant explanations caused this unexpected drop. We therefore decided not to use contextual word embeddings but instead to improve the simple yet effective TFIDF information retrieval technique for the ranker. We then used Sublinear TFIDF 2 and Binary TFIDF. 
The optimized Sublinear TFIDF vectorizer boosted the score to 0.3254 MAP.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Initial Experiments", "sec_num": "4.1" }, { "text": "The TFIDF algorithm proved very sensitive to keywords, so we applied the pre-processing and optimization techniques described in (Chia et al., 2019). We performed Penn Treebank tokenization, followed by lemmatization using the lemmatization files provided with the dataset. 3 We used NLTK for tokenization, which reduces the vocabulary size by collapsing the different forms of the same keyword. We also removed stopwords, which reduced noise in the texts. A simple TFIDF based ranker with the above pre-processing returned a MAP score of 0.3850. Substituting Sublinear TFIDF increased the score to 0.4080 MAP, and with some experimentation we further improved it to 0.426 MAP using Binary TFIDF. Finally, we applied Recursive TFIDF as proposed in (Chia et al., 2019), where the TFIDF vector is treated as a representation of the current chain of reasoning; each successive iteration builds on this representation to accumulate a sequence of explanations. We optimized the remaining variables: normalization, maxlen, number of hops, and the scaling factor. We found the MAP score to be completely independent of the normalization used. For maxlen = {128, 125, 144}, we found maxlen = 128 to be most effective. For number of hops = {1, 2, 3}, we found 1 to be best; this may be because semantic drift creeps in as we explore nodes further away from the current node. A scaling factor downweights each successive explanation as it is added to the TFIDF vector; among the downscaling factors = {1.25, 1.3, 1.35}, we found 1.25 to be optimal. Combined with Binary TFIDF, this gave a slight improvement to 0.4430 MAP. 
All these steps were performed as part of pre-processing.", "cite_spans": [ { "start": 143, "end": 162, "text": "(Chia et al., 2019)", "ref_id": "BIBREF1" }, { "start": 308, "end": 309, "text": "3", "ref_id": null }, { "start": 841, "end": 860, "text": "(Chia et al., 2019)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Preprocessing", "sec_num": "4.2" }, { "text": "After the pre-processing steps, we tried a simple, pure language model based approach that has shown good performance on text classification tasks. We took each processed question and concatenated each of the 9029 explanations to it, one at a time. For each of these question + explanation pairs, we used a simple language model based BERT classifier (BERTForSequenceClassification) to predict whether the explanation was one of the gold explanations for that question. This yielded 0.4116 MAP, lower than we expected. We identified two major problems with this simplistic approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pure Language Model approach", "sec_num": "4.3" }, { "text": "\u2022 Class imbalance: Most question-explanation pairs would be labeled 0 (False), since out of the 9029 total explanations only a few are actually gold explanations for a given question. This causes the classifier to output the 0 label almost all the time and prevents it from learning the true relations between the gold explanations and the question. The class imbalance could possibly be mitigated by searching for better hyperparameter values; however, we were not able to do so with the available resources, so we applied a different technique to address it. \u2022 Non-scalability: This approach requires one inference per explanation in the corpus for every question. 
While this is possible for the relatively small corpus of 9029 explanations, the approach stops being feasible as the number of explanations grows, requiring too much time for training and, more importantly, for inference.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pure Language Model approach", "sec_num": "4.3" }, { "text": "To address the above problems, we applied the optimal TFIDF vectorizer (Binary + Recursive TFIDF) obtained in the pre-processing step to first retrieve the most relevant explanations for a given question based on the lexical overlap between the question and the explanations. The number of explanations retrieved by this initial ranker (top k) was a tuned parameter. This technique was very effective at retrieving the gold explanations for a question. The fraction of gold explanations retrieved when we consider the top k explanations is shown in Table 5 . We can see that almost 87% of the gold explanations are retrieved when we take the top 100 explanations from TFIDF, and almost 99% are retrieved when we take the top 500. This saves computation, as we now need to train the model on at most 500 explanations per question instead of 9029. The top k retrieved explanations are then concatenated to the questions as in the previous approach, and the classifier model is trained to predict whether a given explanation among the top k is a gold explanation for the question or not. Using the BERTForSequenceClassification model, the MAP score was 0.4365. 
We inferred that the score was still low because the model predicted the 0 label for almost all inputs: a significant class imbalance remained in the training dataset, though much smaller than before.", "cite_spans": [], "ref_spans": [ { "start": 517, "end": 524, "text": "Table 5", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Using TFIDF to retrieve relevant explanations", "sec_num": "4.4" }, { "text": "To address the class imbalance in question-explanation pairs, we applied a simple approach: oversampling the minority class (positive, or '1', label). We repeated the gold explanations during training so that, for each question, the numbers of positively and negatively labeled explanations were equal (top k/2 each). Hence the training explanations for a given question were the top k/2 negatively labeled explanations plus the positively labeled explanations retrieved by TFIDF, repeated until they numbered top k/2. This was applied only during training, not during inference on the validation and test datasets. Using this simple technique, we obtained a significant boost in performance over the BERT baseline, with a 0.4506 MAP score. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Addressing class imbalance", "sec_num": "4.5" }, { "text": "We tried out the pre-trained language models available for sequence classification. We optimized the following hyperparameters: top k, num train epochs, batch size, learning rate, epsilon, gradient accumulation steps, max grad norm, and weight decay. The batch size depends on the available GPU RAM, while top k and num train epochs are constrained by the training time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pre-trained Language Models", "sec_num": "4.6" }, { "text": "Since we needed to limit the training time, we first trained all models with top k = 100 for 3 epochs to get a preliminary measure of model performance. 
Then we took the best models and trained them with a higher top k (500, or 300 where 500 was not feasible) to boost the score. Our best performing model took close to 8 hours to train. Further details of the models are given in the supplementary material. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pre-trained Language Models", "sec_num": "4.6" }, { "text": "We present the scores in the table below. We obtained our best performance from RoBERTa. The results show only slight variation in the final scores of most pre-trained language models, and we observe that the models overfit the given data. Due to computational constraints we could not perform a grid search over all parameters and had to search for the best hyperparameters manually, so the performance of any given model may not be optimal. Further, we trained only RoBERTa and BART with top k = 500; the other models were excluded because of their longer training times or higher RAM requirements. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and discussion", "sec_num": "5" }, { "text": "We have described the system of our team ChiSquareX, which placed 4th on the evaluation-phase leaderboard with a MAP score of 0.4902. Our system combines optimized pre-processing of the dataset with an optimized TFIDF information retrieval scheme to obtain initial ranks, followed by a pre-trained language model based re-ranker to produce the final explanation ranking. 
Despite the computational constraints, by leveraging Google Colab and other open-source tools we managed to fine-tune state-of-the-art pre-trained language models like RoBERTa, BART, and ELECTRA on the (Xie et al., 2020) dataset and achieve a reasonable MAP score.", "cite_spans": [ { "start": 604, "end": 622, "text": "(Xie et al., 2020)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Full, replicable code for all methods described here is available on GitHub at https://github.com/dchandak99/TextGraphs-2020 This work is licensed under a Creative Commons Attribution 4.0 International Licence. Licence details: http://creativecommons.org/licenses/by/4.0/.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://nlp.stanford.edu/IR-book/html/htmledition/sublinear-tf-scaling-1.html 3 PTB tokenization and stopwords from the NLTK package", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would first like to thank the organizers, Peter Jansen and Dmitry Ustalov, for holding this shared task; it was a great learning experience for us. We would also like to thank the participants of TextGraphs 2019, whose work was a great source of inspiration for how to proceed with the task. (Chia et al., 2019) in particular was the source of a number of simple but effective text pre-processing techniques that greatly improve performance. 
Additionally, we would like to extend our sincere thanks to the makers and maintainers of the excellent HuggingFace (Wolf et al., 2020) library, without which most of our research would have been impossible.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Scibert: A pretrained language model for scientific text", "authors": [ { "first": "Iz", "middle": [], "last": "Beltagy", "suffix": "" }, { "first": "Kyle", "middle": [], "last": "Lo", "suffix": "" }, { "first": "Arman", "middle": [], "last": "Cohan", "suffix": "" } ], "year": 2019, "venue": "EMNLP/IJCNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. Scibert: A pretrained language model for scientific text. In EMNLP/IJCNLP.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Red dragon AI at TextGraphs 2019 shared task: Language model assisted explanation generation", "authors": [ { "first": "Yew", "middle": [ "Ken" ], "last": "Chia", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Witteveen", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Andrews", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Thirteenth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-13)", "volume": "", "issue": "", "pages": "85--89", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yew Ken Chia, Sam Witteveen, and Martin Andrews. 2019. Red dragon AI at TextGraphs 2019 shared task: Language model assisted explanation generation. In Proceedings of the Thirteenth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-13), pages 85-89, Hong Kong, November. 
Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Evaluating the State-of-the-Art of End-to-End Natural Language Generation: The E2E NLG Challenge", "authors": [ { "first": "Ond\u0159ej", "middle": [], "last": "Du\u0161ek", "suffix": "" }, { "first": "Jekaterina", "middle": [], "last": "Novikova", "suffix": "" }, { "first": "Verena", "middle": [], "last": "Rieser", "suffix": "" } ], "year": 2020, "venue": "Computer Speech & Language", "volume": "59", "issue": "", "pages": "123--156", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ond\u0159ej Du\u0161ek, Jekaterina Novikova, and Verena Rieser. 2020. Evaluating the State-of-the-Art of End-to-End Natural Language Generation: The E2E NLG Challenge. Computer Speech & Language, 59:123-156, January.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Higher-order lexical semantic models for non-factoid answer reranking", "authors": [ { "first": "Daniel", "middle": [], "last": "Fried", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Jansen", "suffix": "" }, { "first": "Gustave", "middle": [], "last": "Hahn-Powell", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2015, "venue": "Transactions of the Association for Computational Linguistics", "volume": "3", "issue": "0", "pages": "197--210", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Fried, Peter Jansen, Gustave Hahn-Powell, Mihai Surdeanu, and Peter Clark. 2015. Higher-order lexical semantic models for non-factoid answer reranking. 
Transactions of the Association for Computational Linguistics, 3(0):197-210.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "TextGraphs 2019 Shared Task on Multi-Hop Inference for Explanation Regeneration", "authors": [ { "first": "Peter", "middle": [], "last": "Jansen", "suffix": "" }, { "first": "Dmitry", "middle": [], "last": "Ustalov", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Thirteenth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-13)", "volume": "", "issue": "", "pages": "63--77", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Jansen and Dmitry Ustalov. 2019. TextGraphs 2019 Shared Task on Multi-Hop Inference for Explanation Regeneration. In Proceedings of the Thirteenth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-13), pages 63-77, Hong Kong. Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "TextGraphs 2020 Shared Task on Multi-Hop Inference for Explanation Regeneration", "authors": [ { "first": "Peter", "middle": [], "last": "Jansen", "suffix": "" }, { "first": "Dmitry", "middle": [], "last": "Ustalov", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Graph-based Methods for Natural Language Processing (TextGraphs)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Jansen and Dmitry Ustalov. 2020. TextGraphs 2020 Shared Task on Multi-Hop Inference for Explanation Regeneration. In Proceedings of the Graph-based Methods for Natural Language Processing (TextGraphs). 
Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Multi-hop inference for sentence-level TextGraphs: How challenging is meaningfully combining information for science question answering?", "authors": [ { "first": "Peter", "middle": [], "last": "Jansen", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Twelfth Workshop on Graph-Based Methods for Natural Language Processing", "volume": "", "issue": "", "pages": "12--17", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Jansen. 2018. Multi-hop inference for sentence-level TextGraphs: How challenging is meaningfully combining information for science question answering? In Proceedings of the Twelfth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-12), pages 12-17, New Orleans, Louisiana, USA, June. Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Controlling information aggregation for complex question answering", "authors": [ { "first": "Heeyoung", "middle": [], "last": "Kwon", "suffix": "" }, { "first": "Harsh", "middle": [], "last": "Trivedi", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Jansen", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "Niranjan", "middle": [], "last": "Balasubramanian", "suffix": "" } ], "year": 2018, "venue": "Advances in Information Retrieval", "volume": "", "issue": "", "pages": "750--757", "other_ids": {}, "num": null, "urls": [], "raw_text": "Heeyoung Kwon, Harsh Trivedi, Peter Jansen, Mihai Surdeanu, and Niranjan Balasubramanian. 2018. Controlling information aggregation for complex question answering. In Gabriella Pasi, Benjamin Piwowarski, Leif Azzopardi, and Allan Hanbury, editors, Advances in Information Retrieval, pages 750-757, Cham. 
Springer International Publishing.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension", "authors": [ { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Marjan", "middle": [], "last": "Ghazvininejad", "suffix": "" }, { "first": "Abdelrahman", "middle": [], "last": "Mohamed", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "7871--7880", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online, July. 
Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Roberta: A robustly optimized bert pretraining approach", "authors": [ { "first": "Y", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Jingfei", "middle": [], "last": "Du", "suffix": "" }, { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "M", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, M. Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. 
ArXiv, abs/1907.11692.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Huggingface's transformers: State-of-the-art natural language processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Moi", "suffix": "" }, { "first": "Pierric", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rault", "suffix": "" }, { "first": "R\u00e9mi", "middle": [], "last": "Louf", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Funtowicz", "suffix": "" }, { "first": "Joe", "middle": [], "last": "Davison", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Shleifer", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Von Platen", "suffix": "" }, { "first": "Clara", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Yacine", "middle": [], "last": "Jernite", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Plu", "suffix": "" }, { "first": "Canwen", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Teven", "middle": [ "Le" ], "last": "Scao", "suffix": "" }, { "first": "Sylvain", "middle": [], "last": "Gugger", "suffix": "" }, { "first": "Mariama", "middle": [], "last": "Drame", "suffix": "" }, { "first": "Quentin", "middle": [], "last": "Lhoest", "suffix": "" }, { "first": "Alexander", "middle": [ "M" ], "last": "Rush", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. 
Huggingface's transformers: State-of-the-art natural language processing.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "WorldTree v2: A corpus of science-domain structured explanations and inference patterns supporting multi-hop inference", "authors": [ { "first": "Zhengnan", "middle": [], "last": "Xie", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Thiem", "suffix": "" }, { "first": "Jaycie", "middle": [], "last": "Martin", "suffix": "" }, { "first": "Elizabeth", "middle": [], "last": "Wainwright", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Marmorstein", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Jansen", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 12th Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "5456--5473", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhengnan Xie, Sebastian Thiem, Jaycie Martin, Elizabeth Wainwright, Steven Marmorstein, and Peter Jansen. 2020. WorldTree v2: A corpus of science-domain structured explanations and inference patterns supporting multi-hop inference. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 5456-5473, Marseille, France, May. European Language Resources Association.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "text": "Figure 1: The Overall Flow of the Model. Figure 2: Inside the Classification Model", "num": null }, "TABREF0": { "html": null, "content": "
", "text": "An Example for Explanation Regeneration", "type_str": "table", "num": null }, "TABREF2": { "html": null, "content": "
", "text": "Final LeaderboardAttributes", "type_str": "table", "num": null }, "TABREF4": { "html": null, "content": "
top k Fraction Retrieved
100.484
300.676
500.767
800.837
1000.865
3000.965
5000.987
: Initial scores using only pre-processing and TFIDF
", "text": "", "type_str": "table", "num": null }, "TABREF5": { "html": null, "content": "", "text": "Value of parameter top k vs Fraction of gold explanations retrieved consider the top k explanations in", "type_str": "table", "num": null }, "TABREF7": { "html": null, "content": "
", "text": "Hyper Parameter Values", "type_str": "table", "num": null }, "TABREF9": { "html": null, "content": "
", "text": "Final Results", "type_str": "table", "num": null } } } }