| { |
| "paper_id": "2020", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T10:23:46.782450Z" |
| }, |
| "title": "NITK NLP at FinCausal-2020 Task 1 Using BERT and Linear models", |
| "authors": [ |
| { |
| "first": "Anand", |
| "middle": [], |
| "last": "Kumar", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "National Institute of Technology Karnataka", |
| "location": {} |
| }, |
| "email": "manandkumar@nitk.edu.in" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "FinCausal-2020 is a shared task focusing on causality detection in factual financial data. Financial facts on their own provide little explanation for the variability of the data. This paper proposes an efficient method to classify sentences according to whether they express a financial cause. Several models were used to classify the data; of these, the SVM model gave an F-score of 0.9435, while BERT with task-specific fine-tuning achieved the best result, with an F-score of 0.9677.", |
| "pdf_parse": { |
| "paper_id": "2020", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "FinCausal-2020 is a shared task focusing on causality detection in factual financial data. Financial facts on their own provide little explanation for the variability of the data. This paper proposes an efficient method to classify sentences according to whether they express a financial cause. Several models were used to classify the data; of these, the SVM model gave an F-score of 0.9435, while BERT with task-specific fine-tuning achieved the best result, with an F-score of 0.9677.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "The important aspects of financial news are its variability and the impact it causes. Causality is an essential and well-known topic in information retrieval. Several NLP methods can be used to find the relationship between financial data and its effects. The main focus of this work is to develop a better solution for the FinCausal-2020 shared task (Mariko et al., 2020). This shared task focuses on determining causality associated with the transformation of financial objects in quantified facts.", |
| "cite_spans": [ |
| { |
| "start": 384, |
| "end": 405, |
| "text": "(Mariko et al., 2020)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We applied classification to the financial data using a linear model and the deep learning BERT (Devlin et al., 2018) model. For the linear model, an SVM classifier was used and further fine-tuned to produce better results. For the deep learning model, the fine-tuned BERT-base uncased version was used. The paper is organised as follows: the data are described in Section 2, the system in Section 3, the results and discussion in Section 4, and the conclusions in Section 5.", |
| "cite_spans": [ |
| { |
| "start": 96, |
| "end": 117, |
| "text": "(Devlin et al., 2018)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The task organisers provided the data for the shared task as three CSV files: trial, practice, and evaluation. These data were extracted from a corpus of 2019 financial news provided by Quam. The original data were HTML pages corresponding to a daily financial news feed, from which raw text was extracted and arranged into the columns Index, Text, and Category. Initially, the trial and practice datasets were released to build and train the model; their composition is shown in Table 1. The trial data contain 8580 sentences with labels indicating whether causality is present (1) or not (0); the practice data similarly contain 13478 sentences. The evaluation data contain 7386 unlabelled sentences, which were to be evaluated and appended with prediction labels.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset Description", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Example Sentences from the Dataset:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset Description", |
| "sec_num": "2" |
| }, |
| { |
| "text": "\u2022 Virtually free comprehensive medical care would lead to big increases in the demand for services:0", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset Description", |
| "sec_num": "2" |
| }, |
| { |
| "text": "\u2022 Transat loss more than doubles as it works to complete Air Canada deal:1 Data / # of Sentences / Category 0 (No causality) / Category 1 (Causality): Trial / 8580 / 8011 / 569; Practice / 13478 / 12468 / 1010. Table 1: FinCausal 2020 dataset details", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 122, |
| "end": 197, |
| "text": "(Causality) Trial 8580 8011 569 Practice 13478 12468 1010 Table 1", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Dataset Description", |
| "sec_num": "2" |
| }, |
| { |
| "text": "We developed both linear and deep learning models: an SVM classifier as the linear model and BERT as the deep learning model. The experiments were run on Google Colaboratory with GPUs ranging from 10GB to 16GB (Tesla K80/Tesla P100). Each system is described below. The linear SVM classifier was given as a baseline by the task organisers; we also tried an NBSVM model (Wang and Manning, 2012). To apply the SVM model to the textual data, basic preprocessing was performed: removing URLs, HTML tags (Richardson, 2007), special symbols, and accented characters. The texts were then converted to lowercase and a TF-IDF vectorizer (Salton and McGill, 1986) was applied. The experiment was conducted in two phases: in the first, the model was trained on the trial dataset and tested on the practice dataset; in the second, it was trained on the practice dataset and tested on the trial dataset. The same steps were followed in both phases. Predictions for the evaluation data provided by the organisers were produced by a model trained on both the practice and trial data.", |
| "cite_spans": [ |
| { |
| "start": 450, |
| "end": 474, |
| "text": "(Wang and Manning, 2012)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 572, |
| "end": 590, |
| "text": "(Richardson, 2007)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 716, |
| "end": 741, |
| "text": "(Salton and McGill, 1986)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "System Description", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The first phase of the experiment was done by splitting (Pedregosa et al., 2011) the trial data into 85% for training and 15% for validation. TF-IDF was experimented with different n-grams, and the range was fixed at (1-5). The minimum and maximum numbers of occurrences (min_df and max_df) of words to be included in the vocabulary were also varied. After a grid search, the maximum and minimum occurrences were fixed at 90% of sentences and 2 sentences respectively, which gave better scores for the metrics, as shown in Table 2. The same process was repeated for the practice data, which was then used to predict the trial data.", |
| "cite_spans": [ |
| { |
| "start": 56, |
| "end": 80, |
| "text": "(Pedregosa et al., 2011)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "System Description", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The practice and trial data were combined to predict the evaluation data whose scores are also given in table 2.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "System Description", |
| "sec_num": "3" |
| }, |
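| { |
| "text": "The TF-IDF and SVM pipeline described above can be sketched as follows (a minimal illustration assuming scikit-learn, the cited toolkit; variable names are hypothetical): vectorizer = TfidfVectorizer(lowercase=True, ngram_range=(1, 5), min_df=2, max_df=0.9); model = make_pipeline(vectorizer, LinearSVC()); model.fit(train_texts, train_labels); predictions = model.predict(eval_texts).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "System Description", |
| "sec_num": "3" |
| }, |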
| { |
| "text": "The BERT model (Devlin et al., 2018), which is based on transformers (Wolf et al., 2019), was used as the deep learning model for the classification task. We used the BERT-base uncased pretrained model and trained it further on the FinCausal data. As the BERT model does not require any preprocessing, none was applied. Initially, during the evaluation phase, the BERT model was applied directly to the practice and trial data without any fine-tuning, which gave a result lower than that of the SVM model, as shown in Table 3.", |
| "cite_spans": [ |
| { |
| "start": 23, |
| "end": 44, |
| "text": "(Devlin et al., 2018)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 138, |
| "end": 156, |
| "text": "(Wolf et al., 2019", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task", |
| "sec_num": null |
| }, |
| { |
| "text": "The results shown above were obtained before the evaluation deadline. Post-evaluation, the model was fine-tuned by changing the important parameters given below.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 Batch size: kept at 6 for both training and validation", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 Epochs: varied, with early stopping at the lowest validation loss", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 Neither attention nor segments were maintained", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 Learning rate: kept at 2e\u22125.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 The learning-rate parameter was tried with fit-one-cycle and auto-fit learning rates using learning-rate policies (Smith, 2017).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task", |
| "sec_num": null |
| }, |
| { |
| "text": "Trained using: Both Practice and Trial; Predicting: Evaluation Data; F1: 0.967770, Recall: 0.967100, Precision: 0.968712. Table 4: BERT model result post evaluation deadline", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 109, |
| "end": 116, |
| "text": "Table 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Task", |
| "sec_num": null |
| }, |
| { |
| "text": "The cyclic learning rate policy was used, as described in (Smith, 2017). This method was adopted for the evaluation data; it helped tune the learning rate and showed improved results over the other two models, as shown in Table 4.", |
| "cite_spans": [ |
| { |
| "start": 57, |
| "end": 70, |
| "text": "(Smith, 2017)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task", |
| "sec_num": null |
| }, |
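| { |
| "text": "The fine-tuning setup listed above can be illustrated as follows (a hedged sketch assuming the Hugging Face transformers API cited earlier; variable names are hypothetical): tokenizer = BertTokenizer.from_pretrained('bert-base-uncased'); model = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=2); the optimiser is set with a learning rate of 2e-5 under a cyclical schedule, batches of size 6 are used for both training and validation, and training stops early at the epoch with the lowest validation loss.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task", |
| "sec_num": null |
| }, |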
| { |
| "text": "Here we explain the results obtained in the two phases of the experiments and the results the models gave on the evaluation data. As shown in Tables 2 and 3, both models gave better results with modified parameters. These models were used to predict the evaluation data. The leaderboard after the evaluation deadline, along with our updated score, is given in Table 5. Our result was in 9th position, which was obtained from the linear SVM model, as mentioned earlier. On further exploring the BERT model, we obtained better results, which would place our proposed fine-tuned BERT model near 6th position on the final leaderboard.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results and Discussion", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Team / F1 / Recall / Precision: NITK NLP 1 / 0.967770 (6) / 0.967100 (6) / 0.968712 (6); NITK NLP / 0.943532 (9) / 0.948687 (9) / 0.943193 (9). Hence, the results show that the BERT model performed well when its hyperparameters were well tuned, as it is pretrained on a large corpus. After the evaluation, it was evident that the model could perform better, given both the small accuracy gap to first place and the improvement over the earlier BERT result.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Team", |
| "sec_num": null |
| }, |
| { |
| "text": "The FinCausal-2020 shared task mainly aimed at a model that can analyse a text and say whether it expresses a particular financial causality or not. The main challenge was the imbalanced dataset, from which we needed to develop a model that could produce accurate results. Based on the experiments conducted and the results observed, the fine-tuned BERT model performed well on the blind dataset using the known data, better than plain BERT and the linear SVM model. If we could come up with more balanced data through sampling methods, BERT would outperform most of the existing models.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "5" |
| }, |
| { |
| "text": "This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http: //creativecommons.org/licenses/by/4.0/.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", |
| "authors": [ |
| { |
| "first": "Jacob", |
| "middle": [], |
| "last": "Devlin", |
| "suffix": "" |
| }, |
| { |
| "first": "Ming-Wei", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenton", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Kristina", |
| "middle": [], |
| "last": "Toutanova", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1810.04805" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirec- tional transformers for language understanding. arXiv preprint arXiv:1810.04805.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "The Financial Document Causality Detection Shared Task (FinCausal 2020)", |
| "authors": [ |
| { |
| "first": "Dominique", |
| "middle": [], |
| "last": "Mariko", |
| "suffix": "" |
| }, |
| { |
| "first": "Hanna", |
| "middle": [], |
| "last": "Abi Akl", |
| "suffix": "" |
| }, |
| { |
| "first": "Estelle", |
| "middle": [], |
| "last": "Labidurie", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephane", |
| "middle": [], |
| "last": "Durfort", |
| "suffix": "" |
| }, |
| { |
| "first": "Hugues", |
| "middle": [], |
| "last": "De Mazancourt", |
| "suffix": "" |
| }, |
| { |
| "first": "Mahmoud", |
| "middle": [], |
| "last": "El-Haj", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "The 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation (FNP-FNS 2020)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dominique Mariko, Hanna Abi Akl, Estelle Labidurie, Stephane Durfort, Hugues de Mazancourt, and Mah- moud El-Haj. 2020. The Financial Document Causality Detection Shared Task (FinCausal 2020). In The 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation (FNP-FNS 2020, Barcelona, Spain.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Scikit-learn: Machine learning in python", |
| "authors": [ |
| { |
| "first": "Fabian", |
| "middle": [], |
| "last": "Pedregosa", |
| "suffix": "" |
| }, |
| { |
| "first": "Ga\u00ebl", |
| "middle": [], |
| "last": "Varoquaux", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexandre", |
| "middle": [], |
| "last": "Gramfort", |
| "suffix": "" |
| }, |
| { |
| "first": "Vincent", |
| "middle": [], |
| "last": "Michel", |
| "suffix": "" |
| }, |
| { |
| "first": "Bertrand", |
| "middle": [], |
| "last": "Thirion", |
| "suffix": "" |
| }, |
| { |
| "first": "Olivier", |
| "middle": [], |
| "last": "Grisel", |
| "suffix": "" |
| }, |
| { |
| "first": "Mathieu", |
| "middle": [], |
| "last": "Blondel", |
| "suffix": "" |
| }, |
| { |
| "first": "Peter", |
| "middle": [], |
| "last": "Prettenhofer", |
| "suffix": "" |
| }, |
| { |
| "first": "Ron", |
| "middle": [], |
| "last": "Weiss", |
| "suffix": "" |
| }, |
| { |
| "first": "Vincent", |
| "middle": [], |
| "last": "Dubourg", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Journal of machine learning research", |
| "volume": "12", |
| "issue": "", |
| "pages": "2825--2830", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fabian Pedregosa, Ga\u00ebl Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Math- ieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. 2011. Scikit-learn: Machine learning in python. Journal of machine learning research, 12(Oct):2825-2830.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Beautiful soup documentation", |
| "authors": [ |
| { |
| "first": "Leonard", |
| "middle": [], |
| "last": "Richardson", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Leonard Richardson. 2007. Beautiful soup documentation. April.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Introduction to modern information retrieval", |
| "authors": [ |
| { |
| "first": "Gerard", |
| "middle": [], |
| "last": "Salton", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Michael", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Mcgill", |
| "suffix": "" |
| } |
| ], |
| "year": 1986, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gerard Salton and Michael J McGill. 1986. Introduction to modern information retrieval.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Cyclical learning rates for training neural networks", |
| "authors": [ |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Leslie", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Smith", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "2017 IEEE Winter Conference on Applications of Computer Vision (WACV)", |
| "volume": "", |
| "issue": "", |
| "pages": "464--472", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Leslie N Smith. 2017. Cyclical learning rates for training neural networks. In 2017 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 464-472. IEEE.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Baselines and Bigrams: Simple, Good Sentiment and Topic Classification", |
| "authors": [ |
| { |
| "first": "Sida", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Christopher", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sida Wang and Christopher D Manning. 2012. Baselines and Bigrams: Simple, Good Sentiment and Topic Classification. Technical report.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Huggingface's transformers: State-of-the-art natural language processing", |
| "authors": [ |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Wolf", |
| "suffix": "" |
| }, |
| { |
| "first": "Lysandre", |
| "middle": [], |
| "last": "Debut", |
| "suffix": "" |
| }, |
| { |
| "first": "Victor", |
| "middle": [], |
| "last": "Sanh", |
| "suffix": "" |
| }, |
| { |
| "first": "Julien", |
| "middle": [], |
| "last": "Chaumond", |
| "suffix": "" |
| }, |
| { |
| "first": "Clement", |
| "middle": [], |
| "last": "Delangue", |
| "suffix": "" |
| }, |
| { |
| "first": "Anthony", |
| "middle": [], |
| "last": "Moi", |
| "suffix": "" |
| }, |
| { |
| "first": "Pierric", |
| "middle": [], |
| "last": "Cistac", |
| "suffix": "" |
| }, |
| { |
| "first": "Tim", |
| "middle": [], |
| "last": "Rault", |
| "suffix": "" |
| }, |
| { |
| "first": "R\u00e9mi", |
| "middle": [], |
| "last": "Louf", |
| "suffix": "" |
| }, |
| { |
| "first": "Morgan", |
| "middle": [], |
| "last": "Funtowicz", |
| "suffix": "" |
| }, |
| { |
| "first": "Joe", |
| "middle": [], |
| "last": "Davison", |
| "suffix": "" |
| }, |
| { |
| "first": "Sam", |
| "middle": [], |
| "last": "Shleifer", |
| "suffix": "" |
| }, |
| { |
| "first": "Clara", |
| "middle": [], |
| "last": "Patrick Von Platen", |
| "suffix": "" |
| }, |
| { |
| "first": "Yacine", |
| "middle": [], |
| "last": "Ma", |
| "suffix": "" |
| }, |
| { |
| "first": "Julien", |
| "middle": [], |
| "last": "Jernite", |
| "suffix": "" |
| }, |
| { |
| "first": "Canwen", |
| "middle": [], |
| "last": "Plu", |
| "suffix": "" |
| }, |
| { |
| "first": "Teven", |
| "middle": [ |
| "Le" |
| ], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "Sylvain", |
| "middle": [], |
| "last": "Scao", |
| "suffix": "" |
| }, |
| { |
| "first": "Mariama", |
| "middle": [], |
| "last": "Gugger", |
| "suffix": "" |
| }, |
| { |
| "first": "Quentin", |
| "middle": [], |
| "last": "Drame", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexander", |
| "middle": [ |
| "M" |
| ], |
| "last": "Lhoest", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Rush", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cis- tac, Tim Rault, R\u00e9mi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2019. Huggingface's transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "TABREF1": { |
| "content": "<table/>", |
| "num": null, |
| "text": "", |
| "type_str": "table", |
| "html": null |
| }, |
| "TABREF3": { |
| "content": "<table/>", |
| "num": null, |
| "text": "", |
| "type_str": "table", |
| "html": null |
| }, |
| "TABREF4": { |
| "content": "<table/>", |
| "num": null, |
| "text": "Leaderboard Positions", |
| "type_str": "table", |
| "html": null |
| } |
| } |
| } |
| } |