| { |
| "paper_id": "2021", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T13:14:12.862487Z" |
| }, |
| "title": "IBM MNLP IE at CASE 2021 Task 2: NLI Reranking for Zero-Shot Text Classification", |
| "authors": [ |
| { |
| "first": "Ken", |
| "middle": [], |
| "last": "Barker", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "IBM Research AI Yorktown Heights", |
| "location": { |
| "postCode": "10598", |
| "region": "NY" |
| } |
| }, |
| "email": "kjbarker@us.ibm.com" |
| }, |
| { |
| "first": "Parul", |
| "middle": [], |
| "last": "Awasthy", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "IBM Research AI Yorktown Heights", |
| "location": { |
| "postCode": "10598", |
| "region": "NY" |
| } |
| }, |
| "email": "awasthyp@us.ibm.com" |
| }, |
| { |
| "first": "Jian", |
| "middle": [], |
| "last": "Ni", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "IBM Research AI Yorktown Heights", |
| "location": { |
| "postCode": "10598", |
| "region": "NY" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Radu", |
| "middle": [], |
| "last": "Florian", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "IBM Research AI Yorktown Heights", |
| "location": { |
| "postCode": "10598", |
| "region": "NY" |
| } |
| }, |
| "email": "raduf@us.ibm.com" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Supervised models can achieve very high accuracy for fine-grained text classification. In practice, however, training data may be abundant for some types but scarce or even nonexistent for others. We propose a hybrid architecture that uses as much labeled data as available for fine-tuning classification models, while also allowing for types with little (fewshot) or no (zero-shot) labeled data. In particular, we pair a supervised text classification model with a Natural Language Inference (NLI) reranking model. The NLI reranker uses a textual representation of target types that allows it to score the strength with which a type is implied by a text, without requiring training data for the types. Experiments show that the NLI model is very sensitive to the choice of textual representation, but can be effective for classifying unseen types. It can also improve classification accuracy for the known types of an already highly accurate supervised model. 1", |
| "pdf_parse": { |
| "paper_id": "2021", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Supervised models can achieve very high accuracy for fine-grained text classification. In practice, however, training data may be abundant for some types but scarce or even nonexistent for others. We propose a hybrid architecture that uses as much labeled data as available for fine-tuning classification models, while also allowing for types with little (fewshot) or no (zero-shot) labeled data. In particular, we pair a supervised text classification model with a Natural Language Inference (NLI) reranking model. The NLI reranker uses a textual representation of target types that allows it to score the strength with which a type is implied by a text, without requiring training data for the types. Experiments show that the NLI model is very sensitive to the choice of textual representation, but can be effective for classifying unseen types. It can also improve classification accuracy for the known types of an already highly accurate supervised model. 1", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "1 Task 2: Fine-grained Classification of Socio-political Events", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Fine-grained text classification assigns a type label to a text passage from an extended set of specific types. The types are often domain-specific and more focused than generic, coarse-grained topics.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Creating an exhaustive list of such types for a domain or task a priori is challenging. Fine-grained type systems for a particular domain and task often evolve, with new, previously unseen types emerging. So some types may have many labeled examples available, some types may have few or none.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "In such a scenario, a flexible text classifier should be able to use training data when available, but also employ few-shot and zero-shot techniques when training data is limited or absent.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "The Fine-grained Classification of Sociopolitical Events task (Haneczok et al., 2021) at the CASE 2021 workshop (H\u00fcrriyetoglu et al., 2021) simulates the scenario of text classification with an evolving, fine-grained type system. There are 25 core, fine-grained event types capturing political violence, demonstrations, and other politically important events. The 25 types are from the ACLED (Armed Conflict Location & Event Data Project) event taxonomy (Raleigh et al., 2010) . Copious training data exists for these types. Subtask 1 evaluates classification to these \"seen\" or \"known\" types only. Beyond the core types, subtask 2 identifies three new \"unseen\" types with no training data. Definitions of these three types were provided to task participants early, allowing exploration of zeroshot techniques on the types, but also few-shot techniques since the development window would allow enough time to annotate a handful of examples. Finally, subtask 3 introduces two additional unseen types revealed only at evaluation time. Subtask 3 evaluates true zero-shot techniques. Table 1 lists the type labels and names for the three subtasks.", |
| "cite_spans": [ |
| { |
| "start": 62, |
| "end": 85, |
| "text": "(Haneczok et al., 2021)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 112, |
| "end": 139, |
| "text": "(H\u00fcrriyetoglu et al., 2021)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 454, |
| "end": 476, |
| "text": "(Raleigh et al., 2010)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 1080, |
| "end": 1087, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "For the task evaluation, organizers shared an evaluation set of 1,019 short texts (mostly one or two sentences each) with index numbers. 829 of the texts had gold labels from the 25 ACLED event types; 118 had gold labels from the three subtask2 types; 72 had gold labels from the two unseen sub-task3 types. These numbers were not known at submission time. Submissions consisted of pairs of an index number with a single event type prediction for the text associated with the index. For scoring, the organizers removed entries whose gold event type was not among those being tested for the given subtask, and computed micro, macro, and weighted Precision, Recall, and F1-score. Weighted scores are the average of the per-type scores (like macro averages), but with the type scores weighted by the number of gold instances for the type. Table 1 : Type labels (some truncated) and names for the 25 known types of subtask 1, the three unseen types of subtask 2, and the two unseen types of subtask 3", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 836, |
| "end": 843, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Our approach to fine-grained text classification mirrors the evolving type system scenario: a hybrid system that is fine-tuned with labeled data when available, but one that can also classify text with types having little or no labeled data. Our approach is first to apply a supervised text classification model to produce a ranked list of predicted, known types. The highest scoring types from the classification model are combined with any unseen types and passed to a Natural Language Inference (NLI) reranking model. The NLI reranker rescores the types on the extent to which they are implied by the input text.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We experimented with many different combinations of supervised, few-shot, and zero-shot techniques and submitted multiple such combinations for each of the Case Task 2 subtasks. Despite their differences, all submissions are built on the same cascaded architecture of a supervised neural classification model followed by a neural NLI-based reranking model. The submissions differ on the exact combination of classification model and reranking model.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "System Architecture", |
| "sec_num": "2" |
| }, |
| { |
| "text": "For each sentence, the classification model produces a ranked list of predicted types with scores, but only for the types the model was trained on. If the score of the top-ranked predicted type is below threshold, or if the top-ranked predicted type is OTHER, the top N predicted types P N are passed to the reranking model along with all unseen types U . The reranker independently scores each of the types in P N \u222a U . The highest scoring type is the final prediction for the sentence.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "System Architecture", |
| "sec_num": "2" |
| }, |
| { |
| "text": "For each of the known and unseen types P N \u222a U to be submitted to the reranker, we generate a textual representation, based only on the definition of the type (not labeled examples). See section 4.2.1 for details on how we generate textual representations. The NLI reranking model scores each of these representations on the extent to which they are implied by the input text. Figure 1 illustrates the general architecture.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 377, |
| "end": 385, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "System Architecture", |
| "sec_num": "2" |
| }, |
| { |
| "text": "For the text classifier we used a standard transformer-based sequence classification model (Vaswani et al., 2017 ) with a pre-trained language model. Based on previous experience with such text classification systems, we chose RoBERTa (Liu et al., 2019) . Specifically, we started with the roberta-large model from Hugging Face (Wolf et al., 2020) .", |
| "cite_spans": [ |
| { |
| "start": 91, |
| "end": 112, |
| "text": "(Vaswani et al., 2017", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 235, |
| "end": 253, |
| "text": "(Liu et al., 2019)", |
| "ref_id": null |
| }, |
| { |
| "start": 328, |
| "end": 347, |
| "text": "(Wolf et al., 2020)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Supervised Sequence Classification for Seen Types", |
| "sec_num": "3" |
| }, |
| { |
| "text": "For the 25 known ACLED event types, we used the ACLED-C-III dataset derived from the ACLED source data by (Piskorski et al., 2020 models (including roberta-FT28, see section 3.2), we used the 10% partition of the fold 1 training data.", |
| "cite_spans": [ |
| { |
| "start": 106, |
| "end": 129, |
| "text": "(Piskorski et al., 2020", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "In addition to the base classifier, we experimented with few-shot supervision for classifying the three unseen event types of subtask 2. We manually created a small training corpus of short texts (one or two sentences each) from Wikipedia and Wikinews. We found relevant documents by browsing the Wikipedia/Wikinews Categories hierarchies (for example, https://en.wikipedia. org/wiki/Category:Natural_disasters and https://en.wikinews.org/wiki/Category: Crime_and_law). Within relevant documents, we chose short text passages that described a single event of the target type. These passages were often simply the first one or two sentences in the document. An example appears in Figure 2 . We originally collected 142 texts total: 56 texts for MAN MADE DISASTER; 55 texts for NATU-RAL DISASTER; 31 texts for ORG CRIME. For tuning experiments we created an additional test corpus of 20 short texts each for the three types. For the classification model used for final submissions on the challenge evaluation data, we combined both small corpora (along with an additional 20 texts for ORG CRIME) for training on a total of 222 texts balanced among the three types.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 679, |
| "end": 687, |
| "text": "Figure 2", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "We fine-tuned roberta-large for sequence classification (with a linear classification layer) on the data described in section 3.1 to produce two Search and rescue workers in Arkansas continue to search the Little Missouri and Caddo Rivers for survivors of Friday's flash flood. At least nineteen people were killed when the flood swept through the Albert Pike Recreation Area campground in the Ouachita National Forest in the southwestern portion of the state. classification models:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "\u2022 roberta-FT25: fine-tuned on the 471,152 instances of ACLED training data for the 25 base ACLED event types", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "\u2022 roberta-FT28: fine-tuned on the 10% partition of the ACLED training data (47,115 instances) plus additional instances for the three unseen types of subtask 2 (142 instances during development, 222 for the final models)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Given the disparity in the number of training instances, we consider roberta-FT28 to be supervised for the 25 types and few-shot supervised for the three unseen types.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Clock time for training roberta-FT25 was 30 hours on four v100 GPUs. The time to train roberta-FT28 was 4.6 hours on four v100 GPUs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Experiments in (Piskorski et al., 2020) Table 2 : Comparison of micro F 1 score, macro F 1 score, and weighted F 1 score on test fold 1 of the ACLED-C-III dataset. Piskorkski refers to the bert-base model fine-tuned by Piskorski et al. (2020) , which was the top performer on test fold 1 in their experiments. Test scores for the Piskorski model trained on 10% of the training data are estimated from the graphs in Piskorski et al. (2020) . metrics reported on test fold 1. So those results can be compared directly with our models. Performance of our model is consistent with their results fine-tuning bert-base (see Table 2 ).", |
| "cite_spans": [ |
| { |
| "start": 15, |
| "end": 39, |
| "text": "(Piskorski et al., 2020)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 219, |
| "end": 242, |
| "text": "Piskorski et al. (2020)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 415, |
| "end": 438, |
| "text": "Piskorski et al. (2020)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 40, |
| "end": 47, |
| "text": "Table 2", |
| "ref_id": null |
| }, |
| { |
| "start": 618, |
| "end": 625, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Models", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Pretrained language models (PLMs) have proven to be very powerful for downstream NLP tasks when they are fine-tuned on data specific to the task. Recently, the research community has begun to observe that PLMs fine-tuned on large amounts of data for complex end-to-end tasks can often be leveraged for new tasks without further fine-tuning. Fine-tuning PLMs on complex tasks such as Question Answering (QA) and Natural Language Inference (NLI) infuses models with high-level knowledge useful for other tasks. By choosing an appropriate task representation, QA models and NLI models can be used as \"pre-tuned\" models for fewshot (Schick and Sch\u00fctze, 2021) or even zero-shot (Yin et al., 2019) ", |
| "cite_spans": [ |
| { |
| "start": 628, |
| "end": 654, |
| "text": "(Schick and Sch\u00fctze, 2021)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 673, |
| "end": 691, |
| "text": "(Yin et al., 2019)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "NLI for Unseen Types", |
| "sec_num": "4" |
| }, |
| { |
| "text": "text classification.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "NLI for Unseen Types", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Typically, an NLI model takes two texts (sen-tence1 and sentence2) and predicts whether sen-tence1 implies sentence2, with a given confidence score. To re-purpose an NLI model for zero-shot text classification, sentence1 is the text to be classified and sentence2 is some textual representation of a type. The classification score for each type is the NLI score, which represents the extent to which the textual representation of the type is implied by sentence1. Determining implication is not just based on surface lexical overlap between the sentences. In training, the models learn encodings for both sentences, supervised by a large corpus of hand-labeled textual entailment pairs (such as the 433k sentence pairs in the multi-genre RepEval corpus (Williams et al., 2018) ).", |
| "cite_spans": [ |
| { |
| "start": 753, |
| "end": 776, |
| "text": "(Williams et al., 2018)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "NLI for Unseen Types", |
| "sec_num": "4" |
| }, |
| { |
| "text": "For the current work, we explored using large, pre-tuned NLI models for few-shot and zero-shot classification. We experimented with NLI extensions to both BART (Lewis et al., 2020) and RoBERTa (Liu et al., 2019) Language Models. For both of these, large models fine-tuned for the NLI task are available from Hugging Face: bart-large-mnli and roberta-large-mnli.", |
| "cite_spans": [ |
| { |
| "start": 160, |
| "end": 180, |
| "text": "(Lewis et al., 2020)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 193, |
| "end": 211, |
| "text": "(Liu et al., 2019)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "NLI for Unseen Types", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Our experiments tended to favor RoBERTa (see section 4.3.3).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "NLI for Unseen Types", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We also experimented with further fine-tuning the NLI models for the 25 subtask 1 types and the three subtask 2 types (sections 4.3.1 and 4.3.2).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "NLI for Unseen Types", |
| "sec_num": "4" |
| }, |
| { |
| "text": "A crucial design choice when using NLI for zeroshot text classification is the choice of representation of the types. We experimented with full English descriptions, keywords, and type names. Examples of each representation for a sample sentence appear in Table 3 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 256, |
| "end": 263, |
| "text": "Table 3", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Type Representations", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "The full English descriptions are the type definitions taken verbatim from source documentation. For the original 25 types, these were taken from ACLED directly. for the five unseen types, definitions were provided by the organizers. The type names were taken from the same source documentation. The keywords were extracted manually from the definitions, with no editing other than deleting text from the definition. The disjoint keywords are exactly the same as the keywords, the difference being how they are submitted to the NLI model. Table 4 shows how critical the choice of type representation is. The superiority of the name representation over definition and keywords was initially surprising, since it contains so much less information. We hypothesized that having multiple distinct terms covering more of the breadth of a type could help the NLI model, but that specific terms irrelevant to a given text were more harmful in confusing the model than the underspecificity of a single generic phrase. For example, the presence of terms such as \"pandemic\", \"volcano\", and \"wildfire\" would be distracting to a model when trying to determine whether a sentence about avalanches implies a natural disaster. To test the hypothesis, we considered a fourth type representation: disjunctive keywords. Rather than a single type representation in which all keywords appear together, with disjunctive keywords, each keyword for a type is considered an independent representation of the sentence1", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 539, |
| "end": 546, |
| "text": "Table 4", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Type Representations", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "A hacker group called DarkSide is behind the cyberattack on Colonial Pipeline that shut down a major oil pipeline over the weekend.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Type Representations", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Organized crime: This event type covers incidents related to activities of criminal groups, excluding conflict between such groups: smuggling, human trafficking, counterfeit products, property crime, cyber crime, assassination (for criminal purposes), corruption, etc. (list is non-exhaustive). name organized crime keywords organized crime, smuggling, human trafficking, counterfeit products, property crime, cyber crime, assassination, corruption disj-kw organized crime | smuggling | human trafficking | counterfeit products | property crime | cyber crime | assassination | corruption type. The extent to which a text implies a type t is the maximum score produced by the NLI model for any of t's keywords kw(t) i .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Type Rep'n sentence2 definition", |
| "sec_num": null |
| }, |
| { |
| "text": "The original 25 ACLED event types include the type OTHER, which indicates a closed-world assumption: any event that is not one of the 24 main event types is of type OTHER. In the evolving type system scenario, the closed-world assumption does not hold. Texts labeled OTHER in the classifier training data may describe events of the unseen types. For this reason, we trained our classifiers (section 3) on all known types (including OTHER), but remove OTHER from the top N types submitted to the reranker. We also exclude OTHER instances from any training data used to fine-tune the reranking models.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "To compare zero-shot reranking and reranking fine-tuned to known types, we prepared two new datasets.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "For the 24 ACLED event types (ignoring OTHER), we created a training dataset derived from a subset of 10% of Piskorki's fold 1 train-ing set (section 3.1). This initial data gave 46,886 positive instances. We added two negative examples for each positive example giving 140,658 total instances. The process for generating both positive and negative examples is described below in section 4.2.1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "To fine tune a few-shot supervised NLI model for the three unseen types of subtask 2, we created a dataset derived from the small dataset described in section 3.1. The 222 instances from that dataset provided the positive examples, to which we again added two negatives each, giving 666 total instances.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "The labeled data for classification (section 3.1) consists of sentences with gold event type labels (REM EXPLOS, ORG CRIME, etc.). We need to adapt the data in two ways to use it for fine-tuning NLI models.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Labeling NLI Data", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "First, the labels must be replaced by a textual representation of the class, which will be used as sentence2 by the NLI model. We chose a disjunctive keyword representation as described in section 4.1. To create positive examples, we paired each sentence with each of the disjunctive keywords of the gold label. We then used the untuned roberta-large-mnli model to score each of these pairs to determine which of the disjunctive keywords was most strongly implied by the sentence. This gave us the best single keyword to use as sentence 2 for each positive training sentence.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Labeling NLI Data", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "Second, fine-tuning the model requires negative examples. For each positive example, we created two negative examples by replacing the gold type positive s1 A hacker group called DarkSide is behind the cyberattack on Colonial Pipeline that shut down a major oil pipeline over the weekend. s2 cyber crime random negative s1 A hacker group called DarkSide is behind the cyberattack on Colonial Pipeline that shut down a major oil pipeline over the weekend. s2 forced disappearance imposter negative s1 A hacker group called DarkSide is behind the cyberattack on Colonial Pipeline that shut down a major oil pipeline over the weekend. s2 attack Table 5 : Example positive and negative instances generated from a sentence whose gold event type label is ORG CRIME. Sentence 2 (s2) for the positive instance is the keyword from the gold event type keywords scored highest by roberta-large-mnli. Sentence 2 for the imposter instance is the highest-scoring keyword from the non-gold event types.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 642, |
| "end": 649, |
| "text": "Table 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Labeling NLI Data", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "representation with the type representation of a different event type: one random and one imposter. The random negative example is the original sentence paired with a random keyword selected from all of the keywords of types other than the gold type. The imposter is the keyword most likely to confuse the model. We paired each sentence with each of the disjunctive keywords for all of the types other than the gold type. We used the top scoring pair (the most strongly implied incorrect type representation) as the imposter example. Table 5 shows positive and negative training instances created for an example sentence.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 534, |
| "end": 541, |
| "text": "Table 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Labeling NLI Data", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "Using an NLI model as described in section 4 is needed for unseen types, since the classification model cannot predict types it was not trained on. But NLI reranking might also improve predictions on the known types. To explore this possibility, we fine-tuned the bart-large-mnli model for the 24 non-OTHER ACLED event types using the 140,658 NLI training pairs from the dataset described in section 4.2. We then ran our finetuned RoBERTa classifier (roberta-FT25) on the 117,788 instances of the ACLED development set and collected its ranked list of predictions for all of the instances where its top-scoring prediction was wrong (6,002 instances, accuracy = 0.0000).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Classification + Reranking", |
| "sec_num": "4.3.1" |
| }, |
| { |
| "text": "For the 6,002 instances, the average classifier score for the top (wrong) prediction was 0.9644. The average position of the correct type within the list ranked by classifier score was 3.1. When the classifier predicted the correct type, the average classifier top score was 0.9960. When the classifier score was above 0.99, its accuracy was 0.9562. Below that threshold, accuracy was 0.4851.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Classification + Reranking", |
| "sec_num": "4.3.1" |
| }, |
| { |
| "text": "To explore the benefit of reranking even for known types, we passed the top N classifier predictions for the 6,002 incorrectly classified instances to our fine-tuned BART NLI model for reranking. We first set N to the position of the correct type for each sentence, guaranteeing that the correct type was in the list to be reranked (we refer to this as setting N to \"gold\"). The fine-tuned NLI model predicted the correct type for 2,271 instances (37.8% of the incorrectly classified instances). Setting N to 5 (meaning for some sentences the correct type would not be among the top N), the fine-tuned NLI model predicted 2,027 instances correctly (33.7%). See Table 6 . Surprisingly, using bart-large-mnli without fine tuning performed even better on the incorrectly classified instances when the correct type is always in the top N, recovering 2,600 of the correct types (43.3%). When N was 5, however, the untuned model did not perform as well as the tuned model (29.6% recovered). As with other experiments, the untuned roberta-large-mnli model performs slightly better than the BART model.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 661, |
| "end": 668, |
| "text": "Table 6", |
| "ref_id": "TABREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Classification + Reranking", |
| "sec_num": "4.3.1" |
| }, |
| { |
| "text": "Based on this experiment, it makes sense to pair a fine-tuned classifier with an NLI reranker (finetuned or not) even for known types. In practice, we invoke the reranker with the top 5 predictions from the classifier when the classifier's top prediction score is less than 0.99.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Classification + Reranking", |
| "sec_num": "4.3.1" |
| }, |
| { |
| "text": "We also compared a fine-tuned NLI model to an untuned model on unseen types. To simu- late the unseen types scenario, we randomly selected three of the original ACLED types and removed their instances (6,080) from the NLI fine-tuning dataset. We then fine-tuned the bart-large-mnli model on the instances for the remaining 21 types. Table 7 shows that finetuning an NLI model significantly improves accuracy for types it was tuned on over an untuned NLI model. Fine-tuning an NLI model on known types and then applying to unseen types performs catastrophically worse than even the untuned model. The fine-tuned model is clearly overfitting the 21 types in a way that significantly degrades performance on unseen types. We tried reducing the size of the fine-tuning set and balancing it to 1,000 instances per type to avoid overfitting. The resulting model performed worse by 5-6 micro-averaged accuracy points in all experiments. Based on this experiment, we conclude that using a fine-tuned NLI model improves reranking for the types on which it was fine-tuned, but for unseen types, it is preferable to use an NLI reranking model not fine-tuned on other types.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 333, |
| "end": 340, |
| "text": "Table 7", |
| "ref_id": "TABREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Zero-Shot vs. Fine-Tuned Reranking on Unseen Types", |
| "sec_num": "4.3.2" |
| }, |
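The held-out-types setup described above amounts to a simple split. The sketch below assumes a hypothetical `dataset` of (sentence, type) pairs; the ACLED-specific details are omitted.

```python
import random

# Sketch of the held-out-types experiment described above: all instances of
# n randomly chosen types are removed from the NLI fine-tuning data and kept
# aside as an unseen-types evaluation set. `dataset` is a hypothetical list
# of (sentence, type) pairs.

def split_held_out(dataset, n_held_out=3, seed=0):
    """Partition `dataset` into a fine-tuning set (seen types) and a
    zero-shot evaluation set (held-out types)."""
    types = sorted({t for _, t in dataset})
    held_out = set(random.Random(seed).sample(types, n_held_out))
    fine_tune = [(s, t) for s, t in dataset if t not in held_out]
    unseen = [(s, t) for s, t in dataset if t in held_out]
    return fine_tune, unseen
```

Fine-tuning on `fine_tune` and evaluating on `unseen` reproduces the seen/unseen contrast reported in Table 7.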
| { |
| "text": "We conducted two additional experiments to compare bart-large-mnli and roberta-large-mnli (no fine-tuning),", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "BART vs. RoBERTa", |
| "sec_num": "4.3.3" |
| }, |
| { |
| "text": "457-all 3-heldout", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "BART vs. RoBERTa", |
| "sec_num": "4.3.3" |
| }, |
| { |
| "text": "bart-large-mnli 0.2998 0.9104 roberta-large-mnli 0.3129 0.9533 Table 8: Accuracy of BART vs. RoBERTa NLI models with no fine-tuning on two ACLED datasets: 457 random instances covering all event types (457-all) and 6,080 instances covering three event types (3-heldout).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 63, |
| "end": 70, |
| "text": "Table 8", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "BART vs. RoBERTa", |
| "sec_num": "4.3.3" |
| }, |
| { |
| "text": "using keywords as the textual representation of the types. The first dataset was 457 examples from the ACLED test set, converted to the format required by the NLI models (section 4.2.1). The 457 examples cover all 24 non-OTHER ACLED types, making this a particularly challenging task for reranking. The second dataset was the 6,080 instances covering the three held-out types described in the previous section (4.3.2). Both experiments show a preference for RoBERTa. They also show that the models are much better at distinguishing among smaller numbers of types, which supports using NLI models as rerankers on the top N classifier predictions (for small N) rather than as classifiers over the full inventory of types.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "BART vs. RoBERTa", |
| "sec_num": "4.3.3" |
| }, |
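The keyword-based scoring used in these reranking experiments can be sketched as follows. With the disjunctive-keyword ("disj-kw") representation of Table 4, each pipe-delimited phrase is given separately to the NLI model as the hypothesis, and the type keeps its best phrase score. Here `entails(premise, hypothesis) -> float` is a hypothetical wrapper around an MNLI model's entailment probability.

```python
# Sketch of NLI-based reranking with a disjunctive-keyword type
# representation: each pipe-delimited phrase is scored separately as the
# NLI hypothesis, and the type keeps its best phrase score. `entails` is
# a hypothetical wrapper returning an MNLI entailment probability.

def type_score(sentence, type_rep, entails):
    """Strength with which `sentence` implies the type described by `type_rep`."""
    phrases = [p.strip() for p in type_rep.split("|")]
    return max(entails(sentence, p) for p in phrases)

def rerank(sentence, candidates, type_reps, entails):
    """Order candidate types by entailment strength, best first."""
    return sorted(candidates,
                  key=lambda t: type_score(sentence, type_reps[t], entails),
                  reverse=True)
```

Passing only the classifier's top N candidates to `rerank` keeps the NLI model in the small-candidate-set regime where it performs best.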
| { |
| "text": "Based on the lessons learned from our development experiments, we ultimately fine-tuned two NLI models on the data described in section 4.2:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "\u2022 roberta-mnli-FT24: roberta-large-mnli fine-tuned on the 140,658-instance training set for the base ACLED event types", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "\u2022 roberta-mnli-FT27: roberta-large-mnli fine-tuned on the 140,658 ACLED instances plus 666 instances for the three unseen types of subtask 2", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "Using the roberta-mnli-FT24 fine-tuned reranker in a system configuration makes that system supervised on the 24 ACLED types of subtask 1. Using roberta-mnli-FT27 makes a system supervised on the ACLED types and few-shot supervised on the three unseen types of subtask 2.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "Official task results appear in Table 10. The table shows how little difference there was between the top-scoring team submissions. Full results and analysis are presented in Haneczok et al. (2021). Table 9: System configurations for each of the eleven submissions. The combination of classification model and reranking model determines whether classification and reranking are supervised (sup), few-shot (few), or zero-shot (zero) for each category of event types (the 25 seen types, the 3 unseen types of subtask 2, or the 2 unseen types of subtask 3).", |
| "cite_spans": [ |
| { |
| "start": 176, |
| "end": 198, |
| "text": "Haneczok et al. (2021)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 32, |
| "end": 40, |
| "text": "Table 10", |
| "ref_id": null |
| }, |
| { |
| "start": 201, |
| "end": 208, |
| "text": "Table 9", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Challenge Submissions and Results", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Subtask 1: Best Our Score 0.839, Best Other Score 0.832, Rank 1. Subtask 2: 0.797 (0.785), 0.782, Rank 1. Subtask 3: 0.756 (0.746), 0.771, Rank 2. Table 10: Official challenge results (weighted F1 scores). The top score for each subtask is boldface. For subtasks 2 and 3, our highest-scoring submission was few-shot fine-tuned for the subtask 2 event types and zero-shot for the subtask 3 event types; the score for our best true zero-shot submission appears in parentheses. Best Other Score is the highest-scoring submission from another team.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 91, |
| "end": 99, |
| "text": "Table 10", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Challenge Submissions and Results", |
| "sec_num": "5" |
| }, |
| { |
| "text": "We now turn to a more detailed discussion of our submissions and their scores. Combining classifiers and rerankers, we arrived at eleven system configurations for submission to the Challenge evaluation. Table 9 lists these configurations, grouped by Challenge subtask. The table specifies which classification model and which reranking model were used for each submission, as well as whether the configuration was supervised, few-shot, or zero-shot for each of the subsets of event types.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 211, |
| "end": 218, |
| "text": "Table 9", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Challenge Submissions and Results", |
| "sec_num": "5" |
| }, |
| { |
| "text": "In every case, the classification model for the known 25 ACLED event types was supervised. For the first three submissions of subtask 2, classification of the three unseen types was few-shot, trained on our small corpus (see Section 3.1). For the fourth subtask 2 submission, we used the output of submission 1.3 as the \"classifier\". Since submission 1.3 was tuned on data for the 25 known types only, submission 2.4 is zero-shot for the three unseen types. No training data was used for the two new unseen types of subtask 3, so those types were introduced only during the reranking stage (no classification) and were always zero-shot.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Challenge Submissions and Results", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Reranking of the 25 original types was supervised when using the RoBERTa NLI model fine-tuned on the ACLED data (rob-mnli-FT24 in the table). When using roberta-large-mnli off the shelf, reranking of the 25 types was zero-shot. For the 3 unseen types of subtask 2 (and subtask 3), only submission 2.3 reranked using the NLI model fine-tuned on the small amount of data for these types, and is considered few-shot; otherwise, all reranking of the 3 unseen types of subtask 2 and the 2 unseen types of subtask 3 was zero-shot. Table 11 shows results on the official Task 2 evaluation. For subtask 1, fine-tuning the reranker (submission 1.3) did better than using an untuned reranker (1.2), the same result that we saw when passing the top 5 classification predictions to the reranker in the development experiments.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 502, |
| "end": 510, |
| "text": "Table 11", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Challenge Submissions and Results", |
| "sec_num": "5" |
| }, |
| { |
| "text": "The best-performing configuration for subtask 2 overall was supervised classification for all 28 types with untuned (zero-shot) reranking. In particular, the zero-shot reranking (submission 2.2) outperformed the reranker tuned on the three unseen types, which runs counter to what we saw in the development experiments. The configuration that was most successful on the original 25 types was the one whose classifier was the 1.3 submission (which had a fine-tuned reranker). Isolating Table 11: Detailed scores for the eleven submissions. The \"Sup\" column denotes whether the submission overall should be considered supervised, few-shot, or zero-shot. \u00b5F is the micro-averaged F1 score; macF is the macro-averaged F1 score; wF is the weighted-average F1 score (see section 1). The highest scores for a given subtask and type subset are shown in boldface.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 484, |
| "end": 492, |
| "text": "Table 11", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Observations and Discussion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "the scores on the three unseen types versus the 25 known types shows strong performance in the few-shot case, but significantly weaker performance in the zero-shot case (2.4). For subtask 3, submissions 3.1 and 3.3 produced identical predictions (not just scores), as did submissions 3.2 and 3.4. The configurations themselves are not equivalent: the input to the 3.3 and 3.4 rerankers had already been reranked by the 1.3 and 2.2 rerankers. Interestingly, the configurations built only on zero-shot rerankers performed best, again suggesting that NLI models can be used without fine-tuning to rerank classification for both known and unseen types. Performance on the two unseen types of subtask 3 (zero-shot) is significantly weaker than in the zero-shot scenario of subtask 2 (2.4). It is possible that the two new types are inherently more difficult to recognize, but we suspect that tweaks to the textual representations for these two types would improve performance. Given the extreme differences that different representations produce (section 4.1), we expect that more carefully chosen representations would help.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Observations and Discussion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "The CASE 2021 Task 2 challenge accurately simulates a realistic, fine-grained text classification scenario in which many types in the type inventory have abundant labeled data, some types are recently added and may have a small amount of labeled data, and some types are completely new and have no labeled data. Within these constraints, we proposed a hybrid system that combines supervised classification with NLI-based reranking and that can be used in supervised, few-shot, and zero-shot settings. Our results show strong performance on known types, with weaker results on unseen types. Nevertheless, the experiments for this challenge have produced some interesting conclusions. First, we confirm that NLI models are useful for zero-shot text classification, but only when distinguishing among a small number of target types. Second, even in a fully supervised scenario, where ample training data can produce classification models with extremely high accuracy, untuned NLI-based reranking can improve classification performance on known types. Third, the choice of textual representation used to transform a classification problem into one amenable to an untuned NLI model greatly affects performance. In future work we hope to explore more rigorously what makes a good representation for NLI-based zero-shot text classification, and how to generate such representations more automatically.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Distribution Statement \"A\" (Approved for Public Release, Distribution Unlimited)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "This research was developed with funding from the Defense Advanced Research Projects Agency (DARPA) under Contract No. FA8750-19-C-0206. The views, opinions and/or findings expressed are those of the author and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments and Disclaimer", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Fine-grained event classification in news-like text snippets -shared task 2, CASE 2021", |
| "authors": [ |
| { |
| "first": "Jacek", |
| "middle": [], |
| "last": "Haneczok", |
| "suffix": "" |
| }, |
| { |
| "first": "Guillaume", |
| "middle": [], |
| "last": "Jacquet", |
| "suffix": "" |
| }, |
| { |
| "first": "Jakub", |
| "middle": [], |
| "last": "Piskorski", |
| "suffix": "" |
| }, |
| { |
| "first": "Nicolas", |
| "middle": [], |
| "last": "Stefanovitch", |
| "suffix": "" |
| } |
| ], |
| "year": 2021, |
| "venue": "Proceedings of the Workshop on Challenges and Applications of Automated Extraction of Socio-political Events from Text (CASE 2021), online. Association for Computational Linguistics (ACL)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jacek Haneczok, Guillaume Jacquet, Jakub Piskorski, and Nicolas Stefanovitch. 2021. Fine-grained event classification in news-like text snippets -shared task 2, CASE 2021. In Proceedings of the Workshop on Challenges and Applications of Automated Ex- traction of Socio-political Events from Text (CASE 2021), online. Association for Computational Lin- guistics (ACL).", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Challenges and applications of automated extraction of socio-political events from text (CASE 2021): Workshop and shared task report", |
| "authors": [ |
| { |
| "first": "Ali", |
| "middle": [], |
| "last": "H\u00fcrriyetoglu", |
| "suffix": "" |
| }, |
| { |
| "first": "Hristo", |
| "middle": [], |
| "last": "Tanev", |
| "suffix": "" |
| }, |
| { |
| "first": "Vanni", |
| "middle": [], |
| "last": "Zavarella", |
| "suffix": "" |
| }, |
| { |
| "first": "Jakub", |
| "middle": [], |
| "last": "Piskorski", |
| "suffix": "" |
| }, |
| { |
| "first": "Reyyan", |
| "middle": [], |
| "last": "Yeniterzi", |
| "suffix": "" |
| }, |
| { |
| "first": "Erdem", |
| "middle": [], |
| "last": "Y\u00f6r\u00fck", |
| "suffix": "" |
| } |
| ], |
| "year": 2021, |
| "venue": "Proceedings of the 4th Workshop on Challenges and Applications of Automated Extraction of Sociopolitical Events from Text (CASE 2021), online. Association for Computational Linguistics (ACL)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ali H\u00fcrriyetoglu, Hristo Tanev, Vanni Zavarella, Jakub Piskorski, Reyyan Yeniterzi, and Erdem Y\u00f6r\u00fck. 2021. Challenges and applications of automated ex- traction of socio-political events from text (CASE 2021): Workshop and shared task report. In Proceedings of the 4th Workshop on Challenges and Applications of Automated Extraction of Socio- political Events from Text (CASE 2021), online. As- sociation for Computational Linguistics (ACL).", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "BART: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension", |
| "authors": [ |
| { |
| "first": "Mike", |
| "middle": [], |
| "last": "Lewis", |
| "suffix": "" |
| }, |
| { |
| "first": "Yinhan", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Naman", |
| "middle": [], |
| "last": "Goyal", |
| "suffix": "" |
| }, |
| { |
| "first": "Marjan", |
| "middle": [], |
| "last": "Ghazvininejad", |
| "suffix": "" |
| }, |
| { |
| "first": "Abdelrahman", |
| "middle": [], |
| "last": "Mohamed", |
| "suffix": "" |
| }, |
| { |
| "first": "Omer", |
| "middle": [], |
| "last": "Levy", |
| "suffix": "" |
| }, |
| { |
| "first": "Veselin", |
| "middle": [], |
| "last": "Stoyanov", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "7871--7880", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/2020.acl-main.703" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Mar- jan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre- training for natural language generation, translation, and comprehension. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "New benchmark corpus and models for fine-grained event classification: To BERT or not to BERT?", |
| "authors": [ |
| { |
| "first": "Jakub", |
| "middle": [], |
| "last": "Piskorski", |
| "suffix": "" |
| }, |
| { |
| "first": "Jacek", |
| "middle": [], |
| "last": "Haneczok", |
| "suffix": "" |
| }, |
| { |
| "first": "Guillaume", |
| "middle": [], |
| "last": "Jacquet", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the 28th International Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "6663--6678", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jakub Piskorski, Jacek Haneczok, and Guillaume Jacquet. 2020. New benchmark corpus and mod- els for fine-grained event classification: To BERT or not to BERT? In Proceedings of the 28th Inter- national Conference on Computational Linguistics, pages 6663-6678, Barcelona, Spain (Online). Inter- national Committee on Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Introducing ACLED: an armed conflict location and event dataset: special data feature", |
| "authors": [ |
| { |
| "first": "Clionadh", |
| "middle": [], |
| "last": "Raleigh", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Linke", |
| "suffix": "" |
| }, |
| { |
| "first": "H\u00e5vard", |
| "middle": [], |
| "last": "Hegre", |
| "suffix": "" |
| }, |
| { |
| "first": "Joakim", |
| "middle": [], |
| "last": "Karlsen", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Journal of peace research", |
| "volume": "47", |
| "issue": "5", |
| "pages": "651--660", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Clionadh Raleigh, Andrew Linke, H\u00e5vard Hegre, and Joakim Karlsen. 2010. Introducing acled: an armed conflict location and event dataset: special data fea- ture. Journal of peace research, 47(5):651-660.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Exploiting cloze-questions for few-shot text classification and natural language inference", |
| "authors": [ |
| { |
| "first": "Timo", |
| "middle": [], |
| "last": "Schick", |
| "suffix": "" |
| }, |
| { |
| "first": "Hinrich", |
| "middle": [], |
| "last": "Sch\u00fctze", |
| "suffix": "" |
| } |
| ], |
| "year": 2021, |
| "venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume", |
| "volume": "", |
| "issue": "", |
| "pages": "255--269", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Timo Schick and Hinrich Sch\u00fctze. 2021. Exploiting cloze-questions for few-shot text classification and natural language inference. In Proceedings of the 16th Conference of the European Chapter of the As- sociation for Computational Linguistics: Main Vol- ume, pages 255-269, Online. Association for Com- putational Linguistics.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Attention is all you need", |
| "authors": [ |
| { |
| "first": "Ashish", |
| "middle": [], |
| "last": "Vaswani", |
| "suffix": "" |
| }, |
| { |
| "first": "Noam", |
| "middle": [], |
| "last": "Shazeer", |
| "suffix": "" |
| }, |
| { |
| "first": "Niki", |
| "middle": [], |
| "last": "Parmar", |
| "suffix": "" |
| }, |
| { |
| "first": "Jakob", |
| "middle": [], |
| "last": "Uszkoreit", |
| "suffix": "" |
| }, |
| { |
| "first": "Llion", |
| "middle": [], |
| "last": "Jones", |
| "suffix": "" |
| }, |
| { |
| "first": "Aidan", |
| "middle": [ |
| "N" |
| ], |
| "last": "Gomez", |
| "suffix": "" |
| }, |
| { |
| "first": "Lukasz", |
| "middle": [], |
| "last": "Kaiser", |
| "suffix": "" |
| }, |
| { |
| "first": "Illia", |
| "middle": [], |
| "last": "Polosukhin", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "NIPS", |
| "volume": "", |
| "issue": "", |
| "pages": "6000--6010", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS, pages 6000-6010.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "A broad-coverage challenge corpus for sentence understanding through inference", |
| "authors": [ |
| { |
| "first": "Adina", |
| "middle": [], |
| "last": "Williams", |
| "suffix": "" |
| }, |
| { |
| "first": "Nikita", |
| "middle": [], |
| "last": "Nangia", |
| "suffix": "" |
| }, |
| { |
| "first": "Samuel", |
| "middle": [], |
| "last": "Bowman", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "1", |
| "issue": "", |
| "pages": "1112--1122", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sen- tence understanding through inference. In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Transformers: State-of-the-art natural language processing", |
| "authors": [ |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Wolf", |
| "suffix": "" |
| }, |
| { |
| "first": "Lysandre", |
| "middle": [], |
| "last": "Debut", |
| "suffix": "" |
| }, |
| { |
| "first": "Victor", |
| "middle": [], |
| "last": "Sanh", |
| "suffix": "" |
| }, |
| { |
| "first": "Julien", |
| "middle": [], |
| "last": "Chaumond", |
| "suffix": "" |
| }, |
| { |
| "first": "Clement", |
| "middle": [], |
| "last": "Delangue", |
| "suffix": "" |
| }, |
| { |
| "first": "Anthony", |
| "middle": [], |
| "last": "Moi", |
| "suffix": "" |
| }, |
| { |
| "first": "Pierric", |
| "middle": [], |
| "last": "Cistac", |
| "suffix": "" |
| }, |
| { |
| "first": "Tim", |
| "middle": [], |
| "last": "Rault", |
| "suffix": "" |
| }, |
| { |
| "first": "Remi", |
| "middle": [], |
| "last": "Louf", |
| "suffix": "" |
| }, |
| { |
| "first": "Morgan", |
| "middle": [], |
| "last": "Funtowicz", |
| "suffix": "" |
| }, |
| { |
| "first": "Joe", |
| "middle": [], |
| "last": "Davison", |
| "suffix": "" |
| }, |
| { |
| "first": "Sam", |
| "middle": [], |
| "last": "Shleifer", |
| "suffix": "" |
| }, |
| { |
| "first": "Patrick", |
| "middle": [], |
| "last": "Von Platen", |
| "suffix": "" |
| }, |
| { |
| "first": "Clara", |
| "middle": [], |
| "last": "Ma", |
| "suffix": "" |
| }, |
| { |
| "first": "Yacine", |
| "middle": [], |
| "last": "Jernite", |
| "suffix": "" |
| }, |
| { |
| "first": "Julien", |
| "middle": [], |
| "last": "Plu", |
| "suffix": "" |
| }, |
| { |
| "first": "Canwen", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "Teven", |
| "middle": [ |
| "Le" |
| ], |
| "last": "Scao", |
| "suffix": "" |
| }, |
| { |
| "first": "Sylvain", |
| "middle": [], |
| "last": "Gugger", |
| "suffix": "" |
| }, |
| { |
| "first": "Mariama", |
| "middle": [], |
| "last": "Drame", |
| "suffix": "" |
| }, |
| { |
| "first": "Quentin", |
| "middle": [], |
| "last": "Lhoest", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexander", |
| "middle": [], |
| "last": "Rush", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", |
| "volume": "", |
| "issue": "", |
| "pages": "38--45", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/2020.emnlp-demos.6" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Remi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Trans- formers: State-of-the-art natural language process- ing. In Proceedings of the 2020 Conference on Em- pirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Asso- ciation for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Benchmarking zero-shot text classification: Datasets, evaluation and entailment approach", |
| "authors": [ |
| { |
| "first": "Wenpeng", |
| "middle": [], |
| "last": "Yin", |
| "suffix": "" |
| }, |
| { |
| "first": "Jamaal", |
| "middle": [], |
| "last": "Hay", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Roth", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "3914--3923", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/D19-1404" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wenpeng Yin, Jamaal Hay, and Dan Roth. 2019. Benchmarking zero-shot text classification: Datasets, evaluation and entailment approach. In Proceedings of the 2019 Conference on Empiri- cal Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3914-3923, Hong Kong, China. Association for Computational Linguistics.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "uris": null, |
| "text": "A short NATURAL DISASTER text consisting of the first two sentences of a Wikinews article under the Category https://en.wikinews.org/ wiki/Category:Disasters_and_accidents.", |
| "type_str": "figure", |
| "num": null |
| }, |
| "TABREF2": { |
| "text": "also trained models on the ACLED-C-III data, with", |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "content": "<table><tr><td>model</td><td colspan=\"2\">trdata \u00b5F macF wF</td></tr><tr><td>Piskorski</td><td colspan=\"2\">100% 94.3 86.0 94.2</td></tr><tr><td colspan=\"3\">roberta-FT25 100% 94.7 87.3 94.7</td></tr><tr><td>Piskorski</td><td>10%</td><td>>90 >70 >90</td></tr><tr><td>roberta-FT28</td><td>10%</td><td>93.5 77.1 93.5</td></tr></table>" |
| }, |
| "TABREF3": { |
| "text": "Example type representations for a sentence of the type ORG CRIME. For the disj-kw representation, each phrase in the pipe-delimited list is given separately to the NLI model as sentence2.", |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "content": "<table><tr><td colspan=\"2\">Type rep'n \u00b5Acc macAcc</td></tr><tr><td>definition</td><td>0.5461 0.5672</td></tr><tr><td colspan=\"2\">keywords 0.4610 0.5243</td></tr><tr><td>name</td><td>0.8723 0.8456</td></tr><tr><td>disj-kw</td><td>0.8936 0.9048</td></tr></table>" |
| }, |
| "TABREF4": { |
| "text": "", |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "content": "<table/>" |
| }, |
| "TABREF6": { |
| "text": "Accuracy of NLI models on the top N predictions from the roberta-FT25 classifier when its top prediction is wrong (6,002 instances). N=gold means that N is the position of the correct type in the ranked list, guaranteeing that it is available to the reranker.", |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "content": "<table><tr><td>Model</td><td colspan=\"2\">Tune \u00b5Acc macAcc</td></tr><tr><td colspan=\"3\">bart-large-mnli none 0.9104 0.7977</td></tr><tr><td>bart-mnli-FT24</td><td>24</td><td>0.9806 0.9708</td></tr><tr><td>bart-mnli-FT21</td><td>21</td><td>0.2396 0.2540</td></tr></table>" |
| }, |
| "TABREF7": { |
| "text": "", |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "content": "<table><tr><td>: Comparison of three NLI models tested on</td></tr><tr><td>data for three held-out types: an untuned model, a</td></tr><tr><td>model tuned on all types (including the held-out types),</td></tr><tr><td>and a model tuned on all types except the held-out</td></tr><tr><td>types.</td></tr></table>" |
| } |
| } |
| } |
| } |