{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:13:54.414524Z"
},
"title": "IBM MNLP IE at CASE 2021 Task 1: Multigranular and Multilingual Event Detection on Protest News",
"authors": [
{
"first": "Parul",
"middle": [],
"last": "Awasthy",
"suffix": "",
"affiliation": {},
"email": "awasthyp@us.ibm.com"
},
{
"first": "Jian",
"middle": [],
"last": "Ni",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Ken",
"middle": [],
"last": "Barker",
"suffix": "",
"affiliation": {},
"email": "kjbarker@us.ibm.com"
},
{
"first": "Radu",
"middle": [],
"last": "Florian",
"suffix": "",
"affiliation": {},
"email": "raduf@us.ibm.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we present the event detection models and systems we have developed for Multilingual Protest News Detection-Shared Task 1 at CASE 2021. The shared task has 4 subtasks which cover event detection at different granularity levels (from document level to token level) and across multiple languages (English, Hindi, Portuguese and Spanish). To handle data from multiple languages, we use a multilingual transformer-based language model (XLM-R) as the input text encoder. We apply a variety of techniques and build several transformer-based models that perform consistently well across all the subtasks and languages. Our systems achieve an average F1 score of 81.2. Out of thirteen subtask-language tracks, our submissions rank 1st in nine and 2nd in four tracks.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we present the event detection models and systems we have developed for Multilingual Protest News Detection-Shared Task 1 at CASE 2021. The shared task has 4 subtasks which cover event detection at different granularity levels (from document level to token level) and across multiple languages (English, Hindi, Portuguese and Spanish). To handle data from multiple languages, we use a multilingual transformer-based language model (XLM-R) as the input text encoder. We apply a variety of techniques and build several transformer-based models that perform consistently well across all the subtasks and languages. Our systems achieve an average F1 score of 81.2. Out of thirteen subtask-language tracks, our submissions rank 1st in nine and 2nd in four tracks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Event detection aims to detect and extract useful information about certain types of events from text. It is an important information extraction task that discovers and gathers knowledge about past and ongoing events hidden in huge amounts of textual data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The CASE 2021 workshop (H\u00fcrriyetoglu et al., 2021b) focuses on socio-political and crisis event detection. The workshop defines 3 shared tasks. In this paper we describe our models and systems developed for \"Multilingual Protest News Detection -Shared Task 1\" (H\u00fcrriyetoglu et al., 2021a) . Shared task 1 in turn has 4 subtasks:",
"cite_spans": [
{
"start": 23,
"end": 51,
"text": "(H\u00fcrriyetoglu et al., 2021b)",
"ref_id": "BIBREF9"
},
{
"start": 260,
"end": 288,
"text": "(H\u00fcrriyetoglu et al., 2021a)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Subtask 1 -Document Classification: determine whether a news article (document) contains information about a past or ongoing event.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Subtask 2 -Sentence Classification: determine whether a sentence expresses information about a past or ongoing event.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Subtask 3 -Event Sentence Coreference Identification: determine which event sentences refer to the same event.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Subtask 4 -Event Extraction: extract event triggers and the associated arguments from event sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Event extraction on news has long been popular, and benchmarks such as ACE (Walker et al., 2006) and ERE (Song et al., 2015) annotate event triggers, arguments and coreference. Most previous work has addressed these tasks separately. H\u00fcrriyetoglu et al. (2020) also focused on detecting social-political events, but CASE 2021 has added more subtasks and languages. CASE 2021 addresses event information extraction at different granularity levels, from the coarsest-grained document level to the finest-grained token level. The workshop enables participants to build models for these subtasks and compare similar methods across the subtasks.",
"cite_spans": [
{
"start": 75,
"end": 96,
"text": "(Walker et al., 2006)",
"ref_id": "BIBREF30"
},
{
"start": 105,
"end": 124,
"text": "(Song et al., 2015)",
"ref_id": "BIBREF27"
},
{
"start": 234,
"end": 260,
"text": "H\u00fcrriyetoglu et al. (2020)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The task is multilingual, making it even more challenging. In a globally connected era, information about events is available in many different languages, so it is important to develop models that can operate across language barriers. The common languages for all CASE Task 1 subtasks are English, Spanish, and Portuguese. Hindi is an additional language for subtask 1. Some of these languages are zero-shot (Hindi) or low-resource (Portuguese and Spanish) for certain subtasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we describe our multilingual transformer-based models and systems for each of the subtasks. We describe the data for the subtasks in section 2. We use XLM-R (Conneau et al., 2020) as the input text encoder, described in section 3. For subtasks 1 (document classification) and 2 (sentence classification), we apply multilingual and monolingual text classifiers with different window sizes (sections 4 and 5). For subtask 3 (event sentence coreference identification), we use a system with two modules: a classification module followed by a clustering module (section 6). For subtask 4 (event extraction), we apply a sequence labeling approach and build both multilingual and monolingual models (section 7). We present the final evaluation results in section 8. Our models have achieved consistently high performance scores across all the subtasks and languages.",
"cite_spans": [
{
"start": 172,
"end": 194,
"text": "(Conneau et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The data for this task has been created using the method described in H\u00fcrriyetoglu et al. (2021) . The task is multilingual but the data distribution across languages is not the same. In all subtasks there is significantly more data for English than for Portuguese and Spanish. There is no training data provided for Hindi. As there are no official train and development splits, we have created our own splits. The details are summarized in Table 1 . For most task-language pairs, we randomly select 80% or 90% of the provided data as the training data and keep the remainder as the development data. Since there is much less data for Spanish and Portuguese, for some subtasks, such as subtask 3, we use the Spanish and Portuguese data for development only; and for subtask 4, we use the entire Spanish and Portuguese data as training for the multilingual model. For the final submissions, we use all the provided data, and train various types of models (multilingual, monolingual, weakly supervised, zero-shot) with details provided in the appropriate sections.",
"cite_spans": [
{
"start": 70,
"end": 96,
"text": "H\u00fcrriyetoglu et al. (2021)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 441,
"end": 448,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "2"
},
{
"text": "For all the subtasks we use transformer-based language models (Vaswani et al., 2017) as the input text encoder. Recent studies show that deep transformer-based language models, when pretrained on a large text corpus, can achieve better generalization performance and attain state-ofthe-art performance for many NLP tasks (Devlin et al., 2019; Liu et al., 2019; Conneau et al., 2020) . One key success of transformer-based models is a multi-head self-attention mechanism that can model global dependencies between tokens in input and output sequences. Due to the multilingual nature of this shared task, we have applied several multilingual transformerbased language models, including multilingual BERT (mBERT) (Devlin et al., 2019) , XLM-RoBERTa (XLM-R) (Conneau et al., 2020) , and multilingual BART (mBART) (Liu et al., 2020) . Our preliminary experiments showed that XLM-R based models achieved better accuracy than other models. Hence we decided to use XLM-R as the text encoder. We use HuggingFace's pytorch implementation of transformers (Wolf et al., 2019) .",
"cite_spans": [
{
"start": 62,
"end": 84,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF28"
},
{
"start": 321,
"end": 342,
"text": "(Devlin et al., 2019;",
"ref_id": "BIBREF7"
},
{
"start": 343,
"end": 360,
"text": "Liu et al., 2019;",
"ref_id": "BIBREF18"
},
{
"start": 361,
"end": 382,
"text": "Conneau et al., 2020)",
"ref_id": "BIBREF6"
},
{
"start": 710,
"end": 731,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF7"
},
{
"start": 754,
"end": 776,
"text": "(Conneau et al., 2020)",
"ref_id": "BIBREF6"
},
{
"start": 809,
"end": 827,
"text": "(Liu et al., 2020)",
"ref_id": "BIBREF17"
},
{
"start": 1044,
"end": 1063,
"text": "(Wolf et al., 2019)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual Transformer-Based Framework",
"sec_num": "3"
},
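The "global dependencies" point above can be made concrete with a toy, dependency-free sketch of single-head scaled dot-product self-attention. This is an illustration only, not XLM-R's actual multi-head implementation, and all names here are hypothetical:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(X):
    """Toy single-head self-attention with queries = keys = values = X.

    X is a list of token vectors (lists of floats). Each output row is a
    softmax-weighted average of all rows of X, so every token can attend
    to every other token regardless of distance.
    """
    d = len(X[0])
    out = []
    for q in X:
        # Scaled dot-product scores of this query against every key.
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in X]
        weights = softmax(scores)
        # Weighted average of the value vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, X)) for j in range(d)])
    return out
```

A multi-head layer would run several such heads with separate learned projections and concatenate their outputs.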
{
"text": "XLM-R was pre-trained with unlabeled Wikipedia text and the CommonCrawl Corpus of 100 languages. It uses the SentencePiece tokenizer (Kudo and Richardson, 2018) with a vocabulary size of 250,000. Since XLM-R does not use any cross-lingual resources, it belongs to the unsupervised representation learning framework. For this work, we fine-tune the pre-trained XLM-R model on a specific task by training all layers of the model.",
"cite_spans": [
{
"start": 133,
"end": 159,
"text": "(Kudo and Richardson, 2018",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual Transformer-Based Framework",
"sec_num": "3"
},
{
"text": "To detect protest events at the document level, the problem can be formulated as a binary text classification problem where a document is assigned label \"1\" if it contains one or more protest event(s) and label \"0\" otherwise. Various models have been developed for text classification in general and also for this particular task (H\u00fcrriyetoglu et al., 2019) . In our approach we apply multilingual transformer-based text classification models.",
"cite_spans": [
{
"start": 330,
"end": 357,
"text": "(H\u00fcrriyetoglu et al., 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Subtask 1: Document Classification",
"sec_num": "4"
},
{
"text": "In our architecture, the input sequence (document) is mapped to subword embeddings, and the embeddings are passed to multiple transformer layers. A special token is added to the beginning of the input sequence. This BOS token is <s> for XLM-R.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "XLM-R Based Text Classification Models",
"sec_num": "4.1"
},
{
"text": "The final hidden state of this token, h s , is used as the summary representation of the whole sequence, which is passed to a softmax classification layer that returns a probability distribution over the possible labels:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "XLM-R Based Text Classification Models",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p = softmax(Wh s + b)",
"eq_num": "(1)"
}
],
"section": "XLM-R Based Text Classification Models",
"sec_num": "4.1"
},
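Equation (1) amounts to a linear layer plus softmax over the BOS summary vector. A minimal plain-Python sketch, with toy dimensions; in the real model W and b are learned and h_s comes from XLM-R:

```python
import math

def softmax(z):
    """Numerically stable softmax."""
    m = max(z)
    e = [math.exp(x - m) for x in z]
    s = sum(e)
    return [x / s for x in e]

def classify(h_s, W, b):
    """p = softmax(W h_s + b), as in Eq. (1).

    W is a (num_labels x hidden) matrix, b a bias vector of length
    num_labels. Returns a probability distribution over the labels.
    """
    logits = [sum(wi * hi for wi, hi in zip(row, h_s)) + bi
              for row, bi in zip(W, b)]
    return softmax(logits)
```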
{
"text": "XLM-R has L = 24 transformer layers, with hidden state vector size H = 1024, number of attention heads A = 16, and 550M parameters. We learn the model parameters using Adam (Kingma and Ba, 2015) with a learning rate of 2e-5, and train the models for 5 epochs. Training a model with data from all the languages took 90 minutes of wall-clock time on a single NVIDIA V100 GPU.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "XLM-R Based Text Classification Models",
"sec_num": "4.1"
},
{
"text": "The evaluation of subtask 1 is based on macro-F 1 scores of the developed models on the test data in 4 languages: English, Spanish, Portuguese, and Hindi. We are provided with training data in English, Spanish and Portuguese, but not in Hindi.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "XLM-R Based Text Classification Models",
"sec_num": "4.1"
},
{
"text": "The sizes of the train/dev/test sets are shown in Table 1 . Note that English has much more training data (\u223c10k examples) than Spanish or Portuguese (\u223c1k examples), while Hindi has no training data.",
"cite_spans": [],
"ref_spans": [
{
"start": 50,
"end": 57,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "XLM-R Based Text Classification Models",
"sec_num": "4.1"
},
{
"text": "We build two types of XLM-R based text classification models:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "XLM-R Based Text Classification Models",
"sec_num": "4.1"
},
{
"text": "\u2022 multilingual model: a model is trained with data from all three languages, denoted by XLM-R (en+es+pt);",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "XLM-R Based Text Classification Models",
"sec_num": "4.1"
},
{
"text": "\u2022 monolingual models: a separate model is trained with data from each of the three languages, denoted by XLM-R (en), XLM-R (es), and XLM-R (pt).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "XLM-R Based Text Classification Models",
"sec_num": "4.1"
},
{
"text": "The results of various models on the development sets are shown in Table 2 . We observe that:",
"cite_spans": [],
"ref_spans": [
{
"start": 67,
"end": 74,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "XLM-R Based Text Classification Models",
"sec_num": "4.1"
},
{
"text": "\u2022 A monolingual XLM-R model trained on one language can achieve good zero-shot performance on other languages. For example, XLM-R (en), trained with English data only, achieves F1 scores of 72.1 and 82.3 on the Spanish and Portuguese development sets. This is consistent with our observations for other information extraction tasks such as relation extraction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "XLM-R Based Text Classification Models",
"sec_num": "4.1"
},
{
"text": "\u2022 By adding a small amount of training data from other languages, the multilingual model can further improve performance for those languages. For example, with \u223c1k additional training examples from Spanish and Portuguese, XLM-R (en+es+pt) improves performance by 3.1 and 6.1 F1 points on the Spanish and Portuguese development sets, compared with XLM-R (en).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "XLM-R Based Text Classification Models",
"sec_num": "4.1"
},
{
"text": "For English, Spanish and Portuguese, we prepared three submissions for the evaluation. S1: We trained five XLM-R based document classification models, initialized with different random seeds, using the provided training data from all three languages (multilingual models). The final output for submission S1 is the majority vote of the outputs of the five multilingual models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Final Submissions",
"sec_num": "4.2"
},
{
"text": "S2: For this submission we also trained five XLM-R based document classification models, but using only the provided training data from the target language (monolingual models). The final output is the majority vote of the outputs of the five monolingual models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Final Submissions",
"sec_num": "4.2"
},
{
"text": "The final output of this submission is the majority vote of the outputs of the multilingual models built for S1 and the monolingual models built for S2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "S3:",
"sec_num": null
},
{
"text": "For Hindi, no manually annotated training data is provided. We used training data from English, Spanish and Portuguese, and augmented it with training data machine-translated from English to Hindi (\"weakly labeled\" data). We trained nine XLM-R based Hindi document classification models with the weakly labeled data, and the final outputs are majority votes of these models (S1/S2/S3 is the majority vote of 5/7/9 of the models, respectively).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "S3:",
"sec_num": null
},
{
"text": "To detect protest events at the sentence level, one can also formulate the problem as a binary text classification problem where a sentence is assigned label \"1\" if it contains one or more protest event(s) and label \"0\" otherwise. As for document classification, we use XLM-R as the input text encoder. The difference is that for sentence classification, we set max seq length (a parameter of the model that specifies the maximum number of tokens in the input) to be 128; while for document classification where the input text is longer, we set max seq length to be 512 (for documents longer than 512 tokens, we truncate the documents and only keep the first 512 tokens). We train the models for 10 epochs, taking 80 minutes to train a model with training data from all the languages on a single NVIDIA V100 GPU. For this subtask we are provided with training data in English, Spanish and Portuguese, and evaluation is on test data for all three languages. The sizes of the train/development/test sets are shown in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 1015,
"end": 1022,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Subtask 2: Sentence Classification",
"sec_num": "5"
},
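The truncation strategy described above (keep only the first tokens that fit the encoder window) can be sketched as below; reserving room for special tokens such as BOS/EOS is our assumption, not stated in the text:

```python
def truncate_tokens(tokens, max_seq_length, num_special=2):
    """Keep only the leading tokens that fit the encoder window.

    max_seq_length is 512 for document classification and 128 for
    sentence classification in this work; num_special reserves slots
    for special tokens (an assumption).
    """
    return tokens[: max_seq_length - num_special]
```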
{
"text": "As for document classification, we build two types of XLM-R based sentence classification models: a multilingual model and monolingual models. The results of these models on the development sets are shown in Table 3 . The observations are similar to the document classification task. The multilingual model trained with data from all three languages achieves much better accuracy than a monolingual model on the development sets of other languages that the monolingual model is not trained on.",
"cite_spans": [],
"ref_spans": [
{
"start": 208,
"end": 215,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Subtask 2: Sentence Classification",
"sec_num": "5"
},
{
"text": "We prepared three submissions on the test data for each language (English, Spanish, Portuguese), similar to those described in section 4.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subtask 2: Sentence Classification",
"sec_num": "5"
},
{
"text": "Typically, for the task of event coreference resolution, events are defined by event triggers, and are usually marked in a sentence. Two event triggers are considered coreferent when they refer to the same event. In this task, however, the gold event triggers are not provided; the sentences are deemed coreferent, possibly, on the basis of any of the multiple triggers that occur in the sentences being coreferent, or if the sentences are about the same general event that is occurring. Given a document, this event coreference subtask aims to create clusters of coreferent sentences. There is good variety in the research for coreference detection. Cattan et al. (2020) rely only on raw text without access to triggers or entity mentions to build coreference systems. Barhom et al. (2019) do joint entity and event extraction using a feature-based approach. Yu et al. (2020) use transformers to compute the event trigger and argument representation for the task.",
"cite_spans": [
{
"start": 651,
"end": 671,
"text": "Cattan et al. (2020)",
"ref_id": "BIBREF3"
},
{
"start": 770,
"end": 790,
"text": "Barhom et al. (2019)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Subtask 3: Event Sentence Coreference Identification",
"sec_num": "6"
},
{
"text": "Following the recent work on event coreference, our system is comprised of two parts: the classification module and the clustering module. The classification module uses a binary classifier to make pair-wise binary decisions on whether two sentences are coreferent. Once all sentence pairs have been classified as coreferent or not, the clustering module clusters the \"closest\" sentences with each other with agglomerative clustering, using a certain threshold, a common approach for coreference detection (Yang et al. (2015) ; Choubey and Huang (2017); Barhom et al. (2019) ).",
"cite_spans": [
{
"start": 506,
"end": 525,
"text": "(Yang et al. (2015)",
"ref_id": "BIBREF32"
},
{
"start": 554,
"end": 574,
"text": "Barhom et al. (2019)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Subtask 3: Event Sentence Coreference Identification",
"sec_num": "6"
},
{
"text": "Agglomerative clustering is a popular technique for event or entity coreference resolution. Initially, each event mention is assigned its own cluster. In each iteration, clusters are merged based on the average inter-cluster link similarity score over all mentions in each cluster. The merging procedure stops when the average link similarity falls below a threshold.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subtask 3: Event Sentence Coreference Identification",
"sec_num": "6"
},
{
"text": "Formally, given a document D with n sentences {s_1, s_2, ..., s_n}, our system follows the procedure outlined in Algorithm 1 during training. The input to the algorithm is a document, and the output is a list of clusters of coreferent event sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subtask 3: Event Sentence Coreference Identification",
"sec_num": "6"
},
{
"text": "Input: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1: Event Coreference Training",
"sec_num": null
},
{
"text": "D = {s_1, s_2, ..., s_n}, threshold t\nOutput: Clusters {c_1, c_2, ..., c_k}\nModule Classify(D):\n  for (s_i, s_j) \u2208 D do\n    compute sim_{i,j}\n    SIM \u2190 SIM \u222a {sim_{i,j}}\n  return SIM",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1: Event Coreference Training",
"sec_num": null
},
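The clustering module can be sketched as average-link agglomerative clustering over the pairwise coreference scores, stopping at a threshold (the paper uses 0.65). This is a simplified illustration, not the authors' code:

```python
from itertools import combinations

def agglomerative_cluster(n, sim, threshold):
    """Average-link agglomerative clustering over pairwise similarities.

    sim[(i, j)] is the coreference score for sentence pair i < j (e.g.
    from the pair classifier). Start with singleton clusters and greedily
    merge the pair of clusters with the highest average inter-cluster
    similarity until no pair exceeds the threshold.
    """
    clusters = [{i} for i in range(n)]

    def avg_link(a, b):
        scores = [sim[tuple(sorted((i, j)))] for i in a for j in b]
        return sum(scores) / len(scores)

    while len(clusters) > 1:
        best, best_pair = None, None
        for a, b in combinations(clusters, 2):
            s = avg_link(a, b)
            if best is None or s > best:
                best, best_pair = s, (a, b)
        if best < threshold:
            break  # no remaining pair is similar enough to merge
        a, b = best_pair
        clusters.remove(a)
        clusters.remove(b)
        clusters.append(a | b)
    return clusters
```

With four sentences where (0, 1) and (2, 3) score high and all cross pairs score low, a threshold of 0.65 yields the two expected clusters.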
{
"text": "The evaluation of the event coreference task is based on the CoNLL coref score (Pradhan et al., 2014) , which is the unweighted average of the F-scores produced by the link-based MUC (Vilain et al., 1995) , the mention-based B\u00b3 (Bagga and Baldwin, 1998) , and the entity-based CEAF_e (Luo, 2005) metrics. As there is little Spanish and Portuguese data, we use it as a held-out development set.",
"cite_spans": [
{
"start": 79,
"end": 101,
"text": "(Pradhan et al., 2014)",
"ref_id": "BIBREF25"
},
{
"start": 183,
"end": 204,
"text": "(Vilain et al., 1995)",
"ref_id": "BIBREF29"
},
{
"start": 228,
"end": 253,
"text": "(Bagga and Baldwin, 1998)",
"ref_id": "BIBREF1"
},
{
"start": 284,
"end": 295,
"text": "(Luo, 2005)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6.1"
},
{
"text": "Our system uses the XLM-R large pre-trained model to obtain token and sentence representations. Pairs of sentences are concatenated along with the special begin-of-sentence token and separator token as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6.1"
},
{
"text": "BOS <s_i> SEP <s_j>",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6.1"
},
{
"text": "We feed the BOS token representation to the binary classification layer to obtain a probabilistic score of the two sentences being coreferent. Once we have the scores for all sentence pairs, we call the clustering module to create clusters using the coreference scores as clustering similarity scores. We trained our system for 20 epochs with a learning rate of 1e-5. We experimented with various thresholds and chose 0.65, as it gave the best performance on the development set. It takes about 1 hour for the model to train on a single V100 GPU.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6.1"
},
{
"text": "For the final submission to the shared task we explore variations of the approach outlined in section 6.1. They are: S1: This is the multilingual model. To train it, we translate the English training data to Spanish and Portuguese and train a model on the original English, translated Spanish and translated Portuguese data. The original Spanish and Portuguese data is used as the development set for model selection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Final Submissions",
"sec_num": "6.2"
},
{
"text": "S2: This is the English-only model, trained on English data. Spanish and Portuguese are zero-shot.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Final Submissions",
"sec_num": "6.2"
},
{
"text": "S3: This is an English-only coreference model in which the event triggers and the place and time arguments have been extracted using our subtask 4 models (section 7). These extracted tokens are then surrounded in the sentence by markers of their type, such as <trigger>, <place>, etc. The binary classifier is fed this marked-up sentence representation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Final Submissions",
"sec_num": "6.2"
},
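The marker-insertion step for S3 might look like the sketch below; the closing-tag form and the `(start, end, type)` span format are our assumptions:

```python
def add_markers(tokens, spans):
    """Surround extracted spans with type markers, e.g. <trigger> ... </trigger>.

    spans: list of (start, end, type) with end exclusive, assumed
    non-overlapping. The marker names follow the <trigger>/<place>
    convention mentioned in the text; the closing-tag form is assumed.
    """
    starts = {s: t for s, e, t in spans}
    ends = {e: t for s, e, t in spans}
    out = []
    for i, tok in enumerate(tokens):
        if i in starts:
            out.append(f"<{starts[i]}>")
        out.append(tok)
        if i + 1 in ends:
            out.append(f"</{ends[i + 1]}>")
    return out
```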
{
"text": "The performance of these techniques on the development set is shown in Table 4 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Final Submissions",
"sec_num": "6.2"
},
{
"text": "The event extraction subtask aims to extract event trigger words that pertain to demonstrations, protests, political rallies, group clashes or armed militancy, along with the participating arguments in such events. The arguments are to be extracted and classified as one of the following types: time, facility, organizer, participant, place or target of the event. [Table: Model / en-dev / es-dev / pt-dev; S1: 80.57 / - / -; S2: 80.25 / 64.09 / 69.67; S3: 80.87 / - / -]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subtask 4: Event Extraction",
"sec_num": "7"
},
{
"text": "Formally, the Event Extraction task can be summarized as follows: given a sentence s = {w_1, w_2, ..., w_n} and an event label set T = {t_1, t_2, ..., t_j}, identify contiguous phrases (w_s, ..., w_e) such that l(w_s, ..., w_e) \u2208 T.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subtask 4: Event Extraction",
"sec_num": "7"
},
{
"text": "Most previous work (Chen et al. (2015) ; Nguyen et al. 2016; Nguyen and Grishman (2018)) for event extraction has treated event and argument extraction as separate tasks. But some systems (Li et al., 2013) treat the problem as structured prediction and train joint models for event triggers and arguments. Lin et al. (2020) built a joint system for many information extraction tasks including event trigger and arguments.",
"cite_spans": [
{
"start": 19,
"end": 38,
"text": "(Chen et al. (2015)",
"ref_id": "BIBREF4"
},
{
"start": 188,
"end": 205,
"text": "(Li et al., 2013)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Subtask 4: Event Extraction",
"sec_num": "7"
},
{
"text": "Following the work of M'hamdi et al. (2019); Awasthy et al. (2020), we treat event extraction as a sequence labeling task. Our models are based on the stdBERT baseline in Awasthy et al. (2020), though we extract triggers and arguments at the same time. We use the IOB2 encoding (Sang and Veenstra, 1999) to represent the trigger and argument labels, where each token is tagged B-label, I-label, or O to indicate that it begins a labeled span, continues one, or falls outside any label boundary, respectively.",
"cite_spans": [
{
"start": 278,
"end": 303,
"text": "(Sang and Veenstra, 1999)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Subtask 4: Event Extraction",
"sec_num": "7"
},
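A small sketch of the IOB2 encoding step, assuming gold spans are given as `(start, end, label)` with an exclusive end (the span format is our assumption):

```python
def spans_to_iob2(num_tokens, spans):
    """Encode labeled spans as IOB2 tags.

    The first token of each span gets B-label, subsequent tokens get
    I-label, and all remaining tokens are O. Spans are assumed
    non-overlapping.
    """
    tags = ["O"] * num_tokens
    for start, end, label in spans:
        tags[start] = f"B-{label}"
        for i in range(start + 1, end):
            tags[i] = f"I-{label}"
    return tags
```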
{
"text": "The sentence tokens are converted to token-level contextualized embeddings {h_1, h_2, ..., h_n}. We pass these through a classification block comprising a dense linear hidden layer, a dropout layer, and a linear layer mapped to the task label space, which produces a label for each token {l_1, l_2, ..., l_n}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subtask 4: Event Extraction",
"sec_num": "7"
},
{
"text": "The parameters of the model are trained via cross entropy loss, a standard approach for transformerbased sequence labeling models (Devlin et al., 2019) . This is equivalent to minimizing the negative log-likelihood of the true labels,",
"cite_spans": [
{
"start": 130,
"end": 151,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Subtask 4: Event Extraction",
"sec_num": "7"
},
{
"text": "L_t = \u2212 \u2211_{i=1}^{n} log P(l_{w_i})",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subtask 4: Event Extraction",
"sec_num": "7"
},
{
"text": "(2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subtask 4: Event Extraction",
"sec_num": "7"
},
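Equation (2) can be computed directly; a minimal sketch, assuming the per-token label distributions are available as plain dictionaries (names here are hypothetical):

```python
import math

def sequence_nll(probs, labels):
    """Negative log-likelihood of the gold token labels, as in Eq. (2).

    probs: one dict per token mapping label -> probability,
    labels: the gold label per token.
    Returns L_t = -sum_i log P(l_{w_i}).
    """
    return -sum(math.log(p[l]) for p, l in zip(probs, labels))
```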
{
"text": "The evaluation metric for the event extraction task is the CoNLL macro-F1 score. Since there is little Spanish and Portuguese data, we use it either as training data in our multilingual model or as a held-out development set for our English-only model. For contextualized word embeddings, we use the XLM-R large pre-trained model. The dense layer output size is the same as its input size. We use the out-of-the-box pre-trained transformer models and fine-tune them on the event data, updating all layers with the standard XLM-R hyperparameters. We ran 20 epochs with 5 seeds each, a learning rate of 3e-5 or 5e-5, and a training batch size of 20. We choose the best model based on performance on the development set. The system took 30 minutes to train on a V100 GPU.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "7.1"
},
{
"text": "For the final submission to the shared task we explore the following variations: S1: This is the multilingual model trained with all of the English, Spanish and Portuguese training data. The development set is English only.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Final Submission",
"sec_num": "7.2"
},
{
"text": "S2: This is the English-only model, trained on English data. Spanish and Portuguese are zero-shot.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Final Submission",
"sec_num": "7.2"
},
{
"text": "S3: This is an ensemble system that votes among the outputs of 5 different systems. The voting criterion is the most frequent class. For example, if three of the five systems agree on a label then that label is chosen as the final label.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Final Submission",
"sec_num": "7.2"
},
{
"text": "The results on development data are shown in table 5. There is no score for S1 and S3 for es and pt as all provided data was used to train the S1 model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Final Submission",
"sec_num": "7.2"
},
{
"text": "The final results of our submissions and rankings are shown in Table 6 : Final evaluation results and rankings across the subtasks and languages. Scores for subtasks 1 and 2 are macro-average F 1 ; subtask 3 are CoNLL average F 1 ; subtask 4 are CoNLL macro-F 1 . The ranks and best scores are shared by the organizers. Bold score denotes the best score for the track.",
"cite_spans": [],
"ref_spans": [
{
"start": 63,
"end": 70,
"text": "Table 6",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Final Results and Discussion",
"sec_num": "8"
},
{
"text": "models: for subtasks 1 and 2 they are languagespecific, but for subtasks 3 and 4 they are Englishonly. S3 is an ensemble system with voting for subtasks 1, 2 and 4, and an extra-feature system for subtask 3. Among our three systems, the multilingual models achieved the best scores in three tracks, the monolingual models achieved the best scores in six tracks, and the ensemble models achieved the best scores in four tracks. For subtask 1 (document-level classification), the language-specific monolingual model (S2) performs better than the multilingual model (S1) for English, Portuguese and Spanish; while for subtask 2 (sentence-level classification), the multilingual model outperforms the language-specific monolingual model for Portuguese and Spanish. This shows that building multilingual models could be better than building language-specific monolingual models for finer-grained tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Final Results and Discussion",
"sec_num": "8"
},
{
"text": "The monolingual English-only model (S2) performs best on all three languages for subtask 3. This could be because the multilingual model (S1) here was trained with machine translated data. Adding the trigger, time and place markers (S3) did not help, even when these features showed promise on the development sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Final Results and Discussion",
"sec_num": "8"
},
{
"text": "The multilingual model (S1) does better for Spanish and Portuguese on subtask 4. This is consistent with our findings in Moon et al. (2019) where training multilingual models for Named Entity Recognition, also a token-level sequence la-belling task, helps improve performance across languages. As there is much less training data for Spanish and Portuguese, pooling all languages helps.",
"cite_spans": [
{
"start": 121,
"end": 139,
"text": "Moon et al. (2019)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Final Results and Discussion",
"sec_num": "8"
},
{
"text": "In this paper, we presented the models and systems we developed for Multilingual Protest News Detection -Shared Task 1 at CASE 2021. We explored monolingual, multilingual, zero-shot and ensemble approaches and showed the results across the subtasks and languages chosen for this shared task. Our systems achieved an average F 1 score of 81.2, which is 2 F 1 points higher than best score of other participants on the shared task. Our submissions ranked 1 st in nine of the thirteen tracks, and ranked 2 nd in the remaining four tracks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "9"
}
],
"back_matter": [
{
"text": "This research was developed with funding from the Defense Advanced Research Projects Agency (DARPA) under Contract No. FA8750-19-C-0206. The views, opinions and/or findings expressed are those of the author and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments and Disclaimer",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Event presence prediction helps trigger detection across languages",
"authors": [
{
"first": "Parul",
"middle": [],
"last": "Awasthy",
"suffix": ""
},
{
"first": "Tahira",
"middle": [],
"last": "Naseem",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Ni",
"suffix": ""
},
{
"first": "Taesun",
"middle": [],
"last": "Moon",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Florian",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Parul Awasthy, Tahira Naseem, Jian Ni, Taesun Moon, and Radu Florian. 2020. Event presence predic- tion helps trigger detection across languages. CoRR, abs/2009.07188.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Algorithms for scoring coreference chains",
"authors": [
{
"first": "Amit",
"middle": [],
"last": "Bagga",
"suffix": ""
},
{
"first": "Breck",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 1998,
"venue": "The First International Conference on Language Resources and Evaluation Workshop on Linguistics Coreference",
"volume": "",
"issue": "",
"pages": "563--566",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amit Bagga and Breck Baldwin. 1998. Algorithms for scoring coreference chains. In In The First Interna- tional Conference on Language Resources and Eval- uation Workshop on Linguistics Coreference, pages 563-566.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Revisiting joint modeling of cross-document entity and event coreference resolution",
"authors": [
{
"first": "Shany",
"middle": [],
"last": "Barhom",
"suffix": ""
},
{
"first": "Vered",
"middle": [],
"last": "Shwartz",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Eirew",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Bugert",
"suffix": ""
},
{
"first": "Nils",
"middle": [],
"last": "Reimers",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4179--4189",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1409"
]
},
"num": null,
"urls": [],
"raw_text": "Shany Barhom, Vered Shwartz, Alon Eirew, Michael Bugert, Nils Reimers, and Ido Dagan. 2019. Re- visiting joint modeling of cross-document entity and event coreference resolution. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 4179-4189, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Streamlining crossdocument coreference resolution: Evaluation and modeling",
"authors": [
{
"first": "Arie",
"middle": [],
"last": "Cattan",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Eirew",
"suffix": ""
},
{
"first": "Gabriel",
"middle": [],
"last": "Stanovsky",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arie Cattan, Alon Eirew, Gabriel Stanovsky, Mandar Joshi, and Ido Dagan. 2020. Streamlining cross- document coreference resolution: Evaluation and modeling.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Event extraction via dynamic multipooling convolutional neural networks",
"authors": [
{
"first": "Yubo",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Liheng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Daojian",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "167--176",
"other_ids": {
"DOI": [
"10.3115/v1/P15-1017"
]
},
"num": null,
"urls": [],
"raw_text": "Yubo Chen, Liheng Xu, Kang Liu, Daojian Zeng, and Jun Zhao. 2015. Event extraction via dynamic multi- pooling convolutional neural networks. In Proceed- ings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th Interna- tional Joint Conference on Natural Language Pro- cessing (Volume 1: Long Papers), pages 167-176, Beijing, China. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Event coreference resolution by iteratively unfolding inter-dependencies among events",
"authors": [
{
"first": "Prafulla",
"middle": [],
"last": "Kumar Choubey",
"suffix": ""
},
{
"first": "Ruihong",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2124--2133",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1226"
]
},
"num": null,
"urls": [],
"raw_text": "Prafulla Kumar Choubey and Ruihong Huang. 2017. Event coreference resolution by iteratively unfold- ing inter-dependencies among events. In Proceed- ings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2124-2133, Copenhagen, Denmark. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Unsupervised cross-lingual representation learning at scale",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Kartikay",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wenzek",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "8440--8451",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.747"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 8440- 8451, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Multilingual protest news detectionshared task 1, CASE 2021",
"authors": [
{
"first": "Ali",
"middle": [],
"last": "H\u00fcrriyetoglu",
"suffix": ""
},
{
"first": "Osman",
"middle": [],
"last": "Mutlu",
"suffix": ""
},
{
"first": "Farhana",
"middle": [
"Ferdousi"
],
"last": "Liza",
"suffix": ""
},
{
"first": "Erdem",
"middle": [],
"last": "Y\u00f6r\u00fck",
"suffix": ""
},
{
"first": "Ritesh",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Shyam",
"middle": [],
"last": "Ratan",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 4th Workshop on Challenges and Applications of Automated Extraction of Socio-political Events from Text (CASE 2021), online",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ali H\u00fcrriyetoglu, Osman Mutlu, Farhana Ferdousi Liza, Erdem Y\u00f6r\u00fck, Ritesh Kumar, and Shyam Ratan. 2021a. Multilingual protest news detection - shared task 1, CASE 2021. In Proceedings of the 4th Workshop on Challenges and Applications of Auto- mated Extraction of Socio-political Events from Text (CASE 2021), online. Association for Computational Linguistics (ACL).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Challenges and applications of automated extraction of socio-political events from text (CASE 2021): Workshop and shared task report",
"authors": [
{
"first": "Ali",
"middle": [],
"last": "H\u00fcrriyetoglu",
"suffix": ""
},
{
"first": "Hristo",
"middle": [],
"last": "Tanev",
"suffix": ""
},
{
"first": "Vanni",
"middle": [],
"last": "Zavarella",
"suffix": ""
},
{
"first": "Jakub",
"middle": [],
"last": "Piskorski",
"suffix": ""
},
{
"first": "Reyyan",
"middle": [],
"last": "Yeniterzi",
"suffix": ""
},
{
"first": "Erdem",
"middle": [],
"last": "Y\u00f6r\u00fck",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 4th Workshop on Challenges and Applications of Automated Extraction of Sociopolitical Events from Text (CASE 2021), online. Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ali H\u00fcrriyetoglu, Hristo Tanev, Vanni Zavarella, Jakub Piskorski, Reyyan Yeniterzi, and Erdem Y\u00f6r\u00fck. 2021b. Challenges and applications of automated extraction of socio-political events from text (CASE 2021): Workshop and shared task report. In Proceedings of the 4th Workshop on Challenges and Applications of Automated Extraction of Socio- political Events from Text (CASE 2021), online. As- sociation for Computational Linguistics (ACL).",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Overview of clef 2019 lab protestnews: Extracting protests from news in a cross-context setting",
"authors": [
{
"first": "Ali",
"middle": [],
"last": "H\u00fcrriyetoglu",
"suffix": ""
},
{
"first": "Erdem",
"middle": [],
"last": "Y\u00f6r\u00fck",
"suffix": ""
},
{
"first": "Deniz",
"middle": [],
"last": "Y\u00fcret",
"suffix": ""
},
{
"first": "\u00c7agr\u0131",
"middle": [],
"last": "Yoltar",
"suffix": ""
},
{
"first": "Burak",
"middle": [],
"last": "G\u00fcrel",
"suffix": ""
},
{
"first": "F\u0131rat",
"middle": [],
"last": "Duru\u015fan",
"suffix": ""
},
{
"first": "Osman",
"middle": [],
"last": "Mutlu",
"suffix": ""
},
{
"first": "Arda",
"middle": [],
"last": "Akdemir",
"suffix": ""
}
],
"year": 2019,
"venue": "Experimental IR Meets Multilinguality, Multimodality, and Interaction",
"volume": "",
"issue": "",
"pages": "425--432",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ali H\u00fcrriyetoglu, Erdem Y\u00f6r\u00fck, Deniz Y\u00fcret, \u00c7 agr\u0131 Yoltar, Burak G\u00fcrel, F\u0131rat Duru\u015fan, Osman Mutlu, and Arda Akdemir. 2019. Overview of clef 2019 lab protestnews: Extracting protests from news in a cross-context setting. In Experimental IR Meets Multilinguality, Multimodality, and Interac- tion, pages 425-432, Cham. Springer International Publishing.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Automated extraction of socio-political events from news (AESPEN): Workshop and shared task report",
"authors": [
{
"first": "Ali",
"middle": [],
"last": "H\u00fcrriyetoglu",
"suffix": ""
},
{
"first": "Vanni",
"middle": [],
"last": "Zavarella",
"suffix": ""
},
{
"first": "Hristo",
"middle": [],
"last": "Tanev",
"suffix": ""
},
{
"first": "Erdem",
"middle": [],
"last": "Y\u00f6r\u00fck",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Safaya",
"suffix": ""
},
{
"first": "Osman",
"middle": [],
"last": "Mutlu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Workshop on Automated Extraction of Socio-political Events from News 2020",
"volume": "",
"issue": "",
"pages": "1--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ali H\u00fcrriyetoglu, Vanni Zavarella, Hristo Tanev, Er- dem Y\u00f6r\u00fck, Ali Safaya, and Osman Mutlu. 2020. Automated extraction of socio-political events from news (AESPEN): Workshop and shared task report. In Proceedings of the Workshop on Automated Ex- traction of Socio-political Events from News 2020, pages 1-6, Marseille, France. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Cross-Context News Corpus for Protest Event-Related Knowledge Base Construction",
"authors": [
{
"first": "Ali",
"middle": [],
"last": "H\u00fcrriyetoglu",
"suffix": ""
},
{
"first": "Erdem",
"middle": [],
"last": "Y\u00f6r\u00fck",
"suffix": ""
},
{
"first": "Osman",
"middle": [],
"last": "Mutlu",
"suffix": ""
},
{
"first": "F\u0131rat",
"middle": [],
"last": "Duru\u015fan",
"suffix": ""
},
{
"first": "\u00c7agr\u0131",
"middle": [],
"last": "Yoltar",
"suffix": ""
},
{
"first": "Deniz",
"middle": [],
"last": "Y\u00fcret",
"suffix": ""
},
{
"first": "Burak",
"middle": [],
"last": "G\u00fcrel",
"suffix": ""
}
],
"year": 2021,
"venue": "Data Intelligence",
"volume": "",
"issue": "",
"pages": "1--28",
"other_ids": {
"DOI": [
"10.1162/dint_a_00092"
]
},
"num": null,
"urls": [],
"raw_text": "Ali H\u00fcrriyetoglu, Erdem Y\u00f6r\u00fck, Osman Mutlu, F\u0131rat Duru\u015fan, \u00c7 agr\u0131 Yoltar, Deniz Y\u00fcret, and Burak G\u00fcrel. 2021. Cross-Context News Corpus for Protest Event-Related Knowledge Base Construc- tion. Data Intelligence, pages 1-28.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P."
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 3rd International Conference on Learning Representations (ICLR), ICLR '15",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations (ICLR), ICLR '15.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing",
"authors": [
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Richardson",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "66--71",
"other_ids": {
"DOI": [
"10.18653/v1/D18-2012"
]
},
"num": null,
"urls": [],
"raw_text": "Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tok- enizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Joint event extraction via structured prediction with global features",
"authors": [
{
"first": "Qi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Ji",
"middle": [],
"last": "Heng",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "73--82",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qi Li, Heng Ji, and Liang Huang. 2013. Joint event extraction via structured prediction with global fea- tures. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 73-82.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A joint neural model for information extraction with global features",
"authors": [
{
"first": "Ying",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proc. The 58th Annual Meeting of the Association for Computational Linguistics (ACL2020)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ying Lin, Heng Ji, F Huang, and L Wu. 2020. A joint neural model for information extraction with global features. In Proc. The 58th Annual Meet- ing of the Association for Computational Linguistics (ACL2020).",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Multilingual denoising pre-training for neural machine translation",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Xian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Marjan",
"middle": [],
"last": "Ghazvininejad",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pre-training for neural machine translation. CoRR, abs/2001.08210.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Roberta: A robustly optimized BERT pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining ap- proach. CoRR, abs/1907.11692.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "On coreference resolution performance metrics",
"authors": [
{
"first": "Xiaoqiang",
"middle": [],
"last": "Luo",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "25--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaoqiang Luo. 2005. On coreference resolution per- formance metrics. In Proceedings of Human Lan- guage Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 25-32, Vancouver, British Columbia, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Contextualized cross-lingual event trigger extraction with minimal resources",
"authors": [
{
"first": "Meryem",
"middle": [],
"last": "M'hamdi",
"suffix": ""
},
{
"first": "Marjorie",
"middle": [],
"last": "Freedman",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "May",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)",
"volume": "",
"issue": "",
"pages": "656--665",
"other_ids": {
"DOI": [
"10.18653/v1/K19-1061"
]
},
"num": null,
"urls": [],
"raw_text": "Meryem M'hamdi, Marjorie Freedman, and Jonathan May. 2019. Contextualized cross-lingual event trig- ger extraction with minimal resources. In Proceed- ings of the 23rd Conference on Computational Nat- ural Language Learning (CoNLL), pages 656-665, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Towards lingua franca named entity recognition with BERT. CoRR",
"authors": [
{
"first": "Taesun",
"middle": [],
"last": "Moon",
"suffix": ""
},
{
"first": "Parul",
"middle": [],
"last": "Awasthy",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Ni",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Florian",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taesun Moon, Parul Awasthy, Jian Ni, and Radu Flo- rian. 2019. Towards lingua franca named entity recognition with BERT. CoRR, abs/1912.01389.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Joint event extraction via recurrent neural networks",
"authors": [
{
"first": "Thien",
"middle": [
"Huu"
],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Grishman",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "300--309",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thien Huu Nguyen, Kyunghyun Cho, and Ralph Gr- ishman. 2016. Joint event extraction via recurrent neural networks. In Proceedings of the 2016 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, pages 300-309.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Graph convolutional networks with argument-aware pooling for event detection",
"authors": [
{
"first": "Thien",
"middle": [
"Huu"
],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Grishman",
"suffix": ""
}
],
"year": 2018,
"venue": "Thirty-second AAAI conference on artificial intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thien Huu Nguyen and Ralph Grishman. 2018. Graph convolutional networks with argument-aware pool- ing for event detection. In Thirty-second AAAI con- ference on artificial intelligence.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Cross-lingual relation extraction with transformers",
"authors": [
{
"first": "Jian",
"middle": [],
"last": "Ni",
"suffix": ""
},
{
"first": "Taesun",
"middle": [],
"last": "Moon",
"suffix": ""
},
{
"first": "Parul",
"middle": [],
"last": "Awasthy",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Florian",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jian Ni, Taesun Moon, Parul Awasthy, and Radu Flo- rian. 2020. Cross-lingual relation extraction with transformers. CoRR, abs/2010.08652.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Scoring coreference partitions of predicted mentions: A reference implementation",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "Xiaoqiang",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Marta",
"middle": [],
"last": "Recasens",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Strube",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "30--35",
"other_ids": {
"DOI": [
"10.3115/v1/P14-2006"
]
},
"num": null,
"urls": [],
"raw_text": "Sameer Pradhan, Xiaoqiang Luo, Marta Recasens, Ed- uard Hovy, Vincent Ng, and Michael Strube. 2014. Scoring coreference partitions of predicted men- tions: A reference implementation. In Proceed- ings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Pa- pers), pages 30-35, Baltimore, Maryland. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Representing text chunks",
"authors": [
{
"first": "Erik",
"middle": [
"F."
],
"last": "Tjong Kim Sang",
"suffix": ""
},
{
"first": "Jorn",
"middle": [],
"last": "Veenstra",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the Ninth Conference on European Chapter of the Association for Computational Linguistics, EACL '99",
"volume": "",
"issue": "",
"pages": "173--179",
"other_ids": {
"DOI": [
"10.3115/977035.977059"
]
},
"num": null,
"urls": [],
"raw_text": "Erik F. Tjong Kim Sang and Jorn Veenstra. 1999. Rep- resenting text chunks. In Proceedings of the Ninth Conference on European Chapter of the Associa- tion for Computational Linguistics, EACL '99, page 173-179, USA. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "From light to rich ERE: Annotation of entities, relations, and events",
"authors": [
{
"first": "Zhiyi",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Ann",
"middle": [],
"last": "Bies",
"suffix": ""
},
{
"first": "Stephanie",
"middle": [],
"last": "Strassel",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Riese",
"suffix": ""
},
{
"first": "Justin",
"middle": [],
"last": "Mott",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Ellis",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Wright",
"suffix": ""
},
{
"first": "Seth",
"middle": [],
"last": "Kulick",
"suffix": ""
},
{
"first": "Neville",
"middle": [],
"last": "Ryant",
"suffix": ""
},
{
"first": "Xiaoyi",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the The 3rd Workshop on EVENTS: Definition, Detection, Coreference, and Representation",
"volume": "",
"issue": "",
"pages": "89--98",
"other_ids": {
"DOI": [
"10.3115/v1/W15-0812"
]
},
"num": null,
"urls": [],
"raw_text": "Zhiyi Song, Ann Bies, Stephanie Strassel, Tom Riese, Justin Mott, Joe Ellis, Jonathan Wright, Seth Kulick, Neville Ryant, and Xiaoyi Ma. 2015. From light to rich ERE: Annotation of entities, relations, and events. In Proceedings of the The 3rd Workshop on EVENTS: Definition, Detection, Coreference, and Representation, pages 89-98, Denver, Colorado. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "6000--6010",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS, pages 6000-6010.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "A model-theoretic coreference scoring scheme",
"authors": [
{
"first": "Marc",
"middle": [],
"last": "Vilain",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Burger",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Aberdeen",
"suffix": ""
},
{
"first": "Dennis",
"middle": [],
"last": "Connolly",
"suffix": ""
},
{
"first": "Lynette",
"middle": [],
"last": "Hirschman",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 6th Conference on Message Understanding, MUC6 '95",
"volume": "",
"issue": "",
"pages": "45--52",
"other_ids": {
"DOI": [
"10.3115/1072399.1072405"
]
},
"num": null,
"urls": [],
"raw_text": "Marc Vilain, John Burger, John Aberdeen, Dennis Connolly, and Lynette Hirschman. 1995. A model-theoretic coreference scoring scheme. In Proceedings of the 6th Conference on Message Understanding, MUC6 '95, page 45-52, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "ACE 2005 multilingual training corpus",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Walker",
"suffix": ""
},
{
"first": "Stephanie",
"middle": [],
"last": "Strassel",
"suffix": ""
},
{
"first": "Julie",
"middle": [],
"last": "Medero",
"suffix": ""
},
{
"first": "Kazuaki",
"middle": [],
"last": "Maeda",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher Walker, Stephanie Strassel, Julie Medero, and Kazuaki Maeda. 2006. ACE 2005 multilingual training corpus. Linguistic Data Consortium, Philadelphia, 57.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Huggingface's transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R\u00e9mi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Brew",
"suffix": ""
}
],
"year": 2019,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface's transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "A Hierarchical Distance-dependent Bayesian Model for Event Coreference Resolution",
"authors": [
{
"first": "Bishan",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Frazier",
"suffix": ""
}
],
"year": 2015,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "3",
"issue": "",
"pages": "517--528",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00155"
]
},
"num": null,
"urls": [],
"raw_text": "Bishan Yang, Claire Cardie, and Peter Frazier. 2015. A Hierarchical Distance-dependent Bayesian Model for Event Coreference Resolution. Transactions of the Association for Computational Linguistics, 3:517-528.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Paired representation learning for event and entity coreference",
"authors": [
{
"first": "Xiaodong",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Wenpeng",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaodong Yu, Wenpeng Yin, and Dan Roth. 2020. Paired representation learning for event and entity coreference.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Algorithm figure (partially extracted): nested scoring loop over each cluster pair (c_i, c_j) \u2208 C, accumulating a score over sentence pairs (s_k, s_l) drawn from c_i and c_j.",
"num": null,
"uris": null,
"type_str": "figure"
},
"TABREF1": {
"text": "Number of examples in the train/dev/test sets.",
"content": "<table><tr><td>Subtasks 1 and 3 counts show number of documents,</td></tr><tr><td>and subtasks 2 and 4 counts show number of sentences.</td></tr></table>",
"num": null,
"type_str": "table",
"html": null
},
"TABREF3": {
"text": "Macro F 1 score on the development sets for subtask 1 (document classification).",
"content": "<table/>",
"num": null,
"type_str": "table",
"html": null
},
"TABREF5": {
"text": "Macro F 1 score on the development sets for subtask 2 (sentence classification).",
"content": "<table/>",
"num": null,
"type_str": "table",
"html": null
},
"TABREF7": {
"text": "CoNLL F 1 score on the development sets for subtask 3: Event Coreference.",
"content": "<table/>",
"num": null,
"type_str": "table",
"html": null
},
"TABREF8": {
"text": "CoNLL F 1 score on the development sets for subtask 4: Event Extraction.",
"content": "<table/>",
"num": null,
"type_str": "table",
"html": null
},
"TABREF9": {
"text": "",
"content": "<table><tr><td>. Our systems achieved consistently high scores across all subtasks and languages.</td></tr><tr><td>To recap, our S1 systems are multilingual models trained on all three languages. S2 are monolingual</td></tr></table>",
"num": null,
"type_str": "table",
"html": null
}
}
}
}