| { |
| "paper_id": "2020", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T11:48:59.700270Z" |
| }, |
| "title": "Cross-lingual Zero Pronoun Resolution", |
| "authors": [ |
| { |
| "first": "Abdulrahman", |
| "middle": [], |
| "last": "Aloraini", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Queen Mary University of London", |
| "location": {} |
| }, |
| "email": "a.aloraini@qmul.ac.uk" |
| }, |
| { |
| "first": "Massimo", |
| "middle": [], |
| "last": "Poesio", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Queen Mary University of London", |
| "location": {} |
| }, |
| "email": "m.poesio@qmul.ac.uk" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "In languages like Arabic, Chinese, Italian, Japanese, Korean, Portuguese, Spanish, and many others, predicate arguments in certain syntactic positions are omitted rather than realized as overt pronouns, and are thus called zero- or null-pronouns. Identifying and resolving such omitted arguments is crucial to machine translation, information extraction and other NLP tasks, but depends heavily on semantic coherence and lexical relationships. We propose a BERT-based cross-lingual model for zero pronoun resolution, and evaluate it on the Arabic and Chinese portions of OntoNotes 5.0. As far as we know, ours is the first neural model of zero-pronoun resolution for Arabic; our model also outperforms the state-of-the-art for Chinese. In the paper we also evaluate BERT feature-extraction and fine-tuning modes on the task, and compare them with our model. We also report on an investigation of BERT layers indicating which layer encodes the most suitable representation for the task. Our code is available at https://github.com/amaloraini/cross-lingual-ZP.", |
| "pdf_parse": { |
| "paper_id": "2020", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "In languages like Arabic, Chinese, Italian, Japanese, Korean, Portuguese, Spanish, and many others, predicate arguments in certain syntactic positions are omitted rather than realized as overt pronouns, and are thus called zero- or null-pronouns. Identifying and resolving such omitted arguments is crucial to machine translation, information extraction and other NLP tasks, but depends heavily on semantic coherence and lexical relationships. We propose a BERT-based cross-lingual model for zero pronoun resolution, and evaluate it on the Arabic and Chinese portions of OntoNotes 5.0. As far as we know, ours is the first neural model of zero-pronoun resolution for Arabic; our model also outperforms the state-of-the-art for Chinese. In the paper we also evaluate BERT feature-extraction and fine-tuning modes on the task, and compare them with our model. We also report on an investigation of BERT layers indicating which layer encodes the most suitable representation for the task. Our code is available at https://github.com/amaloraini/cross-lingual-ZP.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "In pronoun-dropping (pro-drop) languages such as Arabic (Eid, 1983), Chinese (Li and Thompson, 1979), Italian (Di Eugenio, 1990) and other Romance languages (e.g., Portuguese, Spanish), Japanese (Kameyama, 1985), and others (Kim, 2000), arguments can be elided in certain contexts in which a pronoun is used in English, such as subjects. We use the term zero-pronouns (ZP) to refer to these unrealised arguments, the term most commonly used in the recent literature. 1 Anaphoric zero-pronouns (AZPs) are zero-pronouns that refer to one or more noun phrases that appear previously in a text. The following example of an AZP comes from the Arabic section of OntoNotes: In the example, the zero pronoun indicated with '*' refers to an entity introduced with a masculine singular noun that was previously mentioned in the sentence. (In OntoNotes 5.0, zero pronouns are denoted as * in Arabic text, and *pro* in Chinese.) AZP resolution usually consists of two steps: extracting ZPs that are anaphoric, and identifying the correct antecedents for AZPs. Our focus is on the latter because there has been no proposal for Arabic. In this paper we propose a cross-lingual, BERT-based model of zero pronoun resolution. Our contributions include:", |
| "cite_spans": [ |
| { |
| "start": 56, |
| "end": 67, |
| "text": "(Eid, 1983)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 78, |
| "end": 101, |
| "text": "(Li and Thompson, 1979)", |
| "ref_id": "BIBREF43" |
| }, |
| { |
| "start": 116, |
| "end": 129, |
| "text": "Eugenio, 1990", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 198, |
| "end": 214, |
| "text": "(Kameyama, 1985)", |
| "ref_id": "BIBREF37" |
| }, |
| { |
| "start": 228, |
| "end": 239, |
| "text": "(Kim, 2000)", |
| "ref_id": "BIBREF39" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "\u2022 We propose a novel cross-lingual, BERT-based model and test it on two languages that differ completely in their morphological structure: Arabic and Chinese. (Arabic is morphologically rich, whereas Chinese morphology is relatively simple (Pradhan et al., 2012)) \u2022 As far as we know, this is the first neural network-based ZP resolution model for Arabic, and it outperforms the 1 The terms null-subject or zero-subject are also used.", |
| "cite_spans": [ |
| { |
| "start": 238, |
| "end": 261, |
| "text": "(Pradhan et al., 2012))", |
| "ref_id": "BIBREF49" |
| }, |
| { |
| "start": 373, |
| "end": 374, |
| "text": "1", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "current state-of-the-art on Chinese.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "\u2022 We carried out an extensive analysis of BERT layers, and discuss which settings give optimal performance.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "The rest of the paper is organized as follows. We discuss the ZP-related literature for Arabic, Chinese, and other languages in Section 2. We explain our proposed model in Section 3. We discuss the evaluation settings and results in Section 4. We conclude in Section 5.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "AZP resolution is included in some coreference resolution systems (Taira et al., 2008; Imamura et al., 2009; Watanabe et al., 2010; Poesio et al., 2010; Yoshino et al., 2013). However, it has proven challenging to combine the task with the resolution of overt mentions, so separating the task from coreference resolution may lead to more improvements (Iida and Poesio, 2011). Chinese: The release of OntoNotes has spurred a lot of research on zero pronoun resolution in Chinese, but earlier research exists as well. Converse (2006) proposed a rule-based approach that employed the Hobbs algorithm (Hobbs, 1978) to resolve ZPs in the Chinese Treebank. Yeh and Chen (2006) is another rule-based approach, using rules from Centering Theory (Grosz et al., 1995). Zhao and Ng (2007), the first machine learning approach to Chinese ZPs, used decision trees and a set of syntactic and positional features. Chen and Ng (2013) extended (Zhao and Ng, 2007) by incorporating contextual features and ZP links. Chen and Ng (2014) and Chen and Ng (2015) proposed unsupervised techniques for the task. Kong and Zhou (2010) proposed a tree kernel-based unified framework for ZP detection and resolution. Recent approaches applying deep-learning neural networks include Chen and Ng (2016), the first to apply a feed-forward neural network to the task; Yin et al. (2016), who employed an LSTM to represent AZPs and two subnetworks (a general encoder and a local encoder) to capture context-level and word-level information about the candidates; Yin et al. (2017), who proposed a deep memory network capable of improving the semantic information of ZPs and their candidates; and Liu et al. (2017), who used an attention-based neural network and enhanced performance by training the model on automatically generated large-scale training data of resolved ZPs. Yin et al. (2018), the current state of the art, also used an attention-based model, but combined their network with (Chen and Ng, 2016) features. Other languages: There has also been a great deal of research on ZPs, particularly in Japanese (Kim and Ehara., 1995; Aone and Bennett, 1995; Seki et al., 2002; Isozaki and Hirao, 2003; Iida et al., 2006; Iida et al., 2007; Sasano et al., 2008; Sasano et al., 2009; Sasano and Kurohashi, 2011; Yoshikawa et al., 2011; Hangyo et al., 2013; Iida et al., 2015; Yoshino et al., 2013; Yamashiro et al., 2018), but also in other languages, including Korean (Han, 2004; Byron et al., 2006), Spanish (Ferr\u00e1ndez and Peral, 2000), Romanian (Mih\u0103il\u0103 et al., 2011), Bulgarian (Grigorova, 2013), and Sanskrit (Gopal and Jha, 2017). Iida and Poesio (2011) proposed the first cross-lingual approach for this task. They used the ILP model of Denis and Baldridge (2007) and introduced a new set of constraints incorporating common features for Italian and Japanese. All current approaches suffer from a number of limitations, one of which is that most of them rely on an extensive set of features which, as we will see below, are language-dependent. The systems using more complex linguistic features also require larger training datasets than are available for many languages, including, e.g., Arabic.", |
| "cite_spans": [ |
| { |
| "start": 66, |
| "end": 86, |
| "text": "(Taira et al., 2008;", |
| "ref_id": "BIBREF54" |
| }, |
| { |
| "start": 87, |
| "end": 108, |
| "text": "Imamura et al., 2009;", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 109, |
| "end": 131, |
| "text": "Watanabe et al., 2010;", |
| "ref_id": "BIBREF56" |
| }, |
| { |
| "start": 132, |
| "end": 152, |
| "text": "Poesio et al., 2010;", |
| "ref_id": "BIBREF48" |
| }, |
| { |
| "start": 153, |
| "end": 174, |
| "text": "Yoshino et al., 2013)", |
| "ref_id": "BIBREF64" |
| }, |
| { |
| "start": 352, |
| "end": 375, |
| "text": "(Iida and Poesio, 2011)", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 518, |
| "end": 533, |
| "text": "Converse (2006)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 594, |
| "end": 607, |
| "text": "(Hobbs, 1978)", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 648, |
| "end": 667, |
| "text": "Yeh and Chen (2006)", |
| "ref_id": "BIBREF59" |
| }, |
| { |
| "start": 734, |
| "end": 754, |
| "text": "(Grosz et al., 1995)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 757, |
| "end": 775, |
| "text": "Zhao and Ng (2007)", |
| "ref_id": "BIBREF65" |
| }, |
| { |
| "start": 898, |
| "end": 916, |
| "text": "Chen and Ng (2013)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 926, |
| "end": 945, |
| "text": "(Zhao and Ng, 2007)", |
| "ref_id": "BIBREF65" |
| }, |
| { |
| "start": 997, |
| "end": 1015, |
| "text": "Chen and Ng (2014;", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 1016, |
| "end": 1034, |
| "text": "Chen and Ng (2015)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 1089, |
| "end": 1109, |
| "text": "Kong and Zhou (2010)", |
| "ref_id": "BIBREF40" |
| }, |
| { |
| "start": 1255, |
| "end": 1273, |
| "text": "Chen and Ng (2016)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 1333, |
| "end": 1350, |
| "text": "Yin et al. (2016)", |
| "ref_id": "BIBREF60" |
| }, |
| { |
| "start": 1518, |
| "end": 1535, |
| "text": "Yin et al. (2017)", |
| "ref_id": "BIBREF61" |
| }, |
| { |
| "start": 1648, |
| "end": 1665, |
| "text": "Liu et al. (2017)", |
| "ref_id": "BIBREF44" |
| }, |
| { |
| "start": 1828, |
| "end": 1845, |
| "text": "Yin et al. (2018)", |
| "ref_id": "BIBREF62" |
| }, |
| { |
| "start": 1946, |
| "end": 1965, |
| "text": "(Chen and Ng, 2016)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 2070, |
| "end": 2092, |
| "text": "(Kim and Ehara., 1995;", |
| "ref_id": "BIBREF38" |
| }, |
| { |
| "start": 2093, |
| "end": 2116, |
| "text": "Aone and Bennett, 1995;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 2117, |
| "end": 2135, |
| "text": "Seki et al., 2002;", |
| "ref_id": "BIBREF53" |
| }, |
| { |
| "start": 2136, |
| "end": 2160, |
| "text": "Isozaki and Hirao, 2003;", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 2161, |
| "end": 2179, |
| "text": "Iida et al., 2006;", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 2180, |
| "end": 2198, |
| "text": "Iida et al., 2007;", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 2199, |
| "end": 2219, |
| "text": "Sasano et al., 2008;", |
| "ref_id": "BIBREF51" |
| }, |
| { |
| "start": 2220, |
| "end": 2240, |
| "text": "Sasano et al., 2009;", |
| "ref_id": "BIBREF52" |
| }, |
| { |
| "start": 2241, |
| "end": 2268, |
| "text": "Sasano and Kurohashi, 2011;", |
| "ref_id": "BIBREF50" |
| }, |
| { |
| "start": 2269, |
| "end": 2292, |
| "text": "Yoshikawa et al., 2011;", |
| "ref_id": "BIBREF63" |
| }, |
| { |
| "start": 2293, |
| "end": 2313, |
| "text": "Hangyo et al., 2013;", |
| "ref_id": null |
| }, |
| { |
| "start": 2314, |
| "end": 2332, |
| "text": "Iida et al., 2015;", |
| "ref_id": "BIBREF33" |
| }, |
| { |
| "start": 2333, |
| "end": 2354, |
| "text": "Yoshino et al., 2013;", |
| "ref_id": "BIBREF64" |
| }, |
| { |
| "start": 2355, |
| "end": 2378, |
| "text": "Yamashiro et al., 2018)", |
| "ref_id": "BIBREF58" |
| }, |
| { |
| "start": 2427, |
| "end": 2438, |
| "text": "(Han, 2004;", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 2439, |
| "end": 2458, |
| "text": "Byron et al., 2006)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 2469, |
| "end": 2496, |
| "text": "(Ferr\u00e1ndez and Peral, 2000)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 2508, |
| "end": 2530, |
| "text": "(Mih\u0103il\u0103 et al., 2011)", |
| "ref_id": "BIBREF45" |
| }, |
| { |
| "start": 2543, |
| "end": 2560, |
| "text": "(Grigorova, 2013)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 2576, |
| "end": 2597, |
| "text": "(Gopal and Jha, 2017)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 2600, |
| "end": 2622, |
| "text": "Iida and Poesio (2011)", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 2707, |
| "end": 2733, |
| "text": "Denis and Baldridge (2007)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Zero Pronoun Resolution", |
| "sec_num": "2.1." |
| }, |
| { |
| "text": "There have been several studies of the Arabic coreference resolution task, but none specifically devoted to ZPs except as part of the overall coreference task. In particular, several of the systems involved in the CoNLL-2012 shared task attempted Arabic as well. Fernandes et al. (2014) utilized latent trees to capture hidden structure and find coreference chains. Bj\u00f6rkelund and Kuhn (2014) stacked multiple pairwise coreference resolvers and combined decoders to cluster mentions together. Chen and Ng (2012) employed multiple sieves (Lee et al., 2011) for English and Chinese, but used only an exact-match sieve for Arabic. Green et al. (2009) proposed a CRF sequence classifier to detect Arabic noun phrases, capturing ZPs implicitly. Gabbard (2010) showed that Arabic ZPs can be identified and retrieved. As far as we know, none of these proposals reported results for ZP resolution.", |
| "cite_spans": [ |
| { |
| "start": 259, |
| "end": 282, |
| "text": "Fernandes et al. (2014)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 364, |
| "end": 390, |
| "text": "Bj\u00f6rkelund and Kuhn (2014)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 491, |
| "end": 509, |
| "text": "Chen and Ng (2012)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 535, |
| "end": 553, |
| "text": "(Lee et al., 2011)", |
| "ref_id": "BIBREF42" |
| }, |
| { |
| "start": 626, |
| "end": 645, |
| "text": "Green et al. (2009)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 739, |
| "end": 753, |
| "text": "Gabbard (2010)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Arabic", |
| "sec_num": "2.2." |
| }, |
| { |
| "text": "ZP resolution involves complex, comprehensive language understanding skills. Resolving ZPs in Chinese requires reasoning, context, and background knowledge of real-world entities (Huang, 1984), whereas Arabic, in addition to the previously mentioned skills, requires a deep understanding of its rich morphology (Alnajadat, 2017). Recently, it has been shown that BERT (Devlin et al., 2018) can capture structural properties of a language, such as its surface, semantic, and syntactic aspects (Jawahar et al., 2019), which seems related to what we need for resolving ZPs. Therefore, we use BERT to produce a mention representation for AZPs and the candidates, and we also incorporate a few non-language-dependent features. Our model is a pairwise classifier that classifies <AZP, candidate> pairs as true or false for each of a ZP's candidate antecedents. In this section, we first give an overview of the BERT architecture and its adaptation modes. We then describe how we represent the mentions, and how we generate AZP candidates. Finally, we present the hyperparameter tuning and training objective.", |
| "cite_spans": [ |
| { |
| "start": 180, |
| "end": 193, |
| "text": "(Huang, 1984)", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 311, |
| "end": 328, |
| "text": "(Alnajadat, 2017)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 369, |
| "end": 390, |
| "text": "(Devlin et al., 2018)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Our Model", |
| "sec_num": "3." |
| }, |
| { |
| "text": "BERT is a language representation model consisting of multiple stacked Transformers (Vaswani et al., 2017); it can be pretrained on a large amount of unlabeled text, and produces distributional vectors (also called embeddings) for words and contexts. There are several versions of BERT; we use BERT-base Multilingual, which was pretrained on many languages, including Chinese and Arabic, and is publicly available 2 . BERT-base Multilingual consists of 12 hidden layers, each with 768 hidden units and multiple attention heads. Thus, for every input, BERT computes 12 embeddings, each of size 768 units.", |
| "cite_spans": [ |
| { |
| "start": 84, |
| "end": 106, |
| "text": "(Vaswani et al., 2017)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "BERT", |
| "sec_num": "3.1." |
| }, |
| { |
| "text": "BERT requires a special format for its input; therefore, it comes with a preprocessing tool called Tokenizer. The core of Tokenizer is WordPiece (Wu et al., 2016), which segments words into sub-words (sub-tokens). Tokenizer also tags the input with [CLS] at the beginning and [SEP] at the end.", |
| "cite_spans": [ |
| { |
| "start": 155, |
| "end": 172, |
| "text": "(Wu et al., 2016)", |
| "ref_id": "BIBREF57" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "BERT", |
| "sec_num": "3.1." |
| }, |
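The segmentation step described above can be sketched as a greedy longest-match-first WordPiece loop. The toy vocabulary below is a hypothetical stand-in for BERT's real one, used only to reproduce the sweetheart → sweet + ##heart example:

```python
def wordpiece_tokenize(word, vocab):
    """Greedy longest-match-first WordPiece segmentation (sketch).

    Continuation pieces carry the '##' prefix; returns ['[UNK]'] if
    no prefix of the remaining characters is in the vocabulary.
    """
    tokens, start = [], 0
    while start < len(word):
        end, piece = len(word), None
        while start < end:
            candidate = word[start:end]
            if start > 0:
                candidate = "##" + candidate  # continuation sub-token
            if candidate in vocab:
                piece = candidate
                break
            end -= 1
        if piece is None:
            return ["[UNK]"]
        tokens.append(piece)
        start = end
    return tokens

# Toy vocabulary -- an assumption for illustration, not BERT's vocabulary.
vocab = {"my", "is", "sweet", "##heart", "sleep", "##ing", "[UNK]"}
sentence = ["my", "sweetheart", "is", "sleeping"]
pieces = ["[CLS]"]
for w in sentence:
    pieces.extend(wordpiece_tokenize(w, vocab))
pieces.append("[SEP]")
print(pieces)  # → ['[CLS]', 'my', 'sweet', '##heart', 'is', 'sleep', '##ing', '[SEP]']
```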
| { |
| "text": "[CLS] is a context classification token made by aggregating the word embeddings in the sentence, and [SEP] indicates the end of a sentence input. An illustration of Tokenizer is shown in Figure 1. The input \"My sweetheart is sleeping\" is preprocessed through Tokenizer. The words My and is each translate into one token, whereas sweetheart and sleeping each produce two sub-tokens. After the Tokenizer step, the tokens are passed through BERT, which produces their embeddings, each of 768 hidden units. BERT has two modes of adaptation: feature extraction and fine-tuning. Feature extraction (also called feature-based) keeps BERT's weights fixed and uses them to produce the pretrained embeddings. Fine-tuning slightly adjusts BERT's parameters for a target task. Both have benefits. Feature extraction is computationally cheaper and might be more suitable for a specific task. Fine-tuning is more convenient to utilize, and may smoothly adapt to several general-purpose tasks. Both modes learn interesting properties of a language and work well for various NLP problems. However, they might not achieve optimal performance for some tasks. In this paper, we propose combining BERT representations with additional task-related features to improve ZP resolution. In our model, we use BERT's feature extraction mode to produce embeddings for AZPs and their antecedents, and add two features: same_sentence and find_distance. The same_sentence feature indicates whether an AZP and a candidate appear in the same sentence, and find_distance computes the word distance between an AZP and a candidate. These two features are cross-lingual and highly related to the task because AZPs and their antecedents usually appear near each other (Chen and Ng, 2014).", |
| "cite_spans": [ |
| { |
| "start": 101, |
| "end": 106, |
| "text": "[SEP]", |
| "ref_id": null |
| }, |
| { |
| "start": 1762, |
| "end": 1781, |
| "text": "(Chen and Ng, 2014)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 187, |
| "end": 195, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "BERT", |
| "sec_num": "3.1." |
| }, |
| { |
| "text": "Consider a sentence consisting of n words and containing an AZP mention at position i, so that its previous word is at position i-1, and the next word at i+1. Let us assume we also have a candidate k starting at position k, and appearing before the AZP. 3 There can be a number of candidates, each of which is a noun phrase.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Input representation", |
| "sec_num": "3.2." |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "sentence = (w 1 , w 2 , ..., w i\u22121 , azp i , w i+1 , ..., w n ) (1) candidate k \u2282 sentence", |
| "eq_num": "(2)" |
| } |
| ], |
| "section": "Input representation", |
| "sec_num": "3.2." |
| }, |
| { |
| "text": "We compute the positional features for every (azp, candidate) pair as follows: -same_sentence (azp, candidate): returns 1 if an AZP and its candidate are in the same sentence, 0 otherwise.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Input representation", |
| "sec_num": "3.2." |
| }, |
| { |
| "text": "-find_distance (azp, candidate): finds the word distance between an AZP and its candidate. The word distance is normalized between 0 and 1 based on the training instances.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Input representation", |
| "sec_num": "3.2." |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "s = same_sentence(azp i , candidate k ) (3) d = find_distance(azp i , candidate k )", |
| "eq_num": "(4)" |
| } |
| ], |
| "section": "Input representation", |
| "sec_num": "3.2." |
| }, |
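A minimal sketch of the two positional features in equations (3) and (4), assuming mentions are indexed by sentence and word position. The function names match the paper; the max_train_distance normalizer is our reading of "normalized between 0 and 1 based on the training instances":

```python
def same_sentence(azp_sent_idx, cand_sent_idx):
    # Returns 1 if the AZP and the candidate occur in the same sentence,
    # 0 otherwise (eq. 3).
    return 1 if azp_sent_idx == cand_sent_idx else 0

def find_distance(azp_pos, cand_pos, max_train_distance):
    # Word distance between AZP and candidate, normalized to [0, 1] by
    # the largest distance observed in the training instances (eq. 4).
    return abs(azp_pos - cand_pos) / max_train_distance
```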
| { |
| "text": "We feed sentence into BERT feature extraction mode, which produces the input's embeddings. embeddings contain BERT pretrained vectors of every word in sentence.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Input representation", |
| "sec_num": "3.2." |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "embeddings = BERT (sentence)", |
| "eq_num": "(5)" |
| } |
| ], |
| "section": "Input representation", |
| "sec_num": "3.2." |
| }, |
| { |
| "text": "A word can have one representation or several, based on the segmentation step of Tokenizer. For example, in Figure 1 My has only one embedding while sweetheart has two, because it has been segmented into two sub-tokens (sweet and ##heart). 3 An AZP and its candidate may appear in distinct sentences. This could be specified using BERT's parameters 'text_a' and 'text_b'. In such cases, however, we empirically found that we get better results by merging the two sentences into one and adding a [SEP] token in between. Thus, we only use 'text_a'. In equations 6, 7, and 8, the subscript of embeddings represents the word's location in the sentence. \u00b5 is a function that computes the mean of a mention representation, which can be made of several subtoken embeddings 4 .", |
| "cite_spans": [ |
| { |
| "start": 223, |
| "end": 224, |
| "text": "3", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 102, |
| "end": 110, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Input representation", |
| "sec_num": "3.2." |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "a 1 = \u00b5(embeddings (i\u22121) ) (6) a 2 = \u00b5(embeddings (i+1) ) (7) c k = \u00b5(embeddings (k) )", |
| "eq_num": "(8)" |
| } |
| ], |
| "section": "Input representation", |
| "sec_num": "3.2." |
| }, |
| { |
| "text": "To obtain a mention representation for an AZP, we compute the average embeddings of the AZP's previous word and next word, and join them together. For every candidate, we calculate the mean of its embeddings, which is then joined with the positional features. We combine the AZP and its candidate representations to form the input to our classifier.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Input representation", |
| "sec_num": "3.2." |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "azp = [a 1 , a 2 ] (9) c = [c k , s, d] (10) input = [azp, c]", |
| "eq_num": "(11)" |
| } |
| ], |
| "section": "Input representation", |
| "sec_num": "3.2." |
| }, |
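Equations (6)-(11) can be sketched with NumPy as follows. The embedding arrays stand in for BERT's per-sub-token outputs (768-dimensional in the real model; the small dimensions in any usage are illustrative only):

```python
import numpy as np

def mean_of_subtokens(subtoken_embeddings):
    # μ: average the sub-token vectors of one word or mention (eq. 6-8).
    return np.mean(subtoken_embeddings, axis=0)

def build_pair_input(prev_word_emb, next_word_emb, cand_emb, s, d):
    # eq. 9-11: azp = [a1, a2], c = [c_k, s, d], input = [azp, c].
    a1 = mean_of_subtokens(prev_word_emb)   # word before the AZP
    a2 = mean_of_subtokens(next_word_emb)   # word after the AZP
    c_k = mean_of_subtokens(cand_emb)       # candidate mention
    azp = np.concatenate([a1, a2])
    c = np.concatenate([c_k, [s, d]])       # append positional features
    return np.concatenate([azp, c])
```

With 768-unit embeddings, the pair input has 3 × 768 + 2 = 2306 dimensions.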
| { |
| "text": "Our classifier consists of multiple multi-layer perceptrons (MLPs) scoring the <azp, candidate> pair \"input\".", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Input representation", |
| "sec_num": "3.2." |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "layer 1 = f (W 1 input + b 1 ) (12) layer 2 = f (W 2 layer 1 + b 2 ) (13) layer 3 = f (W 3 layer 2 + b 3 ) (14) scoring = f (W 4 layer 3 + b 4 )", |
| "eq_num": "(15)" |
| } |
| ], |
| "section": "Input representation", |
| "sec_num": "3.2." |
| }, |
| { |
| "text": "f is the ReLU activation function (Nair and Hinton, 2010). layer 1, layer 2, layer 3, and scoring are the resolver's layers; each has learning parameters W and b. After scoring all candidates, we choose the candidate with the highest coreference score as the correct antecedent for the AZP. The overall architecture of our model and the data representations are shown in Figure 2. In the figure, there is one AZP and two candidates: noun phrase 1 (NP1) and noun phrase 2 (NP2). We feed the sentence into BERT to get the word embeddings. The AZP is represented by the mean of its previous word and next word. Candidates are also represented by the mean of their subtoken embeddings, combined with their positional features. We join each candidate representation with the AZP's. We compute the <AZP, NP1> and <AZP, NP2> scores, which are normalized using the softmax layer.", |
| "cite_spans": [ |
| { |
| "start": 34, |
| "end": 57, |
| "text": "(Nair and Hinton, 2010)", |
| "ref_id": "BIBREF46" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 371, |
| "end": 379, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Input representation", |
| "sec_num": "3.2." |
| }, |
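A sketch of the scoring network in equations (12)-(15), with softmax normalization over candidate scores as described above. The hidden sizes and the Glorot-style uniform initialization are our assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def init_layer(n_in, n_out):
    # Glorot-style uniform initialization (illustrative placeholder).
    limit = np.sqrt(6.0 / (n_in + n_out))
    return rng.uniform(-limit, limit, (n_out, n_in)), np.zeros(n_out)

def score_pair(x, params):
    # eq. 12-15: three ReLU layers followed by a scalar scoring layer.
    h = x
    for W, b in params[:-1]:
        h = relu(W @ h + b)
    W, b = params[-1]
    return float(W @ h + b)

def resolve(azp_candidate_inputs, params):
    # Score every <AZP, candidate> pair, softmax-normalize the scores,
    # and pick the highest-probability candidate as the antecedent.
    scores = np.array([score_pair(x, params) for x in azp_candidate_inputs])
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return int(np.argmax(probs)), probs
```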
| { |
| "text": "For every AZP, we consider as candidate antecedents all maximal and modifier noun phrases (NPs) at most two sentences away, as done by Chen and Ng (2016) and Yin et al. (2017). This strategy results in a high recall of mentions in both Arabic and Chinese.", |
| "cite_spans": [ |
| { |
| "start": 135, |
| "end": 153, |
| "text": "Chen and Ng (2016;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 154, |
| "end": 171, |
| "text": "Yin et al. (2017)", |
| "ref_id": "BIBREF61" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Candidate generation", |
| "sec_num": "3.3." |
| }, |
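The candidate-generation window can be sketched as a simple filter, assuming each NP is given as a hypothetical (sentence index, word index, text) triple already extracted from the gold parse trees:

```python
def generate_candidates(noun_phrases, azp_sent_idx, azp_word_idx):
    """Keep NPs at most two sentences before the AZP's sentence and
    appearing before the AZP itself (sketch; NP extraction from the
    parse trees is assumed to have happened already)."""
    candidates = []
    for sent_idx, word_idx, text in noun_phrases:
        within_window = azp_sent_idx - 2 <= sent_idx <= azp_sent_idx
        precedes_azp = (sent_idx < azp_sent_idx
                        or (sent_idx == azp_sent_idx and word_idx < azp_word_idx))
        if within_window and precedes_azp:
            candidates.append((sent_idx, word_idx, text))
    return candidates
```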
| { |
| "text": "We optimize the hyperparameters based on the development sets. We employ three layers and initialize each one's weights using Glorot and Bengio (2010)'s method. We also add dropout regularization between every two layers. Table 1 shows the settings used. Figure 2: An example of one AZP and two candidates, NP1 and NP2. For every candidate, we calculate its task-specific features find_distance and same_sentence. We compute the average embeddings of each candidate and of the AZP's surrounding words. We form <AZP, NP1> and <AZP, NP2> pairs and feed them into a classifier made of MLPs. The classifier computes their scores, which are then normalized using Softmax. \u2295 is a concatenation operation.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 257, |
| "end": 265, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Hyperparameter tuning", |
| "sec_num": "3.4." |
| }, |
| { |
| "text": "We minimize the cross-entropy error between every AZP and its candidates using:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training objective", |
| "sec_num": "3.5." |
| }, |
| { |
| "text": "J(\u03b8) = \u2212 \u2211_{azp \u2208 T} \u2211_{c \u2208 C} \u03b4(azp, c) log(P(azp, c))", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training objective", |
| "sec_num": "3.5." |
| }, |
| { |
| "text": "\u03b8 = {W 1 , W 2 , W 3 , W 4 , b 1 , b 2 , b 3 , b 4 }", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training objective", |
| "sec_num": "3.5." |
| }, |
| { |
| "text": "\u03b8 denotes the set of learning parameters. T consists of the n training instances of AZPs, and C represents the k candidates of an azp. \u03b4(azp, c) returns whether a candidate c is a correct antecedent of the azp. log(P(azp, c)) is the predicted log probability of the (azp, c) pair.", |
| "cite_spans": [ |
| { |
| "start": 213, |
| "end": 223, |
| "text": "(P(azp, c)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training objective", |
| "sec_num": "3.5." |
| }, |
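The objective for a single AZP can be sketched as follows, assuming the candidate scores are softmax-normalized into P(azp, c); summing the per-AZP losses over the training instances T gives the full J(θ):

```python
import numpy as np

def softmax(scores):
    # Numerically stable softmax over candidate scores.
    e = np.exp(scores - np.max(scores))
    return e / e.sum()

def cross_entropy_loss(candidate_scores, gold_labels):
    # -Σ_c δ(azp, c) · log P(azp, c) for one AZP, where gold_labels
    # plays the role of δ (1 for a correct antecedent, 0 otherwise).
    probs = softmax(np.asarray(candidate_scores, dtype=float))
    labels = np.asarray(gold_labels, dtype=float)
    return float(-(labels * np.log(probs)).sum())
```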
| { |
| "text": "We tested our model on the Arabic and Chinese portions of OntoNotes 5.0, which were used in the official CoNLL-2012 shared task (Pradhan et al., 2012). Gold syntactic parse trees and gold AZP annotations are available for both languages and were used in all experiments. The Chinese training and development sets contain ZPs, but the test set does not. Therefore, we train the model on the training set and use the development set as the test set, as done in prior research (Zhao and Ng, 2007; Chen and Ng, 2015; Chen and Ng, 2016; Yin et al., 2016; Yin et al., 2017; Liu et al., 2017; Yin et al., 2018). We hold out 20% of the training data as a development set. The Arabic training, development, and test sets all have ZPs, and we use each set for its intended purpose. We preprocessed the data by normalizing the letter \"alif\" variants and removing all diacritics. Detailed information about the number of documents, sentences, words, and AZPs can be found in Table 2. The Chinese dataset is larger than the Arabic one; nonetheless, our model succeeds in resolving many Arabic ZPs.", |
| "cite_spans": [ |
| { |
| "start": 132, |
| "end": 154, |
| "text": "(Pradhan et al., 2012)", |
| "ref_id": "BIBREF49" |
| }, |
| { |
| "start": 482, |
| "end": 501, |
| "text": "(Zhao and Ng, 2007;", |
| "ref_id": "BIBREF65" |
| }, |
| { |
| "start": 502, |
| "end": 520, |
| "text": "Chen and Ng, 2015;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 521, |
| "end": 539, |
| "text": "Chen and Ng, 2016;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 540, |
| "end": 557, |
| "text": "Yin et al., 2016;", |
| "ref_id": "BIBREF60" |
| }, |
| { |
| "start": 558, |
| "end": 575, |
| "text": "Yin et al., 2017;", |
| "ref_id": "BIBREF61" |
| }, |
| { |
| "start": 576, |
| "end": 593, |
| "text": "Liu et al., 2017;", |
| "ref_id": "BIBREF44" |
| }, |
| { |
| "start": 594, |
| "end": 611, |
| "text": "Yin et al., 2018)", |
| "ref_id": "BIBREF62" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 959, |
| "end": 966, |
| "text": "Table 2", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Dataset", |
| "sec_num": "4.1." |
| }, |
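The alif normalization and diacritic removal mentioned above can be sketched as follows (illustrative only; the exact character set the authors normalized is not specified in the paper):

```python
import re

# Hypothetical preprocessing sketch: map the common alif variants
# (alif with hamza above/below, alif madda) to bare alif, and strip
# the short-vowel diacritics plus the dagger alif.
ALIF_VARIANTS = re.compile("[\u0622\u0623\u0625]")
DIACRITICS = re.compile("[\u064B-\u0652\u0670]")

def normalize_arabic(text: str) -> str:
    text = ALIF_VARIANTS.sub("\u0627", text)  # -> bare alif
    return DIACRITICS.sub("", text)
```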
| { |
| "text": "We evaluate the results in terms of recall, precision, and F-score, defined as in (Zhao and Ng, 2007): ", |
| "cite_spans": [ |
| { |
| "start": 81, |
| "end": 100, |
| "text": "(Zhao and Ng, 2007)", |
| "ref_id": "BIBREF65" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Metrics", |
| "sec_num": "4.2." |
| }, |
| { |
| "text": "Recall = AZP hits / Number of AZPs in Key", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Metrics", |
| "sec_num": "4.2." |
| }, |
| { |
| "text": "Key represents the true set of AZP entities in the dataset, and Response represents the set of AZPs identified by the model. AZP hits denotes the total number of AZPs correctly resolved to at least one of their antecedents in the gold coreference chain.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Precision = AZP hits / Number of AZPs in Response", |
| "sec_num": null |
| }, |
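These definitions translate directly into code; a small sketch (function and argument names are ours):

```python
def azp_metrics(hits, key_size, response_size):
    """Recall, precision, and F-score as defined in Zhao and Ng (2007).

    hits: AZPs correctly resolved to at least one gold antecedent;
    key_size: number of gold AZPs (Key);
    response_size: number of AZPs the system resolved (Response).
    """
    recall = hits / key_size if key_size else 0.0
    precision = hits / response_size if response_size else 0.0
    denom = precision + recall
    f_score = 2 * precision * recall / denom if denom else 0.0
    return recall, precision, f_score
```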
| { |
| "text": "We compare our results with other published results, and with the results of BERT's two adaptation modes. BERT fine-tuning already has a built-in classification layer on top of the stacked Transformers. The feature extraction mode, in contrast, only produces the learned vectors and requires a separate framework to be trained on top of them. To this end, we implement a bi-attentive neural network trained on the extracted embeddings, and optimize it as done in (Peters et al., 2019), who empirically analyzed the fine-tuning and feature extraction modes for several pretrained models, including BERT. In both modes, we train on AZPs and their antecedents without the proposed additional features.", |
| "cite_spans": [ |
| { |
| "start": 427, |
| "end": 448, |
| "text": "(Peters et al., 2019)", |
| "ref_id": "BIBREF47" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4.3." |
| }, |
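The operational difference between the two modes can be sketched with a toy encoder (the `nn.Linear` module below is a stand-in for illustration; a real setup would load multilingual BERT):

```python
import torch.nn as nn

encoder = nn.Linear(768, 768)   # stand-in for the pretrained Transformer stack
task_head = nn.Linear(768, 1)   # classification head trained in both modes

def set_adaptation_mode(fine_tune: bool) -> None:
    # Feature extraction: encoder weights stay frozen and only the task
    # head learns. Fine-tuning: the encoder is updated as well.
    for p in encoder.parameters():
        p.requires_grad = fine_tune

set_adaptation_mode(False)  # feature-extraction mode
```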
| { |
| "text": "We report our results for Arabic in Table 3. Given that there was no existing ZP resolver for Arabic, we implemented (Chen and Ng, 2016)'s model and used it as a baseline in our experiments, as it features an extensive range of syntactic, positional, and grammatical features which were then used in other systems as well (Yin et al., 2018). However, Table 3 shows that these features did not work well for Arabic. We can think of two likely reasons. First, the Arabic portion of OntoNotes is small, and thus might not have provided enough training data for the learning phase. Second, some of Chen and Ng's features might only apply to Chinese, and might therefore have hurt performance rather than helped. Also, (Chen and Ng, 2016)'s model lacked morphological features because Chinese morphology is considered relatively simple. In contrast, Arabic morphology is highly derivational and inflectional, and very important for resolving ZPs: Arabic ZPs are preceded by verbs, and verbs encode information about gender, person, and number, so the contexts of ZPs and their antecedents share similar morphological characteristics.", |
| "cite_spans": [ |
| { |
| "start": 118, |
| "end": 137, |
| "text": "(Chen and Ng, 2016)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 324, |
| "end": 342, |
| "text": "(Yin et al., 2018)", |
| "ref_id": "BIBREF62" |
| }, |
| { |
| "start": 727, |
| "end": 746, |
| "text": "(Chen and Ng, 2016)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 36, |
| "end": 43, |
| "text": "Table 3", |
| "ref_id": null |
| }, |
| { |
| "start": 354, |
| "end": 361, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Arabic", |
| "sec_num": "4.3.1." |
| }, |
| { |
| "text": "Model / Recall / Precision / F-score: (Chen and Ng, 2016) 8.1 / 10.1 / 8.9; BERT (feature extraction) 47.9 / 59.5 / 53.1; BERT (fine-tuning) 50.3 / 62.5 / 55.8; Our Model 51.8 / 64.4 / 57.4. Table 3: Arabic AZP results.", |
| "cite_spans": [ |
| { |
| "start": 25, |
| "end": 44, |
| "text": "(Chen and Ng, 2016)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 158, |
| "end": 165, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Model", |
| "sec_num": null |
| }, |
| { |
| "text": "Interestingly, BERT seems capable of modeling these morphological connections and correctly resolving many AZPs. BERT's feature extraction and fine-tuning modes produce F-scores of 53.1% and 55.8% respectively. Our model outperforms both BERT modes and achieves an F-score of 57.4%. The incorporated features thus appear to help, with an increase of 1.6% over fine-tuning and 4.3% over feature extraction. These findings suggest that while BERT learns many details of a language, it may still need additional information to achieve optimal performance.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model", |
| "sec_num": null |
| }, |
| { |
| "text": "Our experimental results for Chinese can be seen in Table 4. The Chinese dataset consists of 6 categories: Broadcast News (BN), Newswire (NW), Broadcast Conversations (BC), Telephone Conversations (TC), Web Blogs (WB), and Magazines (MZ). The state-of-the-art, attention-based model of Yin et al. (2018) performs better than the others in all categories except TC. The TC category contains many short sentences; perhaps Yin et al.'s model struggles with short inputs. Our model achieves the best overall F-score of 63.5%, outperforming all prior models in every category except NW. Specifically, our approach outperforms the current state-of-the-art F-scores by 1.9% (MZ), 7.4% (WB), 10% (BN), 3.2% (BC), and 8.9% (TC). The feature extraction and fine-tuning modes reach 60.4% and 62.1% respectively, so fine-tuning yields a 1.7% gain over feature extraction. Our model outperforms both BERT modes, improving on feature extraction by 3.1% and on fine-tuning by 1.4%. The Chinese results (and the Arabic ones as well) imply that although fine-tuning can improve ZP resolution, defining additional task-related features on top of BERT's feature extraction mode can enhance AZP resolution further.", |
| "cite_spans": [ |
| { |
| "start": 297, |
| "end": 314, |
| "text": "Yin et al. (2018)", |
| "ref_id": "BIBREF62" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 52, |
| "end": 59, |
| "text": "Table 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Chinese", |
| "sec_num": "4.3.2." |
| }, |
| { |
| "text": "Other versions of BERT were pretrained specifically for English and Chinese. According to the BERT authors' GitHub page 5, Chinese-only BERT performs better than multilingual BERT on Chinese texts in some NLP tasks. It might therefore also improve the results we obtain for Chinese, although adopting that model would of course defeat the purpose of developing a cross-lingual model.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Chinese", |
| "sec_num": "4.3.2." |
| }, |
| { |
| "text": "Numerous studies show that BERT layers encode rich information about language structure (Jawahar et al., 2019; Kovaleva et al., 2019; Aken et al., 2019; Goldberg, 2019; Hewitt and Manning, 2019). For a specific NLP task, some layers may carry more useful information than others; in fact, layers that contain only indirect information may not lead to optimal performance. Therefore, it is important to investigate the internal layers and find the most transferable representation. We examined every BERT layer's weights for our model, and report their behaviour on Arabic and Chinese in Figure 3. We can see that higher layers produce better F-scores than the lower ones. ZP contexts and true candidates usually share similar morphological characteristics and semantic relationships, and the higher layers seem to carry such information. [Table 4 (Chinese F-scores by category; columns NW (84), MZ (162), WB (284), BN (390), BC (510), TC (283), Overall): (Zhao and Ng, 2007) 40.5 / 28.4 / 40.1 / 43.1 / 44.7 / 42.8 / 41.5; (Chen and Ng, 2015) 46.4 / 39.0 / 51.8 / 53.8 / 49.4 / 52.7 / 50.2; (Chen and Ng, 2016) 48.8 / 41.5 / 56.3 / 55.4 / 50.8 / 53.1 / 52.2; (Yin et al., 2016) 50.0 / 45.0 / 55.9 / 53.3 / 55.3 / 54.4 / 53.6; (Yin et al., 2017) 48.8 / 46.3 / 59.8 / 58.4 / 43.2 / 54.8 / 54.9; (Liu et al., 2017) 59.2 / 51.3 / 60.5 / 53.9 / 55.5 / 52.9 / 55.3; (Yin et al., 2018) 64 ...]", |
| "cite_spans": [ |
| { |
| "start": 88, |
| "end": 110, |
| "text": "(Jawahar et al., 2019;", |
| "ref_id": "BIBREF36" |
| }, |
| { |
| "start": 111, |
| "end": 133, |
| "text": "Kovaleva et al., 2019;", |
| "ref_id": "BIBREF41" |
| }, |
| { |
| "start": 134, |
| "end": 152, |
| "text": "Aken et al., 2019;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 153, |
| "end": 168, |
| "text": "Goldberg, 2019;", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 169, |
| "end": 194, |
| "text": "Hewitt and Manning, 2019)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 405, |
| "end": 424, |
| "text": "(Zhao and Ng, 2007)", |
| "ref_id": "BIBREF65" |
| }, |
| { |
| "start": 460, |
| "end": 479, |
| "text": "(Chen and Ng, 2015)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 515, |
| "end": 534, |
| "text": "(Chen and Ng, 2016)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 570, |
| "end": 588, |
| "text": "(Yin et al., 2016)", |
| "ref_id": "BIBREF60" |
| }, |
| { |
| "start": 624, |
| "end": 642, |
| "text": "(Yin et al., 2017)", |
| "ref_id": "BIBREF61" |
| }, |
| { |
| "start": 678, |
| "end": 696, |
| "text": "(Liu et al., 2017)", |
| "ref_id": "BIBREF44" |
| }, |
| { |
| "start": 732, |
| "end": 750, |
| "text": "(Yin et al., 2018)", |
| "ref_id": "BIBREF62" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 997, |
| "end": 1005, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "BERT Layers", |
| "sec_num": "4.3.3." |
| }, |
| { |
| "text": "Therefore, the layers in the upper half tend to be more relevant to our task than the layers in the lower half. Generally, F-scores increase as we employ higher layers, except for the last two. Their slight drop in F-scores might be attributed to BERT's training objectives: BERT was trained on masked language modeling (MLM) and next sentence prediction (NSP), and since we use BERT's feature extraction mode, the last layers remain optimized for these pretraining tasks. Even though MLM and NSP helped the BERT model learn linguistic aspects in the internal and middle layers, they might have made the last layers biased and specific to their objectives. The third-to-last (10th) and fourth-to-last (9th) layers achieve almost equally high F-scores on Arabic and Chinese, but we find the third-to-last to provide more stable states. In our model, we set the third-to-last layer as the base to produce embeddings for AZPs and their candidates. We also tried combinations of layers to see if they can produce better representations for the task. Table 5 reports the F-scores of the first, last, and third-to-last layers. We compare them with two more settings: the weighted sum of the last 4 layers, and of all 12 layers. The weighted sum of the last 4 layers results in 63.1% for Chinese and 55.2% for Arabic; compared to the corresponding third-to-last F-scores, the Chinese score decreases by only 0.4% and the Arabic one by 2.2%. When we calculate the mean of all 12 layers, we get 62.4% and 53.1% for Chinese and Arabic respectively, a drop of 1.1% for Chinese and 4.7% for Arabic. The weighted sum of multiple layers thus did not seem to improve the ZP resolution task. In both settings, Arabic appears more sensitive when several layers are involved: Arabic morphology is complex, and BERT might encode its morpheme interactions in particular layers, so some of these interactions might get lost when multiple layers are summed. Table 5: F-score results when different BERT layer(s) are used for token representations.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 1058, |
| "end": 1065, |
| "text": "Table 5", |
| "ref_id": null |
| }, |
| { |
| "start": 1949, |
| "end": 1956, |
| "text": "Table 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "BERT Layers", |
| "sec_num": "4.3.3." |
| }, |
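Selecting the third-to-last layer as described above amounts to simple indexing into the per-layer outputs. A toy sketch (the list of strings stands in for the tuple a BERT-style model returns with output_hidden_states enabled: the embedding output followed by one tensor per layer):

```python
def third_to_last(hidden_states):
    # hidden_states ~ [embeddings, layer1, ..., layer12]; index -3
    # picks layer 10 of 12, the paper's most stable choice.
    return hidden_states[-3]

# Toy stand-in for the 13 outputs of a 12-layer BERT.
states = ["embeddings"] + [f"layer{i}" for i in range(1, 13)]
```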
| { |
| "text": "Figure 3: Arabic and Chinese F-scores when each BERT layer is used to produce mention embeddings. Overall, higher layers produce better representations than lower ones. The 10th layer led to the highest F-scores in both languages.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "BERT Layers", |
| "sec_num": "4.3.3." |
| }, |
| { |
| "text": "We presented a cross-lingual model for zero pronoun resolution based on BERT, and evaluated it on the Arabic and Chinese portions of OntoNotes 5.0. Our model is the first to specifically focus on Arabic ZPs, and it also outperforms state-of-the-art results for Chinese. In addition, our model achieved better results than BERT's fine-tuning and feature extraction modes. We showed that adding positional features to BERT's learned representations can improve ZP resolution. We also examined BERT's layers, and reported our observations on which layer is most suitable for the task. In the future, we plan to develop a ZP identification system, and to evaluate our proposed model on more languages and with other global features.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "5." |
| }, |
| { |
| "text": "https://storage.googleapis.com/bert_models/2018_11_23/multi_cased_L-12_H-768_A-12.zip", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "In our experiments, the tokenizer segmented many Arabic words into several sub-tokens, but rarely did so for Chinese.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://github.com/google-research/bert/blob/master/multilingual.md", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We would like to thank the anonymous reviewers for their insightful comments and suggestions which helped to improve the quality of the paper.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgements", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "How does bert answer questions? a layer-wise analysis of transformer representations", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Aken", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Winter", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "L\u00f6ser", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [ |
| "A" |
| ], |
| "last": "Gers", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXivpreprintarXiv:1908.08593" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Aken, B., Winter, B., L\u00f6ser, A., and Gers, F. A. (2019). How does bert answer questions? a layer-wise anal- ysis of transformer representations. In arXiv preprint arXiv:1908.08593.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Pro-drop in standard arabic", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [ |
| "M" |
| ], |
| "last": "Alnajadat", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "In International Journal of English Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alnajadat, B. M. (2017). Pro-drop in standard arabic. In International Journal of English Linguistics 7.1.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Evaluating automated and manual acquisition of anaphora resolution strategies", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Aone", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [ |
| "W" |
| ], |
| "last": "Bennett", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "ACL '95 Proceedings of the 33rd annual meeting on Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "122--129", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Aone, C. and Bennett, S. W. (1995). Evaluating automated and manual acquisition of anaphora resolution strate- gies. In ACL '95 Proceedings of the 33rd annual meet- ing on Association for Computational Linguistics, pages 122-129.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Learning structured perceptrons for coreference resolution with latent antecedents and non-local features", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Bj\u00f6rkelund", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Kuhn", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "47--57", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bj\u00f6rkelund, A. and Kuhn, J. (2014). Learning structured perceptrons for coreference resolution with latent an- tecedents and non-local features. In Proceedings of the 52nd Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 47-57.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Resolving zero anaphors and pronouns in korean", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [ |
| "K" |
| ], |
| "last": "Byron", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Gegg-Harrison", |
| "suffix": "" |
| }, |
| { |
| "first": "S.-H", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Traitement Automatique des Langues", |
| "volume": "46", |
| "issue": "", |
| "pages": "91--114", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Byron, D. K., Gegg-Harrison, W., and Lee, S.-H. (2006). Resolving zero anaphors and pronouns in korean. In Traitement Automatique des Langues 46.1, pages 91-114.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Combining the best of two worlds: A hybrid approach to multilingual coreference resolution", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Ng", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Joint Conference on EMNLP and CoNLL-Shared Task", |
| "volume": "", |
| "issue": "", |
| "pages": "56--63", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chen, C. and Ng, V. (2012). Combining the best of two worlds: A hybrid approach to multilingual coreference resolution. In Joint Conference on EMNLP and CoNLL- Shared Task, pages 56-63.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Chinese zero pronoun resolution: Some recent advances", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Ng", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 2013 conference on empirical methods in natural language processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1360--1365", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chen, C. and Ng, V. (2013). Chinese zero pronoun res- olution: Some recent advances. In Proceedings of the 2013 conference on empirical methods in natural lan- guage processing, pages 1360-1365.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Chinese zero pronoun resolution: An unsupervised probabilistic model rivaling supervised resolvers", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Ng", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chen, C. and Ng, V. (2014). Chinese zero pronoun resolu- tion: An unsupervised probabilistic model rivaling super- vised resolvers. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Chinese zero pronoun resolution: A joint unsupervised discourse-aware model rivaling state-of-the-art resolvers", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Ng", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
| "volume": "2", |
| "issue": "", |
| "pages": "320--326", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chen, C. and Ng, V. (2015). Chinese zero pronoun reso- lution: A joint unsupervised discourse-aware model ri- valing state-of-the-art resolvers. In Proceedings of the 53rd Annual Meeting of the Association for Computa- tional Linguistics and the 7th International Joint Confer- ence on Natural Language Processing (Volume 2: Short Papers), pages 320-326.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Chinese zero pronoun resolution with deep neural network", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Ng", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "778--788", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chen, C. and Ng, V. (2016). Chinese zero pronoun resolu- tion with deep neural network. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 778-788.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Pronominal anaphora resolution in chinese", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Converse", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Converse, S. (2006). Pronominal anaphora resolution in chinese. In PhD Thesis, University of Pennsylvania.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Joint determination of anaphoricity and coreference resolution using integer programming", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Denis", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Baldridge", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "236--243", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Denis, P. and Baldridge, J. (2007). Joint determination of anaphoricity and coreference resolution using integer programming. In Human Language Technologies 2007: The Conference of the North American Chapter of the As- sociation for Computational Linguistics; Proceedings of the Main Conference, pages 236-243.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Devlin", |
| "suffix": "" |
| }, |
| { |
| "first": "M.-W", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Toutanova", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXivpreprintarXiv:1810.04805" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2018). Bert: Pre-training of deep bidirectional trans- formers for language understanding. In arXiv preprint arXiv:1810.04805.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Centering theory and the italian pronominal system", |
| "authors": [ |
| { |
| "first": "Di", |
| "middle": [], |
| "last": "Eugenio", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "Proc. of the 13th COLING", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Di Eugenio, B. (1990). Centering theory and the ital- ian pronominal system. In Proc. of the 13th COLING, Helsinki, Finland.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "On the communicative function of subject pronouns in arabic", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Eid", |
| "suffix": "" |
| } |
| ], |
| "year": 1983, |
| "venue": "Journal of Linguistics 19", |
| "volume": "2", |
| "issue": "", |
| "pages": "287--303", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eid, M. (1983). On the communicative function of subject pronouns in arabic. In Journal of Linguistics 19.2, pages 287-303.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Latent trees for coreference resolution", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [ |
| "R" |
| ], |
| "last": "Fernandes", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [ |
| "N" |
| ], |
| "last": "Santos", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Milidi\u00fa", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Computational Linguistics", |
| "volume": "40", |
| "issue": "", |
| "pages": "801--835", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fernandes, E. R., dos Santos, C. N., and Milidi\u00fa, R. (2014). Latent trees for coreference resolution. In Computational Linguistics, 40(4), pages 801-835.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "A computational approach to zero-pronouns in spanish", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Ferr\u00e1ndez", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Peral", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "166--172", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ferr\u00e1ndez, A. and Peral, J. (2000). A computational ap- proach to zero-pronouns in spanish. In Proceedings of the 38th Annual Meeting of the Association for Compu- tational Linguistics, pages 166-172.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Null element restoration", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Gabbard", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gabbard, R. (2010). Null element restoration. In Ph.D Thesis, University of Pennsylvania.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Understanding the difficulty of training deep feedforward neural networks", |
| "authors": [ |
| { |
| "first": "X", |
| "middle": [], |
| "last": "Glorot", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the thirteenth international conference on artificial intelligence and statistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Glorot, X. and Bengio, Y. (2010). Understanding the dif- ficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Assessing bert's syntactic abilities", |
| "authors": [ |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Goldberg", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1901.05287" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Goldberg, Y. (2019). Assessing bert's syntactic abilities. In arXiv preprint arXiv:1901.05287.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Zero pronouns and their resolution in sanskrit texts", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Gopal", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [ |
| "N" |
| ], |
| "last": "Jha", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "The International Symposium on Intelligent Systems Technologies and Application", |
| "volume": "", |
| "issue": "", |
| "pages": "255--267", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gopal, M. and Jha, G. N. (2017). Zero pronouns and their resolution in sanskrit texts. In The International Sympo- sium on Intelligent Systems Technologies and Applica- tion, pages 255-267.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Np subject detection in verb-initial arabic clauses", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Green", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Sathi", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the Third Workshop on Computational Approaches to Arabic Script-based Languages (CAASL3)", |
| "volume": "112", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Green, S., Sathi, C., and Manning, C. (2009). Np subject detection in verb-initial arabic clauses. In Proceedings of the Third Workshop on Computational Approaches to Arabic Script-based Languages (CAASL3). Vol. 112.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "An algorithm for zero pronoun resolution in bulgarian", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Grigorova", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 14th International Conference on Computer Systems and Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Grigorova, D. (2013). An algorithm for zero pronoun resolution in bulgarian. In Proceedings of the 14th International Conference on Computer Systems and Technologies.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Centering: A framework for modeling the local coherence of discourse", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Grosz", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Joshi", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Weinstein", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Computational linguistics", |
| "volume": "21", |
| "issue": "", |
| "pages": "203--225", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Grosz, B., Joshi, A., and Weinstein, S. (1995). Centering: A framework for modeling the local coherence of discourse. In Computational linguistics 21, no. 2, pages 203-225.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Korean null pronouns: Classification and annotation", |
| "authors": [ |
| { |
| "first": "N.-R", |
| "middle": [], |
| "last": "Han", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of the 2004 ACL Workshop on Discourse Annotation. Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "33--40", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Han, N.-R. (2004). Korean null pronouns: Classification and annotation. In Proceedings of the 2004 ACL Workshop on Discourse Annotation. Association for Computational Linguistics, pages 33-40.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Japanese zero reference resolution considering exophora and author/reader mentions", |
| "authors": [], |
| "year": null, |
| "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "924--934", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Japanese zero reference resolution considering exophora and author/reader mentions. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 924-934.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "A structural probe for finding syntax in word representations", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Hewitt", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "1", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hewitt, J. and Manning, C. (2019). A structural probe for finding syntax in word representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers).", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Resolving pronoun references", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Hobbs", |
| "suffix": "" |
| } |
| ], |
| "year": 1978, |
| "venue": "Lingua", |
| "volume": "", |
| "issue": "", |
| "pages": "311--338", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hobbs, J. (1978). Resolving pronoun references. In Lingua, pages 311-338.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "On the distribution and reference of empty pronouns", |
| "authors": [ |
| { |
| "first": "C.-T", |
| "middle": [ |
| "J" |
| ], |
| "last": "Huang", |
| "suffix": "" |
| } |
| ], |
| "year": 1984, |
| "venue": "Linguistic Inquiry", |
| "volume": "15", |
| "issue": "", |
| "pages": "531--574", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Huang, C.-T. J. (1984). On the distribution and reference of empty pronouns. In Linguistic Inquiry, Vol. 15, No. 4, pages 531-574.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "A cross-lingual ilp solution to zero anaphora resolution", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Iida", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Poesio", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "1", |
| "issue": "", |
| "pages": "804--813", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Iida, R. and Poesio, M. (2011). A cross-lingual ilp solution to zero anaphora resolution. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 804-813.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Exploiting syntactic patterns as clues in zero-anaphora resolution", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Iida", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Inui", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Matsumoto", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistic", |
| "volume": "", |
| "issue": "", |
| "pages": "625--632", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Iida, R., Inui, K., and Matsumoto, Y. (2006). Exploiting syntactic patterns as clues in zero-anaphora resolution. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 625-632.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Zero-anaphora resolution by learning rich syntactic pattern features", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Iida", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Inui", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Matsumoto", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "In ACM Transactions on Asian Language Information Processing", |
| "volume": "6", |
| "issue": "4", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Iida, R., Inui, K., and Matsumoto, Y. (2007). Zero-anaphora resolution by learning rich syntactic pattern features. In ACM Transactions on Asian Language Information Processing, 6(4).", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "Intra-sentential zero anaphora resolution using subject sharing recognition", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Iida", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Torisawa", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Hashimoto", |
| "suffix": "" |
| }, |
| { |
| "first": "J.-H", |
| "middle": [], |
| "last": "Oh", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Kloetzer", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "2179--2189", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Iida, R., Torisawa, K., Hashimoto, C., Oh, J.-H., and Kloetzer, J. (2015). Intra-sentential zero anaphora resolution using subject sharing recognition. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2179-2189.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "Discriminative approach to predicate argument structure analysis with zero-anaphora resolution", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Imamura", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Saito", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Izumi", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the ACL-IJCNLP 2009 Conference Short Papers", |
| "volume": "", |
| "issue": "", |
| "pages": "85--88", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Imamura, K., Saito, K., and Izumi, T. (2009). Discriminative approach to predicate argument structure analysis with zero-anaphora resolution. In Proceedings of the ACL-IJCNLP 2009 Conference Short Papers, pages 85-88.", |
| "links": null |
| }, |
| "BIBREF35": { |
| "ref_id": "b35", |
| "title": "Japanese zero pronoun resolution based on ranking rules and machine learning", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Isozaki", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Hirao", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "184--191", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Isozaki, H. and Hirao, T. (2003). Japanese zero pronoun resolution based on ranking rules and machine learning. In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing, pages 184-191.", |
| "links": null |
| }, |
| "BIBREF36": { |
| "ref_id": "b36", |
| "title": "What does bert learn about the structure of language?", |
| "authors": [ |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Jawahar", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Sagot", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Seddah", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "57th Annual Meeting of the Association for Computational Linguistics (ACL)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jawahar, G., Sagot, B., and Seddah, D. (2019). What does bert learn about the structure of language? In 57th Annual Meeting of the Association for Computational Linguistics (ACL), Florence, Italy.", |
| "links": null |
| }, |
| "BIBREF37": { |
| "ref_id": "b37", |
| "title": "Zero Anaphora: The case of Japanese", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Kameyama", |
| "suffix": "" |
| } |
| ], |
| "year": 1985, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kameyama, M. (1985). Zero Anaphora: The case of Japanese. Ph.D. thesis, Stanford University, Stanford, CA.", |
| "links": null |
| }, |
| "BIBREF38": { |
| "ref_id": "b38", |
| "title": "Zero-subject resolution method based on probabilistic inference with evaluation function", |
| "authors": [ |
| { |
| "first": "Y.-B", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Ehara", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Proceedings of the 3rd Natural Language Processing Pacific-Rim Symposium", |
| "volume": "", |
| "issue": "", |
| "pages": "721--727", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kim, Y.-B. and Ehara, T. (1995). Zero-subject resolution method based on probabilistic inference with evaluation function. In Proceedings of the 3rd Natural Language Processing Pacific-Rim Symposium, pages 721-727.", |
| "links": null |
| }, |
| "BIBREF39": { |
| "ref_id": "b39", |
| "title": "Subject/object drop in the acquisition of korean: A cross-linguistic comparison", |
| "authors": [ |
| { |
| "first": "Y.-J", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Journal of East Asian Linguistics 9", |
| "volume": "4", |
| "issue": "", |
| "pages": "325--351", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kim, Y.-J. (2000). Subject/object drop in the acquisition of korean: A cross-linguistic comparison. In Journal of East Asian Linguistics 9.4, pages 325-351.", |
| "links": null |
| }, |
| "BIBREF40": { |
| "ref_id": "b40", |
| "title": "A tree kernel-based unified framework for chinese zero anaphora resolution", |
| "authors": [ |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Kong", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "882--891", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kong, F. and Zhou, G. (2010). A tree kernel-based unified framework for chinese zero anaphora resolution. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 882-891.", |
| "links": null |
| }, |
| "BIBREF41": { |
| "ref_id": "b41", |
| "title": "Revealing the dark secrets of bert", |
| "authors": [ |
| { |
| "first": "O", |
| "middle": [], |
| "last": "Kovaleva", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Romanov", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Rogers", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Rumshisky", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1908.08593" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kovaleva, O., Romanov, A., Rogers, A., and Rumshisky, A. (2019). Revealing the dark secrets of bert. In arXiv preprint arXiv:1908.08593.", |
| "links": null |
| }, |
| "BIBREF42": { |
| "ref_id": "b42", |
| "title": "Stanford's multi-pass sieve coreference resolution system at the conll-2011 shared task", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Peirsman", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Chambers", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Surdeanu", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Jurafsky", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "CONLL Shared Task '11 Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task", |
| "volume": "", |
| "issue": "", |
| "pages": "28--34", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lee, H., Peirsman, Y., Chang, A., Chambers, N., Surdeanu, M., and Jurafsky, D. (2011). Stanford's multi-pass sieve coreference resolution system at the conll-2011 shared task. In CONLL Shared Task '11 Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task, pages 28-34.", |
| "links": null |
| }, |
| "BIBREF43": { |
| "ref_id": "b43", |
| "title": "Third person pronouns and zero anaphora in chinese discourse", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [ |
| "N" |
| ], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [ |
| "A" |
| ], |
| "last": "Thompson", |
| "suffix": "" |
| } |
| ], |
| "year": 1979, |
| "venue": "Syntax and Semantics", |
| "volume": "12", |
| "issue": "", |
| "pages": "311--335", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Li, C. N. and Thompson, S. A. (1979). Third person pronouns and zero anaphora in chinese discourse. In Syntax and Semantics, volume 12: Discourse and Syntax, pages 311-335. Academic Press.", |
| "links": null |
| }, |
| "BIBREF44": { |
| "ref_id": "b44", |
| "title": "Generating and exploiting large-scale pseudo training data for zero pronoun resolution", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Cui", |
| "suffix": "" |
| }, |
| { |
| "first": "Q", |
| "middle": [], |
| "last": "Yin", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Hu", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1606.01603" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Liu, T., Cui, Y., Yin, Q., Zhang, W., Wang, S., and Hu, G. (2017). Generating and exploiting large-scale pseudo training data for zero pronoun resolution. In arXiv preprint arXiv:1606.01603.", |
| "links": null |
| }, |
| "BIBREF45": { |
| "ref_id": "b45", |
| "title": "Zero pronominal anaphora resolution for the romanian language", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Mih\u0103il\u0103", |
| "suffix": "" |
| }, |
| { |
| "first": "I", |
| "middle": [], |
| "last": "Ilisei", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Inkpen", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "In Research Journal on Computer Science and Computer Engineering with Applications", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mih\u0103il\u0103, C., Ilisei, I., and Inkpen, D. (2011). Zero pronominal anaphora resolution for the romanian language. In Research Journal on Computer Science and Computer Engineering with Applications, POLIBITS, 42.", |
| "links": null |
| }, |
| "BIBREF46": { |
| "ref_id": "b46", |
| "title": "Rectified linear units improve restricted boltzmann machines", |
| "authors": [ |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Nair", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Hinton", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 27th international conference on machine learning (ICML-10)", |
| "volume": "", |
| "issue": "", |
| "pages": "807--814", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nair, V. and Hinton, G. (2010). Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th international conference on machine learning (ICML-10), pages 807-814.", |
| "links": null |
| }, |
| "BIBREF47": { |
| "ref_id": "b47", |
| "title": "To tune or not to tune? adapting pretrained representations to diverse tasks", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Peters", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Ruder", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Smith", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1903.05987" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Peters, M., Ruder, S., and Smith, N. (2019). To tune or not to tune? adapting pretrained representations to diverse tasks. In arXiv preprint arXiv:1903.05987.", |
| "links": null |
| }, |
| "BIBREF48": { |
| "ref_id": "b48", |
| "title": "Creating a coreference resolution system for italian", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Poesio", |
| "suffix": "" |
| }, |
| { |
| "first": "O", |
| "middle": [], |
| "last": "Uryupina", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Versley", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "LREC", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Poesio, M., Uryupina, O., and Versley, Y. (2010). Creating a coreference resolution system for italian. In LREC 2010.", |
| "links": null |
| }, |
| "BIBREF49": { |
| "ref_id": "b49", |
| "title": "Conll-2012 shared task: Modeling multilingual unrestricted coreference in ontonotes", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Pradhan", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Moschitti", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Xue", |
| "suffix": "" |
| }, |
| { |
| "first": "O", |
| "middle": [], |
| "last": "Uryupina", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Joint Conference on EMNLP and CoNLL-Shared Task", |
| "volume": "", |
| "issue": "", |
| "pages": "1--40", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pradhan, S., Moschitti, A., Xue, N., Uryupina, O., and Zhang, Y. (2012). Conll-2012 shared task: Modeling multilingual unrestricted coreference in ontonotes. In Joint Conference on EMNLP and CoNLL-Shared Task. Association for Computational Linguistics, pages 1-40.", |
| "links": null |
| }, |
| "BIBREF50": { |
| "ref_id": "b50", |
| "title": "Discriminative approach to japanese zero anaphora resolution with large-scale lexicalized case frames", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Sasano", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Kurohashi", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 5th International Joint Conference on Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "758--766", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sasano, R. and Kurohashi, S. (2011). Discriminative approach to japanese zero anaphora resolution with large-scale lexicalized case frames. In Proceedings of the 5th International Joint Conference on Natural Language Processing, pages 758-766.", |
| "links": null |
| }, |
| "BIBREF51": { |
| "ref_id": "b51", |
| "title": "A fully-lexicalized probabilistic model for japanese zero anaphora resolution", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Sasano", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Kawahara", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Kurohashi", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the 22nd International Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "769--776", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sasano, R., Kawahara, D., and Kurohashi, S. (2008). A fully-lexicalized probabilistic model for japanese zero anaphora resolution. In Proceedings of the 22nd International Conference on Computational Linguistics, pages 769-776.", |
| "links": null |
| }, |
| "BIBREF52": { |
| "ref_id": "b52", |
| "title": "The effect of corpus size on case frame acquisition for discourse analysis", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Sasano", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Kawahara", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Kurohashi", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "521--529", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sasano, R., Kawahara, D., and Kurohashi, S. (2009). The effect of corpus size on case frame acquisition for discourse analysis. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 521-529.", |
| "links": null |
| }, |
| "BIBREF53": { |
| "ref_id": "b53", |
| "title": "A probabilistic method for analyzing japanese anaphora integrating zero pronoun detection and resolution", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Seki", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Fujii", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Ishikawa", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the 19th International Conference on Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "1--7", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Seki, K., Fujii, A., and Ishikawa, T. (2002). A probabilistic method for analyzing japanese anaphora integrating zero pronoun detection and resolution. In Proceedings of the 19th International Conference on Computational Linguistics - Volume 1, pages 1-7.", |
| "links": null |
| }, |
| "BIBREF54": { |
| "ref_id": "b54", |
| "title": "A japanese predicate argument structure analysis using decision lists", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Taira", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Fujita", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Nagata", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "523--532", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Taira, H., Fujita, S., and Nagata, M. (2008). A japanese predicate argument structure analysis using decision lists. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 523-532.", |
| "links": null |
| }, |
| "BIBREF56": { |
| "ref_id": "b56", |
| "title": "A structured model for joint learning of argument roles and predicate senses", |
| "authors": [ |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Watanabe", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Asahara", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Matsumoto", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the ACL 2010 Conference Short Papers", |
| "volume": "", |
| "issue": "", |
| "pages": "98--101", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Watanabe, Y., Asahara, M., and Matsumoto, Y. (2010). A structured model for joint learning of argument roles and predicate senses. In Proceedings of the ACL 2010 Conference Short Papers, pages 98-101.", |
| "links": null |
| }, |
| "BIBREF57": { |
| "ref_id": "b57", |
| "title": "Google's neural machine translation system: Bridging the gap between human and machine translation", |
| "authors": [ |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Schuster", |
| "suffix": "" |
| }, |
| { |
| "first": "Z", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Q", |
| "middle": [ |
| "V" |
| ], |
| "last": "Le", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Norouzi", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Macherey", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Krikun", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Cao", |
| "suffix": "" |
| }, |
| { |
| "first": "Q", |
| "middle": [], |
| "last": "Gao", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Macherey", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Klingner", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Shah", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| }, |
| { |
| "first": "X", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "\u0141ukasz", |
| "middle": [], |
| "last": "Kaiser", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Gouws", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Kato", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Kudo", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Kazawa", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Stevens", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Kurian", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Patil", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Young", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Smith", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Riesa", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Rudnick", |
| "suffix": "" |
| }, |
| { |
| "first": "O", |
| "middle": [], |
| "last": "Vinyals", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Corrado", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Hughes", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Dean", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1609.08144" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wu, Y., Schuster, M., Chen, Z., Le, Q. V., Norouzi, M., Macherey, W., Krikun, M., Cao, Y., Gao, Q., Macherey, K., Klingner, J., Shah, A., Johnson, M., Liu, X., \u0141ukasz Kaiser, Gouws, S., Kato, Y., Kudo, T., Kazawa, H., Stevens, K., Kurian, G., Patil, N., Wang, W., Young, C., Smith, J., Riesa, J., Rudnick, A., Vinyals, O., Corrado, G., Hughes, M., and Dean, J. (2016). Google's neural machine translation system: Bridging the gap between human and machine translation. In arXiv preprint arXiv:1609.08144.", |
| "links": null |
| }, |
| "BIBREF58": { |
| "ref_id": "b58", |
| "title": "Neural Japanese zero anaphora resolution using smoothed large-scale case frames with word embedding", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Yamashiro", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Nishikawa", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Tokunaga", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 32nd Pacific Asia Conference on Language, Information and Computation", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yamashiro, S., Nishikawa, H., and Tokunaga, T. (2018). Neural Japanese zero anaphora resolution using smoothed large-scale case frames with word embedding. In Proceedings of the 32nd Pacific Asia Conference on Language, Information and Computation.", |
| "links": null |
| }, |
| "BIBREF59": { |
| "ref_id": "b59", |
| "title": "Zero anaphora resolution in Chinese with shallow parsing", |
| "authors": [ |
| { |
| "first": "C.-L", |
| "middle": [], |
| "last": "Yeh", |
| "suffix": "" |
| }, |
| { |
| "first": "Y.-C", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Journal of Chinese Language and Computing", |
| "volume": "17", |
| "issue": "1", |
| "pages": "41--56", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yeh, C.-L. and Chen, Y.-C. (2006). Zero anaphora resolution in Chinese with shallow parsing. In Journal of Chinese Language and Computing 17 (1), pages 41-56.", |
| "links": null |
| }, |
| "BIBREF60": { |
| "ref_id": "b60", |
| "title": "A deep neural network for Chinese zero pronoun resolution", |
| "authors": [ |
| { |
| "first": "Q", |
| "middle": [], |
| "last": "Yin", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1604.05800" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yin, Q., Zhang, Y., Zhang, W., and Liu, T. (2016). A deep neural network for Chinese zero pronoun resolution. In arXiv preprint arXiv:1604.05800.", |
| "links": null |
| }, |
| "BIBREF61": { |
| "ref_id": "b61", |
| "title": "Chinese zero pronoun resolution with deep memory network", |
| "authors": [ |
| { |
| "first": "Q", |
| "middle": [], |
| "last": "Yin", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1309--1318", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yin, Q., Zhang, Y., Zhang, W., and Liu, T. (2017). Chinese zero pronoun resolution with deep memory network. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1309-1318.", |
| "links": null |
| }, |
| "BIBREF62": { |
| "ref_id": "b62", |
| "title": "Zero pronoun resolution with attention-based neural network", |
| "authors": [ |
| { |
| "first": "Q", |
| "middle": [], |
| "last": "Yin", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [ |
| "Y" |
| ], |
| "last": "Wang", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 27th International Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "13--23", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yin, Q., Zhang, Y., Zhang, W., Liu, T., and Wang, W. Y. (2018). Zero pronoun resolution with attention-based neural network. In Proceedings of the 27th International Conference on Computational Linguistics, pages 13-23.", |
| "links": null |
| }, |
| "BIBREF63": { |
| "ref_id": "b63", |
| "title": "Jointly extracting Japanese predicate-argument relation with Markov logic", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Yoshikawa", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Asahara", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Matsumoto", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of 5th International Joint Conference on Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1125--1133", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yoshikawa, K., Asahara, M., and Matsumoto, Y. (2011). Jointly extracting Japanese predicate-argument relation with Markov logic. In Proceedings of 5th International Joint Conference on Natural Language Processing, pages 1125-1133.", |
| "links": null |
| }, |
| "BIBREF64": { |
| "ref_id": "b64", |
| "title": "Predicate argument structure analysis using partially annotated corpora", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Yoshino", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Mori", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Kawahara", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the Sixth International Joint Conference on Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "957--961", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yoshino, K., Mori, S., and Kawahara, T. (2013). Predicate argument structure analysis using partially annotated corpora. In Proceedings of the Sixth International Joint Conference on Natural Language Processing, pages 957-961.", |
| "links": null |
| }, |
| "BIBREF65": { |
| "ref_id": "b65", |
| "title": "Identification and resolution of Chinese zero pronouns: A machine learning approach", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Zhao", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [ |
| "T" |
| ], |
| "last": "Ng", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "541--550", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhao, S. and Ng, H. T. (2007). Identification and resolution of Chinese zero pronouns: A machine learning approach. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 541-550.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "uris": null, |
| "text": "The sentence \"My sweetheart is sleeping\" preprocessed through Tokenizer. Tokenizer segments words, and introduces '[CLS]' and '[SEP]' tokens. After the Tokenizer step, the input is fed into BERT, which outputs embeddings.", |
| "type_str": "figure" |
| }, |
| "TABREF0": { |
| "html": null, |
| "content": "<table/>", |
| "type_str": "table", |
| "text": "Alhariri's statement included more details ...in which (he) emphasized that the council of ministers of Lebanon is the only representative ...", |
| "num": null |
| }, |
| "TABREF1": { |
| "html": null, |
| "content": "<table/>", |
| "type_str": "table", |
| "text": "Hyperparameter settings.", |
| "num": null |
| }, |
| "TABREF3": { |
| "html": null, |
| "content": "<table/>", |
| "type_str": "table", |
| "text": "Statistics on Chinese and Arabic datasets. Chinese test portion does not contain zero pronouns; therefore, the development portion is used for evaluation as done in prior works.", |
| "num": null |
| }, |
| "TABREF4": { |
| "html": null, |
| "content": "<table><tr><td/><td>.3</td><td>52.5</td><td>62.0</td><td>58.5</td><td>57.6</td><td>53.2</td><td>57.3</td></tr><tr><td>BERT (feature extraction)</td><td>59.3</td><td>48.7</td><td>66.0</td><td>64.9</td><td>57.9</td><td>59.5</td><td>60.4</td></tr><tr><td>BERT (fine-tuning)</td><td>61.8</td><td>51.8</td><td>67.9</td><td>66.7</td><td>58.7</td><td>61.6</td><td>62.1</td></tr><tr><td>Our model</td><td>63.4</td><td>54.4</td><td>69.4</td><td>68.5</td><td>60.8</td><td>62.1</td><td>63.5</td></tr></table>", |
| "type_str": "table", |
| "text": "Our proposed model's F-scores on Chinese ZPs compared with the two BERT modes and other models.", |
| "num": null |
| } |
| } |
| } |
| } |