ACL-OCL / Base_JSON /prefixP /json /paclic /2020.paclic-1.1.json
{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:01:48.136489Z"
},
"title": "Contextual Characters with Segmentation Representation for Named Entity Recognition in Chinese",
"authors": [
{
"first": "Blouin",
"middle": [],
"last": "Baptiste",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IrAsia Aix-Marseille University",
"location": {
"country": "LIS ENP-China"
}
},
"email": "baptiste.blouin@lis-lab.fr"
},
{
"first": "Magistry",
"middle": [],
"last": "Pierre",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Aix-Marseille University",
"location": {
"country": "IrAsia ENP-China"
}
},
"email": "pierre@magistry.fr"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Named Entity Recognition (NER) is a typical sequence labeling task. It remains challenging for Chinese, partly because of the lack of clear typographic word boundaries. Decisions have to be made regarding the choice of basic units which constitute the sequence to be labeled, and their vectorized representation. Recent approaches have shown that character-based models lack the information about larger units (words) which is useful for NER, while word-based models may suffer from the propagation of word segmentation errors and a higher rate of Out-of-Vocabulary (OOV) tokens. In this paper, we propose a new representation of sinograms (Chinese characters) enriched with word boundary information, for which different types of embeddings can be built. Experiments show that our solution outperforms other state-of-the-art models. We also took great care to propose a fully retrainable pipeline, which is available at https://github.com/enp-china/CCSR-NER. It does not rely on pretrained models and can be trained in few days on common hardware.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Named Entity Recognition (NER) is a typical sequence labeling task. It remains challenging for Chinese, partly because of the lack of clear typographic word boundaries. Decisions have to be made regarding the choice of basic units which constitute the sequence to be labeled, and their vectorized representation. Recent approaches have shown that character-based models lack the information about larger units (words) which is useful for NER, while word-based models may suffer from the propagation of word segmentation errors and a higher rate of Out-of-Vocabulary (OOV) tokens. In this paper, we propose a new representation of sinograms (Chinese characters) enriched with word boundary information, for which different types of embeddings can be built. Experiments show that our solution outperforms other state-of-the-art models. We also took great care to propose a fully retrainable pipeline, which is available at https://github.com/enp-china/CCSR-NER. It does not rely on pretrained models and can be trained in few days on common hardware.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The present work explores the task of Named Entity Recognition (NER) in Mandarin Chinese, specifically for cases when relying on large pre-trained models is not an option. This can occur when one has to process domain specific data, or in our case 1 , historical texts where language is quite different from the language of the corpora used to pretrain publicly available models, especially words and characters embeddings. The models we propose can be trained in a reasonable time (days) from a relatively small amount of raw data (few hundred millions of characters) on affordable hardware (such as a single GTX 1080 ti).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1 ENP China, https://www.enpchina.eu / (ERC No 788476) Chinese script does not provide a clear and frequent typographic marker for word boundaries. As a result, when addressing the case of Chinese(s) language(s) in NER, we have to face the issue of word segmentation. Recent models proposed in the literature can be divided into character-based, word-based or hybrid models, but every work had to take a stance regarding Chinese Word Segmentation (CWS). The importance and methods for CWS have a long history in Chinese NLP, a recent work Li et al. (2019) makes the strong claim that the neural era of NLP is turning CWS into an irrelevant or even harmful step in a pipeline. However Li et al. (2019) did not provide experimental results on the NER task and our own experiments presented in this paper tend to show that CWS can be either harmful or beneficial, depending on how much care is given to consistency in segmentation and to the way word embeddings are built and used. Our main findings are that off-the-shelf embeddings for Mandarin Chinese must be used carefully, but it is possible to improve on the state-of-the-art by retraining everything from raw and labeled corpora, as we achieve 77.27 (+2.84) of f-score on OntoNotes 4 (Hovy et al., 2006) and 80.64 (+1.04) on OntoNotes 5 with a model simpler than previous state-of-the-art which requires dependency parsing.",
"cite_spans": [
{
"start": 37,
"end": 54,
"text": "/ (ERC No 788476)",
"ref_id": null
},
{
"start": 539,
"end": 555,
"text": "Li et al. (2019)",
"ref_id": "BIBREF22"
},
{
"start": 684,
"end": 700,
"text": "Li et al. (2019)",
"ref_id": "BIBREF22"
},
{
"start": 1239,
"end": 1258,
"text": "(Hovy et al., 2006)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The second focus of our study is a comparison between supervised and unsupervised CWS. When targeting a specific downstream NLP task, we ran experiments to decide whether we should follow a specific segmentation guideline by the mean of supervised machine learning, or if consistency brought by an unsupervised system is enough to improve on the downstream (here NER) task. This question is crucial for us to face more ancient texts, for which training data for CWS may not be available. We show that using CWS for the task of named entity recognition allows to provide useful information compared to using only characters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In summary, the contributions of this paper are as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We propose a novel method to combine CWS information and a character-level representation which can be used by a BiLSTM-CRF (Lample et al., 2016) model to improve on Chinese NER task. \u2022 In an attempt to explain this improvement, we study the impact of our new representation on the OOV issue compared to other possible representations. \u2022 We investigate two different strategies of supervised and unsupervised CWS, to assess for the need of manually segmented training corpus. \u2022 The experimental results demonstrate that our proposed method significantly outperforms the current state-of-the-art performance on five different Chinese NER datasets. Our proposed solution does not rely on any pre-trained models, and can be fully trained from corpora of relatively small size on affordable hardware.",
"cite_spans": [
{
"start": 126,
"end": 147,
"text": "(Lample et al., 2016)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our work relates to existing methods on multiple tasks, including NER, segmentation and embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related works",
"sec_num": "2"
},
{
"text": "Our model architecture is similar to that proposed by Huang et al. (2015) , which is a bidirectional recurrent neural network (BiLSTMs) with a subsequent conditional random field (CRF) decoding layer. For this kind of architecture we have to choose a level of tokenization for the input. It can result in wordbased models, character-based models and hybrid models. A word-based BiLSTM-CRF model applied to Chinese NER will suffer from segmentation errors. Zhang and Yang (2018) and Liu et al. (2019) showed that using a hybrid model to integrate words in character sequence leads to better results for character-based Chinese NER. The main difference between those models is that Zhang and Yang (2018) uses a DAG-structured LSTM to put every potential words that match a lexicon into their model, this requires them to process sentences one by one, whereas Liu et al. (2019) add word infor-mation into the input vector. This second approach selects a single segmentation and choose one word for each character without ambiguity. Another approach to integrate the word segmentation information to the model was proposed by Cao et al. (2018) which involves using multitask on Chinese segmentation to transfer this information to the NER task. Jie and Lu (2019) propose a more complex approach which integrates dependency parses to the LSTM and relies on pre-trained ELMo contextual embeddings. They obtain promising results on the OntoNotes 5 corpus, but they do not discuss the issue of word segmentation (for which they use the gold segmentation).",
"cite_spans": [
{
"start": 54,
"end": 73,
"text": "Huang et al. (2015)",
"ref_id": "BIBREF16"
},
{
"start": 456,
"end": 477,
"text": "Zhang and Yang (2018)",
"ref_id": "BIBREF46"
},
{
"start": 482,
"end": 499,
"text": "Liu et al. (2019)",
"ref_id": null
},
{
"start": 680,
"end": 701,
"text": "Zhang and Yang (2018)",
"ref_id": "BIBREF46"
},
{
"start": 857,
"end": 874,
"text": "Liu et al. (2019)",
"ref_id": null
},
{
"start": 1122,
"end": 1139,
"text": "Cao et al. (2018)",
"ref_id": null
},
{
"start": 1241,
"end": 1258,
"text": "Jie and Lu (2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Named Entity Recognition",
"sec_num": "2.1"
},
{
"text": "Word-level information can be introduced into a NER system in various ways, as a first step of processing or to build an external resource such as a word embeddings lexicon. In any case, it relies on a Chinese Word Segmentation (CWS) system and training corpus in the supervised case. When using pre-trained word embeddings, one implicitly relies on the CWS system which has been used to prepare the embeddings. In our case we conduct two kinds of experiments, the first one is based on supervised CWS for which we use zpar (Zhang and Clark, 2007) trained on the Chinese Treebank 1 . Since training data for word segmentation is not available for all domains, languages (to adapt to other sinitic languages, such as Cantonese) or more ancient documents, and can be time consuming or costly to obtain, we also run experiments based on an unsupervised CWS system using eleve (Magistry and Sagot, 2012) which requires only an unannotated corpus. We use texts from the Chinese Wikipedia to train the segmenter, which we sampled from the corpus prepared by Majli\u0161 and\u017dabokrtsk\u00fd (2012) down to a size we think consistent to what will be available for future adaptations of our system.",
"cite_spans": [
{
"start": 524,
"end": 547,
"text": "(Zhang and Clark, 2007)",
"ref_id": null
},
{
"start": 873,
"end": 899,
"text": "(Magistry and Sagot, 2012)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Segmentation",
"sec_num": "2.2"
},
{
"text": "Vectorized word representations (Turian et al., 2010; Mikolov et al., 2013) , especially known as word embeddings, are a key element for multiple NLP tasks including NER (Collobert et al., 2011) . Today there are three distinct embedding types. Classical word embedding (Pennington et al., 2014; Mikolov et al., 2013) , character-level features (Ma and Hovy, 2016; Zhang and Yang, 2018) and contextualized word embeddings (Peters et al., 2017; Zhang and Yang, 2018) . Contextualized word embeddings as been shown to be effective for improving many natural language processing tasks including NER. In our work we use FastText (Bojanowski et al., 2016a) to generate our non-contextual embeddings and Flair Akbik et al. (2018) for the contextual ones. We decided not to use BERT (Devlin et al., 2018) because in our situation we will have to train new embeddings on multiple historical subcorpora of a limited size, which makes BERT either unusable or not affordable. It remains worth noting that we outperform the systems tested in (Jie and Lu, 2019) which rely on ELMo (Peters et al., 2018) and for which the authors report it obtained performances similar to BERT in preliminary experiments.",
"cite_spans": [
{
"start": 32,
"end": 53,
"text": "(Turian et al., 2010;",
"ref_id": "BIBREF35"
},
{
"start": 54,
"end": 75,
"text": "Mikolov et al., 2013)",
"ref_id": "BIBREF29"
},
{
"start": 166,
"end": 194,
"text": "NER (Collobert et al., 2011)",
"ref_id": null
},
{
"start": 270,
"end": 295,
"text": "(Pennington et al., 2014;",
"ref_id": "BIBREF32"
},
{
"start": 296,
"end": 317,
"text": "Mikolov et al., 2013)",
"ref_id": "BIBREF29"
},
{
"start": 345,
"end": 364,
"text": "(Ma and Hovy, 2016;",
"ref_id": "BIBREF25"
},
{
"start": 365,
"end": 386,
"text": "Zhang and Yang, 2018)",
"ref_id": "BIBREF46"
},
{
"start": 422,
"end": 443,
"text": "(Peters et al., 2017;",
"ref_id": "BIBREF34"
},
{
"start": 444,
"end": 465,
"text": "Zhang and Yang, 2018)",
"ref_id": "BIBREF46"
},
{
"start": 625,
"end": 651,
"text": "(Bojanowski et al., 2016a)",
"ref_id": null
},
{
"start": 704,
"end": 723,
"text": "Akbik et al. (2018)",
"ref_id": null
},
{
"start": 776,
"end": 797,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF10"
},
{
"start": 1030,
"end": 1048,
"text": "(Jie and Lu, 2019)",
"ref_id": null
},
{
"start": 1068,
"end": 1089,
"text": "(Peters et al., 2018)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Embeddings",
"sec_num": "2.3"
},
{
"text": "The larger project for which we design our models introduces constraints in terms of corpus size and retrainability. We limit ourselves to a reasonable amount of data. Nevertheless, for the experiments presented in this paper, we rely on standard datasets of Modern Chinese, widely used in the literature to be able to provide a comprehensive evaluation. We limit our raw data to a random sample of 324 millions tokens (243 millions sinograms) taken from the Wikipedia in Mandarin Chinese. We make this sample available for the sake of reproducibility.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3"
},
{
"text": "For word segmentation, we finally used the Chinese Treebank (CTB) 1 and compare it to an unsupervised word segmentation. 2 For Named Entities, we use the OntoNotes4 Corpus (Hovy et al., 2006) and follow the de facto standard split and entity types selection from . We also evaluate our system against the popular MSRA (Levow, 2006) Weibo NER (Peng and Dredze, 2015) and corpus of resume in Chinese 2 we also tried to use the dataset from Peking University (PKU) and Microsoft Research (MSR) provided for the CWS Bakeoff 2 (http://sighan.cs.uchicago.edu/bakeoff2005/) but it did not make any noticeable differences.",
"cite_spans": [
{
"start": 121,
"end": 122,
"text": "2",
"ref_id": null
},
{
"start": 172,
"end": 191,
"text": "(Hovy et al., 2006)",
"ref_id": null
},
{
"start": 318,
"end": 331,
"text": "(Levow, 2006)",
"ref_id": null
},
{
"start": 342,
"end": 365,
"text": "(Peng and Dredze, 2015)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3"
},
{
"text": "Type (Zhang and Yang, 2018) . Those four datasets represent three different domains, OntoNotes and MSRA datasets are in the news domain, the Chinese resume dataset contains resumes of senior executives from listed companies in the Chinese stock market and the Weibo NER dataset is drawn from the social media website Sina Weibo. Another difference between those datasets is that MSRA , Weibo and Chinese resume did not provide word segmentation for all the sections, unlike OntoNotes4 which has a gold-standard segmentation for the training, development and test sections. We also provide results on OntoNotes5 (Weischedel et al., 2013) to compare our system with Jie and Lu (2019) . We summarize the datasets in Table 1 .",
"cite_spans": [
{
"start": 5,
"end": 27,
"text": "(Zhang and Yang, 2018)",
"ref_id": "BIBREF46"
},
{
"start": 611,
"end": 636,
"text": "(Weischedel et al., 2013)",
"ref_id": null
},
{
"start": 664,
"end": 681,
"text": "Jie and Lu (2019)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 713,
"end": 720,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": null
},
{
"text": "Contextual word embeddings have shown to improve state-of-the-art on several NLP tasks. One of our contribution is to propose two new kinds of contextual embeddings at the character level which can take into account word boundary information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Character Embeddings",
"sec_num": "4.1"
},
{
"text": "Referring to Akbik et al. (2018) paper which introduces a word-level embeddings based on a character-level language model, we introduce a sinogram embedding using their character language model (LM). Where the LM allows the text to be treated as a sequence of characters passed to an LSTM which at each point in the sequence is trained to predict the next character. In our system, we train the LM to produce characters with segmentation information. Given a sequence of characters ( C 0 , C 1 , ..., C N ) we learn P (C i |C 0 , ..., C i\u22121 ), an estimate of the predictive distribution over the next character given past characters. We utilize the hidden states of a forward-backward recurrent neural network to create contextualized character embeddings. The final contextual character representation is given by :",
"cite_spans": [
{
"start": 13,
"end": 32,
"text": "Akbik et al. (2018)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Character Embeddings",
"sec_num": "4.1"
},
{
"text": "C LM i = C f i C b T \u2212i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Character Embeddings",
"sec_num": "4.1"
},
{
"text": "Where C f i denote the hidden state at position i of the forward LM and C b T \u2212i denote the hidden state at position T \u2212 i of the backward LM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Character Embeddings",
"sec_num": "4.1"
},
{
"text": "In this work, we investigate the different ways to inject the CWS information into a NER pipeline. Several approaches propose to directly use the wordtokens as segmented by a CWS system, they showed that discrepancies between the output of the CWS and the NE annotation can be harmful for NER. Out-of-Vocabulary (OOV) tokens is another common issue for NER. In order to tackle those issues, we designed a new kind of sinogram representation which contains the information of the chosen word segmentation at the character level. We decide to use the BIES format to represent the CWS (as introduced in Xue and Shen (2003) , originally as an intermediary step for CWS) and we train a language model to produce embeddings of those character with BIES tag. As we use a BI-LSTM to process the NER task and as we stay at a character level, our new representation allows us to reconstruct the entire word according to the BIES tag. But in the case of a mismatching segmentation between NE and word, the model can still learn to use this wrong segmentation as the right delimiter of an entity.",
"cite_spans": [
{
"start": 600,
"end": 619,
"text": "Xue and Shen (2003)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Character with segmentation information Embeddings",
"sec_num": "4.2"
},
{
"text": "We use the Flair framework (Akbik et al., 2019) to create our model (Figure 2.1) . The main difference with other existing NER models is that we use stacked embeddings to represent our input. With this kind of architecture we can combine our different kinds of embeddings. Character, word information and bichar embeddings are concatenated to represent each character. The final character representation is given by",
"cite_spans": [
{
"start": 27,
"end": 47,
"text": "(Akbik et al., 2019)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 68,
"end": 80,
"text": "(Figure 2.1)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Model Description",
"sec_num": "4.3"
},
{
"text": "c i = \uf8ee \uf8f0 r char i r bichar i r word i \uf8f9 \uf8fb",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Description",
"sec_num": "4.3"
},
{
"text": "The fact that we use character as neural units allows us to give word information associated to a character. In our case, the word information is given by the contextual character with segmentation embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Description",
"sec_num": "4.3"
},
{
"text": "We denote a Chinese sentence as s = {c 1 , c 2 , ..., c n }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Description",
"sec_num": "4.3"
},
{
"text": "We use an extra linear layer between the input layer and the LSTM's to make the stacked representation trainable. Figure 1 shows the structure of our model. The blue part of the model shows how we use the embeddings. The symbol indicates the possibility to concatenate different kinds of embeddings. Using this approach, we can then add other types of embeddings related to characters. The red part is a BiLSTM-CRF.",
"cite_spans": [],
"ref_spans": [
{
"start": 114,
"end": 122,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Model Description",
"sec_num": "4.3"
},
{
"text": "We conducted several experiments to evaluate the effectiveness of our approach across different domains. In addition, we evaluate the importance of the segmentation for our representations by using supervised and non-supervised segmentation approaches. We also investigate on the usefulness of the bichar representation for Chinese Natural Language Processing. Evaluations are reported using standard metrics of precision (P), recall (R) and F1score (F).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "We used the datasets presented in the section 3, including the OntoNotes gold segmentation to evaluate the distance between our supervised/unsupervised segmentations and whether this distance makes a difference to our overall process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "5.1"
},
{
"text": "Embeddings. We used FastText (Bojanowski et al., 2016b) to pretrain characters and bi-characters embeddings on a subset of 7 millions sentences from Chinese Wikipedia dump. for both of these representations we used a context of bi-character.",
"cite_spans": [
{
"start": 29,
"end": 55,
"text": "(Bojanowski et al., 2016b)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "5.1"
},
{
"text": "Hyper-parameter. Table 2 shows the values of hyper-parameters for our models, which were fixed without specific grid search adjustments for each individual dataset. Stochastic gradient descent (SGD) is used for optimization, with an initial learning rate of 0.1 and we divide its value by two if the f-score does not increase on the development corpus during 5 epochs. In that case, we reload the previous best model before dividing the learning rate. Configurations. In order to evaluate the importance of the different representations, we have set up 8 configurations of embeddings.",
"cite_spans": [],
"ref_spans": [
{
"start": 17,
"end": 24,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "5.1"
},
{
"text": "\u2022 Char For this configuration we only use character embeddings. Table 3 : NER results for named entities on the OntoNotes 4 dataset. There are three blocks. The first two blocks contain the previous state-of-the-art models where \"Gold seg\" means that they used the reference segmentation proposed by the dataset and \"No seg\" means that they used other approaches that do not rely on reference segmentation. The last block lists the performance of our proposed model.",
"cite_spans": [],
"ref_spans": [
{
"start": 64,
"end": 71,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "5.1"
},
{
"text": "OntoNotes. Table 3 shows the experimental results on OntoNotes 4 dataset. The first column (Input) shows the representations of input sentence that was used. \"Gold seg\" means that they used the segmentation provided by the corpus to represent the word in the sentence, \"No seg\" means that we used only the character as input and other approaches that do not benefit from the reference segmentation to provide information about the word level.",
"cite_spans": [],
"ref_spans": [
{
"start": 11,
"end": 18,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental results",
"sec_num": "5.2"
},
{
"text": "The first part of table 3 are the results of ; ; Yang et al. (2017) . These three approaches rely on gold segmentation at the word level, with character embeddings. achieve good performance with 75.02 F-score. Here we exceed this score without using the gold segmentation.",
"cite_spans": [
{
"start": 49,
"end": 67,
"text": "Yang et al. (2017)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental results",
"sec_num": "5.2"
},
{
"text": "The second part shows the performances of more recent approaches (Zhang and Yang, 2018; Liu et al., 2019) and a character baseline which is the original character-based BILSTM-CRF model. Zhang and Yang (2018) proposes a lattice LSTM to ex-ploit word information in character sequence and Liu et al. (2019) use a new word-character LSTM model to add word information on the first or on the last character of each word. These two approaches show a significant improvement compared to the character baseline, which illustrates the importance of the word information in character sequence.",
"cite_spans": [
{
"start": 65,
"end": 87,
"text": "(Zhang and Yang, 2018;",
"ref_id": "BIBREF46"
},
{
"start": 88,
"end": 105,
"text": "Liu et al., 2019)",
"ref_id": null
},
{
"start": 187,
"end": 208,
"text": "Zhang and Yang (2018)",
"ref_id": "BIBREF46"
},
{
"start": 288,
"end": 305,
"text": "Liu et al. (2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental results",
"sec_num": "5.2"
},
{
"text": "The last part of the table 3 shows the results of our configurations. The first three rows show results where we only used the character information. Through these results we show that bichar representations are very efficient for Chinese. This may be explained by the fact that bichars have a length closer to the average word length and provide more contextual information than single characters. The last four rows show the results of using our contextual char-seg representations. Those configurations achieve very good results, improving the state of the art, beating both models that do not use gold segmentation and even those that do. Firstly, these results show that the information about the boundaries of a word is useful. Secondly, on this corpus, we can see that there is only a slight difference between using supervised and unsupervised segmentation. Which is very encouraging to address situations where we do not have adequate CWS training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental results",
"sec_num": "5.2"
},
{
"text": "Weibo NER. Table 4 shows the experimental results on Weibo NER dataset. This dataset proposes two kinds of annotations, named entities and nominal entities. For our experiments we only evaluated the combination of these two annotations. Compared to the other corpus, this one offers few annotated data, that is why different approaches have been proposed. Dredze (2015, 2016) ; Cao et al. (2018) use multitask learning and He and Sun (2017) use semi-supervised learning. As a result of these approaches, they use cross-domain or semisupervised additional data. In contrast, Zhang and Yang (2018) ; Liu et al. (2019) and our model do not need any additional data.",
"cite_spans": [
{
"start": 356,
"end": 375,
"text": "Dredze (2015, 2016)",
"ref_id": null
},
{
"start": 378,
"end": 395,
"text": "Cao et al. (2018)",
"ref_id": null
},
{
"start": 574,
"end": 595,
"text": "Zhang and Yang (2018)",
"ref_id": "BIBREF46"
},
{
"start": 598,
"end": 615,
"text": "Liu et al. (2019)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 11,
"end": 18,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Experimental results",
"sec_num": "5.2"
},
{
"text": "These results exhibit similar patterns as those on OntoNotes. However in this case the unsupervised CWS can even lead to higher scores. This may be the result of Weibo Corpus being drawn from social media. A CWS system trained on the CTB is better suited for the news domain and less reliable in the Weibo case. Resume Table 5 shows the experimental results on Resume dataset. These are consistent with the observations made on OntoNotes and Weibo NER. Our model achieves good results on this dataset, but unlike the other corpora, very good results were already obtained by other systems. It does not allow us to highlight our approach as much as the other corpora.",
"cite_spans": [],
"ref_spans": [
{
"start": 319,
"end": 326,
"text": "Table 5",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Experimental results",
"sec_num": "5.2"
},
{
"text": "MSRA Table 6 shows the experimental results on MSRA dataset. The best results are obtained with the unsupervised segmentation.",
"cite_spans": [],
"ref_spans": [
{
"start": 5,
"end": 12,
"text": "Table 6",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Experimental results",
"sec_num": "5.2"
},
{
"text": "Ontonotes 5 To complete our evaluation, we run our best model from the Ontonotes 4 experiment on Ontonotes 5 to provide comparison with Jie and Lu (2019) . Results are shown Table 7 . Note that the comparison is somewhat unfair as Jie and Lu 2019Models P R F Zhang et al. (2006) 92.20 90.18 91.18 Zhou et al. (2013 ) 91.86 88.75 90.28 Dong et al. (2016 91.28 90.62 90.95 Cao et al. (2018) 91.73 89.58 90.64 Zhang and Yang (2018) rely on gold segmentation. Nevertheless, our system obtains the highest results, without the need for a dependency parser. The embeddings we propose achieve state-of-the-art results on a diversity domains such as news, social media, and Chinese resume.",
"cite_spans": [
{
"start": 136,
"end": 153,
"text": "Jie and Lu (2019)",
"ref_id": null
},
{
"start": 259,
"end": 278,
"text": "Zhang et al. (2006)",
"ref_id": null
},
{
"start": 297,
"end": 314,
"text": "Zhou et al. (2013",
"ref_id": "BIBREF47"
},
{
"start": 315,
"end": 352,
"text": ") 91.86 88.75 90.28 Dong et al. (2016",
"ref_id": null
},
{
"start": 371,
"end": 388,
"text": "Cao et al. (2018)",
"ref_id": null
},
{
"start": 407,
"end": 428,
"text": "Zhang and Yang (2018)",
"ref_id": "BIBREF46"
}
],
"ref_spans": [
{
"start": 174,
"end": 181,
"text": "Table 7",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Experimental results",
"sec_num": "5.2"
},
{
"text": "When using a model with word-level features, one of the most common problems comes from unknown words. Our approach which injects segmentation information at the characters level allows to rebuild the words from characters and leads to fewer unknowns. To do so, we used two types of segmentation, word level and char-seg level, in a supervised and unsupervised way to segment our Wikipedia sample. Once our four Wikipedia samples were segmented, we trained four different FastText to obtain 4 lexicons for each of them. To evaluate the OOV rate on OntoNotes, we segmented it in three different ways in order to compare for each case the presence or not of words in the lexicons generated by our embeddings. We segmented OntoNotes in a supervised and unsupervised way with the same two models we used to segment Wikipedia and in a last step we left the \"gold\" segmentation in words proposed by OntoNotes. Results of this experiments are shown in table 8.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Out Of Vocabulary analysis",
"sec_num": "5.3"
},
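The OOV evaluation described in this section reduces to checking each token of a segmented corpus against an embedding lexicon. A minimal sketch of that check (the function and variable names are our own assumptions, not taken from the paper's released code):

```python
def oov_rate(tokens, lexicon):
    """Fraction of corpus tokens absent from the embedding lexicon."""
    if not tokens:
        return 0.0
    unknown = sum(1 for t in tokens if t not in lexicon)
    return unknown / len(tokens)

# Toy illustration: a 4-token segmented corpus against a 3-word lexicon.
corpus = ["我们", "使用", "维基百科", "样本"]
lexicon = {"我们", "使用", "样本"}
print(oov_rate(corpus, lexicon))  # 0.25
```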
{
"text": "For the embeddings column, we have two levels of segmentation, word and char-seg, and two levels of supervision: \"ctb\" for the supervised part trained on the Chinese TreeBank and \"unsup\" for the unsupervised part. The OntoNotes seg column represents the three types of segmentation used to segment OntoNotes into words. Because OntoNotes is segmented into words and because the lexicon for our char-seg embeddings contains only characters with segmentation information, for a given word coming from OntoNotes, we try to reconstruct the char-seg sequence constituting this word from our embedding lexicon. For example, for the word \u8d8a\u5357 we look for the char-segs \u8d8a-B and \u5357-E in our embedding lexicon. If a char-seg is missing, then the whole word is counted as missing too.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Out Of Vocabulary analysis",
"sec_num": "5.3"
},
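The word-to-char-seg reconstruction used in this check can be sketched as follows. We assume a B/I/E/S tag set (the paper only shows the B and E tags explicitly, so the inside and singleton tag letters are our assumptions), and the function names are hypothetical:

```python
def word_to_charsegs(word):
    """Map a word to its char-seg units, e.g. 越南 -> ['越-B', '南-E']."""
    if len(word) == 1:
        return [word + "-S"]                      # single-character word
    return ([word[0] + "-B"]                      # begin
            + [c + "-I" for c in word[1:-1]]      # inside (tag letter assumed)
            + [word[-1] + "-E"])                  # end

def word_in_charseg_lexicon(word, lexicon):
    """A word is known iff every char-seg constituting it is in the lexicon."""
    return all(cs in lexicon for cs in word_to_charsegs(word))

lex = {"越-B", "南-E", "人-S"}
print(word_to_charsegs("越南"))               # ['越-B', '南-E']
print(word_in_charseg_lexicon("越南", lex))   # True
print(word_in_charseg_lexicon("越人", lex))   # False ('人-E' is missing)
```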
{
"text": "The results show that our representations greatly decrease the unknown word rate: they allow us to have a representation for most of the words. Moreover, unlike traditional word representations, we do not have fixed representations of our words, which makes it easier to obtain representations for new words, although it may call into question the quality of those representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Out Of Vocabulary analysis",
"sec_num": "5.3"
},
{
"text": "Annotation ambiguity. The named entity recognition task combines a segmentation step with a classification step. We feel the need to question some cases of ambiguity in the data. Using the OntoNotes guidelines, we annotated in-house data and found it difficult in some cases to choose between Geopolitical Entity (GPE) and Location (LOC). This ambiguity has a direct impact on our predictions: we noted that more than 1/3 of the LOC entities detected are annotated as GPE, which is consistent with the difficulties encountered in our annotation experiment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions",
"sec_num": "6"
},
{
"text": "Another issue arises from the conversion of the OntoNotes 4 corpus from 18 classes to 4, most notably for the entity types NORP (Nationality, Other, Religion, Political) and FAC (Facility). These classes are discarded in the 4-class version, but they are typical cases of nested entities containing a GPE, LOC or ORG, which is also discarded in the process, creating erroneous annotations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions",
"sec_num": "6"
},
{
"text": "Entity segmentation versus word segmentation. Our results show that although staying at the character level allows us to tackle the OOV issue, the information brought by CWS is still what enables us to reach the highest scores. When the CTB segmentation guidelines are consistent with the NER corpus, supervised segmentation performs better. However, NER with unsupervised segmentation comes close in these cases and can perform better in others. So our answer to Li et al. (2019) could be that word segmentation is actually necessary, but unsupervised CWS may be enough.",
"cite_spans": [
{
"start": 479,
"end": 495,
"text": "Li et al. (2019)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions",
"sec_num": "6"
},
{
"text": "In this paper, we propose new sinogram embeddings which include word information at the character level for Chinese NER. Our proposed approach shows that adding a CWS label to a character provides word-level information while considerably reducing the number of OOVs compared to a word sequence. Our experiments on multiple datasets, in different domains, show that our system outperforms previous state-of-the-art approaches. This paves the way to NER in more challenging situations such as historical documents or less-resourced settings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and future works",
"sec_num": "7"
},
{
"text": "https://catalog.ldc.upenn.edu/LDC2013T21",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This result is the average of 20 runs. The results of these runs have a variance of 4 \u00d7 10\u22122.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF1": {
"ref_id": "b1",
"title": "FLAIR: An easy-to-use framework for stateof-the-art NLP",
"authors": [],
"year": null,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)",
"volume": "",
"issue": "",
"pages": "54--59",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "https://www.aclweb.org/anthology/N19-4010 FLAIR: An easy-to-use framework for state- of-the-art NLP. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 54-59, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Contextual string embeddings for sequence labeling",
"authors": [],
"year": null,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1638--1649",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Contextual string embeddings for sequence labeling. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1638-1649, Santa Fe, New Mexico, USA. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1607.04606"
]
},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016b. Enriching word vec- tors with subword information. arXiv preprint arXiv:1607.04606.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Adversarial transfer learning for Chinese named entity recognition with self-attention mechanism",
"authors": [],
"year": null,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "182--192",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adversarial transfer learning for Chinese named entity recognition with self-attention mechanism. In Proceedings of the 2018 Conference on Em- pirical Methods in Natural Language Processing, pages 182-192, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Named entity recognition with bilingual constraints",
"authors": [
{
"first": "Wanxiang",
"middle": [],
"last": "Che",
"suffix": ""
},
{
"first": "Mengqiu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "52--62",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wanxiang Che, Mengqiu Wang, Christo- pher D. Manning, and Ting Liu. 2013. https://www.aclweb.org/anthology/N13-1006 Named entity recognition with bilingual con- straints. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 52-62, Atlanta, Georgia. Association for Computational Linguistics. Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Natural language processing (almost) from scratch",
"authors": [],
"year": 2011,
"venue": "J. Mach. Learn. Res",
"volume": "999888",
"issue": "",
"pages": "2493--2537",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "http://dl.acm.org/citation.cfm?id=2078183.2078186 Natural language processing (almost) from scratch. J. Mach. Learn. Res., 999888:2493- 2537.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "BERT: pretraining of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1007/978-3-319-50496-4_20"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Ken- ton Lee, and Kristina Toutanova. 2018. http://arxiv.org/abs/1810.04805 BERT: pre- training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805. Chuanhai Dong, Jiajun Zhang, Chengqing Zong, Masanori Hattori, and Hui Di. 2016. https://doi.org/10.1007/978-3-319-50496-4 20",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Character-based lstm-crf with radical-level features for chinese named entity recognition",
"authors": [],
"year": null,
"venue": "",
"volume": "10102",
"issue": "",
"pages": "239--250",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Character-based lstm-crf with radical-level features for chinese named entity recognition. volume 10102, pages 239-250.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "F-score driven max margin neural network for named entity recognition in Chinese social media",
"authors": [],
"year": null,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "713--718",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F-score driven max margin neural network for named entity recognition in Chinese social media. In Proceedings of the 15th Conference of the European Chapter of the Association for Com- putational Linguistics: Volume 2, Short Papers, pages 713-718, Valencia, Spain. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Ontonotes: The 90% solution",
"authors": [],
"year": null,
"venue": "Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers, NAACL-Short '06",
"volume": "",
"issue": "",
"pages": "57--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ontonotes: The 90% solution. In Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers, NAACL-Short '06, pages 57-60, Strouds- burg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Bidirectional LSTM-CRF models for sequence tagging",
"authors": [
{
"first": "Zhiheng",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhiheng Huang, Wei Xu, and Kai Yu. 2015. http://arxiv.org/abs/1508.01991 Bidirectional LSTM-CRF models for sequence tagging. CoRR, abs/1508.01991.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Dependency-guided LSTM-CRF for named entity recognition",
"authors": [],
"year": null,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "3862--3872",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dependency-guided LSTM-CRF for named entity recognition. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Process- ing (EMNLP-IJCNLP), pages 3862-3872, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Neural architectures for named entity recognition",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Sandeep",
"middle": [],
"last": "Subramanian",
"suffix": ""
},
{
"first": "Kazuya",
"middle": [],
"last": "Kawakami",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "260--270",
"other_ids": {
"DOI": [
"10.18653/v1/N16-1030"
]
},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. https://doi.org/10.18653/v1/N16- 1030 Neural architectures for named entity recog- nition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260-270, San Diego, Cal- ifornia. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "The third international Chinese language processing bakeoff: Word segmentation and named entity recognition",
"authors": [],
"year": null,
"venue": "Proceedings of the Fifth SIGHAN Workshop on Chinese Language Processing",
"volume": "",
"issue": "",
"pages": "108--117",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "The third international Chinese language process- ing bakeoff: Word segmentation and named entity recognition. In Proceedings of the Fifth SIGHAN Workshop on Chinese Language Processing, pages 108-117, Sydney, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Is word segmentation necessary for deep learning of Chinese representations?",
"authors": [
{
"first": "Xiaoya",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yuxian",
"middle": [],
"last": "Meng",
"suffix": ""
},
{
"first": "Xiaofei",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Qinghong",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Arianna",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3242--3252",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1314"
]
},
"num": null,
"urls": [],
"raw_text": "Xiaoya Li, Yuxian Meng, Xiaofei Sun, Qinghong Han, Arianna Yuan, and Jiwei Li. 2019. https://doi.org/10.18653/v1/P19-1314 Is word segmentation necessary for deep learning of Chinese representations? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3242-3252, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "An encoding strategy based word-character LSTM for Chinese NER",
"authors": [],
"year": null,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "2379--2389",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "An encoding strategy based word-character LSTM for Chinese NER. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2379-2389. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "End-to-end Sequence Labeling via Bi-directional LSTM-CNNs-CRF. arXiv e-prints",
"authors": [
{
"first": "Xuezhe",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1603.01354"
]
},
"num": null,
"urls": [],
"raw_text": "Xuezhe Ma and Eduard Hovy. 2016. http://arxiv.org/abs/1603.01354 End-to-end Se- quence Labeling via Bi-directional LSTM-CNNs- CRF. arXiv e-prints, page arXiv:1603.01354.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Unsupervized word segmentation: the case for Mandarin Chinese",
"authors": [],
"year": null,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "383--387",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Unsupervized word segmentation: the case for Mandarin Chinese. In Proceedings of the 50th Annual Meeting of the Association for Compu- tational Linguistics (Volume 2: Short Papers), pages 383-387, Jeju Island, Korea. Association for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Language richness of the web",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Majli\u0161",
"suffix": ""
},
{
"first": "Zden\u011bk",
"middle": [],
"last": "\u017dabokrtsk\u00fd",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)",
"volume": "",
"issue": "",
"pages": "2927--2934",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Majli\u0161 and Zden\u011bk \u017dabokrtsk\u00fd. 2012. http://www.lrec-conf.org/proceedings/lrec2012/pdf/267_Paper.pdf Language richness of the web. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), pages 2927-2934, Istanbul, Turkey. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems 26",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. http://papers.nips.cc/paper/5021-distributed- representations-of-words-and-phrases-and-their- compositionality.pdf Distributed representations of words and phrases and their compositionality. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Sys- tems 26, pages 3111-3119. Curran Associates, Inc.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Named entity recognition for Chinese social media with jointly trained embeddings",
"authors": [
{
"first": "Nanyun",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "548--554",
"other_ids": {
"DOI": [
"10.18653/v1/D15-1064"
]
},
"num": null,
"urls": [],
"raw_text": "Nanyun Peng and Mark Dredze. 2015. https://doi.org/10.18653/v1/D15-1064 Named entity recognition for Chinese social media with jointly trained embeddings. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 548-554, Lisbon, Portugal. Association for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Multi-task multidomain representation learning for sequence tagging",
"authors": [
{
"first": "Nanyun",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nanyun Peng and Mark Dredze. 2016. http://arxiv.org/abs/1608.02689 Multi-task multi- domain representation learning for sequence tagging. CoRR, abs/1608.02689.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "EMNLP",
"volume": "14",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christo- pher D Manning. 2014. Glove: Global vectors for word representation. In EMNLP, volume 14, pages 1532-1543.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "1",
"issue": "",
"pages": "2227--2237",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1202"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Ken- ton Lee, and Luke Zettlemoyer. 2018. https://doi.org/10.18653/v1/N18-1202 Deep contextualized word representations. In Pro- ceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Semi-supervised sequence tagging with bidirectional language models",
"authors": [
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Waleed",
"middle": [],
"last": "Ammar",
"suffix": ""
},
{
"first": "Chandra",
"middle": [],
"last": "Bhagavatula",
"suffix": ""
},
{
"first": "Russell",
"middle": [],
"last": "Power",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew E. Peters, Waleed Ammar, Chan- dra Bhagavatula, and Russell Power. 2017. http://arxiv.org/abs/1705.00108 Semi-supervised sequence tagging with bidirectional language models. CoRR, abs/1705.00108.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Word representations: a simple and general method for semi-supervised learning",
"authors": [
{
"first": "Joseph",
"middle": [],
"last": "Turian",
"suffix": ""
},
{
"first": "Lev",
"middle": [],
"last": "Ratinov",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, ACL '10",
"volume": "",
"issue": "",
"pages": "384--394",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. http://dl.acm.org/citation.cfm?id=1858721 Word representations: a simple and general method for semi-supervised learning. In Pro- ceedings of the 48th Annual Meeting of the Asso- ciation for Computational Linguistics, ACL '10, pages 384-394, Stroudsburg, PA, USA. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Effective bilingual constraints for semisupervised learning of named entity recognizers",
"authors": [],
"year": null,
"venue": "Proceedings of the Twenty-Seventh AAAI Conference on Artificial Intelligence, AAAI'13",
"volume": "",
"issue": "",
"pages": "919--925",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "http://dl.acm.org/citation.cfm?id=2891460.2891588 Effective bilingual constraints for semi- supervised learning of named entity recognizers. In Proceedings of the Twenty-Seventh AAAI Conference on Artificial Intelligence, AAAI'13, pages 919-925. AAAI Press.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Ontonotes release 5.0. Linguistic Data Consortium",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ontonotes release 5.0. Linguistic Data Consor- tium.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Combining discrete and neural features for sequence labeling",
"authors": [],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Combining discrete and neural features for se- quence labeling. CoRR, abs/1708.07279. Suxiang Zhang, Ying Qin, Juan Wen, and Xiaojie Wang. 2006.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Word segmentation and named entity recognition for SIGHAN bakeoff3",
"authors": [],
"year": null,
"venue": "Proceedings of the Fifth SIGHAN Workshop on Chinese Language Processing",
"volume": "",
"issue": "",
"pages": "158--161",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Word segmentation and named entity recognition for SIGHAN bakeoff3. In Proceedings of the Fifth SIGHAN Workshop on Chinese Language Processing, pages 158-161, Sydney, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Chinese segmentation with a word-based perceptron algorithm",
"authors": [],
"year": null,
"venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics",
"volume": "",
"issue": "",
"pages": "840--847",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chinese segmentation with a word-based per- ceptron algorithm. In Proceedings of the 45th Annual Meeting of the Association of Compu- tational Linguistics, pages 840-847, Prague, Czech Republic. Association for Computational Linguistics.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"authors": [
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "1",
"issue": "",
"pages": "1554--1564",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1144"
]
},
"num": null,
"urls": [],
"raw_text": "Yue Zhang and Jie Yang. 2018. https://doi.org/10.18653/v1/P18-1144 Chinese NER using lattice LSTM. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1554-1564, Melbourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Chinese named entity recognition via joint identification and categorization",
"authors": [
{
"first": "J",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Qu",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2013,
"venue": "Chinese Journal of Electronics",
"volume": "22",
"issue": "",
"pages": "225--230",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Zhou, W. Qu, and F. Zhang. 2013. Chinese named entity recognition via joint identification and categorization. Chinese Journal of Electron- ics, 22:225-230.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "Architecture of the model and representation of our",
"uris": null
},
"TABREF1": {
"num": null,
"content": "<table/>",
"type_str": "table",
"text": "",
"html": null
},
"TABREF4": {
"num": null,
"content": "<table><tr><td colspan=\"3\">: Weibo NER results</td><td/></tr><tr><td>Models</td><td>P</td><td>R</td><td>F</td></tr><tr><td colspan=\"4\">Zhang and Yang (2018) 94.81 94.11 94.46</td></tr><tr><td>Liu et al. (2019)</td><td colspan=\"3\">95.27 95.15 95.21</td></tr><tr><td>char baseline</td><td colspan=\"3\">93.26 93.44 93.35</td></tr><tr><td>Char</td><td colspan=\"3\">92.76 94.36 93.55</td></tr><tr><td>Bichar</td><td colspan=\"3\">93.64 94.79 94.21</td></tr><tr><td>Bichar Char</td><td colspan=\"3\">93.93 94.97 94.45</td></tr><tr><td>Char ctx</td><td colspan=\"3\">94.39 95.03 94.71</td></tr><tr><td>Char-seg unsup</td><td colspan=\"3\">94.77 95.58 95.17</td></tr><tr><td colspan=\"4\">Bichar + Char-seg unsup 94.56 94.91 94.73</td></tr><tr><td>Char-seg ctb</td><td colspan=\"3\">94.84 94.66 94.75</td></tr><tr><td>Bichar + Char-seg ctb</td><td colspan=\"3\">95.07 95.83 95.45</td></tr></table>",
"type_str": "table",
"text": "",
"html": null
},
"TABREF5": {
"num": null,
"content": "<table/>",
"type_str": "table",
"text": "Chinese resume results",
"html": null
},
"TABREF7": {
"num": null,
"content": "<table><tr><td colspan=\"2\">: MSRA results</td><td/><td/></tr><tr><td>Models</td><td>P</td><td>R</td><td>F</td></tr><tr><td colspan=\"4\">Zhang and Yang (2018) 76.34 77.01 76.67</td></tr><tr><td>Jie and Lu (2019)</td><td/><td/><td/></tr><tr><td>BiLSTM-CRF</td><td colspan=\"3\">77.94 75.33 76.61</td></tr><tr><td colspan=\"4\">BiLSTM-CRF + ELMo 79.20 79.21 79.20</td></tr><tr><td colspan=\"4\">DGLSTM-CRF + ELMo 78.86 81.00 79.92</td></tr><tr><td>without Gold dep.</td><td/><td/><td>79.59</td></tr><tr><td>Bichar + Char-seg ctb</td><td colspan=\"3\">80.70 80.60 80.65</td></tr></table>",
"type_str": "table",
"text": "",
"html": null
},
"TABREF8": {
"num": null,
"content": "<table><tr><td>: Ontonotes 5 results. Jie and Lu (2019) provide</td></tr><tr><td>detailed results on gold segmentation and parsing only.</td></tr><tr><td>An F-measure of 79.59 is obtained with non-gold depen-</td></tr><tr><td>dencies, but the authors did not report experiments related</td></tr><tr><td>to the quality of the word segmentation.</td></tr></table>",
"type_str": "table",
"text": "",
"html": null
},
"TABREF10": {
"num": null,
"content": "<table/>",
"type_str": "table",
"text": "OOV statistics on OntoNotes 4 with supervised and unsupervised segmentation.",
"html": null
}
}
}
}