{
"paper_id": "Y17-1016",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:33:37.825347Z"
},
"title": "The Importance of Automatic Syntactic Features in Vietnamese Named Entity Recognition",
"authors": [
{
"first": "Hoang",
"middle": [],
"last": "Pham",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Phuong",
"middle": [],
"last": "Le-Hong",
"suffix": "",
"affiliation": {},
"email": "phuonglh@vnu.edu.vn"
},
{
"first": "Thai-Hoang",
"middle": [],
"last": "Pham",
"suffix": "",
"affiliation": {},
"email": "phamthaihoang.hn@gmail.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents a state-of-the-art system for Vietnamese Named Entity Recognition (NER). By incorporating automatic syntactic features with word embeddings as input for a bidirectional Long Short-Term Memory (Bi-LSTM), our system, although simpler than some deep learning architectures, achieves a much better result for Vietnamese NER. The proposed method achieves an overall F1 score of 92.05% on the test set of an evaluation campaign organized in late 2016 by the Vietnamese Language and Speech Processing (VLSP) community. Our named entity recognition system outperforms the best previous systems for Vietnamese NER by a large margin.",
"pdf_parse": {
"paper_id": "Y17-1016",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents a state-of-the-art system for Vietnamese Named Entity Recognition (NER). By incorporating automatic syntactic features with word embeddings as input for a bidirectional Long Short-Term Memory (Bi-LSTM), our system, although simpler than some deep learning architectures, achieves a much better result for Vietnamese NER. The proposed method achieves an overall F1 score of 92.05% on the test set of an evaluation campaign organized in late 2016 by the Vietnamese Language and Speech Processing (VLSP) community. Our named entity recognition system outperforms the best previous systems for Vietnamese NER by a large margin.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Named entity recognition (NER) is an essential task in natural language processing that falls under the domain of information extraction. Its function is to identify noun phrases and categorize them into predefined classes. NER is a crucial preprocessing step in NLP applications such as question answering, machine translation, speech processing, and biomedical science. In two shared tasks, CoNLL 2002 1 and CoNLL 2003 2, language-independent NER systems were evaluated for English, German, Spanish, and Dutch. These systems focus on four named entity types, namely person, organization, location, and miscellaneous entities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Lately, an evaluation campaign that systematically compares NER systems for the Vietnamese language has been launched by the Vietnamese Language and Speech Processing (VLSP) 3 community. The organizers collected data from electronic newspapers on the web and annotated named entities in this corpus. Similar to the CoNLL 2003 shared task, there are four named entity types in the VLSP dataset: person (PER), organization (ORG), location (LOC), and miscellaneous entity (MISC).",
"cite_spans": [
{
"start": 451,
"end": 457,
"text": "(MISC)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we present a state-of-the-art NER system for the Vietnamese language that uses automatic syntactic features with word embeddings in a Bi-LSTM. Our system outperforms by about 3% both the leading system of the VLSP campaign, which utilizes a number of syntactic and hand-crafted features, and the end-to-end system described in (Pham and Le-Hong, 2017), a combination of Bi-LSTM, Convolutional Neural Network (CNN), and Conditional Random Field (CRF). In sum, the overall F1 score of our system is 92.05% as assessed on the standard test set of VLSP. The contributions of this work are:",
"cite_spans": [
{
"start": 311,
"end": 334,
"text": "(Pham and Le-Hong, 2017",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We demonstrate a deep learning model reaching state-of-the-art performance on the Vietnamese NER task. By incorporating automatic syntactic features, our system (Bi-LSTM), although simpler than the Bi-LSTM-CNN-CRF model described in (Pham and Le-Hong, 2017), achieves a much better result on the Vietnamese NER dataset. The simple architecture also contributes to the feasibility of our system in practice because it requires less time for the inference stage. Our best system utilizes part-of-speech, chunk, and regular expression type features with word embeddings as input for a two-layer Bi-LSTM model, which achieves an F1 score of 92.05%.",
"cite_spans": [
{
"start": 234,
"end": 258,
"text": "(Pham and Le-Hong, 2017)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We demonstrate the greater importance of syntactic features in Vietnamese NER compared to their impact in other languages. These features improve the F1 score by about 18%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We also show that network parameters such as network size and dropout are likely to affect the performance of our system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We conduct a thorough empirical study on applying common deep learning architectures to Vietnamese NER, including Recurrent Neural Networks (RNN) and unidirectional and bidirectional LSTMs. These models are also compared to conventional sequence labelling models such as Maximum Entropy Markov Models (MEMM).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We release our NER system for research purposes, which we believe will contribute positively to the long-term advancement of Vietnamese NER as well as Vietnamese language processing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The remainder of this paper is structured as follows. Section 2 summarizes related work on NER. Section 3 describes the features and model used in our system. Section 4 presents experimental results and discussion. Finally, Section 5 concludes the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We identify two main approaches to NER in the large body of research published in the last two decades. The first approach is characterized by the use of traditional sequence labelling models such as CRFs, hidden Markov models, support vector machines, and maximum entropy models that are heavily dependent on hand-crafted features (Florian et al., 2003; Lin and Wu, 2009; Durrett and Klein, 2014; Luo and Xiaojiang Huang, 2015). These systems endeavored to exploit external information beyond the available training data, such as gazetteers and unannotated data.",
"cite_spans": [
{
"start": 321,
"end": 343,
"text": "(Florian et al., 2003;",
"ref_id": "BIBREF4"
},
{
"start": 344,
"end": 361,
"text": "Lin and Wu, 2009;",
"ref_id": "BIBREF11"
},
{
"start": 362,
"end": 386,
"text": "Durrett and Klein, 2014;",
"ref_id": "BIBREF3"
},
{
"start": 387,
"end": 417,
"text": "Luo and Xiaojiang Huang, 2015)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
{
"text": "In the last few years, deep neural network approaches have gained popularity for the NER task. With the advance of computational power, more and more research has applied deep learning methods to improve the performance of NLP systems. LSTM and CNN are the prevalent models in these architectures. First, (Collobert et al., 2011) used a CNN over a sequence of word embeddings with a CRF layer on top. They nearly achieved state-of-the-art results on some sequence labelling tasks such as POS tagging and chunking, although not for NER. To improve the accuracy of recognizing named entities, (Huang et al., 2015) used a Bi-LSTM with a CRF layer for joint decoding. This model also used hand-crafted features to improve its performance. Recently, (Chiu and Nichols, 2016) proposed a hybrid model that combined a Bi-LSTM with a CNN to learn both character-level and word-level representations. Instead of using a CNN to learn character-level features like (Chiu and Nichols, 2016), (Lample et al., 2016) used a Bi-LSTM to capture both character- and word-level features.",
"cite_spans": [
{
"start": 335,
"end": 359,
"text": "(Collobert et al., 2011)",
"ref_id": "BIBREF2"
},
{
"start": 989,
"end": 1010,
"text": "(Lample et al., 2016)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
{
"text": "For Vietnamese, the VLSP community has organized an evaluation campaign that follows the rules of the CoNLL 2003 shared task to systematically compare NER systems. Participating systems approached this task with both traditional and deep learning architectures. In particular, the first-ranked system of the VLSP campaign, which achieved an F1 score of 88.78%, used an MEMM with many hand-crafted features (Le-Hong, 2016). Meanwhile, (Nguyen et al., 2016) adopted deep neural networks for this task. They used the system provided by (Lample et al., 2016), which consists of two types of LSTM models: Bi-LSTM-CRF and Stack-LSTM. Their best system achieved an F1 score of 83.80%. More recently, (Pham and Le-Hong, 2017) used an end-to-end system that is a combination of Bi-LSTM-CNN-CRF for Vietnamese NER. The F1 score of this system is 88.59%, which is competitive with the accuracy of (Le-Hong, 2016).",
"cite_spans": [
{
"start": 522,
"end": 543,
"text": "(Lample et al., 2016)",
"ref_id": "BIBREF9"
},
{
"start": 684,
"end": 708,
"text": "(Pham and Le-Hong, 2017)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
{
"text": "Word Embeddings We use a word embedding set trained on 7.3GB of text from 2 million articles collected through a Vietnamese news portal 4 with the word2vec 5 toolkit. Details of this word embedding set are described in (Pham and Le-Hong, 2017).",
"cite_spans": [
{
"start": 205,
"end": 229,
"text": "(Pham and Le-Hong, 2017)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Engineering",
"sec_num": "3.1"
},
{
"text": "To improve the performance of our system, we incorporate syntactic features with word embeddings as input for the Bi-LSTM model. These syntactic features are generated automatically by public tools, so the actual input of our system is only raw text. The additional features consist of part-of-speech (POS) and chunk tags that are available in the dataset, and regular expression types that capture common organization and location names. These regular expressions over tokens, described in detail in (Le-Hong, 2016), provide helpful features for classifying candidate named entities, as shown in the experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Syntactic Features",
"sec_num": null
},
{
"text": "Long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) is a special kind of Recurrent Neural Network (RNN) that is capable of dealing with the gradient exploding and vanishing problems (Bengio et al., 1994; Pascanu et al., 2013) when handling long-range sequences. This is because LSTM uses memory cells instead of the hidden units of a standard RNN. In particular, there are three multiplicative gates in a memory cell unit that decide on the amount of information to pass on to the next step. Therefore, LSTM can exploit long-range dependencies in the data. The gates and states are computed as follows:",
"cite_spans": [
{
"start": 30,
"end": 63,
"text": "(Hochreiter and Schmidhuber, 1997",
"ref_id": "BIBREF7"
},
{
"start": 202,
"end": 223,
"text": "(Bengio et al., 1994;",
"ref_id": "BIBREF0"
},
{
"start": 224,
"end": 245,
"text": "Pascanu et al., 2013)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Long Short-Term Memory",
"sec_num": "3.2"
},
{
"text": "i_t = \\sigma(W_i h_{t-1} + U_i x_t + b_i), f_t = \\sigma(W_f h_{t-1} + U_f x_t + b_f), c_t = f_t \\odot c_{t-1} + i_t \\odot \\tanh(W_c h_{t-1} + U_c x_t + b_c), o_t = \\sigma(W_o h_{t-1} + U_o x_t + b_o), h_t = o_t \\odot \\tanh(c_t)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Long Short-Term Memory",
"sec_num": "3.2"
},
{
"text": "The original LSTM uses only past features. For many sequence labelling tasks, it is beneficial to access both past and future contexts. For this reason, we utilize the bidirectional LSTM (Bi-LSTM) (Graves and Schmidhuber, 2005; Graves et al., 2013) for the NER task. The basic idea is to run both forward and backward passes to capture past and future information, respectively, and concatenate the two hidden states to form the final representation. Figure 2 illustrates the backward and forward passes of the Bi-LSTM.",
"cite_spans": [
{
"start": 202,
"end": 232,
"text": "(Graves and Schmidhuber, 2005;",
"ref_id": "BIBREF5"
},
{
"start": 233,
"end": 253,
"text": "Graves et al., 2013)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 447,
"end": 455,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Bidirectional Long Short-Term Memory",
"sec_num": "3.3"
},
{
"text": "For Vietnamese named entity recognition, we use a 2-layer Bi-LSTM with a softmax layer on top to detect named entities in input sentences. The inputs are the combination of word and syntactic features, and the outputs are probability distributions over named entity tags. Figure 3 describes the details of our deep learning model. In the next sections, we present our experimental results. We conduct experiments on the VLSP NER shared task 2016 corpus. Four named entity types are evaluated in this corpus: person, location, organization, and other named entities. The definitions of these entity types match their descriptions in the CoNLL 2003 shared task.",
"cite_spans": [],
"ref_spans": [
{
"start": 284,
"end": 292,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Our Deep Learning Model",
"sec_num": "3.4"
},
{
"text": "There are five columns in this dataset: surface word, automatic POS and chunking tags, and named entity and nested named entity labels, of which the first four columns conform to the format of the CoNLL 2003 shared task. We do not use the fifth column because our system focuses only on named entities without nesting. Named entities are labelled with the IOB notation as in the CoNLL 2003 shared task. In particular, there are 9 named entity labels in this corpus: B-PER and I-PER for persons, B-ORG and I-ORG for organizations, B-LOC and I-LOC for locations, B-MISC and I-MISC for other named entities, and O for other elements. Because we use the early stopping method described in (Graves et al., 2013) to avoid overfitting when training our neural network models, we hold out one part of the training data for validation. The number of sentences in each part of the VLSP corpus is described in Table 2.",
"cite_spans": [
{
"start": 692,
"end": 713,
"text": "(Graves et al., 2013)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 894,
"end": 901,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Our Deep Learning Model",
"sec_num": "3.4"
},
{
"text": "We evaluate the performance of our system with F 1 score:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Method",
"sec_num": "4.2"
},
{
"text": "F_1 = \\frac{2 \\cdot precision \\cdot recall}{precision + recall}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Method",
"sec_num": "4.2"
},
{
"text": "Precision is the percentage of named entities identified by the system that are correct, and recall is the percentage of named entities present in the corpus that are identified by the system. To compare fairly with previous systems, we use the evaluation script provided by the CoNLL 2003 shared task 6 to calculate the F1 score of our NER system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Method",
"sec_num": "4.2"
},
{
"text": "In this section, we analyze the efficiency of word embeddings, bidirectional learning, model configuration, and especially automatic syntactic features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "Embeddings To evaluate the effectiveness of word embeddings, we compare the systems on three types of input: skip-gram, random vector, and onehot vector.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "The number of dimensions we choose for word embeddings is 300. We create random vectors for words that do not appear in the word embedding set by uniformly sampling from the range",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "[-\\sqrt{3/dim}, +\\sqrt{3/dim}]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "where dim is the dimension of the embeddings. For the random vector setting, we also sample the vectors for all words from this distribution. The performance of the system with each input type is presented in Table 3.",
"cite_spans": [],
"ref_spans": [
{
"start": 201,
"end": 208,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "We can conclude that word embeddings are an important factor in our model. Skip-gram vectors significantly improve our performance: the improvement is about 11% when using skip-gram vectors instead of random vectors, as shown in Table 3 (performance of our model on the three input types). Thus, we use skip-gram vectors as inputs for our system.",
"cite_spans": [],
"ref_spans": [
{
"start": 143,
"end": 150,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "In the second experiment, we examine the benefit of accessing both past and future contexts by comparing the performance of RNN, LSTM, and Bi-LSTM models. In this task, the RNN model fails because it faces the gradient vanishing/exploding problem when training on long-range dependencies (132 time steps), leading to unstable values of the cost function. For this reason, only the performances of the LSTM and Bi-LSTM models are shown in Table 4.",
"cite_spans": [],
"ref_spans": [
{
"start": 432,
"end": 439,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Effect of Bidirectional Learning",
"sec_num": null
},
{
"text": "We see that learning both past and future contexts is very useful for NER. Performance on all entity types increases, especially for ORG and MISC. The overall accuracy improves greatly, from 65.80% to 74.02%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of Bidirectional Learning",
"sec_num": null
},
{
"text": "In the third experiment, we investigate the improvement from adding more Bi-LSTM layers. Table 5 shows the accuracy when using one or two Bi-LSTM layers. We observe a significant improvement when using two layers of Bi-LSTM: the performance increases from 71.74% to 74.02%. Table 6 reports the performance of our model with and without dropout. Syntactic Features Integration As shown in the previous experiments, using only word features in deep learning models is not enough to achieve a state-of-the-art result. In particular, the accuracy of this model is only 74.02%, far lower than that of state-of-the-art systems for Vietnamese NER. In the following experiments, we add more useful features to enhance the performance of our deep learning model. As seen in the results, adding each of these syntactic features improves the performance significantly. The best result is obtained by adding part-of-speech, chunk, and regular expression features. The accuracy of this final system is 92.05%, which is much higher than the 74.02% of the system without syntactic features. A possible explanation lies in a characteristic of Vietnamese: named entities are often noun phrase chunks.",
"cite_spans": [],
"ref_spans": [
{
"start": 89,
"end": 96,
"text": "Table 5",
"ref_id": "TABREF7"
},
{
"start": 276,
"end": 283,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Number of Bi-LSTM Layers",
"sec_num": null
},
{
"text": "Comparison with Other Languages In the sixth experiment, we compare the role of syntactic features for the NER task in other languages. For this purpose, we run our system on the CoNLL 2003 dataset for English. The word embedding set we use for English is pre-trained with the GloVe model and is provided by the authors 7. Table 8 shows the performances of our system when adding part-of-speech and chunk features. Comparison with Previous Systems In the VLSP 2016 workshop, several different systems were proposed for Vietnamese NER. These systems focus on only three entity types: LOC, ORG, and PER. For fairness, we evaluate our performance on these named entity types on the same corpus. The accuracy of our best model over the three entity types is 92.02%, which is higher than that of the best participating system (Le-Hong, 2016) in that shared task by about 3.2%.",
"cite_spans": [],
"ref_spans": [
{
"start": 320,
"end": 327,
"text": "Table 8",
"ref_id": "TABREF11"
}
],
"eq_spans": [],
"section": "Number of Bi-LSTM Layers",
"sec_num": null
},
{
"text": "Moreover, (Pham and Le-Hong, 2017) used a combination of Bi-LSTM, CNN, and CRF that achieved nearly the same performance as (Le-Hong, 2016). That system is an end-to-end architecture that requires only word embeddings, while (Le-Hong, 2016) used many syntactic and hand-crafted features with an MEMM. Our system surpasses both of these systems by using a Bi-LSTM with automatic syntactic features, which takes less time for training and inference than the Bi-LSTM-CNN-CRF model and does not depend on many hand-crafted features as the MEMM does. Table 9 presents the accuracy of each system. Table 9 : Performance of our model and previous systems",
"cite_spans": [
{
"start": 10,
"end": 33,
"text": "(Pham and Le-Hong, 2017",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 524,
"end": 531,
"text": "Table 9",
"ref_id": null
},
{
"start": 570,
"end": 577,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Number of Bi-LSTM Layers",
"sec_num": null
},
{
"text": "In this work, we have presented a state-of-the-art named entity recognition system for the Vietnamese language, which achieves an F1 score of 92.05% on the standard dataset published by the VLSP community. Our system outperforms the first-ranked system of the related NER shared task by a large margin, 3.2% in particular. We have also shown the effectiveness of using automatic syntactic features in a Bi-LSTM model, which surpasses the combined Bi-LSTM-CNN-CRF model while requiring less time for computation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "http://www.cnts.ua.ac.be/conll2002/ner/ 2 http://www.cnts.ua.ac.be/conll2003/ner/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://vlsp.org.vn/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.baomoi.com 5 https://code.google.com/archive/p/ word2vec/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.cnts.ua.ac.be/conll2003/ner/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://nlp.stanford.edu/projects/ glove/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The second author is partly funded by the Vietnam National University, Hanoi (VNU) under project number QG.15.04. Any opinions, findings and conclusion expressed in this paper are those of the authors and do not necessarily reflect the view of VNU.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Learning long-term dependencies with gradient descent is difficult",
"authors": [
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Patrice",
"middle": [],
"last": "Simard",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Frasconi",
"suffix": ""
}
],
"year": 1994,
"venue": "IEEE transactions on neural networks",
"volume": "5",
"issue": "2",
"pages": "157--166",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshua Bengio, Patrice Simard, and Paolo Frasconi. 1994. Learning long-term dependencies with gradi- ent descent is difficult. IEEE transactions on neural networks, 5(2):157-166.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Named entity recognition with bidirectional lstm-cnns",
"authors": [
{
"first": "Jason",
"middle": [
"P.C."
],
"last": "Chiu",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Nichols",
"suffix": ""
}
],
"year": 2016,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "4",
"issue": "",
"pages": "357--370",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason P.C. Chiu and Eric Nichols. 2016. Named en- tity recognition with bidirectional lstm-cnns. Transac- tions of the Association for Computational Linguistics, 4:357-370.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Natural language processing (almost) from scratch",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Karlen",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Kuksa",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2493--2537",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2493- 2537.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A joint model for entity analysis: Coreference, typing, and linking",
"authors": [
{
"first": "Greg",
"middle": [],
"last": "Durrett",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2014,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "477--490",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Greg Durrett and Dan Klein. 2014. A joint model for en- tity analysis: Coreference, typing, and linking. Trans- actions of the Association for Computational Linguis- tics, 2:477-490.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Named entity recognition through classifier combination",
"authors": [
{
"first": "Radu",
"middle": [],
"last": "Florian",
"suffix": ""
},
{
"first": "Abe",
"middle": [],
"last": "Ittycheriah",
"suffix": ""
},
{
"first": "Hongyan",
"middle": [],
"last": "Jing",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of CoNLL-2003",
"volume": "",
"issue": "",
"pages": "168--171",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Radu Florian, Abe Ittycheriah, Hongyan Jing, and Tong Zhang. 2003. Named entity recognition through clas- sifier combination. In Walter Daelemans and Miles Osborne, editors, Proceedings of CoNLL-2003, pages 168-171. Edmonton, Canada.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Framewise phoneme classification with bidirectional lstm networks",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of 2005 IEEE International Joint Conference on Neural Networks",
"volume": "4",
"issue": "",
"pages": "2047--2052",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Graves and J\u00fcrgen Schmidhuber. 2005. Frame- wise phoneme classification with bidirectional lstm networks. In Proceedings of 2005 IEEE International Joint Conference on Neural Networks, volume 4, pages 2047-2052, Montreal, QC, Canada. IEEE.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Speech recognition with deep recurrent neural networks",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "Abdel-rahman",
"middle": [],
"last": "Mohamed",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of 2013 IEEE international conference on acoustics, speech and signal processing",
"volume": "",
"issue": "",
"pages": "6645--6649",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Graves, Abdel rahmand Mohamed, and Geoffrey Hinton. 2013. Speech recognition with deep recur- rent neural networks. In Proceedings of 2013 IEEE international conference on acoustics, speech and sig- nal processing, pages 6645-6649, Vancouver, BC, Canada. IEEE.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735- 1780.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Bidirectional lstm-crf models for sequence tagging",
"authors": [
{
"first": "Zhiheng",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1508.01991"
]
},
"num": null,
"urls": [],
"raw_text": "Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirec- tional lstm-crf models for sequence tagging. arXiv preprint arXiv:1508.01991.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Neural architectures for named entity recognition",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Sandeep",
"middle": [],
"last": "Subramanian",
"suffix": ""
},
{
"first": "Kazuya",
"middle": [],
"last": "Kawakami",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1603.01360"
]
},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample, Miguel Ballesteros, Sandeep Subra- manian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. arXiv preprint arXiv:1603.01360.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Vietnamese named entity recognition using token regular expressions and bidirectional inference",
"authors": [
{
"first": "Phuong",
"middle": [],
"last": "Le-Hong",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of The Fourth International Workshop on Vietnamese Language and Speech Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Phuong Le-Hong. 2016. Vietnamese named entity recognition using token regular expressions and bidi- rectional inference. In Proceedings of The Fourth In- ternational Workshop on Vietnamese Language and Speech Processing, Hanoi, Vietnam.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Phrase clustering for discriminative learning",
"authors": [
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Xiaoyun",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "2",
"issue": "",
"pages": "1030--1038",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekang Lin and Xiaoyun Wu. 2009. Phrase clustering for discriminative learning. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, volume 2, pages 1030-1038. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Joint entity recognition and disambiguation",
"authors": [
{
"first": "Gang",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Zaiqing Nie Xiaojiang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "879--888",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gang Luo and Zaiqing Nie Xiaojiang Huang, Chin- Yew Lin. 2015. Joint entity recognition and disam- biguation. In Proceedings of the 2015 Conference on Empirical Methods on Natural Language Processing, pages 879-888. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Vietnamese named entity recognition at vlsp 2016 evaluation campaign",
"authors": [
{
"first": "Le",
"middle": [
"Minh"
],
"last": "Truong Son Nguyen",
"suffix": ""
},
{
"first": "Xuan",
"middle": [
"Chien"
],
"last": "Nguyen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Tran",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of The Fourth International Workshop on Vietnamese Language and Speech Processing",
"volume": "28",
"issue": "",
"pages": "1310--1318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Truong Son Nguyen, Le Minh Nguyen, and Xuan Chien Tran. 2016. Vietnamese named entity recognition at vlsp 2016 evaluation campaign. In Proceedings of The Fourth International Workshop on Vietnamese Language and Speech Processing, Hanoi, Vietnam. Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neural networks. In The 30th International Conference on Machine Learning, volume 28, pages 1310-1318, At- lanta, USA.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "End-toend recurrent neural network models for vietnamese named entity recognition: Word-level vs. characterlevel",
"authors": [
{
"first": "Thai-Hoang",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Phuong",
"middle": [],
"last": "Le-Hong",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of The 15th International Conference of the Pacific Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "251--264",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thai-Hoang Pham and Phuong Le-Hong. 2017. End-to- end recurrent neural network models for vietnamese named entity recognition: Word-level vs. character- level. In Proceedings of The 15th International Con- ference of the Pacific Association for Computational Linguistics, pages 251-264.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"text": "LSTM memory cell where \u03c3 and are element-wise sigmoid function and element-wise product, i, f, o and c are the input gate, forget gate, output gate and cell vector respectively. U i , U f , U c , U o are weight matrices that connect input x and gates, and U i , U f , U c , U o are weight matrices that connect gates and hidden state h, and finally, b i , b f , b c , b o are the bias vectors. Figure 1 illustrates a single LSTM memory cell.",
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"uris": null,
"text": "Figure 2: Bidirectional LSTM",
"type_str": "figure"
},
"TABREF0": {
"text": "presents the number of annotated named entities in the training and testing set.",
"content": "<table><tr><td>Entity Types</td><td colspan=\"2\">Training Set Testing Set</td></tr><tr><td>Location</td><td>6,247</td><td>1,379</td></tr><tr><td>Organization</td><td>1,213</td><td>274</td></tr><tr><td>Person</td><td>7,480</td><td>1,294</td></tr><tr><td>Miscellaneous names</td><td>282</td><td>49</td></tr><tr><td>All</td><td>15,222</td><td>2,996</td></tr></table>",
"num": null,
"type_str": "table",
"html": null
},
"TABREF1": {
"text": "Statistics of named entities in VLSP corpus",
"content": "<table/>",
"num": null,
"type_str": "table",
"html": null
},
"TABREF3": {
"text": "Size of each data set in VLSP corpus",
"content": "<table/>",
"num": null,
"type_str": "table",
"html": null
},
"TABREF4": {
"text": ".48 83.05 79.39 66.37 72.26 79.21 72.37 75.63 MISC 84.14 78.37 81.07 65.23 69.80 76.70 82.14 46.94 59.74 ORG 49.85 50.51 50.07 35.19 19.56 25.11 30.56 12.04 17.28 PER 72.77 65.73 69.06 70.76 50.35 58.83 69.13 52.09 59.41 ALL 75.88 72.26 74.02 72.99 55.23 62.87 57.68 72.88 64.39",
"content": "<table><tr><td/><td/><td/><td/><td/><td/><td/><td colspan=\"3\">Figure 3: Our deep learning model</td></tr><tr><td>Entity</td><td/><td colspan=\"2\">Skip-Gram</td><td/><td>Random</td><td/><td/><td>One-hot</td></tr><tr><td/><td>Pre.</td><td>Rec.</td><td>F1</td><td>Pre.</td><td>Rec.</td><td>F1</td><td>Pre.</td><td>Rec.</td><td>F1</td></tr><tr><td>LOC</td><td colspan=\"2\">83.63 82</td><td/><td/><td/><td/><td/><td/></tr></table>",
"num": null,
"type_str": "table",
"html": null
},
"TABREF5": {
"text": "",
"content": "<table><tr><td>Entity</td><td/><td>Bi-LSTM</td><td/><td/><td>LSTM</td></tr><tr><td/><td>Pre.</td><td>Rec.</td><td>F 1</td><td>Pre.</td><td>Rec.</td><td>F 1</td></tr><tr><td>LOC</td><td colspan=\"6\">83.63 82.48 83.05 74.60 77.38 75.96</td></tr><tr><td colspan=\"5\">MISC 84.14 78.37 81.07 2.15</td><td>2.04</td><td>2.09</td></tr><tr><td colspan=\"7\">ORG 49.85 50.51 50.07 32.22 34.60 33.60</td></tr><tr><td>PER</td><td colspan=\"6\">72.77 65.73 69.06 67.95 60.73 64.12</td></tr><tr><td>ALL</td><td colspan=\"6\">75.88 72.26 74.02 66.61 65.04 65.80</td></tr></table>",
"num": null,
"type_str": "table",
"html": null
},
"TABREF7": {
"text": "Performance of our model when using one and two layers LOC 83.63 82.48 83.05 80.98 76.79 78.79 MISC 84.14 78.37 81.07 84.09 64.49 72.73 ORG 49.85 50.51 50.07 41.09 32.92 36.43 PER 72.77 65.73 69.06 67.35 59.23 62.97 ALL 75.88 72.26 74.02 71.97 64.99 68.27",
"content": "<table><tr><td>Effect of Dropout In the fourth experiment, we compare the results of our model with and without</td></tr><tr><td>dropout layers. The optimal dropout ratio for our</td></tr><tr><td>experiments is 0.5. The accuracy with dropout is</td></tr><tr><td>74.02%, compared to 68.27% without dropout. It</td></tr><tr><td>proves the effectiveness of dropout for preventing</td></tr><tr><td>overfitting.</td></tr></table>",
"num": null,
"type_str": "table",
"html": null
},
"TABREF8": {
"text": "",
"content": "<table><tr><td/><td/><td colspan=\"2\">shows the</td></tr><tr><td colspan=\"4\">improvement when adding part-of-speech, chunk,</td></tr><tr><td colspan=\"4\">case-sensitive, and regular expression features.</td></tr><tr><td>Features</td><td>Pre.</td><td>Rec.</td><td>F 1</td></tr><tr><td>Word</td><td colspan=\"3\">75.88 72.26 74.02</td></tr><tr><td>Word+POS</td><td colspan=\"3\">84.23 87.64 85.90</td></tr><tr><td>Word+Chunk</td><td colspan=\"3\">90.73 83.18 86.79</td></tr><tr><td>Word+Case</td><td colspan=\"3\">83.68 84.45 84.06</td></tr><tr><td>Word+Regex</td><td colspan=\"3\">76.58 71.86 74.13</td></tr><tr><td colspan=\"4\">Word+POS+Chunk+Case+Regex 90.25 92.55 91.39</td></tr><tr><td>Word+POS+Chunk+Regex</td><td colspan=\"3\">91.09 93.03 92.05</td></tr></table>",
"num": null,
"type_str": "table",
"html": null
},
"TABREF9": {
"text": "Performance of our model when adding more features",
"content": "<table/>",
"num": null,
"type_str": "table",
"html": null
},
"TABREF11": {
"text": "The importance of syntactic features for Vietnamese compared to it for English For English NER task, adding the syntactic features does not help to improve the performance of our system. Thus, we can conclude that syntactic features have the greater importance in Vietnamese NER compared to their impact in English.",
"content": "<table/>",
"num": null,
"type_str": "table",
"html": null
}
}
}
}