| { |
| "paper_id": "Y18-1018", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T13:36:26.573401Z" |
| }, |
| "title": "Effectiveness of Character Language Model for Vietnamese Named Entity Recognition", |
| "authors": [ |
| { |
| "first": "Xuan-Dung", |
| "middle": [], |
| "last": "Doan", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Trung-Thanh", |
| "middle": [], |
| "last": "Dang", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Le-Minh", |
| "middle": [], |
| "last": "Nguyen", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "nguyenml@jaist.ac.jp" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Recently, many studies have indicated that character language models can capture syntactic-semantic word features, resulting in state-of-the-art performance on typical NLP sequence labeling tasks. This paper shows the effectiveness of a character language model for Vietnamese Named Entity Recognition by comparing several methods. We evaluate the proposed model on the VLSP 2016 dataset and our own VTNER dataset. Experimental results show that our model is the current state-of-the-art end-to-end approach for the task.", |
| "pdf_parse": { |
| "paper_id": "Y18-1018", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Recently, many studies have indicated that character language models can capture syntactic-semantic word features, resulting in state-of-the-art performance on typical NLP sequence labeling tasks. This paper shows the effectiveness of a character language model for Vietnamese Named Entity Recognition by comparing several methods. We evaluate the proposed model on the VLSP 2016 dataset and our own VTNER dataset. Experimental results show that our model is the current state-of-the-art end-to-end approach for the task.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Named-entity recognition (NER) is the task of automatically identifying and classifying elements of a document into predefined categories, such as organization, person, location, currency, and time. NER is used in data mining systems, text summarization, question answering, machine translation, etc. Most methods for NER are based on machine learning.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Many methods for NER are based on supervised learning. In particular, Conditional Random Fields (CRF) (Lafferty et al., 2001; Sutton and McCallum, 2006) and Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) are the most popular. (Le, 2016) combined regular expressions over tokens with a bidirectional inference method in a sequence labeling model. (Pham and Le, 2017) combined Bi-LSTM, CNN and CRF, achieving the same performance as (Le, 2016) ; this system is an end-to-end architecture that requires only word embeddings. Later, (Pham and Le, 2017) surpassed both (Le, 2016) and the end-to-end system by adding automatically generated syntactic features to the Bi-LSTM, presenting a state-of-the-art named entity recognition system for Vietnamese. Minh (2018) showed the effectiveness of rich features in CRF methods, using the default CRF features together with POS and chunking tags, and achieved the best F1 score.", |
| "cite_spans": [ |
| { |
| "start": 152, |
| "end": 175, |
| "text": "(Lafferty et al., 2001;", |
| "ref_id": null |
| }, |
| { |
| "start": 176, |
| "end": 202, |
| "text": "Sutton and McCallum, 2006)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 236, |
| "end": 270, |
| "text": "(Hochreiter and Schmidhuber, 1997)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 301, |
| "end": 311, |
| "text": "(Le, 2016)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 420, |
| "end": 439, |
| "text": "(Pham and Le, 2017)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 510, |
| "end": 520, |
| "text": "(Le, 2016)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 614, |
| "end": 633, |
| "text": "(Pham and Le, 2017)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 656, |
| "end": 666, |
| "text": "(Le, 2016)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 682, |
| "end": 701, |
| "text": "(Pham and Le, 2017)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Recently, contextualized word embeddings (Peters et al., 2017; Peters et al., 2018) capture word semantics in context to address the polysemous and context-dependent nature of words. They report new state-of-the-art results for NER, but this approach requires a larger model, an external corpus and time-consuming training. (Liu et al., 2018) proposed a sequence labeling framework, LM-LSTM-CRF, and (Akbik et al., 2018) suggested Contextual String Embeddings, both of which achieved state-of-the-art results on English datasets. Both models leverage word-level and character-level knowledge.", |
| "cite_spans": [ |
| { |
| "start": 41, |
| "end": 62, |
| "text": "(Peters et al., 2017;", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 63, |
| "end": 83, |
| "text": "Peters et al., 2018)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 317, |
| "end": 335, |
| "text": "(Liu et al., 2018)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 393, |
| "end": 413, |
| "text": "(Akbik et al., 2018)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Thus, we implemented the CRF and LSTM-CRF methods and checked the effectiveness of handcrafted features on these models. In addition, we applied the character language models proposed by (Liu et al., 2018) and (Akbik et al., 2018) to the VLSP dataset (Nguyen and Vu, 2016) and our VTNER dataset. Contributions: we give an overview of methods for Vietnamese Named Entity Recognition; we demonstrate the effectiveness of a character language model for named entity recognition; and we make our VTNER dataset publicly available to the research community.", |
| "cite_spans": [ |
| { |
| "start": 176, |
| "end": 194, |
| "text": "(Liu et al., 2018)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 199, |
| "end": 219, |
| "text": "(Akbik et al., 2018)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "As proposed by (Lafferty et al., 2001) and (Sutton and McCallum, 2006) , CRF is a popular method for sequence labeling.", |
| "cite_spans": [ |
| { |
| "start": 15, |
| "end": 38, |
| "text": "(Lafferty et al., 2001)", |
| "ref_id": null |
| }, |
| { |
| "start": 41, |
| "end": 68, |
| "text": "(Sutton and McCallum, 2006)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conditional Random Field", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "With X, Y as random vectors, \\theta = \\{\\lambda_k\\} \\in \\mathbb{R}^K a parameter vector, and \\{f_k(y, y', x_t)\\}_{k=1}^{K} a set of feature functions, the linear-chain CRF model calculates the probability p(y|x):", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conditional Random Field", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "p(y|x) = \\frac{1}{Z(x)} \\exp \\sum_{k=1}^{K} \\lambda_k f_k(y_t, y_{t-1}, x_t) \\quad (1)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conditional Random Field", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "where Z(x) is a normalization function. The estimate of \\theta = \\{\\lambda_k\\} is obtained by maximizing the log-likelihood of p(y|x):", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conditional Random Field", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "l(\\theta) = \\sum_{i=1}^{N} \\log p(y^{(i)} | x^{(i)})", |
| "eq_num": "(2)" |
| } |
| ], |
| "section": "Conditional Random Field", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "After estimating \\theta, inference is performed with the Viterbi algorithm.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conditional Random Field", |
| "sec_num": "2.1" |
| }, |
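As a concrete illustration of the inference step above, here is a minimal Viterbi decoder for a linear-chain model, sketched in plain NumPy; the score matrices and label set are hypothetical, not taken from the paper's actual CRF implementation.

```python
import numpy as np

def viterbi(emissions, transitions):
    """Viterbi decoding for a linear-chain model.

    emissions:   (T, L) array of per-position label scores.
    transitions: (L, L) array; transitions[i, j] scores moving from
                 label i at step t-1 to label j at step t.
    Returns the highest-scoring label sequence as a list of indices.
    """
    T, L = emissions.shape
    score = emissions[0].copy()            # best score ending in each label
    back = np.zeros((T, L), dtype=int)     # backpointers
    for t in range(1, T):
        # candidate scores for every (previous label, current label) pair
        cand = score[:, None] + transitions + emissions[t][None, :]
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    # follow backpointers from the best final label
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```

With uniform transition scores, the decoder reduces to the per-position argmax of the emission scores.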
| { |
| "text": "Recurrent Neural Networks (RNN) (Goller and Kuchler, 1996) can summarize the semantics of a sentence in lower-dimensional vectors. Given an input sequence x_1, x_2, \\ldots, x_T, an RNN calculates:", |
| "cite_spans": [ |
| { |
| "start": 31, |
| "end": 57, |
| "text": "(Goller and Kuchler, 1996)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Long Short Term Memory", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "h_i = f(h_{i-1}, x_i), \\quad i = 1, \\ldots, T", |
| "eq_num": "(3)" |
| } |
| ], |
| "section": "Long Short Term Memory", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "where h_i denotes the hidden state of the sequence after observation x_i. Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) extends the RNN with gating mechanisms:", |
| "cite_spans": [ |
| { |
| "start": 102, |
| "end": 136, |
| "text": "(Hochreiter and Schmidhuber, 1997)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Long Short Term Memory", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "[f_t, i_t, o_t] = \\sigma(W[h_{t-1}, x_t] + b) \\quad (4) \\qquad l_t = \\tanh(V[h_{t-1}, x_t] + d) \\quad (5) \\qquad c_t = f_t \\odot c_{t-1} + i_t \\odot l_t \\quad (6) \\qquad h_t = o_t \\odot \\tanh(c_t)", |
| "eq_num": "(7)" |
| } |
| ], |
| "section": "Long Short Term Memory", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "where c_t is a memory cell and f_t, i_t, o_t are the forget, input and output gates, respectively. A popular RNN variant is the bidirectional network (BRNN), which can summarize information in both directions. On top of the BRNN, character-level and word-level representations are combined in the prediction model. The final layer is defined as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Long Short Term Memory", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "a_q = \\sigma(W_{dec} \\cdot \\phi(W_{fusion}[\\overrightarrow{h}_T, \\overleftarrow{h}_0]))", |
| "eq_num": "(8)" |
| } |
| ], |
| "section": "Long Short Term Memory", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "where W_{fusion} unifies the bidirectional RNN states, \\sigma is the sigmoid function, \\phi is a non-linear function, and a_q is the prediction output. Finally, we minimize the logistic loss to optimize the network. (Lample et al., 2016) proposed the LSTM-CRF model, a joint LSTM and CRF model that outperforms both the LSTM model and the CRF model on their own. The idea is to apply Viterbi inference after the final layer of the LSTM model.", |
| "cite_spans": [ |
| { |
| "start": 196, |
| "end": 217, |
| "text": "(Lample et al., 2016)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Long Short Term Memory", |
| "sec_num": "2.2" |
| }, |
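To make Eqs. (4)-(7) concrete, the following is a minimal single-step LSTM cell in NumPy. The stacked layout (one matrix W producing all three gates) and the parameter names follow the equations above; the array shapes are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(h_prev, c_prev, x, W, b, V, d):
    """One LSTM step following Eqs. (4)-(7).

    W: (3H, H+X) produces the forget, input and output gates stacked;
    V: (H, H+X) produces the candidate cell values l_t. H is the hidden
    size, X the input size.
    """
    z = np.concatenate([h_prev, x])              # [h_{t-1}, x_t]
    f, i, o = np.split(sigmoid(W @ z + b), 3)    # Eq. (4): the three gates
    l = np.tanh(V @ z + d)                       # Eq. (5): candidate values
    c = f * c_prev + i * l                       # Eq. (6): cell update
    h = o * np.tanh(c)                           # Eq. (7): new hidden state
    return h, c
```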
| { |
| "text": "After that, (Ma and Hovy, 2016) proposed the bidirectional LSTM-CNN-CRF model, an end-to-end sequence labeling model whose results are better than LSTM-CRF. The idea is to use a CNN layer for the character embeddings.", |
| "cite_spans": [ |
| { |
| "start": 12, |
| "end": 31, |
| "text": "(Ma and Hovy, 2016)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LSTM-CRF", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Recently, (Liu et al., 2018) proposed an effective sequence labeling framework, LM-LSTM-CRF. They incorporated a neural language model into the sequence labeling task and conducted multi-task learning to guide the language model towards task-specific key knowledge. They combined the CRF model and the neural language model into a joint objective function:", |
| "cite_spans": [ |
| { |
| "start": 10, |
| "end": 28, |
| "text": "(Liu et al., 2018)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LSTM-CRF", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "J = -\\sum_i \\left( p(y_i|Z_i) + \\lambda (\\log p_f(x_i) + \\log p_r(x_i)) \\right) \\quad (9)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LSTM-CRF", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "where p(y_i|Z_i) is the probability calculated by the CRF layer, p_f(x_i) is the prediction probability for the word obtained by reading the character sequence from left to right, and p_r(x_i) is the prediction probability obtained by reading the character sequence from right to left.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LSTM-CRF", |
| "sec_num": "2.3" |
| }, |
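Taken term by term, the joint objective in Eq. (9) can be sketched as follows; the inputs are hypothetical per-word scores, not outputs of the actual LM-LSTM-CRF implementation.

```python
import numpy as np

def joint_objective(p_crf, logp_fwd, logp_bwd, lam=1.0):
    """Joint objective J of Eq. (9), computed term by term.

    p_crf:    probabilities p(y_i | Z_i) from the CRF layer.
    logp_fwd: log p_f(x_i), forward character LM word log-probabilities.
    logp_bwd: log p_r(x_i), backward character LM word log-probabilities.
    lam:      the weight lambda balancing the two tasks.
    """
    p_crf = np.asarray(p_crf, dtype=float)
    lm = np.asarray(logp_fwd, dtype=float) + np.asarray(logp_bwd, dtype=float)
    return -np.sum(p_crf + lam * lm)
```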
| { |
| "text": "We compared the CRF and LSTM-CRF models on the VLSP 2016 dataset and the VTNER dataset, and checked the effectiveness of each feature for NER accuracy in each model. In particular, we integrated the character language model proposed by (Liu et al., 2018) and (Akbik et al., 2018) into our system (LM-LSTM-CRF).", |
| "cite_spans": [ |
| { |
| "start": 214, |
| "end": 232, |
| "text": "(Liu et al., 2018)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 237, |
| "end": 257, |
| "text": "(Akbik et al., 2018)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Proposed Method", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "c = (c_{0,\\ast}, c_{1,1}, c_{1,2}, \\ldots, c_{1,\\ast}, c_{2,1}, \\ldots, c_{n,\\ast})", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Proposed Method", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": ", where c_{i,j} is the j-th character of word w_i and c_{i,\\ast} is the space character after w_i. By training a language model, we learn P_f(x_i | c_{0,\\ast}, \\ldots, c_{i-1,\\ast}), an estimate of the predictive distribution over the next word given the preceding characters.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Proposed Method", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "P_f(x_i | c_{0,\\ast}, \\ldots, c_{i-1,\\ast}) = \\mathrm{softmax}(V f_t + b) \\quad (10)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Proposed Method", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "where f_t represents the encoding of the entire preceding character sequence, read from left to right, and V and b are the weight and bias parameters.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Proposed Method", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "We also adopted a reversed-order language model, which calculates the generation probability from right to left, P(x_i | c_{i+1,\\ast}, \\ldots, c_{n,\\ast}), to extract knowledge in both directions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Proposed Method", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "P(x_i | c_{i+1,\\ast}, \\ldots, c_{n,\\ast}) = \\mathrm{softmax}(V r_t + b) \\quad (11)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Proposed Method", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "where r_t represents the encoding of the entire following character sequence, read from right to left.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Proposed Method", |
| "sec_num": "2.4" |
| }, |
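Both Eq. (10) and Eq. (11) end in the same projection-plus-softmax step, which can be sketched as below; the hidden state stands in for f_t or r_t, and the vocabulary size is an assumption made for illustration.

```python
import numpy as np

def word_distribution(hidden, V, b):
    """Map a character-level hidden state (f_t or r_t in Eqs. (10)-(11))
    to a distribution over the word vocabulary via softmax(V h + b)."""
    logits = V @ hidden + b
    logits = logits - logits.max()   # subtract max for numerical stability
    p = np.exp(logits)
    return p / p.sum()
```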
| { |
| "text": "The results show the effectiveness of our model on both datasets.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Proposed Method", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "To evaluate our system, we used the VLSP 2016 dataset, taking 80% of the data as the training set and the remaining 20% as the test set. We used 10% of the training set as a development set during training.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "2.5" |
| }, |
| { |
| "text": "In addition, we created our own VTNER dataset following the annotation guidelines from the VLSP 2018 organization. The dataset consists of articles crawled from VnExpress 1 , a popular online news website. We used the VnTokenizer tool 2 for word segmentation and the VnTagger tool 3 for POS tagging (noun, adjective, verb, etc.). Three annotators performed annotations independently, each annotating three files of different sizes: the first, second and third files contain about 1000, 2000 and 3000 sentences, respectively. After the annotation process was completed, each annotator checked another annotator's files, and finally an expert annotator checked the whole dataset. The dataset has nine files:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "2.5" |
| }, |
| { |
| "text": "\u2022 a1.conll, b1.conll, c1.conll (each file contains about 1000 sentences)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "2.5" |
| }, |
| { |
| "text": "\u2022 a2.conll, b2.conll, c2.conll (each file contains about 2000 sentences)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "2.5" |
| }, |
| { |
| "text": "\u2022 a3.conll, b3.conll, c3.conll (each file contains about 3000 sentences)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "2.5" |
| }, |
| { |
| "text": "The development set contains a1.conll, b1.conll and c1.conll. We use 3-fold cross-validation with three test sets: a3.conll, b3.conll and c3.conll. The training set contains a2.conll, b2.conll, c2.conll and two of the three files a3.conll, b3.conll, c3.conll. We use the F1 score to measure performance:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "2.5" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "F_1 = \\frac{2 \\cdot P \\cdot R}{P + R}", |
| "eq_num": "(12)" |
| } |
| ], |
| "section": "Data", |
| "sec_num": "2.5" |
| }, |
| { |
| "text": "where Precision (P) is the percentage of named entities found by the system that are correct, and Recall (R) is the percentage of named entities present in the corpus that are found by the system.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "2.5" |
| }, |
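Eq. (12) with entity-level precision and recall can be sketched as follows; representing entities as (start, end, type) tuples is an assumption of this sketch, not the paper's actual evaluation script.

```python
def f1_score(pred_entities, gold_entities):
    """Entity-level F1 as in Eq. (12).

    pred_entities, gold_entities: sets of (start, end, type) tuples;
    an entity counts as correct only on an exact match.
    """
    tp = len(pred_entities & gold_entities)          # exact-match entities
    p = tp / len(pred_entities) if pred_entities else 0.0
    r = tp / len(gold_entities) if gold_entities else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0
```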
| { |
| "text": "We implemented CRF model with features:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments 3.1 Experimental settings", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u2022 Word and neighbor word", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments 3.1 Experimental settings", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u2022 POS tags", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments 3.1 Experimental settings", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u2022 Word and neighbor word are in Vietnamese dictionary", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments 3.1 Experimental settings", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u2022 Word is a person name: first name, mid name, last name", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments 3.1 Experimental settings", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u2022 Word and neighbor word is a location name", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments 3.1 Experimental settings", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u2022 Capital feature: the first character is capitalization, all character is capitalization", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments 3.1 Experimental settings", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u2022 Word is punctuation and special character.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments 3.1 Experimental settings", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u2022 Word is the first word in a sentence.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments 3.1 Experimental settings", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We used a neighbor-word window size of 7 (three words before and three words after). We used CRFsuite 4 to implement the model. We experimented with CRF without POS tags (CRF-without tag), CRF with a window size of 3 (CRF-window 3), and CRF with a window size of 5 (CRF-window 5). In addition, we used CRF with Brown clusters (CRF-with Brown), as published in Minh (2018).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments 3.1 Experimental settings", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The Brown clusters were created from 6.3 GB of segmented text. We also used the LSTM-CRF model with the following features:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments 3.1 Experimental settings", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u2022 LSTM-CRF without Character (LSTM-CRFnot char)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments 3.1 Experimental settings", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u2022 LSTM-CRF with Capital feature (LSTM-CRFcap)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments 3.1 Experimental settings", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u2022 LSTM-CRF with POS tagging (LSTM-CRFpos)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments 3.1 Experimental settings", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u2022 LSTM-CRF with Capital feature and POS tagging feature with 100 dimensions (LSTM-CRF-cap-pos-100)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments 3.1 Experimental settings", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u2022 LSTM-CRF with Capital feature and POS tagging feature with 30 dimensions (LSTM-CRFcap-pos-30)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments 3.1 Experimental settings", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u2022 LSTM-CRF loads the embedding matrix (300 dimensions) and Capital feature and POS tagging feature with 30 dimensions (LSTM-CRFcap-post-emb-300)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments 3.1 Experimental settings", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u2022 LSTM-CRF loads the embedding matrix (100 dimensions) and Capital feature and POS tagging feature with 30 dimensions (LSTM-CRFcap-post-emb-100)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments 3.1 Experimental settings", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u2022 LSTM-CRF loads the embedding matrix (100 dimensions) and Capital feature and POS tagging feature with 30 dimensions and chunking feature (LSTM-CRF-cap-post-emb-chunk)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments 3.1 Experimental settings", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We used GloVe 5 pre-trained word embeddings, trained on the 6.3 GB of segmented text with the toolkit released by Stanford.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments 3.1 Experimental settings", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Optimization: We employed mini-batch stochastic gradient descent (SGD) with a learning rate of 0.01 and a gradient clipping of 5.0. We set the dropout rate to 0.5.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments 3.1 Experimental settings", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We used the embedding matrix with 100 dimensions on LM-LSTM-CRF for VLSP 2016 and our VTNER dataset.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments 3.1 Experimental settings", |
| "sec_num": "3" |
| }, |
| { |
| "text": "In particular, we used the LM-LSTM-CRF model proposed by (Liu et al., 2018) . This model uses highway layers (Srivastava et al., 2015) and a co-training strategy.", |
| "cite_spans": [ |
| { |
| "start": 58, |
| "end": 76, |
| "text": "(Liu et al., 2018)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 110, |
| "end": 135, |
| "text": "(Srivastava et al., 2015)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments 3.1 Experimental settings", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We show the results on the VLSP 2016 dataset and the VTNER dataset. State-of-the-art F1 scores on VLSP 2016: (Le, 2016) 89.66; end-to-end (Pham and Le, 2017) 88.59; vie-ner-lstm (Pham and Le, 2017) 92.05; feature-rich 93.93.", |
| "cite_spans": [ |
| { |
| "start": 61, |
| "end": 71, |
| "text": "(Le, 2016)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 89, |
| "end": 108, |
| "text": "(Pham and Le, 2017)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 128, |
| "end": 147, |
| "text": "(Pham and Le, 2017)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "\u2022 CRF models are shown in Table 2 . The CRF model with the POS tags feature and a window size of 7 gets the best score; the POS tags feature increases F1 by 2%. CRF with POS tags and the Brown cluster feature gets the best score.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 27, |
| "end": 34, |
| "text": "Table 2", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "VLSP 2016 dataset", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 LSTM-CRF models are shown in Table 3 . The results indicate that the LSTM-CRF models score higher than the CRF models. The POS tags feature increases the F1 score by 2%, and the 100-dimensional POS tags feature scores higher than the 300-dimensional one. The LSTM-CRF model that loads a pre-trained embedding matrix gets a better score than the plain LSTM-CRF model, and 100-dimensional pre-trained embeddings outperform 300-dimensional ones. The LSTM-CRF model with pre-trained embeddings, the capital feature, the POS tags feature and the chunking feature gets the best score (94.56% F1).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 32, |
| "end": 39, |
| "text": "Table 3", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "VLSP 2016 dataset", |
| "sec_num": null |
| }, |
| { |
| "text": "The LM-LSTM-CRF model scores lower only than the LSTM-CRF model with pre-trained embeddings, the capital feature, POS tags and the chunking feature. However, LM-LSTM-CRF is an end-to-end model, whereas handcrafted features such as POS tags and chunking are hard to apply to new tasks or domains.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "VLSP 2016 dataset", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 The SOTA models are shown in Table 4 . We compared with the SOTA models on VLSP 2016. LM-LSTM-CRF scores higher than Vitk (Le, 2016) and end-to-end (Pham and Le, 2017) . Our system scores lower than vie-ner-lstm (Pham and Le, 2017) and feature-rich (Minh, 2018) because our system is an end-to-end model, while vie-ner-lstm and feature-rich use handcrafted features including the chunking feature. On VTNER, the effect of handcrafted features is the same as on the VLSP 2016 dataset. Although the LSTM-CRF models used the capital and POS tags features, LM-LSTM-CRF scores higher than both the CRF and LSTM-CRF models. Moreover, LM-LSTM-CRF scores higher than the LSTM-CRF models here because, unlike on the VLSP 2016 dataset, we did not use the chunking feature in the LSTM-CRF models; chunking features are difficult to obtain for the Vietnamese language.", |
| "cite_spans": [ |
| { |
| "start": 127, |
| "end": 137, |
| "text": "(Le, 2016)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 151, |
| "end": 170, |
| "text": "(Pham and Le, 2017)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 220, |
| "end": 239, |
| "text": "(Pham and Le, 2017)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 32, |
| "end": 39, |
| "text": "Table 4", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "VLSP 2016 dataset", |
| "sec_num": null |
| }, |
| { |
| "text": "In particular, LM-LSTM-CRF with highway layers and co-training obtains the best scores. This is because the highway networks transform the output of the character-level layers into different semantic spaces for the different objectives of the co-training. Hence, our language model can provide knowledge related to the sequence labeling task without forcing it to share the whole feature space.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "VTNER dataset", |
| "sec_num": null |
| }, |
| { |
| "text": "In some cases, the LM-LSTM-CRF model is better at recognizing entities than the LSTM-CRF-cap-pos and CRF models. For example: LG l\u00e0 c\u00f4ng ty g\u00ec? (What company is LG?)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Here LG is labeled ORG. Table 7 shows some examples in which our system performs more accurately than the others. In the first four examples, LM-LSTM-CRF correctly identifies all person names, while CRF and LSTM-CRF-cap-pos correctly identify only one of the four cases. In the last example, LM-LSTM-CRF correctly identifies the organization name, while CRF and LSTM-CRF-cap-pos fail to do so.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 8, |
| "end": 15, |
| "text": "Table 7", |
| "ref_id": "TABREF8" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Analysis", |
| "sec_num": "4" |
| }, |
| { |
| "text": "In this paper, we carefully conducted various experiments on named-entity recognition for Vietnamese and identified the state-of-the-art model on standard data. We created the VTNER dataset with 20,500 sentences. The best result on the VTNER dataset is obtained by our LM-LSTM-CRF model. On the VLSP 2016 dataset, the LM-LSTM-CRF result is lower than that of the LSTM-CRF model with the word embedding, capital, POS tags and chunking features; however, chunking and other handcrafted features are hard to apply to new tasks or domains. The results show that LM-LSTM-CRF with highway layers and co-training is the current state-of-the-art end-to-end method for the NER task.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "We made our VTNER data containing 6439 sentences publicly available for the research community. The dataset contains three files: train.conll, dev.conll and test.conll. 6 In the future, we plan to extract and incorporate knowledge from pre-trained word-level language models, as proposed by (Peters et al., 2017; Peters et al., 2018).", |
| "cite_spans": [ |
| { |
| "start": 169, |
| "end": 170, |
| "text": "6", |
| "ref_id": null |
| }, |
| { |
| "start": 291, |
| "end": 312, |
| "text": "(Peters et al., 2017;", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 313, |
| "end": 333, |
| "text": "Peters et al., 2018)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "32nd Pacific Asia Conference on Language, Information and Computation, Hong Kong, 1-3 December 2018. Copyright 2018 by the authors.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://vnexpress.net/ 2 http://mim.hus.vnu.edu.vn/dsl/tools/tokenizer 3 http://mim.hus.vnu.edu.vn/dsl/tools/tagger", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://www.chokkan.org/software/crfsuite/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://nlp.stanford.edu/projects/glove", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://github.com/dungdx34", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data. Proceedings of the 18th International Conf. on Machine Learning", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Lafferty", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Mccallum", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [ |
| "N" |
| ], |
| "last": "Fernando", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Pereira", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "John Lafferty, Andrew McCallum, Fernando CN Pereira. 2001. Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data. Proceedings of the 18th International Conf. on Machine Learning.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "An introduction to conditional random fields for relational learning", |
| "authors": [ |
| { |
| "first": "Charles", |
| "middle": [], |
| "last": "Sutton", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Mccallum", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Charles Sutton, Andrew McCallum. 2006. An introduc- tion to conditional random fields for relational learn- ing.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Long short-term memory", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Hochreiter", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Schmidhuber", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Neural computation", |
| "volume": "9", |
| "issue": "8", |
| "pages": "1735--1780", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hochreiter, S., Schmidhuber, J. 1997. Long short-term memory. Neural computation 9(8), 1735-1780.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Neural Architectures for Named Entity Recognition", |
| "authors": [ |
| { |
| "first": "Guillaume", |
| "middle": [], |
| "last": "Lample", |
| "suffix": "" |
| }, |
| { |
| "first": "Miguel", |
| "middle": [], |
| "last": "Ballesteros", |
| "suffix": "" |
| }, |
| { |
| "first": "Sandeep", |
| "middle": [], |
| "last": "Subramanian", |
| "suffix": "" |
| }, |
| { |
| "first": "Kazuya", |
| "middle": [], |
| "last": "Kawakami", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Dyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Guillaume Lample, Miguel Ballesteros, Sandeep Subra- manian, Kazuya Kawakami, Chris Dyer. 2016. Neu- ral Architectures for Named Entity Recognition. Pro- ceedings of the 2016 Conference of the North Ameri- can Chapter of the Association for Computational Lin- guistics: Human Language Technologies.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "End-to-end Sequence Labeling via Bi-directional LSTM-CNNs-CRF", |
| "authors": [ |
| { |
| "first": "Xuezhe", |
| "middle": [], |
| "last": "Ma", |
| "suffix": "" |
| }, |
| { |
| "first": "Eduard", |
| "middle": [], |
| "last": "Hovy", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xuezhe Ma and Eduard Hovy. 2016. End-to-end Sequence Labeling via Bi-directional LSTM-CNNs- CRF. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL).", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Empower Sequence Labeling with Task-Aware Neural Language Model", |
| "authors": [ |
| { |
| "first": "Liyuan", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Jingbo", |
| "middle": [], |
| "last": "Shang", |
| "suffix": "" |
| }, |
| { |
| "first": "Frank", |
| "middle": [ |
| "F" |
| ], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiang", |
| "middle": [], |
| "last": "Ren", |
| "suffix": "" |
| }, |
| { |
| "first": "Huan", |
| "middle": [], |
| "last": "Gui", |
| "suffix": "" |
| }, |
| { |
| "first": "Jian", |
| "middle": [], |
| "last": "Peng", |
| "suffix": "" |
| }, |
| { |
| "first": "Jiawei", |
| "middle": [], |
| "last": "Han", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of Thirty-second AAAI Conference on Artificial Intelligence (AAAI)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Liyuan Liu, Jingbo Shang, Frank F. Xu, Xiang Ren, Huan Gui, Jian Peng, Jiawei Han. 2018. Empower Se- quence Labeling with Task-Aware Neural Language Model. In Proceedings of Thirty-second AAAI Con- ference on Artificial Intelligence (AAAI).", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "End-toend Recurrent Neural Network Models for Vietnamese Named Entity Recognition", |
| "authors": [ |
| { |
| "first": "Thai-Hoang", |
| "middle": [], |
| "last": "Pham", |
| "suffix": "" |
| }, |
| { |
| "first": "Hong-Phuong", |
| "middle": [], |
| "last": "Le", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "International Conference of the Pacific Association for Computational Linguistics (PACLING)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thai-Hoang Pham, Hong-Phuong Le. 2017. End-to-end Recurrent Neural Network Models for Vietnamese Named Entity Recognition. International Conference of the Pacific Association for Computational Linguistics (PACLING).", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "NNVLP: A Neural Network-Based Vietnamese Language Processing Toolkit", |
| "authors": [], |
| "year": 2017, |
| "venue": "Proceedings of the 8th International Joint Conference on Natural Language Processing -System Demonstrations (IJCNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thai-Hoang Pham, Xuan-Khoai Pham, Tuan-Anh Nguyen, Hong-Phuong Le. 2017. NNVLP: A Neu- ral Network-Based Vietnamese Language Processing Toolkit. Proceedings of the 8th International Joint Conference on Natural Language Processing -System Demonstrations (IJCNLP).", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "A Feature-Rich Vietnamese Named-Entity Recognition Model", |
| "authors": [ |
| { |
| "first": "Nhat", |
| "middle": [], |
| "last": "Pham Quang", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Minh", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1803.04375" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pham Quang Nhat Minh. 2018. A Feature-Rich Viet- namese Named-Entity Recognition Model. arXiv preprint arXiv: 1803.04375.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "A Feature-Based Model for Nested Named-Entity Recognition at VLSP-2018 NER Evaluation Campaign", |
| "authors": [ |
| { |
| "first": "Nhat", |
| "middle": [], |
| "last": "Pham Quang", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Minh", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1803.08463" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pham Quang Nhat Minh. 2018. A Feature-Based Model for Nested Named-Entity Recognition at VLSP-2018 NER Evaluation Campaign. arXiv preprint arXiv: 1803.08463.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Introduction to the conll-2003 shared task:Language-independent named entity recognition", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [ |
| "F T K" |
| ], |
| "last": "Sang", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [ |
| "D" |
| ], |
| "last": "Meulder", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sang, E.F.T.K., Meulder, F.D. 2003. Introduction to the conll-2003 shared task:Language-independent named entity recognition. CoNLL.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Vlsp 2016 shared task: Named entity recognition", |
| "authors": [ |
| { |
| "first": "Minh", |
| "middle": [], |
| "last": "Nguyen Thi", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Huyen", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Vu Xuan", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Luong", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of Vietnamese Speech and Language Processing (VLSP)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nguyen Thi Minh Huyen and Vu Xuan Luong. 2016. VLSP 2016 shared task: Named entity recognition. In: Proceedings of Vietnamese Speech and Language Processing (VLSP).", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Semi-supervised learning for natural language", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Liang", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Liang, P. 2005. Semi-supervised learning for natural lan- guage. PhD thesis, Massachusetts Institute of Technol- ogy (2005).", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Glove: Global vectors for word representation", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Pennington", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Empirical Methods in Natural Language Processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pennington, J., Socher, R., Manning, C.D. 2014. Glove: Global vectors for word representation. In: Empirical Methods in Natural Language Processing (EMNLP).", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Crfsuite: a fast implementation of conditional random fields (crfs)", |
| "authors": [ |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Okazaki", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Okazaki, N. 2007. Crfsuite: a fast implementation of conditional random fields (crfs).", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Learning task-dependent distributed representations by backpropagation through structure", |
| "authors": [ |
| { |
| "first": "Christoph", |
| "middle": [], |
| "last": "Goller", |
| "suffix": "" |
| }, |
| { |
| "first": "Andreas", |
| "middle": [], |
| "last": "Kuchler", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "IEEE International Conference on", |
| "volume": "1", |
| "issue": "", |
| "pages": "347--352", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Christoph Goller and Andreas Kuchler. 1996. Learning task-dependent distributed representations by back- propagation through structure. In Neural Networks, 1996., IEEE International Conference on, volume 1, pages 347-352. IEEE.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Vietnamese named entity recognition using token regular expressions and bidirectional inference", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [ |
| "P" |
| ], |
| "last": "Le", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of Vietnamese Speech and Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Le, H.P. 2016. Vietnamese named entity recognition using token regular expressions and bidirectional in- ference. Proceedings of Vietnamese Speech and Lan- guage Processing (VLSP).", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "The importance of automatic syntactic features in vietnamese named entity recognition", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [ |
| "H" |
| ], |
| "last": "Pham", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [ |
| "P" |
| ], |
| "last": "Le", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 31st Pacific Asia Conference on Language, Information and Computation (PACLIC)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pham, T.H., Le, H.P. 2017. The importance of auto- matic syntactic features in vietnamese named entity recognition. Proceedings of the 31st Pacific Asia Con- ference on Language, Information and Computation (PACLIC).", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Contextual String Embeddings for Sequence Labeling. 27th International Conference on Computational Linguistics (COLING)", |
| "authors": [ |
| { |
| "first": "Alan", |
| "middle": [], |
| "last": "Akbik", |
| "suffix": "" |
| }, |
| { |
| "first": "Duncan", |
| "middle": [], |
| "last": "Blythe", |
| "suffix": "" |
| }, |
| { |
| "first": "Roland", |
| "middle": [], |
| "last": "Vollgraf", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Akbik, Alan and Blythe, Duncan and Vollgraf, Roland. 2018. Contextual String Embeddings for Sequence Labeling. 27th International Conference on Compu- tational Linguistics (COLING).", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Semi-supervised sequence tagging with bidirectional language models", |
| "authors": [ |
| { |
| "first": "Matthew", |
| "middle": [ |
| "E" |
| ], |
| "last": "Peters", |
| "suffix": "" |
| }, |
| { |
| "first": "Waleed", |
| "middle": [], |
| "last": "Ammar", |
| "suffix": "" |
| }, |
| { |
| "first": "Chandra", |
| "middle": [], |
| "last": "Bhagavatula", |
| "suffix": "" |
| }, |
| { |
| "first": "Russell", |
| "middle": [], |
| "last": "Power", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "1756--1765", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Matthew E. Peters, Waleed Ammar, Chandra Bhagavat- ula, and Russell Power. 2017. Semi-supervised se- quence tagging with bidirectional language models In Proceedings of the 55th Annual Meeting of the Associ- ation for Computational Linguistics (Volume 1: Long Papers), pages 1756-1765, Vancouver, Canada, July. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Deep contextualized word representations", |
| "authors": [ |
| { |
| "first": "Matthew", |
| "middle": [ |
| "E" |
| ], |
| "last": "Peters", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Neumann", |
| "suffix": "" |
| }, |
| { |
| "first": "Mohit", |
| "middle": [], |
| "last": "Iyyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Matt", |
| "middle": [], |
| "last": "Gardner", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenton", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, Luke Zettlemoyer. 2018. Deep contextualized word representations. Proceedings of NAACL-HLT 2018, New Orleans, Louisiana.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "type_str": "figure", |
| "num": null, |
| "uris": null, |
| "text": "LM-LSTM-CRF Neural Architecture. Given input x = (x_1, x_2, ..., x_T) and its output annotations y = (y_1, y_2, ..., y_T), the character-level input is recorded as" |
| }, |
| "TABREF0": { |
| "text": "\u0110\u1ea1i h\u1ecdc B\u00e1ch khoa H\u00e0 N\u1ed9i n\u1eb1m tr\u00ean \u0111\u01b0\u1eddng \u0110\u1ea1i C\u1ed3 Vi\u1ec7t. (The Hanoi University of Science and Technology is on Dai Co Viet street.) The output is: [\u0110\u1ea1i h\u1ecdc B\u00e1ch khoa H\u00e0 N\u1ed9i] ORGANIZATION n\u1eb1m tr\u00ean [\u0111\u01b0\u1eddng \u0110\u1ea1i C\u1ed3 Vi\u1ec7t] LOCATION .", |
| "type_str": "table", |
| "content": "<table/>", |
| "html": null, |
| "num": null |
| }, |
| "TABREF2": { |
| "text": "VTNER dataset", |
| "type_str": "table", |
| "content": "<table><tr><td/><td>Documents</td><td>Sentences</td><td>PER</td><td>LOC</td><td>ORG</td><td>MISC</td></tr><tr><td>Total</td><td>990</td><td>20509</td><td>5041</td><td>11948</td><td>6912</td><td>914</td></tr></table>", |
| "html": null, |
| "num": null |
| }, |
| "TABREF3": { |
| "text": "", |
| "type_str": "table", |
| "content": "<table><tr><td colspan=\"2\">: CRF models on VLSP 2016 datasets</td></tr><tr><td/><td>F1</td></tr><tr><td>CRF</td><td>86.21</td></tr><tr><td colspan=\"2\">CRF-without tag 84.12</td></tr><tr><td colspan=\"2\">CRF-window 3 86.43</td></tr><tr><td colspan=\"2\">CRF-window 5 85.25</td></tr><tr><td>CRF-brown</td><td>87.96</td></tr></table>", |
| "html": null, |
| "num": null |
| }, |
| "TABREF4": { |
| "text": "", |
| "type_str": "table", |
| "content": "<table><tr><td colspan=\"2\">: LSTM-CRF models on VLSP 2016 datasets</td></tr><tr><td/><td>F1</td></tr><tr><td>LSTM-CRF</td><td>87.33</td></tr><tr><td>LSTM-CRF-not char</td><td>81.15</td></tr><tr><td>LSTM-CRF-cap</td><td>87.34</td></tr><tr><td>LSTM-CRF-pos</td><td>89.39</td></tr><tr><td>LSTM-CRF-cap-pos-100</td><td>89.36</td></tr><tr><td>LSTM-CRF-cap-pos-30</td><td>88.12</td></tr><tr><td>LSTM-CRF-cap-pos-emb-300</td><td>90.13</td></tr><tr><td>LSTM-CRF-cap-pos-emb-100</td><td>90.58</td></tr><tr><td>LSTM-CRF-emb-cap-pos-chunk</td><td>94.56</td></tr><tr><td>LM-LSTM-CRF</td><td>91.89</td></tr><tr><td colspan=\"2\">LM-LSTM-CRF-highway-co-training 92.17</td></tr></table>", |
| "html": null, |
| "num": null |
| }, |
| "TABREF5": { |
| "text": "The SOTA models on VLSP 2016 datasets F1 vitk", |
| "type_str": "table", |
| "content": "<table/>", |
| "html": null, |
| "num": null |
| }, |
| "TABREF6": { |
| "text": "", |
| "type_str": "table", |
| "content": "<table><tr><td colspan=\"4\">: CRF models on VTNER datasets</td></tr><tr><td>F1</td><td>a3</td><td>b3</td><td>c3</td></tr><tr><td>CRF</td><td colspan=\"3\">75.74 85.57 84.83</td></tr><tr><td colspan=\"4\">CRF-without tag 74.21 84.7 84.72</td></tr><tr><td colspan=\"4\">CRF-window 3 73.88 84.09 83.15</td></tr><tr><td colspan=\"4\">CRF-window 5 74.78 85.41 84.02</td></tr></table>", |
| "html": null, |
| "num": null |
| }, |
| "TABREF7": { |
| "text": "LSTM-CRF models on VTNER datasets", |
| "type_str": "table", |
| "content": "<table><tr><td>F1</td><td>a3</td><td>b3</td><td>c3</td></tr><tr><td>LSTM-CRF</td><td colspan=\"3\">86.06 88.46 89.46</td></tr><tr><td>LSTM-CRF-cap-pos</td><td colspan=\"3\">86.99 88.99 89.72</td></tr><tr><td>LM-LSTM-CRF</td><td colspan=\"3\">86.81 90.15 91.50</td></tr><tr><td colspan=\"4\">LM-LSTM-CRF-highway-co-training 87.38 90.58 91.92</td></tr></table>", |
| "html": null, |
| "num": null |
| }, |
| "TABREF8": { |
| "text": "Some case studies", |
| "type_str": "table", |
| "content": "<table><tr><td>Result</td><td>CRF</td><td>LSTM-CRF-cap-pos</td><td>LM-LSTM-CRF</td></tr><tr><td>B\u00e1c H\u1ed3 sinh ng\u00e0y bao nhi\u00eau? What is Bac Ho 's birthday?</td><td>B\u00e1c H\u1ed3 -PER</td><td/><td>B\u00e1c H\u1ed3 -PER</td></tr><tr><td>Tr\u1ea5n Th\u00e0nh l\u00e0 ai? Who is Tran Thanh?</td><td>Tr\u1ea5n Th\u00e0nh -ORG</td><td/><td>Tr\u1ea5n Th\u00e0nh -PER</td></tr><tr><td>H\u01b0\u01a1ng c\u00f3 ch\u1ed3ng hay ch\u01b0a? Is Huong married?</td><td/><td/><td>H\u01b0\u01a1ng -PER</td></tr><tr><td>Nh\u00e0 v\u0103n Hemingway l\u00e0 ai? Who is Hemingway?</td><td/><td>Hemingway -PER</td><td>Hemingway -PER</td></tr><tr><td>LG l\u00e0 c\u00f4ng ty g\u00ec? What company is LG?</td><td/><td/><td>LG -ORG</td></tr></table>", |
| "html": null, |
| "num": null |
| } |
| } |
| } |
| } |