{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:02:07.409953Z"
},
"title": "Improving Sequence Tagging for Vietnamese Text using Transformer-based Neural Models",
"authors": [
{
"first": "Viet",
"middle": [],
"last": "The",
"suffix": "",
"affiliation": {},
"email": "vietbt6@fpt.com.vn"
},
{
"first": "",
"middle": [],
"last": "Bui",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "FPT University",
"location": {
"settlement": "Hanoi",
"country": "Vietnam"
}
},
"email": ""
},
{
"first": "Thi",
"middle": [
"Oanh"
],
"last": "Tran",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "FPT University",
"location": {
"settlement": "Hanoi",
"country": "Vietnam"
}
},
"email": ""
},
{
"first": "Phuong",
"middle": [],
"last": "Le-Hong",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "FPT University",
"location": {
"settlement": "Hanoi",
"country": "Vietnam"
}
},
"email": "phuonglh@vnu.edu.vn"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes our study on using multilingual BERT embeddings and some new neural models for improving sequence tagging tasks for the Vietnamese language. We propose new model architectures and evaluate them extensively on two named entity recognition datasets of VLSP 2016 and VLSP 2018, and on two part-of-speech tagging datasets of VLSP 2010 and VLSP 2013. Our proposed models outperform existing methods and achieve new state-of-the-art results. In particular, we have pushed the accuracy of part-of-speech tagging to 95.40% on the VLSP 2010 corpus and to 96.77% on the VLSP 2013 corpus, and the F1 score of named entity recognition to 94.07% on the VLSP 2016 corpus and to 90.31% on the VLSP 2018 corpus. Our code and pre-trained models viBERT and vELECTRA are released as open source to facilitate adoption and further research.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes our study on using multilingual BERT embeddings and some new neural models for improving sequence tagging tasks for the Vietnamese language. We propose new model architectures and evaluate them extensively on two named entity recognition datasets of VLSP 2016 and VLSP 2018, and on two part-of-speech tagging datasets of VLSP 2010 and VLSP 2013. Our proposed models outperform existing methods and achieve new state-of-the-art results. In particular, we have pushed the accuracy of part-of-speech tagging to 95.40% on the VLSP 2010 corpus and to 96.77% on the VLSP 2013 corpus, and the F1 score of named entity recognition to 94.07% on the VLSP 2016 corpus and to 90.31% on the VLSP 2018 corpus. Our code and pre-trained models viBERT and vELECTRA are released as open source to facilitate adoption and further research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Sequence modeling plays a central role in natural language processing. Many fundamental language processing tasks can be treated as sequence tagging problems, including part-of-speech tagging and named entity recognition. In this paper, we present our study on adapting and developing the multilingual BERT (Devlin et al., 2019) and ELECTRA (Clark et al., 2020) models for improving Vietnamese part-of-speech tagging (PoS) and named entity recognition (NER).",
"cite_spans": [
{
"start": 307,
"end": 328,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF1"
},
{
"start": 342,
"end": 361,
"text": "(Clark et al., 2020",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Many natural language processing tasks have been shown to benefit greatly from large pre-trained network models. In recent years, these pre-trained models have led to a series of breakthroughs in language representation learning (Radford et al., 2018; Peters et al., 2018; Devlin et al., 2019; Yang et al., 2019; Clark et al., 2020). Current state-of-the-art representation learning methods for language can be divided into two broad approaches, namely denoising auto-encoders and replaced token detection.",
"cite_spans": [
{
"start": 233,
"end": 255,
"text": "(Radford et al., 2018;",
"ref_id": "BIBREF19"
},
{
"start": 256,
"end": 276,
"text": "Peters et al., 2018;",
"ref_id": "BIBREF15"
},
{
"start": 277,
"end": 297,
"text": "Devlin et al., 2019;",
"ref_id": "BIBREF1"
},
{
"start": 298,
"end": 316,
"text": "Yang et al., 2019;",
"ref_id": "BIBREF23"
},
{
"start": 317,
"end": 336,
"text": "Clark et al., 2020)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the denoising auto-encoder approach, a small subset of the tokens of the unlabelled input sequence, typically 15%, is selected; these tokens are masked (e.g., BERT (Devlin et al., 2019)) or attended to (e.g., XLNet (Yang et al., 2019)); and the network is then trained to recover the original input. The network is typically a Transformer-based model which learns bidirectional representations. The main disadvantage of these models is that they often incur a substantial compute cost, because only 15% of the tokens per example are learned from, while a very large corpus is usually required for the pre-trained models to be effective. In the replaced token detection approach, the model learns to distinguish real input tokens from plausible but synthetically generated replacements (e.g., ELECTRA (Clark et al., 2020)). Instead of masking, this method corrupts the input by replacing some tokens with samples from a proposal distribution. The network is pre-trained as a discriminator that predicts for every token whether it is an original or a replacement. The main advantage of this method is that the model can learn from all input tokens instead of just the small masked-out subset. It is therefore much more efficient, requiring less than a quarter of the compute cost of RoBERTa (Liu et al., 2019) and XLNet (Yang et al., 2019).",
"cite_spans": [
{
"start": 163,
"end": 184,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF1"
},
{
"start": 213,
"end": 232,
"text": "(Yang et al., 2019)",
"ref_id": "BIBREF23"
},
{
"start": 782,
"end": 802,
"text": "(Clark et al., 2020)",
"ref_id": "BIBREF0"
},
{
"start": 1275,
"end": 1293,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF6"
},
{
"start": 1304,
"end": 1323,
"text": "(Yang et al., 2019)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Both approaches belong to the fine-tuning paradigm in natural language processing, where we first pre-train a model architecture on a language modeling objective before fine-tuning that same model for a supervised downstream task. A major advantage of this method is that few parameters need to be learned from scratch.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose some improvements over recent transformer-based models to push the state of the art on two common sequence labeling tasks for Vietnamese. Our main contributions in this work are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We propose pre-trained language models for Vietnamese based on the BERT and ELECTRA architectures; the models are trained on large corpora of 10GB and 60GB of uncompressed Vietnamese text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We propose fine-tuning methods that use attentional recurrent neural networks instead of the original fine-tuning with linear layers. This improves the accuracy of sequence tagging.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The remainder of this paper is structured as follows. Section 2 presents the methods used in the current work. Section 3 describes the experimental results. Finally, Section 4 concludes the paper and outlines some directions for future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The basic structure of BERT (Devlin et al., 2019) (Bidirectional Encoder Representations from Transformers) is summarized in Figure 1 , where Trm denotes a Transformer block. In essence, BERT's model architecture is a multi-layer bidirectional Transformer encoder based on the original implementation described in (Vaswani et al., 2017). In this model, each input token of a sentence is represented by the sum of the corresponding token embedding, its segment embedding and its position embedding. WordPiece embeddings are used; split word pieces are denoted by ##. In our experiments, we use learned positional embeddings with supported sequence lengths of up to 256 tokens.",
"cite_spans": [
{
"start": 28,
"end": 48,
"text": "(Devlin et al., 2019",
"ref_id": "BIBREF1"
},
{
"start": 287,
"end": 309,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 126,
"end": 134,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "BERT",
"sec_num": "2.1.1"
},
{
"text": "(Figure 1: The BERT architecture. Input embeddings E_1, E_2, . . . , E_N are fed through stacked Transformer blocks (Trm) to produce output representations T_1, T_2, . . . , T_N.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT",
"sec_num": "2.1.1"
},
{
"text": "The BERT model trains a deep bidirectional representation by masking some percentage of the input tokens at random and then predicting only those masked tokens. The final hidden vectors corresponding to the masked tokens are fed into an output softmax over the vocabulary. We use the whole word masking approach in this work. The masked language model objective is a cross-entropy loss on predicting the masked tokens. BERT uniformly selects 15% of the input tokens for masking. Of the selected tokens, 80% are replaced with [MASK], 10% are left unchanged, and 10% are replaced by a randomly selected vocabulary token.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT",
"sec_num": "2.1.1"
},
{
"text": "In our experiment, we start with the open-source mBERT package 1 . We keep the standard hyperparameters of 12 layers, 768 hidden units, and 12 heads. The model is optimized with Adam (Kingma and Ba, 2015) using the following parameters: \u03b2_1 = 0.9, \u03b2_2 = 0.999, \u01eb = 1e-6 and L2 weight decay of ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT",
"sec_num": "2.1.1"
},
{
"text": "(Figure 2: The proposed architecture. WordPiece tokens, e.g., [CLS] \u0110 ##\u00f4ng gi ##\u1edbi th ##i\u1ec7 ##u [SEP], are fed through BERT, an RNN layer (RNN), an attention layer (Att), and a linear layer to predict tags such as Np and V.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT",
"sec_num": "2.1.1"
},
{
"text": "B_k = \u03b3 (w_0 E_k + \u2211_{i=1}^{m} w_i h_{ki}), where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT",
"sec_num": "2.1.1"
},
{
"text": "\u2022 B_k is the BERT output of the k-th token;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT",
"sec_num": "2.1.1"
},
{
"text": "\u2022 E_k is the embedding of the k-th token;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT",
"sec_num": "2.1.1"
},
{
"text": "\u2022 m is the number of hidden layers of BERT;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT",
"sec_num": "2.1.1"
},
{
"text": "\u2022 h_{ki} is the i-th hidden state of the k-th token;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT",
"sec_num": "2.1.1"
},
{
"text": "\u2022 \u03b3, w_0, w_1, . . . , w_m are trainable parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT",
"sec_num": "2.1.1"
},
{
"text": "Our proposed architecture contains five main layers as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Architecture",
"sec_num": "2.1.2"
},
{
"text": "1. The input layer encodes a sequence of tokens which are substrings of the input sentence, including ignored indices, padding and separators;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Architecture",
"sec_num": "2.1.2"
},
{
"text": "2. A BERT layer;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Architecture",
"sec_num": "2.1.2"
},
{
"text": "3. A bidirectional RNN layer with either LSTM or GRU units;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Architecture",
"sec_num": "2.1.2"
},
{
"text": "4. An attention layer;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Architecture",
"sec_num": "2.1.2"
},
{
"text": "A schematic view of our model architecture is shown in Figure 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 55,
"end": 63,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "A linear layer;",
"sec_num": "5."
},
{
"text": "ELECTRA (Clark et al., 2020) is currently the latest development of BERT-based models, in which a more sample-efficient pre-training method, called replaced token detection, is used. In this method, two neural networks, a generator G and a discriminator D, are trained simultaneously. Each one consists of a Transformer network (an encoder) that maps a sequence of input tokens",
"cite_spans": [
{
"start": 8,
"end": 28,
"text": "(Clark et al., 2020)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ELECTRA",
"sec_num": "2.2"
},
{
"text": "x = [x_1, x_2, . . . , x_n] into a sequence of contextualized vectors h(x) = [h_1, h_2, . . . , h_n].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ELECTRA",
"sec_num": "2.2"
},
{
"text": "For a given position t where x_t is the masked token, the generator outputs the probability of generating a particular token x_t with a softmax distribution:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ELECTRA",
"sec_num": "2.2"
},
{
"text": "p_G(x_t | x) = exp(x_t^\u22a4 h_G(x)_t) / \u2211_u exp(u^\u22a4 h_G(x)_t).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ELECTRA",
"sec_num": "2.2"
},
{
"text": "For a given position t, the discriminator predicts whether the token x_t is \"real\", i.e., whether it comes from the data rather than from the generator distribution, with a sigmoid function, given below. An overview of replaced token detection in the ELECTRA model is shown in Figure 3 . The generator is a BERT model which is trained jointly with the discriminator. The Vietnamese example is a real sentence sampled from our training corpus.",
"cite_spans": [],
"ref_spans": [
{
"start": 260,
"end": 268,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "ELECTRA",
"sec_num": "2.2"
},
{
"text": "D(x, t) = \u03c3(w^\u22a4 h_D(x)_t)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ELECTRA",
"sec_num": "2.2"
},
{
"text": "To train the proposed models, we use a CPU (Intel Xeon E5-2699 v4 @2.20GHz) and a GPU (NVIDIA GeForce GTX 1080 Ti 11G). The hyper-parameters we chose are as follows: the maximum sequence length is 256, the BERT learning rate is 2e-5, the learning rate is 1e-3, the number of epochs is 100, the batch size is 16, apex mixed-precision training is used, the BERT weight decay is set to 0, and the Adam \u01eb is 1e-8. The configuration of our model is as follows: the number of RNN hidden units is 256, one RNN layer, the attention hidden dimension is 64, the number of attention heads is 3, and the dropout rate is 0.5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Training",
"sec_num": "3.1.1"
},
{
"text": "To build the pre-trained language model, it is very important to have a large, high-quality dataset. This dataset was collected from online newspapers 2 in Vietnamese. To clean the data, we perform the following pre-processing steps:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Training",
"sec_num": "3.1.1"
},
{
"text": "\u2022 Remove duplicated news",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Training",
"sec_num": "3.1.1"
},
{
"text": "\u2022 Keep only valid Vietnamese letters",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Training",
"sec_num": "3.1.1"
},
{
"text": "\u2022 Remove sentences that are too short (fewer than 4 words). Footnote 2: vnexpress.net, dantri.com.vn, baomoi.com, zingnews.vn, vitalk.vn, etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Training",
"sec_num": "3.1.1"
},
{
"text": "We obtained approximately 10GB of text after collection. This dataset was used to further pre-train mBERT to build our viBERT, which better represents Vietnamese text. Regarding the vocabulary, we removed unnecessary entries from the mBERT vocabulary, since it contains tokens for many other languages; this was done by keeping only the tokens that exist in our dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Training",
"sec_num": "3.1.1"
},
{
"text": "In pre-training vELECTRA, we collect more data from two sources:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Training",
"sec_num": "3.1.1"
},
{
"text": "\u2022 NewsCorpus: 27.4 GB 3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Training",
"sec_num": "3.1.1"
},
{
"text": "\u2022 OscarCorpus: 31.0 GB 4 In total, with more than 60GB of text, we trained different versions of vELECTRA. It is worth noting that pre-training viBERT is much slower than pre-training vELECTRA. For this reason, we pre-trained viBERT on the 10GB corpus rather than on the large 60GB corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Training",
"sec_num": "3.1.1"
},
{
"text": "In performing the experiments, for datasets without development sets, we randomly selected 10% of the training data as a development set for tuning the parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Testing and evaluation methods",
"sec_num": "3.1.2"
},
{
"text": "To evaluate the effectiveness of the models, we use the commonly-used metrics proposed by the organizers of VLSP. Specifically, we measure the accuracy score on the POS tagging task, which is calculated as follows (Table 1 reports the performance of our proposed models on the POS tagging task), and the F1 score on the NER task using the following equations:",
"cite_spans": [],
"ref_spans": [
{
"start": 224,
"end": 231,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Testing and evaluation methods",
"sec_num": "3.1.2"
},
{
"text": "Acc = #of_words_correctly_tagged / #of_words_in_the_test_set",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Testing and evaluation methods",
"sec_num": "3.1.2"
},
{
"text": "F1 = (2 * Pre * Rec) / (Pre + Rec)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Testing and evaluation methods",
"sec_num": "3.1.2"
},
{
"text": "where Pre and Rec are determined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Testing and evaluation methods",
"sec_num": "3.1.2"
},
{
"text": "Pre = NE_true / NE_sys, Rec = NE_true / NE_ref",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Testing and evaluation methods",
"sec_num": "3.1.2"
},
{
"text": "where NE_ref is the number of NEs in the gold data, NE_sys is the number of NEs produced by the recognition system, and NE_true is the number of NEs which are correctly recognized by the system. Table 1 shows experimental results using different proposed architectures on top of mBERT, viBERT and vELECTRA on two benchmark datasets from the VLSP 2010 and VLSP 2013 campaigns.",
"cite_spans": [],
"ref_spans": [
{
"start": 177,
"end": 190,
"text": "Task Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Testing and evaluation methods",
"sec_num": "3.1.2"
},
{
"text": "As can be seen, with further pre-training techniques on a Vietnamese dataset, we could significantly improve the performance of the model. On the VLSP 2010 dataset, both viBERT and vELECTRA significantly improved the performance, by about 1% in the F1 scores. On the VLSP 2013 dataset, these two models slightly improved the performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "On the PoS Tagging",
"sec_num": "3.2.1"
},
{
"text": "From the table, we can also see the performance of different architectures, including fine-tuning, biLSTM, biGRU, and their combinations with attention mechanisms. Fine-tuning mBERT with linear layers for several epochs can produce nearly state-of-the-art results. It is also shown that building different architectures on top slightly improves the performance of all of the mBERT, viBERT and vELECTRA models. On VLSP 2010, we obtained an accuracy of 95.40% using biLSTM with attention on top of vELECTRA. On the VLSP 2013 dataset, we obtained an accuracy of 96.77% using only biLSTM on top of vELECTRA.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "On the PoS Tagging",
"sec_num": "3.2.1"
},
{
"text": "Compared with previous work, our proposed model, vELECTRA, outperformed all previous ones, achieving results 1% to 2% higher than existing work using different innovations in deep learning such as CNN, LSTM, and joint learning techniques. Moreover, vELECTRA also performed slightly better than PhoBERT_base, a similar pre-trained language model released recently, by nearly 0.1% in the accuracy score. BiLSTM (Pham and Le-Hong, 2017b) 92.02 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "On the PoS Tagging",
"sec_num": "3.2.1"
},
{
"text": "NNVLP 92.91 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "On the NER Task",
"sec_num": "3.2.2"
},
{
"text": "VnCoreNLP-NER (Vu et al., 2018) 88.6 6.",
"cite_spans": [
{
"start": 14,
"end": 31,
"text": "(Vu et al., 2018)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "On the NER Task",
"sec_num": "3.2.2"
},
{
"text": "VNER (Nguyen, 2019) 89.6 7.",
"cite_spans": [
{
"start": 5,
"end": 19,
"text": "(Nguyen, 2019)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "On the NER Task",
"sec_num": "3.2.2"
},
{
"text": "ETNLP (Vu et al., 2019) 91.1 8.",
"cite_spans": [
{
"start": 6,
"end": 23,
"text": "(Vu et al., 2019)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "On the NER Task",
"sec_num": "3.2.2"
},
{
"text": "PhoBERT_base (Nguyen and Nguyen, 2020) These results once again provide strong evidence for the statement above that further pre-training mBERT on a small raw dataset can significantly improve the performance of transformer-based language models on downstream tasks. Training vELECTRA from scratch on a big Vietnamese dataset can further enhance the performance. On the two datasets, vELECTRA improves the F1 score by 1% to 3% in comparison to viBERT and mBERT.",
"cite_spans": [
{
"start": 13,
"end": 38,
"text": "(Nguyen and Nguyen, 2020)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "On the NER Task",
"sec_num": "3.2.2"
},
{
"text": "Looking at the performance of different architectures on top of these pre-trained models, we observe that biLSTM with attention once again yielded the SOTA result on the VLSP 2016 dataset. On the VLSP 2018 dataset, the biGRU architecture yielded the best performance, at 90.31% in the F1 score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "On the NER Task",
"sec_num": "3.2.2"
},
{
"text": "Compared to previous work, the best proposed model outperformed all prior work by a large margin on both datasets. Figure 4 and Figure 5 show the average decoding time measured per sentence. According to our statistics, the average sentence lengths in the VLSP 2013 and VLSP 2016 datasets are 22.55 and 21.87 words, respectively. For the POS tagging task measured on the VLSP 2013 dataset, among the three models, vELECTRA has the fastest decoding time, followed by viBERT, and finally mBERT. This holds for all four proposed architectures on top of these three models. However, for the fine-tuning technique, the decoding time of mBERT is faster than that of viBERT. For the NER task measured on the VLSP 2016 dataset, among the three models, viBERT is the slowest, at more than 2 milliseconds per sentence. The decoding times on mBERT topped with simple fine-tuning, biGRU, or biLSTM-attention are slightly faster than on vELECTRA with the same architecture.",
"cite_spans": [],
"ref_spans": [
{
"start": 110,
"end": 118,
"text": "Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "On the NER Task",
"sec_num": "3.2.2"
},
{
"text": "This experiment shows that our proposed models are of practical use. In fact, they are currently deployed as a core component of our commercial chatbot engine FPT.AI 5 , which effectively serves many customers. More precisely, the FPT.AI platform has been used by about 70 large enterprises and over 27,000 frequent developers, serving more than 30 million end users.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding Time",
"sec_num": "3.3"
},
{
"text": "This paper presents some new model architectures for sequence tagging and our experimental results for Vietnamese part-of-speech tagging and named entity recognition. Our proposed model vELECTRA outperforms previous ones. For part-of-speech tagging, it improves accuracy by about 2 absolute points in comparison with existing work using different innovations in deep learning such as CNN, LSTM, or joint learning techniques. For named entity recognition, vELECTRA outperforms all previous work by a large margin on both the VLSP 2016 and VLSP 2018 datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "Our code and pre-trained models are published as an open source project to facilitate adoption and further research in the Vietnamese language processing community. 7 An online service of the models for demonstration is also accessible at https://fpt.ai/nlp/bert/. A variant and more advanced version of this model is currently deployed as a core component of our commercial chatbot engine FPT.AI, which effectively serves millions of end users. In particular, these models are being fine-tuned to improve task-oriented dialogue in mixed and multiple domains (Luong and Le-Hong, 2019) and dependency parsing (Le-Hong et al., 2015).",
"cite_spans": [
{
"start": 166,
"end": 167,
"text": "7",
"ref_id": null
},
{
"start": 563,
"end": 588,
"text": "(Luong and Le-Hong, 2019)",
"ref_id": "BIBREF7"
},
{
"start": 612,
"end": 633,
"text": "(Le-Hong et al., 2015",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "https://github.com/google-research/bert/blob/master/multilingual.md",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/binhvq/news-corpus 4 https://traces1.inria.fr/oscar/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://fpt.ai/ 6 These numbers are reported as of August, 2020.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank the three anonymous reviewers for their valuable comments, which helped improve our manuscript. 7 viBERT is available at https://github.com/fpt-corp/viBERT and vELECTRA is available at https://github.com/fpt-corp/vELECTRA.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "ELECTRA: Pretraining text encoders as discriminators rather than generators",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: Pre- training text encoders as discriminators rather than generators. In Proceedings of ICLR.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of NAACL",
"volume": "34",
"issue": "",
"pages": "283--294",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidi- rectional transformers for language understanding. In Proceedings of NAACL, pages 1-16, Minnesota, USA. Nguyen Thi Minh Huyen, Ngo The Quyen, Vu Xuan Lu- ong, Tran Mai Vu, and Nguyen Thi Thu Hien. 2018. VLSP shared task: Named entity recognition. Journal of Computer Science and Cybernetics, 34(4):283-294.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representa- tions (ICLR).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "An empirical studyof maximum entropy approach for part-of-speech tagging of Vietnamese texts",
"authors": [
{
"first": "Phuong",
"middle": [],
"last": "Le-Hong",
"suffix": ""
},
{
"first": "Azim",
"middle": [],
"last": "Roussanaly",
"suffix": ""
},
{
"first": "Thi",
"middle": [],
"last": "Minh Huyen Nguyen",
"suffix": ""
},
{
"first": "Mathias",
"middle": [],
"last": "Rossignol",
"suffix": ""
}
],
"year": 2010,
"venue": "Traitement Automatique des Langues Naturelles -TALN",
"volume": "",
"issue": "",
"pages": "1--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Phuong Le-Hong, Azim Roussanaly, Thi Minh Huyen Nguyen, and Mathias Rossignol. 2010. An empirical studyof maximum entropy approach for part-of-speech tagging of Vietnamese texts. In Traitement Automa- tique des Langues Naturelles -TALN, Jul 2010, Mon- tr\u00e9al, Canada, pages 1-12.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Fast dependency parsing using distributed word representations",
"authors": [
{
"first": "Phuong",
"middle": [],
"last": "Le-Hong",
"suffix": ""
},
{
"first": "Thi-Minh-Huyen",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Thi-Luong",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "My-Linh",
"middle": [],
"last": "Ha",
"suffix": ""
}
],
"year": 2015,
"venue": "Trends and Applications in Knowledge Discovery and Data Mining",
"volume": "9441",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Phuong Le-Hong, Thi-Minh-Huyen Nguyen, Thi-Luong Nguyen, and My-Linh Ha. 2015. Fast dependency parsing using distributed word representations. In Trends and Applications in Knowledge Discovery and Data Mining, volume 9441 of LNAI. Springer.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Vietnamese named entity recognition using token regular expressions and bidirectional inference",
"authors": [
{
"first": "Phuong",
"middle": [],
"last": "Le-Hong",
"suffix": ""
}
],
"year": 2016,
"venue": "VLSP NER Evaluation Campaign",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Phuong Le-Hong. 2016. Vietnamese named entity recognition using token regular expressions and bidirectional inference. In VLSP NER Evaluation Campaign.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "RoBERTa: A robustly optimized BERT pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Towards task-oriented dialogue in mixed domains",
"authors": [
{
"first": "Chi-Tho",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Phuong",
"middle": [],
"last": "Le-Hong",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the International Conference of the Pacific Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "267--266",
"other_ids": {
"DOI": [
"10.1007/978-981-15-6168-9_22"
]
},
"num": null,
"urls": [],
"raw_text": "Chi-Tho Luong and Phuong Le-Hong. 2019. Towards task-oriented dialogue in mixed domains. In Proceedings of the International Conference of the Pacific Association for Computational Linguistics, pages 267-266. Springer, Singapore. DOI: https://doi.org/10.1007/978-981-15-6168-9_22.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "ZA-NER: Vietnamese named entity recognition at VLSP 2018 evaluation campaign",
"authors": [
{
"first": "Viet-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Long",
"middle": [
"Kim"
],
"last": "Pham",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the VLSP workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Viet-Thang Luong and Long Kim Pham. 2018. ZA-NER: Vietnamese named entity recognition at VLSP 2018 evaluation campaign. In Proceedings of the VLSP workshop 2018.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF",
"authors": [
{
"first": "Xuezhe",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "1064--1074",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF. In Proceedings of ACL, pages 1064-1074.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "PhoBERT: Pre-trained language models for Vietnamese",
"authors": [
{
"first": "Dat",
"middle": [
"Quoc"
],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Anh",
"middle": [
"Tuan"
],
"last": "Nguyen",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dat Quoc Nguyen and Anh Tuan Nguyen. 2020. PhoBERT: Pre-trained language models for Vietnamese. arXiv preprint, https://arxiv.org/pdf/2003.00744.pdf.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "RDRPOSTagger: A ripple down rules-based part-of-speech tagger",
"authors": [
{
"first": "Dat",
"middle": [
"Quoc"
],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Dai",
"middle": [
"Quoc"
],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Dang",
"middle": [
"Duc"
],
"last": "Pham",
"suffix": ""
},
{
"first": "Son",
"middle": [
"Bao"
],
"last": "Pham",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Demonstrations at EACL",
"volume": "",
"issue": "",
"pages": "17--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dat Quoc Nguyen, Dai Quoc Nguyen, Dang Duc Pham, and Son Bao Pham. 2014. RDRPOSTagger: A ripple down rules-based part-of-speech tagger. In Proceedings of the Demonstrations at EACL, pages 17-20.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "From word segmentation to POS tagging for Vietnamese",
"authors": [
{
"first": "Dat",
"middle": [
"Quoc"
],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Thanh",
"middle": [],
"last": "Vu",
"suffix": ""
},
{
"first": "Dai",
"middle": [
"Quoc"
],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dras",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ALTA",
"volume": "",
"issue": "",
"pages": "108--113",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dat Quoc Nguyen, Thanh Vu, Dai Quoc Nguyen, Mark Dras, and Mark Johnson. 2017. From word segmentation to POS tagging for Vietnamese. In Proceedings of ALTA, pages 108-113.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Attentive neural network for named entity recognition in Vietnamese",
"authors": [
{
"first": "Kim",
"middle": [
"Anh"
],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Ngan",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Cam-Tu",
"middle": [],
"last": "Nguyen",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of RIVF",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kim Anh Nguyen, Ngan Dong, and Cam-Tu Nguyen. 2019. Attentive neural network for named entity recognition in Vietnamese. In Proceedings of RIVF.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A neural joint model for Vietnamese word segmentation, POS tagging and dependency parsing",
"authors": [
{
"first": "Dat",
"middle": [
"Quoc"
],
"last": "Nguyen",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of ALTA",
"volume": "",
"issue": "",
"pages": "28--34",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dat Quoc Nguyen. 2019. A neural joint model for Vietnamese word segmentation, POS tagging and dependency parsing. In Proceedings of ALTA, pages 28-34.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "1--15",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of NAACL, pages 1-15, Louisiana, USA.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "End-to-end recurrent neural network models for Vietnamese named entity recognition: Word-level vs. character-level",
"authors": [
{
"first": "Thai",
"middle": [
"Hoang"
],
"last": "Pham",
"suffix": ""
},
{
"first": "Phuong",
"middle": [],
"last": "Le-Hong",
"suffix": ""
}
],
"year": 2017,
"venue": "PACLING -Conference of the Pacific Association of Computational Linguistics",
"volume": "",
"issue": "",
"pages": "219--232",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thai Hoang Pham and Phuong Le-Hong. 2017a. End-to-end recurrent neural network models for Vietnamese named entity recognition: Word-level vs. character-level. In PACLING - Conference of the Pacific Association of Computational Linguistics, pages 219-232.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "The importance of automatic syntactic features in Vietnamese named entity recognition",
"authors": [
{
"first": "Thai",
"middle": [
"Hoang"
],
"last": "Pham",
"suffix": ""
},
{
"first": "Phuong",
"middle": [],
"last": "Le-Hong",
"suffix": ""
}
],
"year": 2017,
"venue": "The 31st Pacific Asia Conference on Language, Information and Computation PACLIC",
"volume": "31",
"issue": "",
"pages": "97--103",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thai Hoang Pham and Phuong Le-Hong. 2017b. The importance of automatic syntactic features in Vietnamese named entity recognition. In The 31st Pacific Asia Conference on Language, Information and Computation PACLIC 31 (2017), pages 97-103.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "NNVLP: A neural network-based Vietnamese language processing toolkit",
"authors": [
{
"first": "Thai",
"middle": [
"Hoang"
],
"last": "Pham",
"suffix": ""
},
{
"first": "Xuan",
"middle": [
"Khoai"
],
"last": "Pham",
"suffix": ""
},
{
"first": "Tuan",
"middle": [
"Anh"
],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Phuong",
"middle": [],
"last": "Le-Hong",
"suffix": ""
}
],
],
"year": 2017,
"venue": "The 8th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thai Hoang Pham, Xuan Khoai Pham, Tuan Anh Nguyen, and Phuong Le-Hong. 2017. NNVLP: A neural network-based Vietnamese language processing toolkit. In The 8th International Joint Conference on Natural Language Processing (IJCNLP 2017). Demonstration Paper.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Improving language understanding by generative pre-training",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Karthik",
"middle": [],
"last": "Narasimhan",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Salimans",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. Preprint.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of NIPS.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "VnCoreNLP: A Vietnamese natural language processing toolkit",
"authors": [
{
"first": "Thanh",
"middle": [],
"last": "Vu",
"suffix": ""
},
{
"first": "Dat",
"middle": [
"Quoc"
],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Dai",
"middle": [
"Quoc"
],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dras",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of NAACL: Demonstrations",
"volume": "",
"issue": "",
"pages": "56--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thanh Vu, Dat Quoc Nguyen, Dai Quoc Nguyen, Mark Dras, and Mark Johnson. 2018. VnCoreNLP: A Vietnamese natural language processing toolkit. In Proceedings of NAACL: Demonstrations, pages 56-60.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "ETNLP: A visual-aided systematic approach to select pre-trained embeddings for a downstream task",
"authors": [
{
"first": "Xuan-Son",
"middle": [],
"last": "Vu",
"suffix": ""
},
{
"first": "Thanh",
"middle": [],
"last": "Vu",
"suffix": ""
},
{
"first": "Son",
"middle": [],
"last": "Tran",
"suffix": ""
},
{
"first": "Lili",
"middle": [],
"last": "Jiang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of RANLP",
"volume": "",
"issue": "",
"pages": "1285--1294",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xuan-Son Vu, Thanh Vu, Son Tran, and Lili Jiang. 2019. ETNLP: A visual-aided systematic approach to select pre-trained embeddings for a downstream task. In Proceedings of RANLP, pages 1285-1294.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "XLNet: Generalized autoregressive pretraining for language understanding",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V."
],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of NeurIPS",
"volume": "",
"issue": "",
"pages": "5754--5764",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. In Proceedings of NeurIPS, pages 5754-5764.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "The basic structure of the BERT transformation; E_k is the embedding of the k-th token.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "Our proposed end-to-end architecture.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"text": "An overview of replaced token detection by the ELECTRA model on a sample drawn from vELECTRA",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF3": {
"text": "Decoding time on PoS task -VLSP 2013",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF4": {
"text": "Decoding time on NER task -VLSP 2016",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF2": {
"type_str": "table",
"content": "<table><tr><td>No.</td><td>VLSP 2016</td><td/><td>VLSP 2018</td><td/></tr><tr><td colspan=\"2\">Existing models</td><td/><td/><td/></tr><tr><td>1.</td><td>TRE+BI (Le-Hong, 2016)</td><td>87.98</td><td>VietNER</td><td>76.63</td></tr><tr><td>2.</td><td>BiLSTM_CNN_CRF (Pham and Le-Hong, 2017a)</td><td>88.59</td><td>ZA-NER</td><td>74.70</td></tr><tr><td>3.</td><td/><td/><td/><td/></tr></table>",
"text": "shows experimental results using different proposed architectures on the top of mBERT, viB-ERT and vELECTRA on two benchmark datasets from the campaign VLSP 2016 and VLSP 2018.",
"html": null,
"num": null
},
"TABREF4": {
"type_str": "table",
"content": "<table/>",
"text": "Performance of our proposed models on the NER task. ZA-NER (Luong and Pham, 2018) is the best system of VLSP (Huyen et al., 2018). VietNER is from",
"html": null,
"num": null
}
}
}
}