ACL-OCL / Base_JSON /prefixP /json /paclic /2020.paclic-1.11.json
{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:01:12.234260Z"
},
"title": "Utilizing BERT for Question Retrieval in Vietnamese E-commerce Sites",
"authors": [
{
"first": "Thi-Thanh",
"middle": [],
"last": "Ha",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Hanoi University of Science and Technology",
"location": {
"country": "Vietnam"
}
},
"email": ""
},
{
"first": "Van-Nha",
"middle": [],
"last": "Nguyen",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Kiem-Hieu",
"middle": [],
"last": "Nguyen",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Kim-Anh",
"middle": [],
"last": "Nguyen",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Tien-Thanh",
"middle": [],
"last": "Nguyen",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Question retrieval is an important task in question answering. It is considered challenging due to the lexical gap issue, i.e., similar questions can be expressed with different words or phrases. Although numerous studies have been conducted on question retrieval in English, the corresponding problem in Vietnamese has not been studied extensively. In this work, we present our efforts on question retrieval for Vietnamese e-commerce sites in two main directions: (1) building a Vietnamese dataset for question retrieval in the e-commerce domain, and (2) conducting experiments with recent deep learning techniques, including BERT-based classifiers. Our results provide practical examples of effectively employing these models on Vietnamese e-commerce data. In particular, we demonstrate that a BERT model pre-trained on e-commerce texts yields a significant improvement in question retrieval over BERT trained on general-domain texts.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Question retrieval is an important task in question answering. It is considered challenging due to the lexical gap issue, i.e., similar questions can be expressed with different words or phrases. Although numerous studies have been conducted on question retrieval in English, the corresponding problem in Vietnamese has not been studied extensively. In this work, we present our efforts on question retrieval for Vietnamese e-commerce sites in two main directions: (1) building a Vietnamese dataset for question retrieval in the e-commerce domain, and (2) conducting experiments with recent deep learning techniques, including BERT-based classifiers. Our results provide practical examples of effectively employing these models on Vietnamese e-commerce data. In particular, we demonstrate that a BERT model pre-trained on e-commerce texts yields a significant improvement in question retrieval over BERT trained on general-domain texts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Community-based Question Answering (CQA) systems 12 have become increasingly popular online platforms. Community websites, where users can post their own questions or answer other users' questions, provide frameworks for people with dissimilar backgrounds to share their knowledge and experiences. When a user posts a new question on a community website, it usually takes a while for other users to respond. Moreover, over time the number of questions and answers stored in the database grows enormous and challenging to handle, which means that the possibility of finding duplicated questions increases. As a result, it is time-consuming to retrieve good answers to a given question from an archive of question-answer pairs. To reduce latency, CQA systems should automatically find questions similar to a given new question, in the hope that the answers to these related questions will be useful for the new question.",
"cite_spans": [
{
"start": 49,
"end": 51,
"text": "12",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The problem of question retrieval is defined as follows: Given a query question and a set of existing questions, return the most similar questions to the query. Question retrieval has been extensively investigated with the purpose of answering new questions using previous answers in databases [Zhou et al.2013 , Zhou et al.2015 . Previous studies delved into the lexical gap challenge in which a query question may contain words and phrases different from its similar questions. Figure 1 is a typical pair of similar questions in our Vietnamese dataset.",
"cite_spans": [
{
"start": 294,
"end": 310,
"text": "[Zhou et al.2013",
"ref_id": null
},
{
"start": 311,
"end": 328,
"text": ", Zhou et al.2015",
"ref_id": null
}
],
"ref_spans": [
{
"start": 480,
"end": 488,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In order to deal with the lexical gap challenge, previous research applied soft alignment techniques originating from machine translation, or implicitly disambiguated word meaning using topic models [Cai et al.2011]. A large number of research methods in recent years have focused on end-to-end approaches based on deep neural networks, without depending on feature engineering or external knowledge bases [Wu et al.2018, Tay et al.2017]. These approaches leverage pre-trained embeddings and specific-purpose network structures aimed at representing syntactic and semantic information in questions. Figure 1 shows a typical example of a similar question pair. Question 1: L\u00e0m \u01a1n ch\u1ec9 gi\u00f9m t\u00f4i c\u00e1ch t\u1eaft ph\u00edm slide to unclock tr\u00ean samsung s9 plus (Can you please show me how to turn off slide to unlock button on samsung s9 plus). Question 2: C\u00e1ch t\u1eaft m\u00e0n h\u00ecnh slide to unclock ch\u1ec9 \u0111\u1ec3 m\u00e0n h\u00ecnh ki\u1ec3u vu\u1ed1t \u0111\u1ec3 m\u1edf kh\u00f3a m\u00e1y ss j7 pro (how to turn off slide to unlock screen on ss j7 pro). Recently, BERT, a pre-trained language model, has achieved state-of-the-art performance in many natural language processing (NLP) tasks [Devlin et al.2018]. However, to our knowledge, BERT has not been applied to Vietnamese question retrieval.",
"cite_spans": [
{
"start": 199,
"end": 215,
"text": "[Cai et al.2011]",
"ref_id": null
},
{
"start": 406,
"end": 420,
"text": "[Wu et al.2018",
"ref_id": null
},
{
"start": 420,
"end": 436,
"text": ", Tay et al.2017",
"ref_id": null
},
{
"start": 1117,
"end": 1136,
"text": "[Devlin et al.2018]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 599,
"end": 607,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the scope of this paper, our contributions are: (1) A public Vietnamese CQA dataset in the e-commerce domain for the question retrieval problem. (2) Experimentation with various deep learning models on this dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(3) Empirical findings on tuning and visualizing attention of these models. (4) A pre-trained BERT embedding model for Vietnamese E-commerce texts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In recent years, numerous methods have been proposed to deal with community question answering tasks and have achieved state-of-the-art results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Traditional methods deal with CQA problems by transforming question texts into a Bag-of-Words (BoW) representation with a tf-idf weighting scheme, such as BM25 [Robertson et al.1995]. Count-based language models [Cao et al.2009] have also been a popular way to model questions as sequences instead of bags of words. Nonetheless, such models might not be useful when there is a vast number of possible sequences: a sentence must share an exact pattern, such as a string or word sequence, with a particular part of another sentence. Another popular model based on semantic similarity is Latent Dirichlet Allocation (LDA) [Blei et al.2002], which is a probabilistic model that represents questions through a set of latent topics. The learned topic distribution is then applied to retrieve similar historical questions. In another direction, various methods have been developed based on machine translation techniques, such as the monolingual phrase-based translation model, to measure question similarity [Jeon et al.2005] or question-answer similarity.",
"cite_spans": [
{
"start": 160,
"end": 182,
"text": "[Robertson et al.1995]",
"ref_id": null
},
{
"start": 212,
"end": 228,
"text": "[Cao et al.2009]",
"ref_id": null
},
{
"start": 619,
"end": 636,
"text": "[Blei et al.2002]",
"ref_id": null
},
{
"start": 1002,
"end": 1019,
"text": "[Jeon et al.2005]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Top performing systems in SemEval 2017 Task 3 challenge [Nakov et al.2017 ] use sophisticated feature engineering such as exploiting kernel functions or extracting tree kernel features from parse trees. For instance, the best-performing system [Filice et al.2017] uses similarity features like cosine distance or Euclidean distance, and lexical, syntactic, semantic, and distributed representations to learn an SVM classifier.",
"cite_spans": [
{
"start": 56,
"end": 73,
"text": "[Nakov et al.2017",
"ref_id": null
},
{
"start": 244,
"end": 263,
"text": "[Filice et al.2017]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Recent studies in question retrieval and answer selection [Severyn and Moschitti2015, Tan et al.2015] in CQA highlight the effectiveness of neural network models over time-consuming handcrafted feature engineering. These methods learn distributed vector representations of text and measure question-question or question-answer similarity for question retrieval or answer selection, respectively [Bonadiman et al.2017, Severyn and Moschitti2015].",
"cite_spans": [
{
"start": 58,
"end": 70,
"text": "[Severyn and",
"ref_id": "BIBREF15"
},
{
"start": 71,
"end": 101,
"text": "Moschitti2015, Tan et al.2015]",
"ref_id": null
},
{
"start": 395,
"end": 429,
"text": "[Bonadiman et al.2017, Severyn and",
"ref_id": null
},
{
"start": 430,
"end": 444,
"text": "Moschitti2015]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "BERT (Bidirectional Encoder Representations from Transformers) was proposed in [Devlin et al.2018 ] as a kind of pre-trained transformer network [Vaswani et al.2017] , which was applied to various NLP tasks with state-of-the-art performance, including sentence classification, question answering, and sentence pair regression. Several prior studies substantiate that BERT could perform well in many cases [Liu et al.2019 , Hao et al.2019 . Particularly, [Liu et al.2019 ] illustrated that the performance of BERT can be further improved by some small adjustments in the pre-training process. Besides, [Hao et al.2019 ] focused on the interpretation of self-attention, which is one of the most fundamental components of BERT.",
"cite_spans": [
{
"start": 79,
"end": 97,
"text": "[Devlin et al.2018",
"ref_id": null
},
{
"start": 145,
"end": 165,
"text": "[Vaswani et al.2017]",
"ref_id": null
},
{
"start": 405,
"end": 420,
"text": "[Liu et al.2019",
"ref_id": null
},
{
"start": 421,
"end": 437,
"text": ", Hao et al.2019",
"ref_id": null
},
{
"start": 454,
"end": 469,
"text": "[Liu et al.2019",
"ref_id": null
},
{
"start": 601,
"end": 616,
"text": "[Hao et al.2019",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Prior research was generally conducted on English datasets. In this paper, we explore how well recent deep learning models, especially pre-trained BERT, can perform on Vietnamese. At the same time, we visualize some attention layers to illustrate the effectiveness of BERT models on Vietnamese.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Figure 2: BERT for question retrieval [Devlin et al.2018]",
"cite_spans": [
{
"start": 38,
"end": 57,
"text": "[Devlin et al.2018]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 0,
"end": 8,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "BERT BASE",
"sec_num": null
},
{
"text": "BERT (Bidirectional Encoder Representations from Transformers) [Devlin et al.2018] generates a sentence representation by jointly learning two tasks: masked language modeling and next-sentence prediction. BERT models can be fine-tuned effectively on both sentence-level and word-level tasks.",
"cite_spans": [
{
"start": 63,
"end": 81,
"text": "[Devlin et al.2018",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BERT BASE",
"sec_num": null
},
{
"text": "BERT-BASE has a deep architecture with 12 layers, a hidden size of 768, and 12 self-attention heads. The model starts from a word embedding layer. In each of the 12 layers, multi-headed attention is computed over the word representations of the previous layer to generate a new intermediate representation. As a result, a token has 12 intermediate representations of the same size.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT BASE",
"sec_num": null
},
{
"text": "In the masked language modeling task, 15% of the tokens are chosen at random to obtain a bi-directional pre-trained language model. To avoid a mismatch between pre-training and fine-tuning, among those 15% of tokens, a token is replaced with [MASK] 80% of the time, replaced by another random token 10% of the time, and left unchanged the remaining 10% of the time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT BASE",
"sec_num": null
},
{
"text": "In the next-sentence prediction task, given a pair of sentences, the aim of this task is to predict whether the second sentence is the true next sentence of the first one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT BASE",
"sec_num": null
},
{
"text": "In this paper, we apply Multilingual BERT-BASE model (Figure 2 ), which is considered effective on small datasets. It has been shown to generalize well across languages through a shared multilingual representation, without explicit cross-lingual training.",
"cite_spans": [],
"ref_spans": [
{
"start": 53,
"end": 62,
"text": "(Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "BERT for Vietnamese Question Retrieval",
"sec_num": "3.2"
},
{
"text": "Our experiments consist of two parts: pre-training BERT on 1.1M unlabeled Vietnamese e-commerce texts (see Table 2); and fine-tuning for the question retrieval problem on a labeled e-commerce dataset. The parameters of all layers of our model are fine-tuned at once. A special classification token ([CLS]) and separator token ([SEP]) are added to the input of our model as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT for Vietnamese Question Retrieval",
"sec_num": "3.2"
},
{
"text": "BERT-Input(q1, q2) = [CLS] q1 [SEP] q2 [SEP]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT for Vietnamese Question Retrieval",
"sec_num": "3.2"
},
{
"text": "where q1 and q2 are two questions. The final hidden state corresponding to the [CLS] token is used as the aggregate sequence representation for classification. A softmax activation in the last layer predicts the label of the question pair.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT for Vietnamese Question Retrieval",
"sec_num": "3.2"
},
{
"text": "We collected questions posted by users in the QA section of The gioi Di dong, an e-commerce website for mobile phones, laptops, and other electronic devices 3 . An Elasticsearch engine was built from the corpus. We selected a random subset as original questions. Each question was issued to Elasticsearch as a query. Thereafter, for the first 10 returned questions, human annotators were asked to assess their equivalence to the original question. To increase the difficulty of the task, we removed original questions that could be easily handled by Elasticsearch (i.e., questions that pose little lexical gap challenge).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4"
},
{
"text": "We divided the annotated data into three separate sets: training, development, and test (Table 1). On average, 30% of questions were annotated as relevant to the original question.",
"cite_spans": [],
"ref_spans": [
{
"start": 88,
"end": 96,
"text": "(Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4"
},
{
"text": "We also use the large corpus for pre-trained embeddings (Table 2) .",
"cite_spans": [],
"ref_spans": [
{
"start": 56,
"end": 65,
"text": "(Table 2)",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4"
},
{
"text": "Our models were implemented in TensorFlow, and all experiments were conducted on an NVIDIA Tesla P100 16GB GPU. We used Mean Average Precision (MAP) for evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and discussions",
"sec_num": "5"
},
{
"text": "Hyper-parameters were tuned on the development set. Table 3 presents detailed experimental results on Thegioididong. The results are divided into three parts: vanilla neural networks with an LSTM/CNN encoder; BERT pre-trained on different corpora; and baseline bag-of-words models. In all models except PhoBERT, we used syllables as the input unit. For PhoBERT, we used its built-in word segmentation module 4 . Figure 4 illustrates the accuracy of nine models. Overall, both Table 3 and Figure 4 show that deep learning approaches outperform the baselines, with BERT models yielding a substantial improvement, especially when pre-trained on in-domain data. Figure 3 shows the architecture of our models.",
"cite_spans": [
{
"start": 408,
"end": 409,
"text": "4",
"ref_id": null
}
],
"ref_spans": [
{
"start": 52,
"end": 59,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 412,
"end": 420,
"text": "Figure 4",
"ref_id": "FIGREF1"
},
{
"start": 476,
"end": 483,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 488,
"end": 496,
"text": "Figure 4",
"ref_id": "FIGREF1"
},
{
"start": 658,
"end": 666,
"text": "Figure 3",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Experiments and discussions",
"sec_num": "5"
},
{
"text": "\u2022 LSTM: Both questions are encoded by a shared-weight bi-directional LSTM. The representation of each question is the concatenation of the last hidden units of each direction. The representations of the two questions are concatenated and fed into an MLP for prediction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LSTM/CNN Networks",
"sec_num": "5.1"
},
{
"text": "\u2022 CNN: The bi-directional LSTM building block is replaced by a CNN.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LSTM/CNN Networks",
"sec_num": "5.1"
},
{
"text": "\u2022 ABCNN [Yin et al.2015] : This model employs an attention feature matrix to influence the convolution. The attention matrix is generated by matching units of the first question's feature map with units of the second question's feature map. It can be viewed as a new feature map of the two questions to feed into the next layer.",
"cite_spans": [
{
"start": 8,
"end": 24,
"text": "[Yin et al.2015]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "LSTM/CNN Networks",
"sec_num": "5.1"
},
{
"text": "\u2022 LSTM/CNN-attention: In this model, outputs for all words of both questions are passed through a word-wise dot product to create a word-by-word attention-like matrix. The attention-updated hidden vectors of both questions serve as inputs to a CNN structure. Global max pooling is then applied to collect important features before prediction. This model is close to the siamese LSTM networks in [Tan et al.2016].",
"cite_spans": [
{
"start": 395,
"end": 411,
"text": "[Tan et al.2016]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "LSTM/CNN Networks",
"sec_num": "5.1"
},
{
"text": "We pre-trained syllable embeddings using word2vec on the unlabeled e-commerce corpus. Embedding layers were initialized with these pre-trained vectors. Adam [Kingma and Ba2014] is used as the optimization function. Hyper-parameters used in each experiment are shown in Table 4.",
"cite_spans": [],
"ref_spans": [
{
"start": 270,
"end": 277,
"text": "Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "LSTM/CNN Networks",
"sec_num": "5.1"
},
{
"text": "As shown in Table 3 , simple concatenation of LSTM/CNN outputs followed by an MLP for prediction slightly outperforms the baseline models. Learning attention weights as in ABCNN even hurts performance. In LSTM/CNN-attention, directly computing word-by-word attention with a dot product yields a significant improvement.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 19,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "LSTM/CNN Networks",
"sec_num": "5.1"
},
{
"text": "BERT experiments are performed using the Multilingual BERT-BASE model 5 . We first pre-trained BERT on unlabeled Vietnamese e-commerce texts with a maximum length of 200, a batch size of 32, and a learning rate of 2e\u22125 for 20000 steps. We call this model BERT4ecommerce. After pre-training, the model was fine-tuned on question retrieval using the Thegioididong dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-training and Fine-tuning Bert",
"sec_num": "5.2"
},
{
"text": "We also compare our in-domain pre-trained model with other general-domain pre-trained BERT models:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-training and Fine-tuning Bert",
"sec_num": "5.2"
},
{
"text": "\u2022 BERT-multilingual [Pires et al.2019 As shown in both Table 3 and Figure 4 , significant improvement was obtained by using BERT. In particular, BERT4ecommerce achieved the highest performance (70.50% MAP, 77.4% AUC). These experiments support the idea that pre-training on the same domain as the target task has a positive impact on the final result. The e-commerce vocabulary covers a wide range of words for technological devices, such as Iphone, Samsung S9, \"mua-tra-gop\" (pay by installments), and so on. Moreover, e-commerce data, and social data in general, comes with no guarantee of correct spelling, grammar, or word usage. For instance, numerous spelling mistakes and abbreviations such as \"thoong bao\" (notification), \"mk\" (password), \"ss\" (Samsung), and \"f\" (keyboard) were found in our dataset. Thus, retraining word embeddings on the e-commerce domain is necessary and much more effective in this setting than using a model pre-trained on news-style sources such as Wikipedia.",
"cite_spans": [
{
"start": 20,
"end": 37,
"text": "[Pires et al.2019",
"ref_id": null
}
],
"ref_spans": [
{
"start": 55,
"end": 62,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 67,
"end": 75,
"text": "Figure 4",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Pre-training and Fine-tuning Bert",
"sec_num": "5.2"
},
{
"text": "So far, all our models have been based on syllables. In this section, we use a word-based BERT model and apply it to segmented questions. We chose PhoBERT [Nguyen and Nguyen2020], a model pre-trained on 3B segmented texts from Wikipedia and news. Results show that PhoBERT performs better than BERT-multilingual and BERT4Vn, which indicates that word segmentation is helpful for question retrieval in in-domain social texts. Nevertheless, without word segmentation, BERT pre-trained on in-domain texts still outperforms PhoBERT by a large margin. This result is encouraging, as word segmentation on in-domain texts suffers from unknown words and spelling mistakes that could propagate errors to downstream tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word-based BERT",
"sec_num": "5.3"
},
{
"text": "It is argued in [Wiegreffe and Pinter2019] that attention can be used to explain model prediction. In this section, we visualize attention of BERT4ecommerce and ABCNN to point out that the self-attention of BERT can learn semantic relationships in questions better than commonly used attention mechanisms such as ABCNN. (Table 5 lists the hyper-parameters for fine-tuning the BERT models: max length 200 and learning rate 2e\u22125 for all models, with 650 steps to reach the maximum for BERT-multilingual, 1600 for BERT4Vn, 1000 for PhoBERT, and 900 for BERT4ecommerce.) An attention matrix of BERT was extracted from the first attention layer. Figure 5 visualizes word-by-word attention between the query question (Y-axis) and a candidate question (X-axis). This visualization presents alignment weights between the two questions, where darker color corresponds to larger values.",
"cite_spans": [
{
"start": 16,
"end": 42,
"text": "[Wiegreffe and Pinter2019]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 321,
"end": 328,
"text": "Table 5",
"ref_id": null
},
{
"start": 641,
"end": 649,
"text": "Figure 5",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Attention visualization",
"sec_num": "5.4"
},
{
"text": "The attention distribution of BERT is sparser than that of ABCNN. This helps to strengthen the interaction between important words, such as 'slide' and 'm\u00e0n h\u00ecnh' (screen), 'lock', and 't\u1eaft ph\u00edm' and 'kh\u00f3a m\u00e1y', as seen in the example. The research in [Cui et al.2019] shows that the sparse attention matrix obtained from BERT leads to a more interpretable representation of the inputs.",
"cite_spans": [
{
"start": 252,
"end": 268,
"text": "[Cui et al.2019]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Attention visualization",
"sec_num": "5.4"
},
{
"text": "We carried out a range of experiments with LSTM, LSTM-attention, CNN, ABCNN, and fine-tuned BERT for question retrieval on a Vietnamese dataset. In particular, our BERT model pre-trained on an e-commerce corpus could be useful for related research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "We hope our work can give a boost to applications related to CQA on Vietnamese e-commerce data. In the future, we plan to investigate the effect of word segmentation on question answering in the e-commerce domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "https://stackoverflow.com/ 2 https://www.qatarliving.com/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/VinAIResearch/PhoBERT",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/google-research/bert 6 https://github.com/lampts/bert4vn",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Latent dirichlet allocation",
"authors": [
{
"first": "David",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
}
],
"year": 2002,
"venue": "Advances in Neural Information Processing Systems",
"volume": "14",
"issue": "",
"pages": "601--608",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2002. Latent dirichlet allocation. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14, pages 601- 608. MIT Press.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Multitask learning with deep neural networks for community question answering",
"authors": [
{
"first": "Daniele",
"middle": [],
"last": "Bonadiman",
"suffix": ""
},
{
"first": "Antonio",
"middle": [
"E"
],
"last": "Uva",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniele Bonadiman, Antonio E. Uva, and Alessandro Moschitti. 2017. Multitask learning with deep neural networks for community question answering. CoRR, abs/1702.03706.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Learning the latent topics for question retrieval in community QA",
"authors": [
{
"first": "Li",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Guangyou",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of 5th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "273--281",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li Cai, Guangyou Zhou, Kang Liu, and Jun Zhao. 2011. Learning the latent topics for question retrieval in com- munity QA. In Proceedings of 5th International Joint Conference on Natural Language Processing, pages 273-281, Chiang Mai, Thailand, November. Asian Federation of Natural Language Processing.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The use of categorization information in language models for question retrieval",
"authors": [
{
"first": "Xin",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Gao",
"middle": [],
"last": "Cong",
"suffix": ""
},
{
"first": "Bin",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "S\u00f8ndergaard Jensen",
"suffix": ""
},
{
"first": "Ce",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 18th ACM Conference on Information and Knowledge Management, CIKM '09",
"volume": "",
"issue": "",
"pages": "265--274",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xin Cao, Gao Cong, Bin Cui, Christian S\u00f8ndergaard Jensen, and Ce Zhang. 2009. The use of categoriza- tion information in language models for question re- trieval. In Proceedings of the 18th ACM Conference on Information and Knowledge Management, CIKM '09, page 265-274, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Fine-tune BERT with sparse self-attention mechanism",
"authors": [
{
"first": "Baiyun",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Yingming",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Zhongfei",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "3548--3553",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Baiyun Cui, Yingming Li, Ming Chen, and Zhongfei Zhang. 2019. Fine-tune BERT with sparse self-attention mechanism. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3548-3553, Hong Kong, China, November. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "BERT: pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "KeLP at SemEval-2017 task 3: Learning pairwise patterns in community question answering",
"authors": [
{
"first": "Simone",
"middle": [],
"last": "Filice",
"suffix": ""
},
{
"first": "Giovanni",
"middle": [],
"last": "Da San Martino",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)",
"volume": "",
"issue": "",
"pages": "326--333",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simone Filice, Giovanni Da San Martino, and Alessandro Moschitti. 2017. KeLP at SemEval-2017 task 3: Learning pairwise patterns in community question answering. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 326-333, Vancouver, Canada, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Visualizing and understanding the effectiveness of BERT",
"authors": [
{
"first": "Yaru",
"middle": [],
"last": "Hao",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Ke",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yaru Hao, Li Dong, Furu Wei, and Ke Xu. 2019. Visualizing and understanding the effectiveness of BERT. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Finding similar questions in large question and answer archives",
"authors": [
{
"first": "Jiwoon",
"middle": [],
"last": "Jeon",
"suffix": ""
},
{
"first": "W",
"middle": [
"Bruce"
],
"last": "Croft",
"suffix": ""
},
{
"first": "Joon Ho",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 14th ACM International Conference on Information and Knowledge Management, CIKM '05",
"volume": "",
"issue": "",
"pages": "84--90",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiwoon Jeon, W. Bruce Croft, and Joon Ho Lee. 2005. Finding similar questions in large question and answer archives. In Proceedings of the 14th ACM International Conference on Information and Knowledge Management, CIKM '05, pages 84-90, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "SemEval-2017 task 3: Community question answering",
"authors": [
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Doris",
"middle": [],
"last": "Hoogeveen",
"suffix": ""
},
{
"first": "Llu\u00eds",
"middle": [],
"last": "M\u00e0rquez",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
},
{
"first": "Hamdy",
"middle": [],
"last": "Mubarak",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
},
{
"first": "Karin",
"middle": [],
"last": "Verspoor",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)",
"volume": "",
"issue": "",
"pages": "27--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Preslav Nakov, Doris Hoogeveen, Llu\u00eds M\u00e0rquez, Alessandro Moschitti, Hamdy Mubarak, Timothy Baldwin, and Karin Verspoor. 2017. SemEval-2017 task 3: Community question answering. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 27-48, Vancouver, Canada, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "PhoBERT: Pre-trained language models for Vietnamese",
"authors": [
{
"first": "Dat Quoc",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Nguyen",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dat Quoc Nguyen and A. Nguyen. 2020. PhoBERT: Pre-trained language models for Vietnamese. ArXiv, abs/2003.00744.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "How multilingual is multilingual BERT?",
"authors": [
{
"first": "Telmo",
"middle": [],
"last": "Pires",
"suffix": ""
},
{
"first": "Eva",
"middle": [],
"last": "Schlinger",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Garrette",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4996--5001",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual BERT? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4996-5001, Florence, Italy, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Okapi at TREC-3",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Robertson",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Walker",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "M",
"middle": [
"M"
],
"last": "Hancock-Beaulieu",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Gatford",
"suffix": ""
}
],
"year": 1995,
"venue": "Overview of the Third Text REtrieval Conference (TREC-3)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Robertson, S. Walker, S. Jones, M. M. Hancock-Beaulieu, and M. Gatford. 1995. Okapi at TREC-3. In Overview of the Third Text REtrieval Conference (TREC-3), January.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Learning to rank short text pairs with convolutional deep neural networks",
"authors": [
{
"first": "Aliaksei",
"middle": [],
"last": "Severyn",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '15",
"volume": "",
"issue": "",
"pages": "373--382",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aliaksei Severyn and Alessandro Moschitti. 2015. Learning to rank short text pairs with convolutional deep neural networks. In Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '15, pages 373-382, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "LSTM-based deep learning models for non-factoid answer selection",
"authors": [
{
"first": "Ming",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ming Tan, Bing Xiang, and Bowen Zhou. 2015. LSTM-based deep learning models for non-factoid answer selection. CoRR, abs/1511.04108.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Improved representation learning for question answer matching",
"authors": [
{
"first": "Ming",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Cicero",
"middle": [],
"last": "dos Santos",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
"volume": "1",
"issue": "",
"pages": "464--473",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ming Tan, Cicero dos Santos, Bing Xiang, and Bowen Zhou. 2016. Improved representation learning for question answer matching. In Proceedings of the 54th",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Annual Meeting of the Association for Computational Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "1",
"issue": "",
"pages": "464--473",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 464-473, Berlin, Germany, August. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Enabling efficient question answer retrieval via hyperbolic neural networks",
"authors": [
{
"first": "Yi",
"middle": [],
"last": "Tay",
"suffix": ""
},
{
"first": "Anh",
"middle": [
"Tuan"
],
"last": "Luu",
"suffix": ""
},
{
"first": "Siu Cheung",
"middle": [],
"last": "Hui",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yi Tay, Anh Tuan Luu, and Siu Cheung Hui. 2017. Enabling efficient question answer retrieval via hyperbolic neural networks. CoRR, abs/1707.07847.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems 30",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998-6008. Curran Associates, Inc.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Attention is not not explanation",
"authors": [
{
"first": "Sarah",
"middle": [],
"last": "Wiegreffe",
"suffix": ""
},
{
"first": "Yuval",
"middle": [],
"last": "Pinter",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "11--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sarah Wiegreffe and Yuval Pinter. 2019. Attention is not not explanation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 11-20, Hong Kong, China, November. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Question condensing networks for answer selection in community question answering",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Houfeng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1746--1755",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Wu, Xu Sun, and Houfeng Wang. 2018. Question condensing networks for answer selection in community question answering. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1746-1755, Melbourne, Australia, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "ABCNN: attention-based convolutional neural network for modeling sentence pairs",
"authors": [
{
"first": "Wenpeng",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenpeng Yin, Hinrich Sch\u00fctze, Bing Xiang, and Bowen Zhou. 2015. ABCNN: attention-based convolutional neural network for modeling sentence pairs. CoRR, abs/1512.05193.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Towards faster and better retrieval models for question search",
"authors": [
{
"first": "Guangyou",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Yubo",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Daojian",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 22nd ACM International Conference on Information and Knowledge Management, CIKM '13",
"volume": "",
"issue": "",
"pages": "2139--2148",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guangyou Zhou, Yubo Chen, Daojian Zeng, and Jun Zhao. 2013. Towards faster and better retrieval models for question search. In Proceedings of the 22nd ACM International Conference on Information and Knowledge Management, CIKM '13, pages 2139-2148, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Learning continuous word embedding with metadata for question retrieval in community question answering",
"authors": [
{
"first": "Guangyou",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Tingting",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Po",
"middle": [],
"last": "Hu",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "250--259",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guangyou Zhou, Tingting He, Jun Zhao, and Po Hu. 2015. Learning continuous word embedding with metadata for question retrieval in community question answering. pages 250-259, 01.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Baseline deep learning models in question retrieval"
},
"FIGREF1": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "The ROC curves of prediction models."
},
"FIGREF2": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Visualization of BERT and ABCNN"
},
"TABREF1": {
"num": null,
"html": null,
"type_str": "table",
"text": "Statistics of Thegioididong dataset.",
"content": "<table><tr><td>Corpus size</td><td>1.1M</td></tr><tr><td colspan=\"2\">Vocabulary size (syllable) 151,735</td></tr><tr><td>Average length (syllable)</td><td>31</td></tr></table>"
},
"TABREF2": {
"num": null,
"html": null,
"type_str": "table",
"text": "Statistics of unlabeled corpus crawled from The gioi Di dong.",
"content": "<table/>"
},
"TABREF4": {
"num": null,
"html": null,
"type_str": "table",
"text": "MAP score of models on Vietnamese dataset.",
"content": "<table/>"
},
"TABREF6": {
"num": null,
"html": null,
"type_str": "table",
"text": "Emb-size Hid-size/filter-size L-rate P drop Batch size epochs Params (x10 5 )",
"content": "<table><tr><td>LSTM</td><td>300</td><td colspan=\"2\">300 0.0001</td><td>0.2</td><td>64</td><td>25</td><td>21</td></tr><tr><td>LSTM/CNN-attention</td><td>300</td><td colspan=\"2\">300 0.0001</td><td>0.2</td><td>64</td><td>25</td><td>27</td></tr><tr><td>CNN</td><td>300</td><td>3</td><td>0.003</td><td>0.5</td><td>64</td><td>25</td><td>33</td></tr><tr><td>ABCNN</td><td>300</td><td>3</td><td>0.001</td><td>0.2</td><td>32</td><td>25</td><td>34</td></tr></table>"
},
"TABREF7": {
"num": null,
"html": null,
"type_str": "table",
"text": "The hyper-parameters set of LSTM/CNN models",
"content": "<table/>"
}
}
}
}