| { |
| "paper_id": "2022", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T15:33:37.258511Z" |
| }, |
| "title": "Transfer Learning and Masked Generation for Answer Verbalization", |
| "authors": [ |
| { |
| "first": "Sebastien", |
| "middle": [], |
| "last": "Montella", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Aix-Marseille Univ. CNRS", |
| "location": { |
| "settlement": "LIS / Marseille", |
"country": "France"
| } |
| }, |
| "email": "sebastien.montella@orange.com" |
| }, |
| { |
| "first": "Lina", |
| "middle": [ |
| "M" |
| ], |
| "last": "Rojas-Barahona", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "linamaria.rojasbarahona@orange.com" |
| }, |
| { |
| "first": "Frederic", |
| "middle": [], |
| "last": "Bechet", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "frederic.bechet@lis-lab.fr" |
| }, |
| { |
| "first": "Johannes", |
| "middle": [], |
| "last": "Heinecke", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "johannes.heinecke@orange.com" |
| }, |
| { |
| "first": "Alexis", |
| "middle": [], |
| "last": "Nasr", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "alexis.nasr@lis-lab.fr" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
"abstract": "Structured Knowledge has recently emerged as an essential component to support fine-grained Question Answering (QA). In general, QA systems query a Knowledge Base (KB) to detect and extract raw answers as the final prediction. However, since these lack context, language generation can offer a more informative and complete response. In this paper, we propose to combine the power of transfer learning with the advantage of entity placeholders to produce high-quality verbalizations of answers extracted from a structured KB. We claim that such an approach is especially well-suited for answer generation. Our experiments show 44.25%, 3.26% and 29.10% relative gains in BLEU over the state of the art on the VQuAnDa, ParaQA and VANiLLa datasets, respectively. We additionally provide minor hallucination corrections in VANiLLa, affecting 5% of each of the training and testing sets. We witness a median absolute gain of 0.81 SacreBLEU. This strengthens the importance of data quality when using automated evaluation.",
| "pdf_parse": { |
| "paper_id": "2022", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
"text": "Structured Knowledge has recently emerged as an essential component to support fine-grained Question Answering (QA). In general, QA systems query a Knowledge Base (KB) to detect and extract raw answers as the final prediction. However, since these lack context, language generation can offer a more informative and complete response. In this paper, we propose to combine the power of transfer learning with the advantage of entity placeholders to produce high-quality verbalizations of answers extracted from a structured KB. We claim that such an approach is especially well-suited for answer generation. Our experiments show 44.25%, 3.26% and 29.10% relative gains in BLEU over the state of the art on the VQuAnDa, ParaQA and VANiLLa datasets, respectively. We additionally provide minor hallucination corrections in VANiLLa, affecting 5% of each of the training and testing sets. We witness a median absolute gain of 0.81 SacreBLEU. This strengthens the importance of data quality when using automated evaluation.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
"text": "Question Answering (QA) has witnessed tremendous improvements over the past few years, marking a new era for the field. At the core of this significant progress is the huge leap enabled by Pretrained Language Models (PLMs). On several benchmarks, state-of-the-art QA systems perform on par with humans according to reported evaluation metrics. However, despite remarkable accuracy in answer detection and extraction, few works have considered returning a verbalized response to the user. Indeed, most outputs of QA systems over Knowledge Bases (KBs) are utterly bereft of context. To this end, more works have progressively tackled the Answer Verbalization (AV) task, which consists in generating a verbalized form of the answer. As a consequence, the user benefits from a more contextualized response.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "Recently, a few techniques have been proposed to perform surface realisation of a raw answer. Given the lack of paired training data, Akermi et al. (2020) investigated an unsupervised method to obtain answer verbalizations for both English and French. An initial step is to check whether the question marker (e.g. Who, What) can be straightforwardly substituted with the raw answer. For instance, for the question \"Who is the president of the U.S.?\", the raw answer \"Joe Biden\" can directly replace the question marker \"who\", with the question mark substituted with a period. If this is not the case, the question is segmented into chunks based on the syntactic tree parsed with UDPipeFuture (Straka, 2018; Akermi et al., 2021). After defining the raw answer as a new chunk, all possible permutations of the chunks are collected. The most likely permutation is identified with a PLM such as GPT2 (Radford et al., 2019). Finally, Akermi et al. (2020) use BERT (Devlin et al., 2019) to find any possibly missing function words around the raw answer, such as a, an, to, with, in, etc. In spite of its appealing unsupervised mechanism, this method is computationally expensive because of the cost of estimating the likelihood of all (distinct) permutations. Moreover, the likelihood is computed with potentially absent words, which may jeopardize the final ranking of permutations.",
| "cite_spans": [ |
| { |
| "start": 137, |
| "end": 157, |
| "text": "Akermi et al. (2020)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 721, |
| "end": 735, |
| "text": "(Straka, 2018;", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 736, |
| "end": 756, |
| "text": "Akermi et al., 2021)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 926, |
| "end": 948, |
| "text": "(Radford et al., 2019)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 960, |
| "end": 980, |
| "text": "Akermi et al. (2020)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 990, |
| "end": 1011, |
| "text": "(Devlin et al., 2019)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "Subsequently, multiple datasets were released to spur the community to apply end-to-end learning (Kacupaj et al., 2020, 2021a; Biswas et al., 2021). Kacupaj et al. (2021c) introduced VOGUE, an end-to-end model based on a dual encoder-decoder architecture. More precisely, the input question is encoded with a first Transformer encoder (Vaswani et al., 2017). On top of that, a logical form of the question is encoded with an additional Transformer encoder. The logical form is a simplified, query-like representation of the question, inspired by Plepi et al. (2021) and Kacupaj et al. (2021b). Taking our aforementioned example question, its logical form is find(president, U.S). During the decoding phase, VOGUE uses entity placeholders 1 for both the raw answer and the subject entity to generate an abstract version of the response. Following the previous example, the generated verbalization would be \"[ANS] is the president of [ENT]\". In our work, we utilize a comparable mechanism fused with large-scale pretrained models to leverage efficient transfer learning.",
| "cite_spans": [ |
| { |
| "start": 94, |
| "end": 115, |
| "text": "(Kacupaj et al., 2020", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 116, |
| "end": 140, |
| "text": "(Kacupaj et al., , 2021a", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 141, |
| "end": 161, |
| "text": "Biswas et al., 2021)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 164, |
| "end": 186, |
| "text": "Kacupaj et al. (2021c)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 350, |
| "end": 372, |
| "text": "(Vaswani et al., 2017)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 577, |
| "end": 596, |
| "text": "Plepi et al. (2021)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 601, |
| "end": 623, |
| "text": "Kacupaj et al. (2021b)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Specifically, our contribution is twofold:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "\u2022 We propose masked answer verbalization coupled with transfer learning to verbalize answers extracted over KBs. Placeholders are generated instead of the raw answer itself, which allows better generalization and scalability of the model. A post-processing step then replaces the placeholder with the raw answer.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "\u2022 We provide a minor revision of the VANiLLa dataset by correcting entity hallucinations in 5% of the verbalizations. We show evidence that erroneous references may account for a 0.13 absolute median SacreBLEU drop in evaluation, and for up to a 0.81 absolute median SacreBLEU gain when models are trained on corrected training data.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this section, we present our method based on transfer learning and masked generation. We consider an input question", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Our Approach", |
| "sec_num": "2" |
| }, |
| { |
"text": "X = {x_1, x_2, ..., x_{N-1}, x_N}, with x_i the i-th word,",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Our Approach", |
| "sec_num": "2" |
| }, |
| { |
"text": "and its raw answer A = {a_1, a_2, ..., a_{K-1}, a_K}, with a_j the j-th word of the answer 2 . The goal is to generate a verbalized answer",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Our Approach", |
| "sec_num": "2" |
| }, |
| { |
"text": "Y = {y_1, y_2, ..., y_{M-1}, y_M}.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Our Approach", |
| "sec_num": "2" |
| }, |
| { |
| "text": "We model the generation of each token as a conditional \u03b8-parameterized probability distribution. More precisely, we estimate \u03b8 such that", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Our Approach", |
| "sec_num": "2" |
| }, |
| { |
"text": "P_\u03b8(y_i | X, A, y_1, y_2, ..., y_{i-1}) is maximized.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Our Approach", |
| "sec_num": "2" |
| }, |
| { |
"text": "As mentioned in Dai and Le (2015), Howard and Ruder (2018) and Montella et al. (2020), NLG has significantly benefited from transfer learning and very large PLMs (Devlin et al., 2019; Radford et al., 2019). Generalization to unseen data has improved tremendously in recent years due to the use of extremely large training corpora. As a consequence, we consider two recent generative PLMs to leverage transfer learning:",
| "cite_spans": [ |
| { |
| "start": 35, |
| "end": 58, |
| "text": "Howard and Ruder (2018)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 63, |
| "end": 85, |
| "text": "Montella et al. (2020)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 163, |
| "end": 184, |
| "text": "(Devlin et al., 2019;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 185, |
| "end": 206, |
| "text": "Radford et al., 2019)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Our Approach", |
| "sec_num": "2" |
| }, |
| { |
"text": "\u2022 BART (Lewis et al., 2020) is based on the Transformer architecture (Vaswani et al., 2017). More specifically, its encoder and decoder correspond to BERT (Devlin et al., 2019) and GPT (Radford et al., 2019), respectively. BART is pretrained with a denoising objective: the input is corrupted (masking, reordering, etc.) and the model learns to reconstruct the original, i.e. denoised, input.",
| "cite_spans": [ |
| { |
| "start": 7, |
| "end": 27, |
| "text": "(Lewis et al., 2020)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 67, |
| "end": 89, |
| "text": "(Vaswani et al., 2017)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 154, |
| "end": 175, |
| "text": "(Devlin et al., 2019)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 187, |
| "end": 209, |
| "text": "(Radford et al., 2019)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Our Approach", |
| "sec_num": "2" |
| }, |
| { |
"text": "\u2022 T5 (Raffel et al., 2020) is similar to the original Transformer model (Vaswani et al., 2017) with minor changes. For instance, as positional embeddings, a single scalar is added to the logits used to compute attention weights. Also, a simplified layer normalization is used. T5 is trained on multiple tasks at once, such as question answering, language modeling, span extraction, paraphrasing, sentiment analysis, etc. To do so, all text processing tasks are cast into a text-to-text framework, which allows reusing the same model, loss function, optimizer and so on. Both input and target are textual content or are transformed into text; thus, for binary, numerical or categorical data types, T5 maps such formats to strings. Moreover, a specificity of T5 is that the task is indicated within the input by a prefix, e.g. \"translate English to German:\" or \"summarize:\". When finetuning, it is good practice to reuse the prefix matching the downstream task for efficient transfer learning.",
| "cite_spans": [ |
| { |
| "start": 5, |
| "end": 26, |
| "text": "(Raffel et al., 2020)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 69, |
| "end": 91, |
| "text": "(Vaswani et al., 2017)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Our Approach", |
| "sec_num": "2" |
| }, |
| { |
"text": "In order to verbalize the answer, a first step consists in encoding X with the encoder of the T5 or BART model. The decoder then takes the learned representations to generate Y. In our case, a placeholder is generated in Y, which is replaced by the raw answer A as explained in the next section.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Our Approach", |
| "sec_num": "2" |
| }, |
| { |
"text": "As humans, our ability to generate a response is independent of our own knowledge. For instance, given the question \"What is the capital of Ghana?\", even if the answer, i.e. \"Accra\", is not known, one is still able to generate the response \"The capital of Ghana is [ANSWER]\", where [ANSWER] stands for a placeholder for the correct raw answer. This paradigm can therefore carry over to modeling any question answering system. It is a two-stage process. First, a template of the verbalized answer is generated. Second, we replace the mask with the corresponding raw answer, i.e. a single entity or several entities, for the input question. We are aware that this approach works especially well in English, but it would require adjustments for other languages such as French or German because of gender agreement. However, several benefits can be pointed out. It alleviates the training of the model, since the model principally learns to generate templates. In addition, it avoids misspelling entities during generation: it has been shown that unseen entities are not handled properly by generative systems (Ferreira et al., 2020), which is even more critical when no copy mechanism is applied. On top of that, using placeholders reduces the complexity of the model by shrinking its vocabulary dimension (last layer). This is also significant for training time, since the softmax layer usually applied over the vocabulary is known to be time-consuming.",
| "cite_spans": [ |
| { |
| "start": 1118, |
| "end": 1141, |
| "text": "(Ferreira et al., 2020)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Masked Answer Verbalization", |
| "sec_num": "2.1" |
| }, |
| { |
"text": "More and more efforts have been made to construct and annotate new QA datasets. However, most proposed corpora do not include a well-formed and informative response. In fact, no verbalization of the retrieved answer is usually given. Only the raw answer acts as the final prediction, which puts a curb on possible downstream generation tasks. To this end, we explore newly released datasets equipped with a natural language form of the response:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Datasets", |
| "sec_num": "3" |
| }, |
| { |
"text": "\u2022 VQuAnDa (Kacupaj et al., 2020) is based on the Large-scale Complex Question Answering Dataset (LC-QuAD). VQuAnDa provides a set of 5000 complex questions with their SPARQL queries and their corresponding answer verbalizations. A semi-automatic process is used to derive the answer verbalization of each question. The available question templates in the LC-QuAD dataset are paraphrased using strict rules (use of active voice, synonyms, order rearranging, etc.) to get natural response templates. A second step then extracts raw answers from DBpedia using the SPARQL queries. When the number of retrieved answers is greater than 15, the list of answers is replaced with a single token [answer] to avoid long sequences. Lastly, entities and predicates are filled in accordingly to generate the final verbalization. To ensure correctness, the resulting verbalizations are checked manually according to (Kacupaj et al., 2021a). In total, there are 4000 training and 1000 testing pairs.",
| "cite_spans": [ |
| { |
| "start": 10, |
| "end": 32, |
| "text": "(Kacupaj et al., 2020)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 920, |
| "end": 943, |
| "text": "(Kacupaj et al., 2021a)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Datasets", |
| "sec_num": "3" |
| }, |
| { |
"text": "\u2022 ParaQA (Kacupaj et al., 2021a) extends VQuAnDa by proposing multiple verbalizations for each question. This paraphrasing was done using different techniques such as back-translation. At least two verbalizations per question are given, and up to 8 unique paraphrases are provided in some cases. Thus, the training set contains several pairs for the same question; we record a total of 12,637 training pairs. Note that the training and testing splits of ParaQA differ from those of VQuAnDa.",
| "cite_spans": [ |
| { |
| "start": 9, |
| "end": 32, |
| "text": "(Kacupaj et al., 2021a)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Datasets", |
| "sec_num": "3" |
| }, |
| { |
"text": "\u2022 VANiLLa (Biswas et al., 2021) is a compelling dataset due to its size. Covering more than 300 relations, it was built using a semi-automatic framework. First, direct questions with a single entity as the answer were extracted from the Complex Sequential Question Answering (CSQA) (Saha et al., 2018) and SimpleQuestions 3 datasets. After clustering similar questions based on 4-grams, a template-based verbalization of a single instance of each cluster was manually annotated via Amazon Mechanical Turk (AMT). Finally, a post-processing step uses the resulting templates to infer the verbalizations of the other similar questions in the corresponding clusters. These datasets are therefore suitable for the response generation task. Nonetheless, because of the semi-automatic framework, these corpora are prone to errors, as we will show in Section 4.4.",
| "cite_spans": [ |
| { |
| "start": 10, |
| "end": 30, |
| "text": "(Biswas et al., 2021", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 276, |
| "end": 295, |
| "text": "(Saha et al., 2018)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Datasets", |
| "sec_num": "3" |
| }, |
| { |
"text": "In our experiments, we provide empirical results on the datasets introduced in Section 3. In Section 4.2, we compare our transfer learning approach against the existing literature, using T5 and BART with a masking strategy. Then, we explore the advantage of placeholders in Section 4.3. Our inputs and outputs with and without our masking approach are depicted in Table 2 .",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 369, |
| "end": 376, |
| "text": "Table 2", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4" |
| }, |
| { |
"text": "We use the pretrained BART and T5 models from HuggingFace. For both PLMs, we use the base models, i.e. the facebook/bart-base and t5-base configurations. The input questions and target responses are all lower-cased. Since no validation sets are provided with the official splits, we arbitrarily set our hyperparameters for all experiments and do not tune them. We finetune the models for 10 epochs with a batch size of 32, using the cross-entropy loss and the Adam optimizer. The initial learning rates are set to 1.0 \u00d7 10^-5 and 1.0 \u00d7 10^-4 for BART and T5, respectively. 4 For T5, we prepend each question with the prefix \"question:\", as it has already been used during T5 pretraining for question answering. During generation, we use greedy decoding (no beam search or sampling is applied). For reproducibility, our code is available at https://github.com/Anonymous1911272/answerverbalization.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training Settings", |
| "sec_num": "4.1" |
| }, |
| { |
"text": "Evaluation of natural language remains a critical issue since it is difficult to automate, and human annotation is usually costly and time-consuming. For a fair comparison, we follow exactly the same evaluation protocol and metrics as Kacupaj et al. (2021c), using BLEU (on 4-grams) (Papineni et al., 2002) and METEOR (Banerjee and Lavie, 2005) 5 . Since our predicted verbalizations contain placeholders, we replace them with the raw answers included in the dataset. Therefore, our evaluation does not differ from that of unmasked approaches. Results on the VQuAnDa, ParaQA and VANiLLa datasets are depicted in Table 3 .",
| "cite_spans": [ |
| { |
| "start": 240, |
| "end": 262, |
| "text": "Kacupaj et al. (2021c)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 289, |
| "end": 312, |
| "text": "(Papineni et al., 2002)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 324, |
| "end": 350, |
| "text": "(Banerjee and Lavie, 2005)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 351, |
| "end": 352, |
| "text": "5", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 609, |
| "end": 616, |
| "text": "Table 3", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4.2" |
| }, |
| { |
"text": "We can see that transfer learning methods systematically show the best (bold) or second-best (underlined) performances on all datasets. This is not surprising, as large-scale pretraining has shown massive improvements over standard approaches. BART exhibits much better performance than T5 on VQuAnDa and ParaQA; on the contrary, T5 is slightly better on VANiLLa. We conjecture that BART is well-fitted to map a question to its answer verbalization. Question and response usually share similar words, but in different orders, and a few words or prepositions may be missing to go from one to the other. This exactly corresponds to the denoising objective on which BART has been pretrained: the input question can be viewed as a noisy version of the answer verbalization, which BART attempts to reconstruct. Overall, pretrained models result on average in 44.25%, 3.26% and 29.10% relative gains in BLEU over VOGUE on VQuAnDa, ParaQA and VANiLLa, respectively. VOGUE nonetheless shows interesting results despite its size and the absence of pretraining. This is partly explained by the logical form, which boils the question down to a simple abstraction. Furthermore, we observe that the unsupervised strategy of Akermi et al. (2020) struggles to compete even with a basic RNN. Their method is sensitive to the syntax and length of the input question: the longer the question, the worse the generation. Since the verbalizations in VQuAnDa and ParaQA are 17 tokens long on average, this might be the reason for the low performance on these datasets. Moreover, unnatural questions, as included in VANiLLa, are not handled properly because PLMs are used to gauge the likelihood of permutations.",
| "cite_spans": [ |
| { |
| "start": 1217, |
| "end": 1237, |
| "text": "Akermi et al. (2020)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "In the following, our interest lies in measuring the real gain of using placeholders. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "In this section, we investigate the impact of using a masking mechanism. We conduct a comparative study between masked and non-masked generation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "To Mask or not to Mask?", |
| "sec_num": "4.3" |
| }, |
| { |
"text": "To do so, we finetune BART and T5 with the same hyperparameters as in the previous experiments. For non-masked generation, the input question is concatenated with its raw answer. To differentiate question and answer, we make use of the separator token [SEP] . With this setting, the models should learn to combine the input question and input answer to form a grammatically correct verbalization. We adopt additional evaluation metrics, i.e. SacreBLEU (Post, 2018) , ChrF++ (Popovi\u0107, 2015) and TER (Snover et al., 2006) , yielding a more fine-grained analysis. The experimental results for VQuAnDa, ParaQA and VANiLLa are shown in Table 4 , 5 and 6.",
| "cite_spans": [ |
| { |
| "start": 245, |
| "end": 250, |
| "text": "[SEP]", |
| "ref_id": null |
| }, |
| { |
| "start": 448, |
| "end": 460, |
| "text": "(Post, 2018)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 470, |
| "end": 485, |
| "text": "(Popovi\u0107, 2015)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 494, |
| "end": 515, |
| "text": "(Snover et al., 2006)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 623, |
| "end": 630, |
| "text": "Table 4", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "To Mask or not to Mask?", |
| "sec_num": "4.3" |
| }, |
| { |
"text": "On the three datasets, we observe that using a placeholder leads to a systematic gain on all reported metrics. More importantly, the gap can be considerable when masking the raw answer: for T5 and BART, we note 23.01%, 13.52%, 4.34% and 21.56%, 13.09%, 3.25% absolute gains in SacreBLEU on VQuAnDa, ParaQA and VANiLLa, respectively. Thus, generating a more abstract verbalization eases learning. Next, we inspect the effect of the training set size by finetuning BART and T5 on different (random) proportions of the training data. We report SacreBLEU scores for each portion of training data in Fig. 1 . At first glance, the gap between masked and non-masked generation remains very distinctive even with less training data. For T5 and BART, we note about 23.09%, 13.81%, 3.44% and 21.65%, 13.00%, 3.40% absolute gains on average in SacreBLEU on VQuAnDa, ParaQA and VANiLLa while varying the amount of data fed to the models. Both masked and unmasked strategies keep improving as new samples are added: contrary to expectation, despite the use of placeholders, masked generation keeps benefiting significantly from additional data. For BART on VQuAnDa and ParaQA, SacreBLEU reaches a plateau with only 40% of the training data in both configurations. On VANiLLa, models show much more variance, but a positive trend remains overall.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 626, |
| "end": 632, |
| "text": "Fig. 1", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "To Mask or not to Mask?", |
| "sec_num": "4.3" |
| }, |
| { |
"text": "Semi-automatic dataset construction is a convenient yet effective technique to generate sizeable corpora: only a few handcrafted annotations are needed as an initial seed. However, the resulting samples are highly prone to errors or unnatural phrasing. This remains a major drawback in the NLG community, where the low quality or diversity of the available data jeopardizes comparison between approaches. Within the VANiLLa dataset, we particularly reveal some verbalizations where the subject entity of the question differs from the subject entity of the reference. For example, given the question \"Which sex does Doris Miller belong to?\", the assigned reference is \"Sterjo is a male\", with \"Sterjo\" a hallucinated entity that should be corrected to \"Doris Miller\". Such hallucinated entities in gold references especially occur with specific and redundant entities (e.g. \"Sterjo\"). We assume the semi-automatic pipeline to be the culprit of such mismatches. Fortunately, these errors can be corrected automatically, since the subject entity of each question is explicitly provided in the original dataset. We identified 12 repeated hallucinated entities over the whole training set of VANiLLa and interchanged the erroneous entities with the correct ones. This accounts for 5% of each of the training and testing sets. The quality and diversity of references has been shown to be at the core of variations in automated metric outcomes (Freitag et al., 2020) . Errors in references directly jeopardize the resulting model performances: good predictions might be rated as bad quality while being correct. Furthermore, automatic metrics are critically sensitive to any change in the words chosen in the target verbalization. We hence investigate the shift in reported results with corrected references. Precisely, we finetune T5 and BART with the same hyperparameters as mentioned in Section 4.1. We train and evaluate models on the original VANiLLa dataset (\"Raw\") and the corrected version (\"Corrected\"). The SacreBLEU scores are given in Table 7 .",
| "cite_spans": [ |
| { |
| "start": 1421, |
| "end": 1443, |
| "text": "(Freitag et al., 2020)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 2032, |
| "end": 2040, |
| "text": "Table 7", |
| "ref_id": "TABREF8" |
| } |
| ], |
| "eq_spans": [], |
| "section": "References are not Innocent", |
| "sec_num": "4.4" |
| }, |
| { |
"text": "With only 5% of corrections in both the training and testing sets, we record small improvements in SacreBLEU. Although the increases are relatively small, these results clearly indicate that the quality of the references is crucial to precisely assess model performances. More and more works are competing to improve on those metrics, and several contributions in generation have considered slight improvements as evidence of the predominance of their approaches over previous methods. However, we show in Table 7 that evaluating models on a corrected version leads to different results that are not systematically better. In contrast, when trained on higher-quality samples, results on corrected testing examples exhibit a larger gain, as seen in Table 7 . The absolute median gain reaches 0.81 SacreBLEU with a cleaner training set, against barely 0.13 with the standard training set. As a consequence, it is hard to compare models and to draw conclusions on noisy datasets. It is thus important to raise awareness about automatic dataset construction.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 738, |
| "end": 746, |
| "text": "Table 7", |
| "ref_id": "TABREF8" |
| } |
| ], |
| "eq_spans": [], |
| "section": "References are not Innocent", |
| "sec_num": "4.4" |
| }, |
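The sensitivity of n-gram metrics to a single wrong entity in the reference can be made concrete with a toy computation. This is not full SacreBLEU, just a plain n-gram precision on whitespace tokens, but it shows how a correct prediction is penalized by a hallucinated reference and rewarded by the corrected one.

```python
# Toy illustration (not SacreBLEU): n-gram precision of the same correct
# prediction against a hallucinated vs. a corrected reference.
from collections import Counter

def ngram_precision(hyp: str, ref: str, n: int) -> float:
    """Fraction of hypothesis n-grams that also appear in the reference."""
    hyp_ngrams = Counter(zip(*[hyp.split()[i:] for i in range(n)]))
    ref_ngrams = Counter(zip(*[ref.split()[i:] for i in range(n)]))
    overlap = sum((hyp_ngrams & ref_ngrams).values())  # clipped counts
    total = sum(hyp_ngrams.values())
    return overlap / total if total else 0.0

prediction = "Doris Miller is a male"
raw_ref = "Sterjo is a male"          # hallucinated subject entity
fixed_ref = "Doris Miller is a male"  # corrected reference

# The same (correct) prediction scores lower against the noisy reference:
# unigram precision 3/5 = 0.6 vs. 5/5 = 1.0.
assert ngram_precision(prediction, raw_ref, 1) < ngram_precision(prediction, fixed_ref, 1)
```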
| { |
| "text": "We proposed to verbalize the answers typically returned by a question-answering system from a structured knowledge base, combining the advantages of transfer learning and masked generation. We compared our strategies with and without masks using T5 and BART, and showed that using massively pretrained models with answer placeholders eases learning and leads to unprecedented results on the VQuAnDa, ParaQA and VANiLLa datasets. Furthermore, we revealed multiple redundant entity hallucinations in the VANiLLa dataset. By automatically correcting the 5% of samples they affect, we observed shifts in performance. This further demonstrates the limitations of automatic metrics when references are not reliable.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "We use the terms placeholder and mask interchangeably. 2 The raw answer can consist of multiple words.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
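The masked-generation setup (see the with/without-mask example in the figure) implies a simple evaluation-time step: the generated verbalization contains an [ANSWER] placeholder that is replaced with the raw answer, which may span multiple words. A minimal sketch, assuming a literal "[ANSWER]" token as in the paper's examples:

```python
# Sketch of the evaluation-time substitution for masked generation:
# the model outputs a verbalization with an [ANSWER] placeholder,
# which is filled with the raw (possibly multi-word) answer.
PLACEHOLDER = "[ANSWER]"

def fill_answer(verbalization: str, raw_answer: str) -> str:
    """Replace the answer placeholder with the raw answer string."""
    return verbalization.replace(PLACEHOLDER, raw_answer)

generated = "The president of the U.S. is [ANSWER]."
print(fill_answer(generated, "J. Biden"))
# The president of the U.S. is J. Biden.
```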
| { |
| "text": "Available at https://github.com/davidgolub/SimpleQA/tree/master/datasets/SimpleQuestions", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We witness divergence when the learning rate is set to 1.0 \u00d7 10\u207b\u2074 for BART.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Kacupaj et al. (2021c) average the BLEU and METEOR scores of each verbalization.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Transformer based natural language generation for question-answering", |
| "authors": [ |
| { |
| "first": "Imen", |
| "middle": [], |
| "last": "Akermi", |
| "suffix": "" |
| }, |
| { |
| "first": "Johannes", |
| "middle": [], |
| "last": "Heinecke", |
| "suffix": "" |
| }, |
| { |
| "first": "Fr\u00e9d\u00e9ric", |
| "middle": [], |
| "last": "Herledan", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the 3rd International Workshop on Natural Language Generation from the Semantic Web (WebNLG+)", |
| "volume": "", |
| "issue": "", |
| "pages": "349--359", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Imen Akermi, Johannes Heinecke, and Fr\u00e9d\u00e9ric Herledan. 2020. Transformer based natural language generation for question-answering. In Proceedings of the 3rd International Workshop on Natural Language Generation from the Semantic Web (WebNLG+), page 349-359, Dublin, Ireland (Virtual). Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "G\u00e9n\u00e9ration automatique de texte en langage naturel pour les syst\u00e8mes de questionsr\u00e9ponses", |
| "authors": [ |
| { |
| "first": "Imen", |
| "middle": [], |
| "last": "Akermi", |
| "suffix": "" |
| }, |
| { |
| "first": "Johannes", |
| "middle": [], |
| "last": "Heinecke", |
| "suffix": "" |
| }, |
| { |
| "first": "Fr\u00e9d\u00e9ric", |
| "middle": [], |
| "last": "Herledan", |
| "suffix": "" |
| } |
| ], |
| "year": 2021, |
| "venue": "Traitement Automatique des Langues", |
| "volume": "62", |
| "issue": "1", |
| "pages": "13--37", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Imen Akermi, Johannes Heinecke, and Fr\u00e9d\u00e9ric Herledan. 2021. G\u00e9n\u00e9ration automatique de texte en langage naturel pour les syst\u00e8mes de questions- r\u00e9ponses. Traitement Automatique des Langues, 62(1):13-37.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "METEOR: An automatic metric for MT evaluation with improved correlation with human judgments", |
| "authors": [ |
| { |
| "first": "Satanjeev", |
| "middle": [], |
| "last": "Banerjee", |
| "suffix": "" |
| }, |
| { |
| "first": "Alon", |
| "middle": [], |
| "last": "Lavie", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization", |
| "volume": "", |
| "issue": "", |
| "pages": "65--72", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with im- proved correlation with human judgments. In Pro- ceedings of the ACL Workshop on Intrinsic and Ex- trinsic Evaluation Measures for Machine Transla- tion and/or Summarization, pages 65-72, Ann Arbor, Michigan. Association for Computational Linguis- tics.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Vanilla: Verbalized answers in natural language at large scale", |
| "authors": [ |
| { |
| "first": "Debanjali", |
| "middle": [], |
| "last": "Biswas", |
| "suffix": "" |
| }, |
| { |
| "first": "Mohnish", |
| "middle": [], |
| "last": "Dubey", |
| "suffix": "" |
| }, |
| { |
| "first": "Md Rashad Al", |
| "middle": [], |
| "last": "Hasan Rony", |
| "suffix": "" |
| }, |
| { |
| "first": "Jens", |
| "middle": [], |
| "last": "Lehmann", |
| "suffix": "" |
| } |
| ], |
| "year": 2021, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:2105.11407" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Debanjali Biswas, Mohnish Dubey, Md Rashad Al Hasan Rony, and Jens Lehmann. 2021. Vanilla: Verbalized answers in natural language at large scale. arXiv:2105.11407.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Semi-supervised sequence learning", |
| "authors": [ |
| { |
| "first": "Andrew", |
| "middle": [ |
| "M" |
| ], |
| "last": "Dai", |
| "suffix": "" |
| }, |
| { |
| "first": "Quoc", |
| "middle": [ |
| "V" |
| ], |
| "last": "Le", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Advances in Neural Information Processing Systems", |
| "volume": "28", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andrew M Dai and Quoc V Le. 2015. Semi-supervised sequence learning. In Advances in Neural Informa- tion Processing Systems, volume 28. Curran Asso- ciates, Inc.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", |
| "authors": [ |
| { |
| "first": "Jacob", |
| "middle": [], |
| "last": "Devlin", |
| "suffix": "" |
| }, |
| { |
| "first": "Ming-Wei", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenton", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Kristina", |
| "middle": [], |
| "last": "Toutanova", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "1", |
| "issue": "", |
| "pages": "4171--4186", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/N19-1423" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Un- derstanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "The 2020 Bilingual, Bi-Directional WebNLG+ Shared Task Overview and Evaluation Results", |
| "authors": [ |
| { |
| "first": "Thiago", |
| "middle": [ |
| "Castro" |
| ], |
| "last": "Ferreira", |
| "suffix": "" |
| }, |
| { |
| "first": "Claire", |
| "middle": [], |
| "last": "Gardent", |
| "suffix": "" |
| }, |
| { |
| "first": "Nikolai", |
| "middle": [], |
| "last": "Ilinykh", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Van Der Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Simon", |
| "middle": [], |
| "last": "Mille", |
| "suffix": "" |
| }, |
| { |
| "first": "Diego", |
| "middle": [], |
| "last": "Moussallem", |
| "suffix": "" |
| }, |
| { |
| "first": "Anastasia", |
| "middle": [], |
| "last": "Shimorina", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the 3rd International Workshop on Natural Language Generation from the Semantic Web", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thiago Castro Ferreira, Claire Gardent, Nikolai Ilinykh, Chris Van Der Lee, Simon Mille, Diego Moussallem, and Anastasia Shimorina. 2020. The 2020 Bilingual, Bi-Directional WebNLG+ Shared Task Overview and Evaluation Results (WebNLG+ 2020). In Pro- ceedings of the 3rd International Workshop on Nat- ural Language Generation from the Semantic Web (WebNLG+), Dublin/Virtual, Ireland.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "BLEU might be guilty but references are not innocent", |
| "authors": [ |
| { |
| "first": "Markus", |
| "middle": [], |
| "last": "Freitag", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Grangier", |
| "suffix": "" |
| }, |
| { |
| "first": "Isaac", |
| "middle": [], |
| "last": "Caswell", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "61--71", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/2020.emnlp-main.5" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Markus Freitag, David Grangier, and Isaac Caswell. 2020. BLEU might be guilty but references are not innocent. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Process- ing (EMNLP), pages 61-71, Online. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Universal language model fine-tuning for text classification", |
| "authors": [ |
| { |
| "first": "Jeremy", |
| "middle": [], |
| "last": "Howard", |
| "suffix": "" |
| }, |
| { |
| "first": "Sebastian", |
| "middle": [], |
| "last": "Ruder", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "328--339", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/P18-1031" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 328-339, Melbourne, Australia. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Paraqa: A question answering dataset with paraphrase responses for single-turn conversation", |
| "authors": [ |
| { |
| "first": "Endri", |
| "middle": [], |
| "last": "Kacupaj", |
| "suffix": "" |
| }, |
| { |
| "first": "Barshana", |
| "middle": [], |
| "last": "Banerjee", |
| "suffix": "" |
| }, |
| { |
| "first": "Kuldeep", |
| "middle": [], |
| "last": "Singh", |
| "suffix": "" |
| }, |
| { |
| "first": "Jens", |
| "middle": [], |
| "last": "Lehmann", |
| "suffix": "" |
| } |
| ], |
| "year": 2021, |
| "venue": "ESWC 2021", |
| "volume": "", |
| "issue": "", |
| "pages": "598--613", |
| "other_ids": { |
| "DOI": [ |
| "10.1007/978-3-030-77385-4_36" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Endri Kacupaj, Barshana Banerjee, Kuldeep Singh, and Jens Lehmann. 2021a. Paraqa: A question answer- ing dataset with paraphrase responses for single-turn conversation. In ESWC 2021, pages 598-613.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Conversational question answering over knowledge graphs with transformer and graph attention networks", |
| "authors": [ |
| { |
| "first": "Endri", |
| "middle": [], |
| "last": "Kacupaj", |
| "suffix": "" |
| }, |
| { |
| "first": "Joan", |
| "middle": [], |
| "last": "Plepi", |
| "suffix": "" |
| }, |
| { |
| "first": "Kuldeep", |
| "middle": [], |
| "last": "Singh", |
| "suffix": "" |
| }, |
| { |
| "first": "Harsh", |
| "middle": [], |
| "last": "Thakkar", |
| "suffix": "" |
| }, |
| { |
| "first": "Jens", |
| "middle": [], |
| "last": "Lehmann", |
| "suffix": "" |
| }, |
| { |
| "first": "Maria", |
| "middle": [], |
| "last": "Maleshkova", |
| "suffix": "" |
| } |
| ], |
| "year": 2021, |
| "venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume", |
| "volume": "", |
| "issue": "", |
| "pages": "850--862", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Endri Kacupaj, Joan Plepi, Kuldeep Singh, Harsh Thakkar, Jens Lehmann, and Maria Maleshkova. 2021b. Conversational question answering over knowledge graphs with transformer and graph atten- tion networks. In Proceedings of the 16th Conference of the European Chapter of the Association for Com- putational Linguistics: Main Volume, pages 850-862, Online. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Vogue: Answer verbalization through multi-task learning", |
| "authors": [ |
| { |
| "first": "Endri", |
| "middle": [], |
| "last": "Kacupaj", |
| "suffix": "" |
| }, |
| { |
| "first": "Shyamnath", |
| "middle": [], |
| "last": "Premnadh", |
| "suffix": "" |
| }, |
| { |
| "first": "Kuldeep", |
| "middle": [], |
| "last": "Singh", |
| "suffix": "" |
| }, |
| { |
| "first": "Jens", |
| "middle": [], |
| "last": "Lehmann", |
| "suffix": "" |
| }, |
| { |
| "first": "Maria", |
| "middle": [], |
| "last": "Maleshkova", |
| "suffix": "" |
| } |
| ], |
| "year": 2021, |
| "venue": "Machine Learning and Knowledge Discovery in Databases. Research Track", |
| "volume": "", |
| "issue": "", |
| "pages": "563--579", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Endri Kacupaj, Shyamnath Premnadh, Kuldeep Singh, Jens Lehmann, and Maria Maleshkova. 2021c. Vogue: Answer verbalization through multi-task learning. In Machine Learning and Knowledge Dis- covery in Databases. Research Track, pages 563-579, Cham. Springer International Publishing.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Vquanda: Verbalization question answering dataset", |
| "authors": [ |
| { |
| "first": "Endri", |
| "middle": [], |
| "last": "Kacupaj", |
| "suffix": "" |
| }, |
| { |
| "first": "Hamid", |
| "middle": [], |
| "last": "Zafar", |
| "suffix": "" |
| }, |
| { |
| "first": "Jens", |
| "middle": [], |
| "last": "Lehmann", |
| "suffix": "" |
| }, |
| { |
| "first": "Maria", |
| "middle": [], |
| "last": "Maleshkova", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "The Semantic Web", |
| "volume": "", |
| "issue": "", |
| "pages": "531--547", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Endri Kacupaj, Hamid Zafar, Jens Lehmann, and Maria Maleshkova. 2020. Vquanda: Verbalization question answering dataset. In The Semantic Web, pages 531- 547, Cham. Springer International Publishing.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension", |
| "authors": [ |
| { |
| "first": "Mike", |
| "middle": [], |
| "last": "Lewis", |
| "suffix": "" |
| }, |
| { |
| "first": "Yinhan", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Naman", |
| "middle": [], |
| "last": "Goyal", |
| "suffix": "" |
| }, |
| { |
| "first": "Marjan", |
| "middle": [], |
| "last": "Ghazvininejad", |
| "suffix": "" |
| }, |
| { |
| "first": "Abdelrahman", |
| "middle": [], |
| "last": "Mohamed", |
| "suffix": "" |
| }, |
| { |
| "first": "Omer", |
| "middle": [], |
| "last": "Levy", |
| "suffix": "" |
| }, |
| { |
| "first": "Veselin", |
| "middle": [], |
| "last": "Stoyanov", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "7871--7880", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/2020.acl-main.703" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and com- prehension. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computa- tional Linguistics.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Denoising pre-training and data augmentation strategies for enhanced RDF verbalization with transformers", |
| "authors": [ |
| { |
| "first": "Sebastien", |
| "middle": [], |
| "last": "Montella", |
| "suffix": "" |
| }, |
| { |
| "first": "Betty", |
| "middle": [], |
| "last": "Fabre", |
| "suffix": "" |
| }, |
| { |
| "first": "Tanguy", |
| "middle": [], |
| "last": "Urvoy", |
| "suffix": "" |
| }, |
| { |
| "first": "Johannes", |
| "middle": [], |
| "last": "Heinecke", |
| "suffix": "" |
| }, |
| { |
| "first": "Lina", |
| "middle": [], |
| "last": "Rojas-Barahona", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the 3rd International Workshop on Natural Language Generation from the Semantic Web (WebNLG+)", |
| "volume": "", |
| "issue": "", |
| "pages": "89--99", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sebastien Montella, Betty Fabre, Tanguy Urvoy, Jo- hannes Heinecke, and Lina Rojas-Barahona. 2020. Denoising pre-training and data augmentation strate- gies for enhanced RDF verbalization with transform- ers. In Proceedings of the 3rd International Work- shop on Natural Language Generation from the Se- mantic Web (WebNLG+), pages 89-99, Dublin, Ire- land (Virtual). Association for Computational Lin- guistics.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Bleu: a method for automatic evaluation of machine translation", |
| "authors": [ |
| { |
| "first": "Kishore", |
| "middle": [], |
| "last": "Papineni", |
| "suffix": "" |
| }, |
| { |
| "first": "Salim", |
| "middle": [], |
| "last": "Roukos", |
| "suffix": "" |
| }, |
| { |
| "first": "Todd", |
| "middle": [], |
| "last": "Ward", |
| "suffix": "" |
| }, |
| { |
| "first": "Wei-Jing", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "311--318", |
| "other_ids": { |
| "DOI": [ |
| "10.3115/1073083.1073135" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic evalu- ation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Compu- tational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Context transformer with stacked pointer networks for conversational question answering over knowledge graphs", |
| "authors": [ |
| { |
| "first": "Joan", |
| "middle": [], |
| "last": "Plepi", |
| "suffix": "" |
| }, |
| { |
| "first": "Endri", |
| "middle": [], |
| "last": "Kacupaj", |
| "suffix": "" |
| }, |
| { |
| "first": "Kuldeep", |
| "middle": [], |
| "last": "Singh", |
| "suffix": "" |
| }, |
| { |
| "first": "Harsh", |
| "middle": [], |
| "last": "Thakkar", |
| "suffix": "" |
| }, |
| { |
| "first": "Jens", |
| "middle": [], |
| "last": "Lehmann", |
| "suffix": "" |
| } |
| ], |
| "year": 2021, |
| "venue": "The Semantic Web", |
| "volume": "", |
| "issue": "", |
| "pages": "356--371", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Joan Plepi, Endri Kacupaj, Kuldeep Singh, Harsh Thakkar, and Jens Lehmann. 2021. Context trans- former with stacked pointer networks for conversa- tional question answering over knowledge graphs. In The Semantic Web, pages 356-371, Cham. Springer International Publishing.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "chrF: character n-gram F-score for automatic MT evaluation", |
| "authors": [ |
| { |
| "first": "Maja", |
| "middle": [], |
| "last": "Popovi\u0107", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the Tenth Workshop on Statistical Machine Translation", |
| "volume": "", |
| "issue": "", |
| "pages": "392--395", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/W15-3049" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Maja Popovi\u0107. 2015. chrF: character n-gram F-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 392-395, Lisbon, Portugal. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "A call for clarity in reporting BLEU scores", |
| "authors": [ |
| { |
| "first": "Matt", |
| "middle": [], |
| "last": "Post", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the Third Conference on Machine Translation: Research Papers", |
| "volume": "", |
| "issue": "", |
| "pages": "186--191", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/W18-6319" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186- 191, Brussels, Belgium. Association for Computa- tional Linguistics.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Language Models are Unsupervised Multitask Learners", |
| "authors": [ |
| { |
| "first": "Alec", |
| "middle": [], |
| "last": "Radford", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Rewon", |
| "middle": [], |
| "last": "Child", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Luan", |
| "suffix": "" |
| }, |
| { |
| "first": "Dario", |
| "middle": [], |
| "last": "Amodei", |
| "suffix": "" |
| }, |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Lan- guage Models are Unsupervised Multitask Learners. https://openai.com/blog/better-language-models/.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", |
| "authors": [ |
| { |
| "first": "Colin", |
| "middle": [], |
| "last": "Raffel", |
| "suffix": "" |
| }, |
| { |
| "first": "Noam", |
| "middle": [], |
| "last": "Shazeer", |
| "suffix": "" |
| }, |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Roberts", |
| "suffix": "" |
| }, |
| { |
| "first": "Katherine", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Sharan", |
| "middle": [], |
| "last": "Narang", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Matena", |
| "suffix": "" |
| }, |
| { |
| "first": "Yanqi", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| }, |
| { |
| "first": "Wei", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Peter", |
| "middle": [ |
| "J" |
| ], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Journal of Machine Learning Research", |
| "volume": "21", |
| "issue": "140", |
| "pages": "1--67", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Kather- ine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Complex sequential question answering: Towards learning to converse over linked question answer pairs with a knowledge graph", |
| "authors": [ |
| { |
| "first": "Amrita", |
| "middle": [], |
| "last": "Saha", |
| "suffix": "" |
| }, |
| { |
| "first": "Vardaan", |
| "middle": [], |
| "last": "Pahuja", |
| "suffix": "" |
| }, |
| { |
| "first": "Mitesh", |
| "middle": [ |
| "M." |
| ], |
| "last": "Khapra", |
| "suffix": "" |
| }, |
| { |
| "first": "Karthik", |
| "middle": [], |
| "last": "Sankaranarayanan", |
| "suffix": "" |
| }, |
| { |
| "first": "Sarath", |
| "middle": [], |
| "last": "Chandar", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1801.10314" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Amrita Saha, Vardaan Pahuja, Mitesh M. Khapra, Karthik Sankaranarayanan, and Sarath Chandar. 2018. Complex sequential question answer- ing: Towards learning to converse over linked question answer pairs with a knowledge graph. arXiv:1801.10314.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "A study of translation edit rate with targeted human annotation", |
| "authors": [ |
| { |
| "first": "Matthew", |
| "middle": [], |
| "last": "Snover", |
| "suffix": "" |
| }, |
| { |
| "first": "Bonnie", |
| "middle": [], |
| "last": "Dorr", |
| "suffix": "" |
| }, |
| { |
| "first": "Rich", |
| "middle": [], |
| "last": "Schwartz", |
| "suffix": "" |
| }, |
| { |
| "first": "Linnea", |
| "middle": [], |
| "last": "Micciulla", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Makhoul", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the 7th Conference of the Association for Machine Translation in the Americas: Technical Papers", |
| "volume": "", |
| "issue": "", |
| "pages": "223--231", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Matthew Snover, Bonnie Dorr, Rich Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of trans- lation edit rate with targeted human annotation. In Proceedings of the 7th Conference of the Association for Machine Translation in the Americas: Technical Papers, pages 223-231, Cambridge, Massachusetts, USA. Association for Machine Translation in the Americas.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "UDPipe 2.0 Prototype at CoNLL 2018 UD Shared Task", |
| "authors": [ |
| { |
| "first": "Milan", |
| "middle": [], |
| "last": "Straka", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies", |
| "volume": "", |
| "issue": "", |
| "pages": "197--207", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Milan Straka. 2018. UDPipe 2.0 Prototype at CoNLL 2018 UD Shared Task. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 197-207, Brussels. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Attention is all you need", |
| "authors": [ |
| { |
| "first": "Ashish", |
| "middle": [], |
| "last": "Vaswani", |
| "suffix": "" |
| }, |
| { |
| "first": "Noam", |
| "middle": [], |
| "last": "Shazeer", |
| "suffix": "" |
| }, |
| { |
| "first": "Niki", |
| "middle": [], |
| "last": "Parmar", |
| "suffix": "" |
| }, |
| { |
| "first": "Jakob", |
| "middle": [], |
| "last": "Uszkoreit", |
| "suffix": "" |
| }, |
| { |
| "first": "Llion", |
| "middle": [], |
| "last": "Jones", |
| "suffix": "" |
| }, |
| { |
| "first": "Aidan", |
| "middle": [ |
| "N" |
| ], |
| "last": "Gomez", |
| "suffix": "" |
| }, |
| { |
| "first": "\u0141ukasz", |
| "middle": [], |
| "last": "Kaiser", |
| "suffix": "" |
| }, |
| { |
| "first": "Illia", |
| "middle": [], |
| "last": "Polosukhin", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Advances in Neural Information Processing Systems", |
| "volume": "30", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, volume 30. Curran Associates, Inc.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "uris": null, |
| "text": "w/o mask: Who is the president of the U.S.? [SEP] J. Biden \u2192 The president of the U.S. is J. Biden. w/ mask: Who is the president of the U.S.? \u2192 The president of the U.S. is [ANSWER].", |
| "num": null, |
| "type_str": "figure" |
| }, |
| "FIGREF1": { |
| "uris": null, |
| "text": "Tuning proportion of training data", |
| "num": null, |
| "type_str": "figure" |
| }, |
| "TABREF1": { |
| "html": null, |
| "num": null, |
| "text": "Datasets Statistics", |
| "content": "<table/>", |
| "type_str": "table" |
| }, |
| "TABREF2": { |
| "html": null, |
| "num": null, |
| "text": "Examples of model input and output with and without placeholders. During evaluation, the placeholder is replaced with the raw answer J. Biden.", |
| "content": "<table><tr><td>Models</td><td colspan=\"6\">BLEU \u2191 VQuAnDa ParaQA VANiLLa VQuAnDa ParaQA VANiLLa METEOR \u2191</td></tr><tr><td>RNN \u271d</td><td>15.43</td><td>22.45</td><td>16.66</td><td>53.15</td><td>58.41</td><td>58.67</td></tr><tr><td>Transformer \u271d</td><td>18.37</td><td>23.61</td><td>30.80</td><td>56.83</td><td>59.63</td><td>62.16</td></tr><tr><td>Akermi et al. (2020)</td><td>22.70</td><td>18.25</td><td>18.30</td><td>48.04</td><td>44.27</td><td>48.27</td></tr><tr><td>VOGUE \u271d T5 (masking) BART (masking)</td><td>28.76 39.07 43.90</td><td>32.05 30.62 35.57</td><td>35.46 45.87 45.69</td><td>67.21 67.70 71.92</td><td>68.85 59.81 65.40</td><td>65.04 67.15 66.71</td></tr></table>", |
| "type_str": "table" |
| }, |
| "TABREF3": { |
| "html": null, |
| "num": null, |
| "text": "Answer Verbalization Results. (\u271d) results are taken from Kacupaj et al. (2021c).", |
| "content": "<table><tr><td>\u2193</td></tr></table>", |
| "type_str": "table" |
| }, |
| "TABREF4": { |
| "html": null, |
| "num": null, |
| "text": "Results with and without placeholders on VQuAnDa.", |
| "content": "<table><tr><td>\u2193</td></tr></table>", |
| "type_str": "table" |
| }, |
| "TABREF5": { |
| "html": null, |
| "num": null, |
| "text": "Results with and without placeholders on ParaQA.", |
| "content": "<table><tr><td>\u2193</td></tr></table>", |
| "type_str": "table" |
| }, |
| "TABREF6": { |
| "html": null, |
| "num": null, |
| "text": "Results with and without placeholders on VANiLLa.", |
| "content": "<table/>", |
| "type_str": "table" |
| }, |
| "TABREF8": { |
| "html": null, |
| "num": null, |
| "text": "SacreBLEU scores of T5 and BART trained on raw VANiLLa (left) and corrected VANiLLa (right)", |
| "content": "<table/>", |
| "type_str": "table" |
| } |
| } |
| } |
| } |