{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T06:06:32.593779Z"
},
"title": "Controllable Neural Natural Language Generation: comparison of state-of-the-art control strategies",
"authors": [
{
"first": "Yuanmin",
"middle": [],
"last": "Leng",
"suffix": "",
"affiliation": {
"laboratory": "LIG",
"institution": "Grenoble INP",
"location": {
"postCode": "F-38000",
"settlement": "Grenoble",
"country": "France"
}
},
"email": "yuanmin.leng@outlook.com"
},
{
"first": "Fran\u00e7ois",
"middle": [],
"last": "Portet",
"suffix": "",
"affiliation": {
"laboratory": "LIG",
"institution": "Grenoble INP",
"location": {
"postCode": "F-38000",
"settlement": "Grenoble",
"country": "France"
}
},
"email": "francois.portet@imag.fr"
},
{
"first": "Cyril",
"middle": [],
"last": "Labb\u00e9",
"suffix": "",
"affiliation": {
"laboratory": "LIG",
"institution": "Grenoble INP",
"location": {
"postCode": "F-38000",
"settlement": "Grenoble",
"country": "France"
}
},
"email": "cyril.labbe@imag.fr"
},
{
"first": "Raheel",
"middle": [],
"last": "Qader",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Lingua Custodia",
"location": {
"settlement": "Paris",
"country": "France"
}
},
"email": "raheel.qader@gmail.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Most NLG systems target text fluency and grammatical correctness, disregarding control over text structure and length. However, control over the output plays an important part in industrial NLG applications. In this paper, we study different strategies of control in triple-totext generation systems particularly from the aspects of text structure and text length. Regarding text structure, we present an approach that relies on aligning the input entities with the facts in the target side. It makes sure that the order and the distribution of entities in both the input and the text are the same. As for control over text length, we show two different approaches. One is to supply length constraint as input while the other is to force the end-ofsentence tag to be included at each step when using top-k decoding strategy. Finally, we propose four metrics to assess the degree to which these methods will affect a NLG system's ability to control text structure and length. Our analyses demonstrate that all the methods enhance the system's ability with a slight decrease in text fluency. In addition, constraining length at the input level performs much better than control at decoding level.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Most NLG systems target text fluency and grammatical correctness, disregarding control over text structure and length. However, control over the output plays an important part in industrial NLG applications. In this paper, we study different strategies of control in triple-totext generation systems particularly from the aspects of text structure and text length. Regarding text structure, we present an approach that relies on aligning the input entities with the facts in the target side. It makes sure that the order and the distribution of entities in both the input and the text are the same. As for control over text length, we show two different approaches. One is to supply length constraint as input while the other is to force the end-ofsentence tag to be included at each step when using top-k decoding strategy. Finally, we propose four metrics to assess the degree to which these methods will affect a NLG system's ability to control text structure and length. Our analyses demonstrate that all the methods enhance the system's ability with a slight decrease in text fluency. In addition, constraining length at the input level performs much better than control at decoding level.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "If many researches have focused on end-to-end Neural NLG (NNLG), approaches depending on a pipeline architecture, where the generation can be divided into multiple steps and the steps are accomplished separately using either neural networks or other models, are still competitive (Nayak et al., 2017; Reed et al., 2018; Balakrishnan et al., 2019; Ferreira et al., 2019) . Besides improvement in text fluency and grammatical correctness, succeeding in controlling different aspects of text such as style, structure and length is a key enabler for reliable systems fit for industrial applications and for better understanding of NNLG models.",
"cite_spans": [
{
"start": 280,
"end": 300,
"text": "(Nayak et al., 2017;",
"ref_id": "BIBREF11"
},
{
"start": 301,
"end": 319,
"text": "Reed et al., 2018;",
"ref_id": "BIBREF15"
},
{
"start": 320,
"end": 346,
"text": "Balakrishnan et al., 2019;",
"ref_id": "BIBREF0"
},
{
"start": 347,
"end": 369,
"text": "Ferreira et al., 2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In end-to-end models, control can be applied at various stages of the neural generation process such as input, hidden states and decoding (Prabhumoye et al., 2020) . Plug and Play Language Model (PPLM) proposed by (Dathathri et al., 2019) takes an external input and performs computations on hidden states. It combines a pretrained LM with one or more simple attribute classifiers that guide text generation towards the desired topic or sentiment. Gehrmann et al., 2018 describe a training method based on diverse ensembling to encourage models to learn distinct text styles. Welleck et al., 2020 intervene during the decoding stage to relieve the problem of infinite generation. The end-to-end models can be equipped with the ability to control style and length.",
"cite_spans": [
{
"start": 138,
"end": 163,
"text": "(Prabhumoye et al., 2020)",
"ref_id": "BIBREF13"
},
{
"start": 214,
"end": 238,
"text": "(Dathathri et al., 2019)",
"ref_id": "BIBREF1"
},
{
"start": 448,
"end": 469,
"text": "Gehrmann et al., 2018",
"ref_id": "BIBREF6"
},
{
"start": 576,
"end": 596,
"text": "Welleck et al., 2020",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In pipeline systems, control can be applied to various steps especially the step of text planning where controlling text content is the main objective. Mille et al., 2019 apply a series of rule-based graph-transducers and aggregation grammar while Moryossef et al., 2019 and Ferreira et al., 2019 allow RDF input triples to be rearranged in line with target.",
"cite_spans": [
{
"start": 248,
"end": 274,
"text": "Moryossef et al., 2019 and",
"ref_id": "BIBREF10"
},
{
"start": 275,
"end": 296,
"text": "Ferreira et al., 2019",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To better understand the effect of such control in black-box models, it is necessary to look into stateof-the-art control strategies through independent quantitative analyses. In this paper, we focus on controlling text structure and text length for RDF triple-to-text generation. Text structure consists in sentence split and entity order which care about how the facts are distributed in the sentences and in what relative order the entities are stated. Text length is the word count of a produced text. In this work, we assume a two-step pipeline models which consist of text planning and realization (Moryossef et al., 2019) . A text plan generated from the input facts in the first step is feeding the second step of realization. We make two main contributions.",
"cite_spans": [
{
"start": 604,
"end": 628,
"text": "(Moryossef et al., 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "First, a systematic study of how the method of alignment affects control over text structure. The method aligns input with target in terms of sentence split and entity order. Furthermore, we propose three metrics to evaluate its effects. Second, a study of the text length control using two different approaches. One approach, applied to sentence planning, is to add the length constraint into input while the other, applied to surface realization, is to force the end-of-sentence tag to be included at each step when using top-k decoding strategy (Welleck et al., 2020) . We design one metric to evaluate the effects of the two different methods to control text length. None of the metrics we propose is used to optimize model training.",
"cite_spans": [
{
"start": 548,
"end": 570,
"text": "(Welleck et al., 2020)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section, we discuss the methods of controlling sentence split, entity order and text length.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "2"
},
{
"text": "Control of sentence split consists in determining the number of sentences and how the facts will be distributed in the sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence split",
"sec_num": "2.1"
},
{
"text": "Similar to Moryossef et al., 2019; Ferreira et al., 2019 , the method of controlling sentence split is to organize triples into sentences. However, RDF triples of training set are not provided with such syntactic features. For instance, let's take the following triples and target text. In this example, the triples should be split into two groups, and each group of entities should be consistent with those in the corresponding sentence. To recover the information about sentence boundaries, the entities should first be identified. The identification works by generating all n-grams of the text and comparing each of them with each entity based on Levenshtein distance. The n-gram whose distance from the entity is the smallest is seen as the appearance of the entity in the text. This method has been tested with a spare dataset of 15370 entities and is able to recover 99.18% of the entities (15245). The input after this step is as follow where SNT is the end-of-sentence token: We propose two metrics to evaluate the effects of this method. The first metric takes into account consistency between input and output in sentence count. It calculates the percentage of the produced texts which contain the same sentence count as that of inputs.",
"cite_spans": [
{
"start": 11,
"end": 34,
"text": "Moryossef et al., 2019;",
"ref_id": "BIBREF10"
},
{
"start": 35,
"end": 56,
"text": "Ferreira et al., 2019",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence split",
"sec_num": "2.1"
},
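The entity-identification step described above can be sketched in Python (a minimal illustration; the function names and the maximum n-gram length are our assumptions, not the paper's):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def locate_entity(entity: str, text: str, max_n: int = 6):
    """Return the n-gram of `text` closest to `entity` by Levenshtein distance."""
    words = text.split()
    best, best_dist = None, float("inf")
    for n in range(1, max_n + 1):
        for i in range(len(words) - n + 1):
            ngram = " ".join(words[i:i + n])
            d = levenshtein(entity.lower(), ngram.lower())
            if d < best_dist:
                best, best_dist = ngram, d
    return best, best_dist

text = "Texas is the location of Andrews County Airport ."
print(locate_entity("Andrews County Airport", text))  # → ('Andrews County Airport', 0)
```

Using the smallest-distance n-gram rather than an exact match is what lets the method recover entities whose surface form differs slightly from the triple.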
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P = 1 N N i=1 bool(s count(inputi) == s count(outputi))",
"eq_num": "(1)"
}
],
"section": "Sentence split",
"sec_num": "2.1"
},
{
"text": "where s count(x) returns sentence count of x.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence split",
"sec_num": "2.1"
},
{
"text": "The second metric calculates the percentage of the entities that are distributed into the right sentences, irrespective of the order.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence split",
"sec_num": "2.1"
},
{
"text": "P = 1 N N i=1 |entities correctly distributed| |entities| (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence split",
"sec_num": "2.1"
},
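The two sentence-split metrics can be sketched as follows (a simplified illustration under our own naming; sentences are assumed to be delimited by the SNT token, and entity realisation is checked by substring membership):

```python
def sentence_count_consistency(inputs, outputs):
    """Metric (1): share of examples whose output has the same sentence
    count as its input plan (sentences delimited by the SNT token)."""
    same = sum(inp.count("SNT") == out.count("SNT")
               for inp, out in zip(inputs, outputs))
    return same / len(inputs)

def entity_distribution(plan_sents, out_sents):
    """Metric (2), for one example: fraction of entities realised in the
    sentence the plan assigned them to (order within a sentence ignored)."""
    correct = total = 0
    for k, ents in enumerate(plan_sents):   # ents: entities planned for sentence k
        for e in ents:
            total += 1
            if k < len(out_sents) and e in out_sents[k]:
                correct += 1
    return correct / total

# The Andrews County Airport example: two planned sentences, one produced.
plan = [["Andrews County Airport", "Texas"], ["Texas", "Spanish"]]
out = ["Texas is the location of Andrews County Airport, where Spanish is spoken"]
print(entity_distribution(plan, out))  # → 0.5
```

Only the first planned sentence is realised, so two of the four planned entity slots are correct, matching the 0.5 figure discussed in the text.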
{
"text": "Let's take the example of input facts and output text below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence split",
"sec_num": "2.1"
},
{
"text": "Andrews County Airport location Texas SNT Texas language Spanish SNT Texas is the location of Andrews County Airport, where Spanish is spoken SNT In this example, the facts are described in two sentences in the input but the system only produced one sentence as output. Indeed, four entities of the two RDF triples in the input must be expressed, each in one sentence. But in the output only three entities are realised in only one sentence. The result of the second metric is therefore 0.5 since only one 2 entities are correctly realised over the four of the input.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence split",
"sec_num": "2.1"
},
{
"text": "Control of entity order involves defining in what relative order the entities will be produced.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Order",
"sec_num": "2.2"
},
{
"text": "The method of controlling entity order is to align input with target, ensuring that entity order in input is consistent to that in target. For our example mentioned in 2.1, the text plan will be:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Order",
"sec_num": "2.2"
},
{
"text": "Soundplate [ > products > record label ] SNT [ 2010 < founded in < ] [ Matt Benn < founded by < ] Soundplate [ > products > website ] SNT",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Order",
"sec_num": "2.2"
},
{
"text": "For datasets where no triples overlap, each group of triples is considered as an undirected graph and each entity as a node. Following Moryossef et al., 2019, we use depth-first search from each node to traverse the whole graph. For datasets where some triples share a same entity, the common entity is first located and then we make it direct towards the rest of the entities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Order",
"sec_num": "2.2"
},
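The depth-first traversal used for ordering can be sketched like this (a simplified illustration of the approach of Moryossef et al., 2019; the helper name and interface are hypothetical):

```python
from collections import defaultdict

def dfs_order(triples, start):
    """Depth-first traversal of the undirected entity graph induced by a
    group of (subject, relation, object) triples, starting from `start`.
    Returns one candidate entity order for the text plan."""
    graph = defaultdict(list)
    for s, _, o in triples:
        graph[s].append(o)
        graph[o].append(s)
    order, seen, stack = [], set(), [start]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        order.append(node)
        # Push neighbours in reverse so they are visited in insertion order.
        stack.extend(reversed(graph[node]))
    return order

triples = [("Agnes Kant", "nationality", "Netherlands"),
           ("Netherlands", "leader name", "Mark Rutte")]
print(dfs_order(triples, "Agnes Kant"))  # → ['Agnes Kant', 'Netherlands', 'Mark Rutte']
```

Running the search from every node enumerates the candidate orders among which the plan's entity order is chosen.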
{
"text": "We propose one metric to evaluate effects of the alignment constraint. The metric is based on calculation of similarity between two lists of entities using edit distance. Edit distance is the minimum number of operations on entities (including deleting an entity, inserting an entity or swapping the positions of two entities) required to transform one list into the other.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Order",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "similarity = 1 \u2212 edit distance(l1, l2) max(|l1|, |l2|)",
"eq_num": "(3)"
}
],
"section": "Entity Order",
"sec_num": "2.2"
},
{
"text": "l 1 and l 2 represent lists of entities extracted respectively from text plan and its corresponding text. Final scores of the metric are mean similarities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Order",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P = similarity = 1 N N i=1 (similarity)",
"eq_num": "(4)"
}
],
"section": "Entity Order",
"sec_num": "2.2"
},
{
"text": "where N means the total number of text plans. For example, given the text plan and the output:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Order",
"sec_num": "2.2"
},
{
"text": "Agnes Kant [ > nationality > Netherlands [ > leader name > Mark Rutte ] ] SNT Netherlands [ > currency > euro ] SNT",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Order",
"sec_num": "2.2"
},
{
"text": "Agnes Kant has the nationality of Netherlands where the leader is Mark Rutte and the currency is the euro SNT",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Order",
"sec_num": "2.2"
},
{
"text": "The second Netherlands is omitted in the output, so the edit distance between input and output is 1 and the similarity 0.8 since one of the four input entities is omitted.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Order",
"sec_num": "2.2"
},
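Equation (3) on the Agnes Kant example can be sketched as follows (a minimal illustration using standard Levenshtein operations over entity lists; the paper's variant also allows swapping two entities, i.e. Damerau-Levenshtein, which gives the same result here):

```python
def list_edit_distance(l1, l2):
    """Levenshtein distance over lists of entities
    (insertion, deletion, substitution)."""
    prev = list(range(len(l2) + 1))
    for i, a in enumerate(l1, 1):
        cur = [i]
        for j, b in enumerate(l2, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (a != b)))
        prev = cur
    return prev[-1]

def similarity(l1, l2):
    """Equation (3): 1 minus normalized edit distance."""
    return 1 - list_edit_distance(l1, l2) / max(len(l1), len(l2))

plan = ["Agnes Kant", "Netherlands", "Mark Rutte", "Netherlands", "euro"]
out  = ["Agnes Kant", "Netherlands", "Mark Rutte", "euro"]
print(similarity(plan, out))  # → 0.8
```

One deletion (the second "Netherlands") transforms the plan list into the output list, so the distance is 1 and the similarity 1 - 1/5 = 0.8.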
{
"text": "Control of text length is to constrain the output to contain len words. To implement this functionality two methods have been used. The first method works by inserting the text length constraint len into the input. At test time, len is predicted by a linear regression model. This model was trained using triple count and word count of text plan as input. The second method, proposed by Welleck et al., 2020 , forces the end-ofsentence tag to be included at each step of decoding after the generated text already reaches the given length.",
"cite_spans": [
{
"start": 387,
"end": 407,
"text": "Welleck et al., 2020",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Text length",
"sec_num": "2.3"
},
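The second method's decoding constraint can be sketched as follows (a simplified illustration of the idea from Welleck et al., 2020; the function name, toy vocabulary, and candidate-eviction rule are our assumptions — the actual system implements this inside OpenNMT's sampling loop):

```python
def topk_candidates(logits, step, max_len, eos_id, k=5):
    """Candidate set for one top-k decoding step under length control:
    ordinary top-k, except that once `step` reaches `max_len` the
    end-of-sentence token is forced into the set so decoding can stop."""
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    if step >= max_len and eos_id not in top:
        top[-1] = eos_id  # evict the weakest candidate in favour of EOS
    return top

# Toy 6-word vocabulary where id 4 is the EOS token.
logits = [2.0, 0.5, 3.0, 1.0, 0.1, 2.5]
print(4 in topk_candidates(logits, step=3, max_len=10, eos_id=4, k=3))   # → False
print(4 in topk_candidates(logits, step=10, max_len=10, eos_id=4, k=3))  # → True
```

Because EOS is only made available rather than forced to be sampled, the output length is controlled "roughly", which is consistent with the behaviour of System 2 reported later.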
{
"text": "To evaluate performances of the two approaches, we use mean squared error to compute difference between expected output length and actual output length:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text length",
"sec_num": "2.3"
},
{
"text": "MSE = \\frac{1}{N} \\sum_{i=1}^{N} \\big(|tokens\\ in\\ output_i| - len_i\\big)^2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text length",
"sec_num": "2.3"
},
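The length-control metric above amounts to the following (a minimal sketch assuming whitespace tokenization; names are ours):

```python
def length_mse(outputs, target_lens):
    """MSE between actual output lengths (in tokens) and requested lengths."""
    errors = [(len(out.split()) - n) ** 2 for out, n in zip(outputs, target_lens)]
    return sum(errors) / len(errors)

outs = ["Texas is the location of Andrews County Airport",   # 8 tokens
        "Spanish is spoken in Texas"]                        # 5 tokens
print(length_mse(outs, [8, 7]))  # → 2.0  ((8-8)^2 + (5-7)^2) / 2
```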
{
"text": "3 Experiments ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text length",
"sec_num": "2.3"
},
{
"text": "The approaches were experimented on two datasets: the Wikipedia Company corpus (Qader et al., 2018) and WebNLG 2020 dataset (Ferreira et al., 2020) . Table 1 shows statistics about the two datasets. Wikipedia Company corpus is collected from Wikipedia without manual annotation, where some of input facts are not described in target while target contains facts which are not mentioned in input. To denoise the corpus, we delete the triples having entities that do not appear in target and drop the sentences without any entity in them. However, a large quantity of useless information still exists in target. As a result, we do not use it to perform experiments on control over text length. Since only the training set of WebNLG 2020 Challenge was released at the time of writing we used the test set from WebNLG 2017 dataset (Gardent et al., 2017) for system evaluation. We made sure that none of the instances in the test set was seen while training. Hence, when we report results on consistency to target. Input of the test set for both systems is produced by the same way as we generate input of the train set for the baseline system. Additionally, with the aim of further investigating the aligned system's ability, we also generated a another test set with the same examples but where input triples are randomly split into several groups and each group of triples are randomly merged (e.g., random plan).",
"cite_spans": [
{
"start": 79,
"end": 99,
"text": "(Qader et al., 2018)",
"ref_id": "BIBREF14"
},
{
"start": 124,
"end": 147,
"text": "(Ferreira et al., 2020)",
"ref_id": null
},
{
"start": 826,
"end": 848,
"text": "(Gardent et al., 2017)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 150,
"end": 157,
"text": "Table 1",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.1"
},
{
"text": "Control of text length We propose three systems: the first system (System 1) uses the train set where text length is prepended into input; in the second system (System 2), the end-of-sentence tag is forced to be included at each step when using top-k strategy; the last system is the baseline system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.1"
},
{
"text": "The model of all systems consists of a 4-layer Transformer with 4-head attention (Vaswani et al., 2017) on both the encoder/decoder. All experiments were performed using the OpenNMT toolkit (Klein et al., 2017) .",
"cite_spans": [
{
"start": 81,
"end": 103,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF16"
},
{
"start": 190,
"end": 210,
"text": "(Klein et al., 2017)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "3.2"
},
{
"text": "Besides the metrics presented in Section 2, standard automatic measures BLEU (Papineni et al., 2002) , ROUGE-L (Lin, 2004) and METEOR (Denkowski and Lavie, 2014) were also used to evaluate the variations of performance when applying various controls. We use the program proposed by E2E NLG Challenge 1 to compute scores of the automatic metrics.",
"cite_spans": [
{
"start": 77,
"end": 100,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF12"
},
{
"start": 111,
"end": 122,
"text": "(Lin, 2004)",
"ref_id": "BIBREF8"
},
{
"start": 134,
"end": 161,
"text": "(Denkowski and Lavie, 2014)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3.3"
},
{
"text": "Control of sentence split & entity order Table 2 and Table 3 show the comparison between the aligned system and the baseline system on the same The aligned system succeeds in producing texts consistent to plans while the baseline system fails.",
"cite_spans": [],
"ref_spans": [
{
"start": 41,
"end": 49,
"text": "Table 2",
"ref_id": "TABREF5"
},
{
"start": 54,
"end": 61,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3.3"
},
{
"text": "The results of experiments performed on random plans in Table 3 make a stronger case for the ability of the aligned system. However, models learned with aligned data have lower scores in corpus-based metrics. The low scores in automatic metrics (BLEU, ROUGE-L and METEOR) can be explained by the fact that the baseline system -not being constrained by a plan during training -can learn to arrange output in a way that better fits the corpus on which it has been trained. System 1 generates a fluent text of the given number of words which contains all facts. In contrast, System 2 produces a text unfaithful to input, stopping the generation 'roughly' when there are ten tokens in output.",
"cite_spans": [],
"ref_spans": [
{
"start": 56,
"end": 63,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3.3"
},
{
"text": "In this study, we demonstrate that the alignment between the input and the target regarding sentence split and entity order leads to a substantial increase in the ability of NNLG models to control text structure. As for control of text length, we show that a control at the input level is highly preferable over a control at decoding level since it gives the model the opportunity to integrate the length constraint during the processing to avoid ending up generating an incomplete text. As shown in the study, different types of control do not seem independent (e.g. length influences number of sentences). The next step is to get more insight about these interdependencies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and further-work",
"sec_num": "4"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Constrained Decoding for Neural NLG from Compositional Representations in Task-Oriented Dialogue",
"authors": [
{
"first": "Anusha",
"middle": [],
"last": "Balakrishnan",
"suffix": ""
},
{
"first": "Jinfeng",
"middle": [],
"last": "Rao",
"suffix": ""
},
{
"first": "Kartikeya",
"middle": [],
"last": "Upasani",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "White",
"suffix": ""
},
{
"first": "Rajen",
"middle": [],
"last": "Subba",
"suffix": ""
}
],
"year": 2019,
"venue": "Proc. of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anusha Balakrishnan, Jinfeng Rao, Kartikeya Upasani, Michael White, and Rajen Subba. 2019. Con- strained Decoding for Neural NLG from Compo- sitional Representations in Task-Oriented Dialogue. In Proc. of the 57th Annual Meeting of the Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Plug and play language models: A simple approach to controlled text generation",
"authors": [
{
"first": "Sumanth",
"middle": [],
"last": "Dathathri",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Madotto",
"suffix": ""
},
{
"first": "Janice",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Jane",
"middle": [],
"last": "Hung",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Frank",
"suffix": ""
},
{
"first": "Piero",
"middle": [],
"last": "Molino",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Yosinski",
"suffix": ""
},
{
"first": "Rosanne",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sumanth Dathathri, Andrea Madotto, Janice Lan, Jane Hung, Eric Frank, Piero Molino, Jason Yosinski, and Rosanne Liu. 2019. Plug and play language mod- els: A simple approach to controlled text generation. In International Conference on Learning Represen- tations.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Meteor universal: Language specific translation evaluation for any target language",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Denkowski",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the ninth workshop on statistical machine translation",
"volume": "",
"issue": "",
"pages": "376--380",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Denkowski and Alon Lavie. 2014. Meteor universal: Language specific translation evaluation for any target language. In Proceedings of the ninth workshop on statistical machine translation, pages 376-380.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Diego Moussalem, and Anastasia Shimorina. 2020. Webnlg",
"authors": [
{
"first": "Claire",
"middle": [],
"last": "Thiago Castro Ferreira",
"suffix": ""
},
{
"first": "Nikolai",
"middle": [],
"last": "Gardent",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Ilinykh",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Van Der Lee",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mille",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "2020",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thiago Castro Ferreira, Claire Gardent, Niko- lai Ilinykh, Chris van der Lee, Simon Mille, Diego Moussalem, and Anastasia Shimorina. 2020. Webnlg challenge 2020.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Neural datato-text generation: A comparison between pipeline and end-to-end architectures",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Thiago Castro Ferreira",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Van Der Lee",
"suffix": ""
},
{
"first": "Emiel",
"middle": [],
"last": "Emiel Van Miltenburg",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Krahmer",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "552--562",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thiago Castro Ferreira, Chris van der Lee, Emiel van Miltenburg, and Emiel Krahmer. 2019. Neural data- to-text generation: A comparison between pipeline and end-to-end architectures. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 552-562.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Creating training corpora for NLG micro-planners",
"authors": [
{
"first": "Claire",
"middle": [],
"last": "Gardent",
"suffix": ""
},
{
"first": "Anastasia",
"middle": [],
"last": "Shimorina",
"suffix": ""
},
{
"first": "Shashi",
"middle": [],
"last": "Narayan",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Perez-Beltrachini",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "179--188",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1017"
]
},
"num": null,
"urls": [],
"raw_text": "Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017. Creating train- ing corpora for NLG micro-planners. In Proceed- ings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 179-188, Vancouver, Canada. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "End-to-end content and plan selection for data-to-text generation",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Gehrmann",
"suffix": ""
},
{
"first": "Falcon",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Henry",
"middle": [],
"last": "Elder",
"suffix": ""
},
{
"first": "Alexander M",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 11th International Conference on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "46--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Gehrmann, Falcon Dai, Henry Elder, and Alexander M Rush. 2018. End-to-end content and plan selection for data-to-text generation. In Pro- ceedings of the 11th International Conference on Natural Language Generation, pages 46-56.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Opennmt: Open-source toolkit for neural machine translation",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Yuntian",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Senellart",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/P17-4012"
]
},
"num": null,
"urls": [],
"raw_text": "Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senel- lart, and Alexander M. Rush. 2017. Opennmt: Open-source toolkit for neural machine translation. In Proc. ACL.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Rouge: A package for automatic evaluation of summaries",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "Text summarization branches out",
"volume": "",
"issue": "",
"pages": "74--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74-81.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A portable grammar-based nlg system for verbalization of structured data",
"authors": [
{
"first": "Simon",
"middle": [],
"last": "Mille",
"suffix": ""
},
{
"first": "Stamatia",
"middle": [],
"last": "Dasiopoulou",
"suffix": ""
},
{
"first": "Leo",
"middle": [],
"last": "Wanner",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 34th ACM/SIGAPP Symposium on Applied Computing",
"volume": "",
"issue": "",
"pages": "1054--1056",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simon Mille, Stamatia Dasiopoulou, and Leo Wanner. 2019. A portable grammar-based nlg system for ver- balization of structured data. In Proceedings of the 34th ACM/SIGAPP Symposium on Applied Comput- ing, pages 1054-1056.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Step-by-step: Separating planning from realization in neural data-to-text generation",
"authors": [
{
"first": "Amit",
"middle": [],
"last": "Moryossef",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "2267--2277",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amit Moryossef, Yoav Goldberg, and Ido Dagan. 2019. Step-by-step: Separating planning from realization in neural data-to-text generation. In Proceedings of NAACL-HLT, pages 2267-2277.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "To plan or not to plan? discourse planning in slot-value informed sequence to sequence models for language generation",
"authors": [
{
"first": "Neha",
"middle": [],
"last": "Nayak",
"suffix": ""
},
{
"first": "Dilek",
"middle": [],
"last": "Hakkani-Tur",
"suffix": ""
},
{
"first": "Marilyn",
"middle": [],
"last": "Walker",
"suffix": ""
},
{
"first": "Larry",
"middle": [],
"last": "Heck",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. of Interspeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Neha Nayak, Dilek Hakkani-Tur, Marilyn Walker, and Larry Heck. 2017. To plan or not to plan? dis- course planning in slot-value informed sequence to sequence models for language generation. In Proc. of Interspeech.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th annual meeting on association for computational linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th annual meeting on association for compu- tational linguistics, pages 311-318. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Exploring controllable text generation techniques",
"authors": [
{
"first": "Shrimai",
"middle": [],
"last": "Prabhumoye",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"W"
],
"last": "Black",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.01822"
]
},
"num": null,
"urls": [],
"raw_text": "Shrimai Prabhumoye, Alan W Black, and Rus- lan Salakhutdinov. 2020. Exploring control- lable text generation techniques. arXiv preprint arXiv:2005.01822.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Generation of company descriptions using concept-to-text and text-to-text deep models: dataset collection and systems evaluation",
"authors": [
{
"first": "Raheel",
"middle": [],
"last": "Qader",
"suffix": ""
},
{
"first": "Khoder",
"middle": [],
"last": "Jneid",
"suffix": ""
},
{
"first": "Fran\u00e7ois",
"middle": [],
"last": "Portet",
"suffix": ""
},
{
"first": "Cyril",
"middle": [],
"last": "Labb\u00e9",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 11th International Conference on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "254--263",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raheel Qader, Khoder Jneid, Fran\u00e7ois Portet, and Cyril Labb\u00e9. 2018. Generation of company descriptions using concept-to-text and text-to-text deep models: dataset collection and systems evaluation. In Pro- ceedings of the 11th International Conference on Natural Language Generation, pages 254-263.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Can Neural Generators for Dialogue Learn Sentence Planning and Discourse Structuring?",
"authors": [
{
"first": "Lena",
"middle": [],
"last": "Reed",
"suffix": ""
},
{
"first": "Shereen",
"middle": [],
"last": "Oraby",
"suffix": ""
},
{
"first": "Marilyn",
"middle": [],
"last": "Walker",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. of the 11th International Conference on Natural Language Generation (INLG)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lena Reed, Shereen Oraby, and Marilyn Walker. 2018. Can Neural Generators for Dialogue Learn Sentence Planning and Discourse Structuring? In Proc. of the 11th International Conference on Natural Language Generation (INLG). Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Consistency of a recurrent language model with respect to incomplete decoding",
"authors": [
{
"first": "Sean",
"middle": [],
"last": "Welleck",
"suffix": ""
},
{
"first": "Ilia",
"middle": [],
"last": "Kulikov",
"suffix": ""
},
{
"first": "Jaedeok",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"Yuanzhe"
],
"last": "Pang",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2002.02492"
]
},
"num": null,
"urls": [],
"raw_text": "Sean Welleck, Ilia Kulikov, Jaedeok Kim, Richard Yuanzhe Pang, and Kyunghyun Cho. 2020. Consistency of a recurrent language model with respect to incomplete decoding. arXiv preprint arXiv:2002.02492.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"type_str": "table",
"content": "<table><tr><td>Soundplate products record label SNT Soundplate foundedIn 2010 Soundplate products website Soundplate foundedBy Matt Benn SNT</td></tr></table>",
"html": null,
"num": null,
"text": "Soundplate is a London-based record label and music platform SNT Originally founded in 2010 by Matt Benn, Soundplate started as a website covering all aspects of dance music and all genres of the global scene therein SNT"
},
"TABREF3": {
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null,
"text": "Statistics about the two datasets used in the study"
},
"TABREF5": {
"type_str": "table",
"content": "<table><tr><td>System</td><td colspan=\"6\">BLEU ROUGE-L METEOR Sentence count Entity distribution Entity order</td></tr><tr><td colspan=\"2\">Aligned system Aligned system (random plan) 0.5021 0.5116 Baseline 0.5696</td><td>0.6145 0.5927 0.7035</td><td>0.6291 0.616 0.6774</td><td>0.9937 0.8232 0.6891</td><td>0.9301 0.8443 0.8181</td><td>0.906 0.8117 0.4556</td></tr></table>",
"html": null,
"num": null,
"text": "Control of sentence split and entity order: Wikipedia Company corpus"
},
"TABREF6": {
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null,
"text": "Control of sentence split and entity order: WebNLG dataset (2020 as training and 2017 as testing)"
},
"TABREF7": {
"type_str": "table",
"content": "<table><tr><td>test set. The aligned system outperforms the base-</td></tr><tr><td>line system. The former achieves nearly full marks</td></tr><tr><td>regarding consistency in sentence count, which</td></tr><tr><td>reveals its great capacity of controlling sentence</td></tr><tr><td>count when there is only one sentence in text plan.</td></tr><tr><td>For example, given the outputs of the aligned sys-</td></tr><tr><td>tem and the baseline system,</td></tr><tr><td>1 https://github.com/tuetschek/e2e-metrics</td></tr></table>",
"html": null,
"num": null,
"text": "Alex Plante, is 1.9304m tall and has played for the club Anyang Halla SNT Baseline system: Alex Plante was born in Canada and is 1.9304 m. tall SNT He played for the club Anyang Halla SNT"
},
"TABREF8": {
"type_str": "table",
"content": "<table><tr><td>shows a gap</td></tr></table>",
"html": null,
"num": null,
"text": ""
},
"TABREF9": {
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null,
"text": "Control of text length: WebNLG dataset (random) System 1: Eberhard van der Laan is the leader of Amsterdam SNT System 2: Eberhard van der Laan is the name of the leader , SNT Baseline system: Eberhard van der Laan is the name of the leader of Amsterdam SNT"
}
}
}
}