| { |
| "paper_id": "2020", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T06:06:34.475022Z" |
| }, |
| "title": "Train Hard, Finetune Easy: Multilingual Denoising for RDF-to-Text Generation", |
| "authors": [ |
| { |
| "first": "Zden\u011bk", |
| "middle": [], |
| "last": "Kasner", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Charles University", |
| "location": { |
| "settlement": "Prague", |
| "country": "Czech Republic" |
| } |
| }, |
| "email": "kasner@ufal.mff.cuni.cz" |
| }, |
| { |
| "first": "Ond\u0159ej", |
| "middle": [], |
| "last": "Du\u0161ek", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Charles University", |
| "location": { |
| "settlement": "Prague", |
| "country": "Czech Republic" |
| } |
| }, |
| "email": "odusek@ufal.mff.cuni.cz" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "We describe our system for the RDF-to-text generation task of the WebNLG Challenge 2020. We base our approach on the mBART model, which is pre-trained for multilingual denoising. This allows us to use a simple, identical, end-to-end setup for both English and Russian. Requiring minimal task- or language-specific effort, our model placed in the first third of the leaderboard for English and first or second for Russian on automatic metrics, and it made it into the best or second-best system cluster in human evaluation.", |
| "pdf_parse": { |
| "paper_id": "2020", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "We describe our system for the RDF-to-text generation task of the WebNLG Challenge 2020. We base our approach on the mBART model, which is pre-trained for multilingual denoising. This allows us to use a simple, identical, end-to-end setup for both English and Russian. Requiring minimal task- or language-specific effort, our model placed in the first third of the leaderboard for English and first or second for Russian on automatic metrics, and it made it into the best or second-best system cluster in human evaluation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "The landscape of approaches for text generation has evolved since the first edition of the WebNLG challenge. Self-supervised pre-training objectives, such as language modelling and text denoising, have proven effective for training neural models with excellent surface realization capabilities (Devlin et al., 2019). Pre-training is used to improve the performance of models on downstream tasks, requiring only a small amount of task-specific data (Chen et al., 2020).", |
| "cite_spans": [ |
| { |
| "start": 290, |
| "end": 311, |
| "text": "(Devlin et al., 2019;", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 445, |
| "end": 464, |
| "text": "(Chen et al., 2020)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Pre-trained models can exploit shared representations across languages, following the success of multilingual word embeddings (Chen and Cardie, 2018; Lample and Conneau, 2019). Although multilingual pre-training (i.e., pre-training on a collection of corpora from multiple languages) may slightly hurt performance for high-resource languages, it allows using the models for cross-lingual tasks.", |
| "cite_spans": [ |
| { |
| "start": 126, |
| "end": 149, |
| "text": "(Chen and Cardie, 2018;", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 150, |
| "end": 175, |
| "text": "Lample and Conneau, 2019)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Neural architectures for text generation also gave rise to end-to-end approaches, where inputs and outputs are linearized and the task is solved by a single neural sequence-to-sequence model. Despite its simplicity, this approach can be hard to beat even with task-specific, modular approaches (Du\u0161ek et al., 2020).", |
| "cite_spans": [ |
| { |
| "start": 307, |
| "end": 327, |
| "text": "(Du\u0161ek et al., 2020)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In our submission, we took advantage of recent advances in pre-trained denoising autoencoders, multilingual representations, and sequenceto-sequence approaches. They enabled us to approach RDF-to-text generation both in English and Russian with a simple, identical, end-to-end setup. We finetune the pre-trained mBART model on the provided training data individually for each language. We feed tokenized and trivially linearized input RDF triples into the model and train it to output ground-truth references. We do not use any additional preprocessing, postprocessing, or other intermediate steps.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Originally, this approach was just a baseline that we planned to improve. However, the baseline yielded results of such quality that we decided to use it for our official WebNLG submission. The results of automatic metrics 1 and human evaluation, as well as our manual inspections, confirmed our expectations. In automatic metrics, our solution placed in the top third of the field (out of 35 submissions) for English and first or second (out of 12 submissions) for Russian. In human evaluation, it scored in the best or second-best system cluster. We believe that our approach, with its extreme simplicity, can serve as a benchmark for the trade-off between output quality and setup complexity.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The WebNLG Challenge 2020 (Castro-Ferreira et al., 2020) 2 is the second edition of the shared task on mapping structured data to text. The data contains sets of RDF triples extracted from DBpedia, accompanied by verbalizations crowdsourced from human annotators. Figure 1: Our setup is simple: after tokenizing and linearizing the RDF triples, we finetune two separate mBART models for English and Russian using the provided training data. We submit the unprocessed output from each model.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task Description", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The original challenge (Gardent et al., 2017a,b) included 10 categories in the training data: Airport, Astronaut, Building, City, ComicsCharacter, Food, Monument, SportsTeam, University, and WrittenWork. Each set of triples included several verbalizations to promote lexical variability. WebNLG 2020 includes several extensions:", |
| "cite_spans": [ |
| { |
| "start": 23, |
| "end": 48, |
| "text": "(Gardent et al., 2017a,b)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task Description", |
| "sec_num": "2" |
| }, |
| { |
| "text": "(1) It is bilingual: in addition to original English data, a new portion of the dataset with Russian lexicalizations is provided, giving rise to a new task of generating text in Russian.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task Description", |
| "sec_num": "2" |
| }, |
| { |
| "text": "(2) It is bidirectional: in addition to RDF-to-text generation, the challenge also includes a task on text-to-RDF semantic parsing. (We did not participate in this task.)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task Description", |
| "sec_num": "2" |
| }, |
| { |
| "text": "(3) It includes 6 new categories: 5 unseen categories from WebNLG Challenge 2017 (Athlete, Artist, CelestialBody, MeanOfTransportation, Politician) and 1 new category (Company).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task Description", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Denoising autoencoders are trained to take a partially corrupted input and restore the original undistorted input by minimizing the reconstruction error (Vincent et al., 2010). Compared to regular autoencoders, the model is forced to extract high-level features from the input distribution to filter out the noise. With a suitable noise function, denoising autoencoders can be trained in a self-supervised way on large datasets. BART is a denoising autoencoder with the objective of restoring a corrupted document. The model uses an encoder-decoder architecture: the bi-directional encoder encodes the corrupted input; the left-to-right decoder aims to restore the original, undistorted input. The model can be seen as a generalization of both BERT (Devlin et al., 2019) and GPT-2 (Radford et al., 2019).", |
| "cite_spans": [ |
| { |
| "start": 153, |
| "end": 175, |
| "text": "(Vincent et al., 2010)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 749, |
| "end": 770, |
| "text": "(Devlin et al., 2019)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 775, |
| "end": 803, |
| "text": "GPT-2 (Radford et al., 2019)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Multilingual Denoising", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Adopting BART's objective and architecture, mBART is pre-trained on the large-scale CC25 corpus extracted from Common Crawl, which contains data in 25 languages. The data is tokenized using a SentencePiece model (Kudo and Richardson, 2018) trained on the training corpus with a vocabulary of 250,000 subword tokens. The noise function of mBART replaces text spans of arbitrary length with a mask token (35% of the words in each instance) and permutes the order of sentences. The model uses the Transformer architecture (Vaswani et al., 2017) with 12 layers for the encoder and 12 layers for the decoder (\u223c680M parameters).", |
| "cite_spans": [ |
| { |
| "start": 214, |
| "end": 241, |
| "text": "(Kudo and Richardson, 2018)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 521, |
| "end": 543, |
| "text": "(Vaswani et al., 2017)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Multilingual Denoising", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We formulate the RDF-to-text task as text denoising and train mBART to solve it individually for each language (see Figure 1). We use the provided XML WebNLG data reader 3 to load and linearize the triples. For each triple, we use the flat_triple() method, which converts it into the following format:", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 122, |
| "end": 131, |
| "text": "Figure 1)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Our Submission", |
| "sec_num": "4" |
| }, |
| { |
| "text": "subject | property | object", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Our Submission", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Note that the constituents of the triple (subject, predicate, object) are only marked positionally, without any extra tags. We use a token not present in the training data (\" \") for delimiting individual triples to avoid extending the model vocabulary. We linearize the triples in their default order. Table 2: Results of our approach on English (all data, seen categories, unseen categories, unseen entities), compared to the baseline. The numbers in brackets show the rank of each model (out of 35 submissions) with respect to the given metric.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Our Submission", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Similarly to Freitag and Roy (2018), we observe that in English, linearized triples can be seen as a noisy version of the output text, where:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Our Submission", |
| "sec_num": "4" |
| }, |
| { |
| "text": "\u2022 subjects and objects are copied verbatim,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Our Submission", |
| "sec_num": "4" |
| }, |
| { |
| "text": "\u2022 predicates are shortened or reworded,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Our Submission", |
| "sec_num": "4" |
| }, |
| { |
| "text": "\u2022 function words are deleted,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Our Submission", |
| "sec_num": "4" |
| }, |
| { |
| "text": "\u2022 the order of the entities is shuffled. mBART's pre-training objective is different from this, but we hypothesize that it is similar enough to be relevant for our task. For denoising Russian, our intuition stems from mBART's successful application in machine translation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Our Submission", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We finetune the pre-trained mbart.CC25 4 model from the FAIRSEQ toolkit (Ott et al., 2019). We follow the example instructions for finetuning the model, changing only total_updates to 10,000 to reflect the smaller size of our data. We show the capabilities of our model in Table 1.", |
| "cite_spans": [ |
| { |
| "start": 72, |
| "end": 90, |
| "text": "(Ott et al., 2019)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 278, |
| "end": 285, |
| "text": "Table 1", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Our Submission", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We report on WebNLG automatic and human evaluation results, as well as our own error analysis.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Automatic metrics used in the challenge include BLEU (Papineni et al., 2002), METEOR (Lavie and Agarwal, 2007), ChrF++ (Popovi\u0107, 2017), TER (Snover et al., 2006), BERTScore (Zhang et al., 2020), and BLEURT (only used for English; Sellam et al., 2020). The results of our approach for English are shown in Table 2, compared to the baseline. 5 We can see that our approach comfortably beats the baseline in all metrics and places in the first third of the submissions. While it does lose performance on unseen categories, the drop is not as dramatic as for many other competing approaches; our system is able to hold or improve its rank in the results table. Compare the baseline's ranking for seen categories, where it placed near the bottom of the list, with its ranking for unseen categories, where it scores in the first half: this shows that many approaches fared worse than the baseline on unseen categories, unlike our system.", |
| "cite_spans": [ |
| { |
| "start": 53, |
| "end": 76, |
| "text": "(Papineni et al., 2002)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 86, |
| "end": 111, |
| "text": "(Lavie and Agarwal, 2007)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 121, |
| "end": 136, |
| "text": "(Popovi\u0107, 2017)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 143, |
| "end": 164, |
| "text": "(Snover et al., 2006)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 177, |
| "end": 197, |
| "text": "(Zhang et al., 2020)", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 347, |
| "end": 348, |
| "text": "5", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 310, |
| "end": 317, |
| "text": "Table 2", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Automatic Metrics", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "The results for Russian are shown in Table 3. There were fewer submissions for Russian, and our system not only beats the baseline by a large margin (as did all competing submissions), but it is also able to rank first in 3 metrics out of 5 (BLEU, TER, BERTScore) and second in the remaining ones. Table 3: Results of our approach on Russian data, compared to the baseline. The numbers in brackets show the rank of each model (out of 12 submissions) if ordered by the given metric.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 37, |
| "end": 44, |
| "text": "Table 3", |
| "ref_id": null |
| }, |
| { |
| "start": 143, |
| "end": 150, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Automatic Metrics", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "The challenge organizers ran a human evaluation campaign 6 , where annotators were asked to rate five aspects of the output texts: data coverage, relevance, correctness, text structure, and fluency. Each criterion was rated on a scale from \"0\" (completely disagree) to \"100\" (completely agree). The scores were clustered into groups (1-5; 1 being the best) within which there are no statistically significant differences according to the Wilcoxon rank-sum test (Wilcoxon, 1992).", |
| "cite_spans": [ |
| { |
| "start": 478, |
| "end": 494, |
| "text": "(Wilcoxon, 1992)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Human Evaluation", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Our systems placed in the top clusters (1 or 2) for both English and Russian. For English, our system ranks first for all the categories in seen domains, and first or second in unseen entities and unseen domains. In total, our English system achieved rank 1 for relevance, correctness and text structure, and rank 2 for data coverage and fluency. For Russian, our system ranks second for correctness and first in all other categories.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Human Evaluation", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "To better understand the nature of errors made by our system, we manually inspected a sample of 50 outputs in each language. 7 We found factual errors in 12 English outputs, mostly concentrated in the unseen categories (Scientist, Movie, Musical Record). The model tends to describe musical works and movies in terms of written works (\"written\", \"published\", etc.), i.e., the closest seen category. There are also several swaps in the roles of entities (e.g., \"is to southeast\" instead of \"has to its southeast\", \"follows\" instead of \"is followed by\", etc.). In a few cases, the model hallucinates a relation not specified in the data (e.g., \"born on January 1, 1934 in Istanbul\" when a date of birth and current residence are given, not the birthplace) or is not able to infer background knowledge not given in the input (it talks about a dead person in the present tense). The swaps in roles and hallucinated relations also occurred in Russian; in addition, we found a hallucinated (albeit correct) airport name and a few ingredients for a dish omitted from a long list. Factual errors in Russian were less frequent (9 sentences), which is expected as there are no unseen categories. Moreover, the system shows impressive performance in translating entity names from the English RDF into Russian.", |
| "cite_spans": [ |
| { |
| "start": 125, |
| "end": 126, |
| "text": "7", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Manual Analysis", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "We further found 10 outputs with suboptimal phrasing in English and 9 in Russian, where the model did not connect properties of the same type in a coordination (e.g., two musical genres for a record) or gave numbers without proper units (e.g., \"runtime of 89.0\" or \"area of 250493000000.0\").", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Manual Analysis", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "Our solution benefits from the denoising skills of the pre-trained mBART model, which to a certain extent combines all the tasks of the micro-planning pipeline (lexicalization, aggregation, surface realization, referring expression generation, sentence segmentation). Finetuning on task-specific data then mostly helps to specify the task at hand. Moreover, multilingual pre-training allows us to use a single architecture for both English and Russian.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "That being said, we note the RDF-to-text task is far from solved. The performance of our model is noticeably lower on categories unseen in training, and it is prone to swapping the relations of entities or hallucinating relations. Even though the longest examples in the WebNLG dataset fit into the model, the length of the input sequence is still limited and the model does not generalize to inputs of arbitrary size. Moreover, English and Russian are coincidentally the two most represented languages in the mBART pre-training corpora (ca. 300 GB of data each), and the performance of our model would probably be lower for low-resource languages.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "We presented a simple setup for RDF-to-text generation, consisting of triple linearization and text denoising. With the help of a multilingual pre-trained model, this approach is language-agnostic and yields high-quality results with minimal effort. We hope that it will serve as a baseline for more complex approaches to RDF-to-text generation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "7" |
| }, |
| { |
| "text": "https://gerbil-nlg.dice-research.org/gerbil/webnlg2020results 2 https://webnlg-challenge.loria.fr/challenge_2020/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://gitlab.com/webnlg/corpus-reader", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://github.com/pytorch/fairseq/tree/master/examples/mbart", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "See https://gerbil-nlg.dice-research.org/gerbil/webnlg2020results for full results.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "See https://beng.dice-research.org/gerbil/webnlg2020resultshumaneval for full results. 7 While one of the authors has some knowledge of Russian, it is nowhere near a native level. Automatic back-translation to English was used in a few cases to facilitate understanding.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "This work was supported by the Charles University GAUK grant No. 140320, the SVV project No. 260575, and the Charles University project PRIMUS/19/SCI/10.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgements", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "The 2020 bilingual, bi-directional WebNLG+ shared task: Overview and evaluation results", |
| "authors": [ |
| { |
| "first": "Thiago", |
| "middle": [], |
| "last": "Castro-Ferreira", |
| "suffix": "" |
| }, |
| { |
| "first": "Claire", |
| "middle": [], |
| "last": "Gardent", |
| "suffix": "" |
| }, |
| { |
| "first": "Nikolai", |
| "middle": [], |
| "last": "Ilinykh", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Van Der Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Simon", |
| "middle": [], |
| "last": "Mille", |
| "suffix": "" |
| }, |
| { |
| "first": "Diego", |
| "middle": [], |
| "last": "Moussalem", |
| "suffix": "" |
| }, |
| { |
| "first": "Anastasia", |
| "middle": [], |
| "last": "Shimorina", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the 3rd WebNLG Workshop on Natural Language Generation from the Semantic Web", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thiago Castro-Ferreira, Claire Gardent, Nikolai Ilinykh, Chris van der Lee, Simon Mille, Diego Moussalem, and Anastasia Shimorina. 2020. The 2020 bilingual, bi-directional WebNLG+ shared task: Overview and evaluation results (WebNLG+ 2020). In Proceedings of the 3rd WebNLG Workshop on Natural Language Generation from the Semantic Web (WebNLG+ 2020), Online.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Unsupervised multilingual word embeddings", |
| "authors": [ |
| { |
| "first": "Xilun", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Claire", |
| "middle": [], |
| "last": "Cardie", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "261--270", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xilun Chen and Claire Cardie. 2018. Unsupervised multilingual word embeddings. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 261-270, Brussels, Belgium.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Few-shot NLG with pre-trained language model", |
| "authors": [ |
| { |
| "first": "Zhiyu", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Harini", |
| "middle": [], |
| "last": "Eavani", |
| "suffix": "" |
| }, |
| { |
| "first": "Wenhu", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Yinyin", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "William", |
| "middle": [ |
| "Yang" |
| ], |
| "last": "Wang", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "183--190", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhiyu Chen, Harini Eavani, Wenhu Chen, Yinyin Liu, and William Yang Wang. 2020. Few-shot NLG with pre-trained language model. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 183-190, Online.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Unsupervised cross-lingual representation learning at scale", |
| "authors": [ |
| { |
| "first": "Alexis", |
| "middle": [], |
| "last": "Conneau", |
| "suffix": "" |
| }, |
| { |
| "first": "Kartikay", |
| "middle": [], |
| "last": "Khandelwal", |
| "suffix": "" |
| }, |
| { |
| "first": "Naman", |
| "middle": [], |
| "last": "Goyal", |
| "suffix": "" |
| }, |
| { |
| "first": "Vishrav", |
| "middle": [], |
| "last": "Chaudhary", |
| "suffix": "" |
| }, |
| { |
| "first": "Guillaume", |
| "middle": [], |
| "last": "Wenzek", |
| "suffix": "" |
| }, |
| { |
| "first": "Francisco", |
| "middle": [], |
| "last": "Guzm\u00e1n", |
| "suffix": "" |
| }, |
| { |
| "first": "Edouard", |
| "middle": [], |
| "last": "Grave", |
| "suffix": "" |
| }, |
| { |
| "first": "Myle", |
| "middle": [], |
| "last": "Ott", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Veselin", |
| "middle": [], |
| "last": "Stoyanov", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "8440--8451", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/2020.acl-main.747" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440-8451, Online.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
| "authors": [ |
| { |
| "first": "Jacob", |
| "middle": [], |
| "last": "Devlin", |
| "suffix": "" |
| }, |
| { |
| "first": "Ming-Wei", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenton", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Kristina", |
| "middle": [], |
| "last": "Toutanova", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "1", |
| "issue": "", |
| "pages": "4171--4186", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/N19-1423" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Evaluating the state-of-the-art of end-to-end natural language generation: The E2E NLG Challenge", |
| "authors": [ |
| { |
| "first": "Ond\u0159ej", |
| "middle": [], |
| "last": "Du\u0161ek", |
| "suffix": "" |
| }, |
| { |
| "first": "Jekaterina", |
| "middle": [], |
| "last": "Novikova", |
| "suffix": "" |
| }, |
| { |
| "first": "Verena", |
| "middle": [], |
| "last": "Rieser", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Computer Speech & Language", |
| "volume": "59", |
| "issue": "", |
| "pages": "123--156", |
| "other_ids": { |
| "DOI": [ |
| "10.1016/j.csl.2019.06.009" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ond\u0159ej Du\u0161ek, Jekaterina Novikova, and Verena Rieser. 2020. Evaluating the state-of-the-art of end-to-end natural language generation: The E2E NLG Challenge. Computer Speech & Language, 59:123-156.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Unsupervised Natural Language Generation with Denoising Autoencoders", |
| "authors": [ |
| { |
| "first": "Markus", |
| "middle": [], |
| "last": "Freitag", |
| "suffix": "" |
| }, |
| { |
| "first": "Scott", |
| "middle": [], |
| "last": "Roy", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "3922--3929", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Markus Freitag and Scott Roy. 2018. Unsupervised Natural Language Generation with Denoising Autoencoders. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3922-3929, Brussels, Belgium.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Creating training corpora for NLG micro-planning", |
| "authors": [ |
| { |
| "first": "Claire", |
| "middle": [], |
| "last": "Gardent", |
| "suffix": "" |
| }, |
| { |
| "first": "Anastasia", |
| "middle": [], |
| "last": "Shimorina", |
| "suffix": "" |
| }, |
| { |
| "first": "Shashi", |
| "middle": [], |
| "last": "Narayan", |
| "suffix": "" |
| }, |
| { |
| "first": "Laura", |
| "middle": [], |
| "last": "Perez-Beltrachini", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "55th annual meeting of the Association for Computational Linguistics (ACL)", |
| "volume": "", |
| "issue": "", |
| "pages": "179--188", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/P17-1017" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017a. Creating train- ing corpora for NLG micro-planning. In 55th annual meeting of the Association for Computa- tional Linguistics (ACL), pages 179-188, Vancouver, Canada.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "The WebNLG challenge: Generating text from RDF data", |
| "authors": [ |
| { |
| "first": "Claire", |
| "middle": [], |
| "last": "Gardent", |
| "suffix": "" |
| }, |
| { |
| "first": "Anastasia", |
| "middle": [], |
| "last": "Shimorina", |
| "suffix": "" |
| }, |
| { |
| "first": "Shashi", |
| "middle": [], |
| "last": "Narayan", |
| "suffix": "" |
| }, |
| { |
| "first": "Laura", |
| "middle": [], |
| "last": "Perez-Beltrachini", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 10th International Conference on Natural Language Generation", |
| "volume": "", |
| "issue": "", |
| "pages": "124--133", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017b. The WebNLG challenge: Generating text from RDF data. In Pro- ceedings of the 10th International Conference on Natural Language Generation, pages 124-133, San- tiago de Compostela, Spain.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing", |
| "authors": [ |
| { |
| "first": "Taku", |
| "middle": [], |
| "last": "Kudo", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Richardson", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", |
| "volume": "", |
| "issue": "", |
| "pages": "66--71", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/D18-2012" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tok- enizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Crosslingual language model pretraining", |
| "authors": [ |
| { |
| "first": "Guillaume", |
| "middle": [], |
| "last": "Lample", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexis", |
| "middle": [], |
| "last": "Conneau", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1901.07291" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Guillaume Lample and Alexis Conneau. 2019. Cross- lingual language model pretraining. arXiv preprint arXiv:1901.07291.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Meteor: An Automatic Metric for MT Evaluation with High Levels of Correlation with Human Judgments", |
| "authors": [ |
| { |
| "first": "Alon", |
| "middle": [], |
| "last": "Lavie", |
| "suffix": "" |
| }, |
| { |
| "first": "Abhaya", |
| "middle": [], |
| "last": "Agarwal", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the Second Workshop on Statistical Machine Translation", |
| "volume": "", |
| "issue": "", |
| "pages": "228--231", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alon Lavie and Abhaya Agarwal. 2007. Meteor: An Automatic Metric for MT Evaluation with High Lev- els of Correlation with Human Judgments. In Pro- ceedings of the Second Workshop on Statistical Ma- chine Translation, pages 228-231, Prague, Czech Republic.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "BART: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension", |
| "authors": [ |
| { |
| "first": "Mike", |
| "middle": [], |
| "last": "Lewis", |
| "suffix": "" |
| }, |
| { |
| "first": "Yinhan", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Naman", |
| "middle": [], |
| "last": "Goyal", |
| "suffix": "" |
| }, |
| { |
| "first": "Marjan", |
| "middle": [], |
| "last": "Ghazvininejad", |
| "suffix": "" |
| }, |
| { |
| "first": "Abdelrahman", |
| "middle": [], |
| "last": "Mohamed", |
| "suffix": "" |
| }, |
| { |
| "first": "Omer", |
| "middle": [], |
| "last": "Levy", |
| "suffix": "" |
| }, |
| { |
| "first": "Veselin", |
| "middle": [], |
| "last": "Stoyanov", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "7871--7880", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/2020.acl-main.703" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Mar- jan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre- training for natural language generation, translation, and comprehension. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Multilingual denoising pre-training for neural machine translation", |
| "authors": [ |
| { |
| "first": "Yinhan", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Jiatao", |
| "middle": [], |
| "last": "Gu", |
| "suffix": "" |
| }, |
| { |
| "first": "Naman", |
| "middle": [], |
| "last": "Goyal", |
| "suffix": "" |
| }, |
| { |
| "first": "Xian", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Sergey", |
| "middle": [], |
| "last": "Edunov", |
| "suffix": "" |
| }, |
| { |
| "first": "Marjan", |
| "middle": [], |
| "last": "Ghazvininejad", |
| "suffix": "" |
| }, |
| { |
| "first": "Mike", |
| "middle": [], |
| "last": "Lewis", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:2001.08210" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pre-training for neural machine translation. arXiv preprint arXiv:2001.08210.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "A general benchmarking framework for text generation", |
| "authors": [ |
| { |
| "first": "Diego", |
| "middle": [], |
| "last": "Moussalem", |
| "suffix": "" |
| }, |
| { |
| "first": "Paramjot", |
| "middle": [], |
| "last": "Kaur", |
| "suffix": "" |
| }, |
| { |
| "first": "Thiago", |
| "middle": [], |
| "last": "Castro-Ferreira", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Van Der Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Anastasia", |
| "middle": [], |
| "last": "Shimorina", |
| "suffix": "" |
| }, |
| { |
| "first": "Felix", |
| "middle": [], |
| "last": "Conrads", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "R\u00f6der", |
| "suffix": "" |
| }, |
| { |
| "first": "Ren\u00e9", |
| "middle": [], |
| "last": "Speck", |
| "suffix": "" |
| }, |
| { |
| "first": "Claire", |
| "middle": [], |
| "last": "Gardent", |
| "suffix": "" |
| }, |
| { |
| "first": "Simon", |
| "middle": [], |
| "last": "Mille", |
| "suffix": "" |
| }, |
| { |
| "first": "Nikolai", |
| "middle": [], |
| "last": "Ilinykh", |
| "suffix": "" |
| }, |
| { |
| "first": "Axel-Cyrille", |
| "middle": [], |
| "last": "Ngonga Ngomo", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the 3rd WebNLG Workshop on Natural Language Generation from the Semantic Web", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Diego Moussalem, Paramjot Kaur, Thiago Castro- Ferreira, Chris van der Lee, Anastasia Shimorina, Felix Conrads, Michael R\u00f6der, Ren\u00e9 Speck, Claire Gardent, Simon Mille, Nikolai Ilinykh, and Axel- Cyrille Ngonga Ngomo. 2020. A general bench- marking framework for text generation. In Pro- ceedings of the 3rd WebNLG Workshop on Natu- ral Language Generation from the Semantic Web (WebNLG+ 2020), Online.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "fairseq: A fast, extensible toolkit for sequence modeling", |
| "authors": [ |
| { |
| "first": "Myle", |
| "middle": [], |
| "last": "Ott", |
| "suffix": "" |
| }, |
| { |
| "first": "Sergey", |
| "middle": [], |
| "last": "Edunov", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexei", |
| "middle": [], |
| "last": "Baevski", |
| "suffix": "" |
| }, |
| { |
| "first": "Angela", |
| "middle": [], |
| "last": "Fan", |
| "suffix": "" |
| }, |
| { |
| "first": "Sam", |
| "middle": [], |
| "last": "Gross", |
| "suffix": "" |
| }, |
| { |
| "first": "Nathan", |
| "middle": [], |
| "last": "Ng", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Grangier", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Auli", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)", |
| "volume": "", |
| "issue": "", |
| "pages": "48--53", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/N19-4009" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 48-53, Minneapolis, MN, USA.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "BLEU: a method for automatic evaluation of machine translation", |
| "authors": [ |
| { |
| "first": "Kishore", |
| "middle": [], |
| "last": "Papineni", |
| "suffix": "" |
| }, |
| { |
| "first": "Salim", |
| "middle": [], |
| "last": "Roukos", |
| "suffix": "" |
| }, |
| { |
| "first": "Todd", |
| "middle": [], |
| "last": "Ward", |
| "suffix": "" |
| }, |
| { |
| "first": "Wei-Jing", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the 40th annual meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "311--318", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. BLEU: a method for automatic eval- uation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computa- tional Linguistics, pages 311-318, Philadelphia, PA, USA.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "chrF++: words helping character n-grams", |
| "authors": [ |
| { |
| "first": "Maja", |
| "middle": [], |
| "last": "Popovi\u0107", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the Second Conference on Machine Translation", |
| "volume": "", |
| "issue": "", |
| "pages": "612--618", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/W17-4770" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Maja Popovi\u0107. 2017. chrF++: words helping character n-grams. In Proceedings of the Second Conference on Machine Translation, pages 612-618, Copen- hagen, Denmark.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Language Models are Unsupervised Multitask Learners", |
| "authors": [ |
| { |
| "first": "Alec", |
| "middle": [], |
| "last": "Radford", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Rewon", |
| "middle": [], |
| "last": "Child", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Luan", |
| "suffix": "" |
| }, |
| { |
| "first": "Dario", |
| "middle": [], |
| "last": "Amodei", |
| "suffix": "" |
| }, |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language Models are Unsupervised Multitask Learners. Tech- nical report, OpenAI.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "BLEURT: Learning Robust Metrics for Text Generation", |
| "authors": [ |
| { |
| "first": "Thibault", |
| "middle": [], |
| "last": "Sellam", |
| "suffix": "" |
| }, |
| { |
| "first": "Dipanjan", |
| "middle": [], |
| "last": "Das", |
| "suffix": "" |
| }, |
| { |
| "first": "Ankur", |
| "middle": [ |
| "P" |
| ], |
| "last": "Parikh", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "7881--7892", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thibault Sellam, Dipanjan Das, and Ankur P. Parikh. 2020. BLEURT: Learning Robust Metrics for Text Generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Lin- guistics, pages 7881-7892, Online.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "A study of translation edit rate with targeted human annotation", |
| "authors": [ |
| { |
| "first": "Matthew", |
| "middle": [], |
| "last": "Snover", |
| "suffix": "" |
| }, |
| { |
| "first": "Bonnie", |
| "middle": [], |
| "last": "Dorr", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Schwartz", |
| "suffix": "" |
| }, |
| { |
| "first": "Linnea", |
| "middle": [], |
| "last": "Micciulla", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Makhoul", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the 7th Conference of the Association for Machine Translation in the Americas", |
| "volume": "", |
| "issue": "", |
| "pages": "223--231", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Matthew Snover, Bonnie Dorr, Richard Schwartz, Lin- nea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of the 7th Conference of the Associa- tion for Machine Translation in the Americas, pages 223-231, Cambridge, MA, USA.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Attention is all you need", |
| "authors": [ |
| { |
| "first": "Ashish", |
| "middle": [], |
| "last": "Vaswani", |
| "suffix": "" |
| }, |
| { |
| "first": "Noam", |
| "middle": [], |
| "last": "Shazeer", |
| "suffix": "" |
| }, |
| { |
| "first": "Niki", |
| "middle": [], |
| "last": "Parmar", |
| "suffix": "" |
| }, |
| { |
| "first": "Jakob", |
| "middle": [], |
| "last": "Uszkoreit", |
| "suffix": "" |
| }, |
| { |
| "first": "Llion", |
| "middle": [], |
| "last": "Jones", |
| "suffix": "" |
| }, |
| { |
| "first": "Aidan", |
| "middle": [ |
| "N" |
| ], |
| "last": "Gomez", |
| "suffix": "" |
| }, |
| { |
| "first": "\u0141ukasz", |
| "middle": [], |
| "last": "Kaiser", |
| "suffix": "" |
| }, |
| { |
| "first": "Illia", |
| "middle": [], |
| "last": "Polosukhin", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Advances in Neural Information Processing Systems (NeurIPS)", |
| "volume": "", |
| "issue": "", |
| "pages": "5998--6008", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems (NeurIPS), pages 5998-6008, Long Beach, CA, USA.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion", |
| "authors": [ |
| { |
| "first": "Pascal", |
| "middle": [], |
| "last": "Vincent", |
| "suffix": "" |
| }, |
| { |
| "first": "Hugo", |
| "middle": [], |
| "last": "Larochelle", |
| "suffix": "" |
| }, |
| { |
| "first": "Isabelle", |
| "middle": [], |
| "last": "Lajoie", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| }, |
| { |
| "first": "Pierre-Antoine", |
| "middle": [], |
| "last": "Manzagol", |
| "suffix": "" |
| }, |
| { |
| "first": "L\u00e9on", |
| "middle": [], |
| "last": "Bottou", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Journal of Machine Learning Research", |
| "volume": "11", |
| "issue": "12", |
| "pages": "3371--3408", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, Pierre-Antoine Manzagol, and L\u00e9on Bottou. 2010. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research, 11(12).", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "CC-Net: Extracting high quality monolingual datasets from web crawl data", |
| "authors": [ |
| { |
| "first": "Guillaume", |
| "middle": [], |
| "last": "Wenzek", |
| "suffix": "" |
| }, |
| { |
| "first": "Marie-Anne", |
| "middle": [], |
| "last": "Lachaux", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexis", |
| "middle": [], |
| "last": "Conneau", |
| "suffix": "" |
| }, |
| { |
| "first": "Vishrav", |
| "middle": [], |
| "last": "Chaudhary", |
| "suffix": "" |
| }, |
| { |
| "first": "Francisco", |
| "middle": [], |
| "last": "Guzm\u00e1n", |
| "suffix": "" |
| }, |
| { |
| "first": "Armand", |
| "middle": [], |
| "last": "Joulin", |
| "suffix": "" |
| }, |
| { |
| "first": "\u00c9douard", |
| "middle": [], |
| "last": "Grave", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of The 12th Language Resources and Evaluation Conference (LREC)", |
| "volume": "", |
| "issue": "", |
| "pages": "4003--4012", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Guillaume Wenzek, Marie-Anne Lachaux, Alexis Con- neau, Vishrav Chaudhary, Francisco Guzm\u00e1n, Ar- mand Joulin, and \u00c9douard Grave. 2020. CC- Net: Extracting high quality monolingual datasets from web crawl data. In Proceedings of The 12th Language Resources and Evaluation Conference (LREC), pages 4003-4012, Marseille, France. Frank Wilcoxon. 1992. Individual comparisons by ranking methods. In Breakthroughs in statistics, pages 196-202. Springer.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "BERTScore: Evaluating Text Generation with BERT", |
| "authors": [ |
| { |
| "first": "Tianyi", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Varsha", |
| "middle": [], |
| "last": "Kishore", |
| "suffix": "" |
| }, |
| { |
| "first": "Felix", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Kilian", |
| "middle": [ |
| "Q" |
| ], |
| "last": "Weinberger", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Artzi", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the International Conference on Learning Representations (ICLR)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. BERTScore: Evaluating Text Generation with BERT. In Proceed- ings of the International Conference on Learning Representations (ICLR), Online.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "text": "mBART", |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "TABREF1": { |
| "num": null, |
| "html": null, |
| "content": "<table><tr><td>input output [en]</td><td>Piotr_Hallmann | weight | 70.308</td><td>Piotr_Hallmann | birthDate | 1987-08-25</td></tr><tr><td>input output [en]</td><td colspan=\"2\">Ciudad_Ayala | populationMetro | 1777539</td></tr><tr><td>input output [ru]</td><td colspan=\"2\">Bakewell_tart | ingredient | Frangipane</td></tr></table>", |
| "type_str": "table", |
| "text": "Born on August 25th 1987, Piotr Hallmann has a weight of 70.308. The population metro of Ciudad Ayala is 1777539. \u0424\u0440\u0430\u043d\u0436\u0438\u043f\u0430\u043d - \u043e\u0434\u0438\u043d \u0438\u0437 \u0438\u043d\u0433\u0440\u0435\u0434\u0438\u0435\u043d\u0442\u043e\u0432 \u0442\u0430\u0440\u0442\u0430 \u0411\u0435\u0439\u043a\u0432\u0435\u043b\u043b. transcription: Franzhipan - odin iz ingredientov tarta Bejkvell. translation: Frangipane is one of the ingredients of the Bakewell tart." |
| }, |
| "TABREF2": { |
| "num": null, |
| "html": null, |
| "content": "<table><tr><td/><td/><td>BLEU</td><td>METEOR</td><td>ChrF++</td><td>TER</td><td>BERTScore</td><td>BLEURT</td></tr><tr><td>All</td><td>Baseline</td><td>40.57 (14)</td><td>0.373 (15)</td><td>0.621 (15)</td><td>0.517 (14)</td><td>0.943 (14)</td><td>0.47 (12)</td></tr><tr><td/><td>Ours</td><td>50.34 (10)</td><td>0.398 (8)</td><td>0.666 (8)</td><td>0.435 (7)</td><td>0.951 (8)</td><td>0.57 (8)</td></tr><tr><td>Seen Cat.</td><td>Baseline</td><td>42.95 (31)</td><td>0.387 (27)</td><td>0.650 (28)</td><td>0.563 (31)</td><td>0.943 (31)</td><td>0.41 (31)</td></tr><tr><td/><td>Ours</td><td>59.13 (10)</td><td>0.422 (10)</td><td>0.712 (9)</td><td>0.403 (7)</td><td>0.960 (9)</td><td>0.58 (14)</td></tr><tr><td>Unseen Cat.</td><td>Baseline</td><td>37.56 (12)</td><td>0.357 (15)</td><td>0.584 (15)</td><td>0.46 (7)</td><td>0.940 (12)</td><td>0.44 (12)</td></tr><tr><td/><td>Ours</td><td>42.24 (10)</td><td>0.375 (13)</td><td>0.617 (10)</td><td>0.51 (13)</td><td>0.943 (11)</td><td>0.52 (10)</td></tr><tr><td>Unseen Ent.</td><td>Baseline</td><td>40.22 (17)</td><td>0.384 (15)</td><td>0.648 (15)</td><td>0.476 (14)</td><td>0.949 (13)</td><td>0.55 (12)</td></tr><tr><td/><td>Ours</td><td>51.23 (4)</td><td>0.406 (8)</td><td>0.687 (7)</td><td>0.417 (9)</td><td>0.959 (8)</td><td>0.63 (8)</td></tr></table>", |
| "type_str": "table", |
| "text": "Example outputs from the mBART model(s) finetuned for RDF-to-text generation. (1) The model can work with unseen entities, dates and numbers. (2) The model is quite robust to unseen properties, such as populationMetro. However, the surface form of the property deviates too much from its meaning and the sentence is incorrect. (3) The model trained on Russian targets can use English data to form sentences in Russian, transcribing the entities to Cyrillic." |
| }, |
| "TABREF3": { |
| "num": null, |
| "html": null, |
| "content": "<table/>", |
| "type_str": "table", |
| "text": "" |
| } |
| } |
| } |
| } |