| { |
| "paper_id": "2020", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T03:13:28.348902Z" |
| }, |
| "title": "NILC at SR'20: Exploring Pre-Trained Models in Surface Realisation", |
| "authors": [ |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Antonio", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Universidade de S\u00e3o Paulo Avenida Trabalhador S\u00e3o-carlense", |
| "location": { |
| "addrLine": "400. S\u00e3o Carlos -SP", |
| "country": "Brazil" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Sobrevilla", |
| "middle": [], |
| "last": "Cabezudo", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Universidade de S\u00e3o Paulo Avenida Trabalhador S\u00e3o-carlense", |
| "location": { |
| "addrLine": "400. S\u00e3o Carlos -SP", |
| "country": "Brazil" |
| } |
| }, |
| "email": "msobrevillac@usp.br" |
| }, |
| { |
| "first": "Thiago", |
| "middle": [], |
| "last": "Alexandre", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Salgueiro", |
| "middle": [], |
| "last": "Pardo", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "taspardo@icmc.usp.br" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "This paper describes the submission by the NILC Computational Linguistics research group of the University of S\u00e3o Paulo, Brazil, to the English Track 2 (closed sub-track) of the Surface Realisation Shared Task 2020. The success of current pre-trained models such as BERT and GPT-2 in several tasks is well known; however, this is not the case for data-to-text generation tasks, and only recently have some initiatives focused on them. Thus, we explore how a pre-trained model (GPT-2) performs on the UD-to-text generation task. In general, the achieved results were poor, but there are some interesting ideas to explore. Among the lessons learned, we note that it is necessary to study strategies to represent UD inputs and to introduce structural knowledge into these pre-trained models.",
| "pdf_parse": { |
| "paper_id": "2020", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "This paper describes the submission by the NILC Computational Linguistics research group of the University of S\u00e3o Paulo, Brazil, to the English Track 2 (closed sub-track) of the Surface Realisation Shared Task 2020. The success of current pre-trained models such as BERT and GPT-2 in several tasks is well known; however, this is not the case for data-to-text generation tasks, and only recently have some initiatives focused on them. Thus, we explore how a pre-trained model (GPT-2) performs on the UD-to-text generation task. In general, the achieved results were poor, but there are some interesting ideas to explore. Among the lessons learned, we note that it is necessary to study strategies to represent UD inputs and to introduce structural knowledge into these pre-trained models.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "that current pre-trained models can handle these representations even if the knowledge is not explicitly structured.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "In this context, this paper describes the system submitted by the NILC team to track 2 of the Surface Realisation Shared Task 2020 (Track 2 -SR'20) (Mille et al., 2020) . Our proposal is an end-to-end approach inspired by the work of Mager et al. (2020) . We explore several strategies to sequentially represent UD structures and fine-tune GPT-2 (Radford et al., 2019) on the pre-processed dataset 2 .",
| "cite_spans": [ |
| { |
| "start": 170, |
| "end": 190, |
| "text": "(Mille et al., 2020)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 256, |
| "end": 275, |
| "text": "Mager et al. (2020)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 368, |
| "end": 390, |
| "text": "(Radford et al., 2019)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "The dataset for Track 2 is composed of UD structures and their corresponding sentences. The UD structure is similar to a dependency tree; however, some information is modified:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Track Dataset -SR'20", |
| "sec_num": "2" |
| }, |
| { |
| "text": "\u2022 word order is removed by randomised scrambling;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Track Dataset -SR'20", |
| "sec_num": "2" |
| }, |
| { |
| "text": "\u2022 words are replaced by their lemmas;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Track Dataset -SR'20", |
| "sec_num": "2" |
| }, |
| { |
| "text": "\u2022 some prepositions and conjunctions (that can be inferred from other lexical units or from the syntactic structure) are removed;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Track Dataset -SR'20", |
| "sec_num": "2" |
| }, |
| { |
| "text": "\u2022 determiners and auxiliaries are replaced (when needed) by attribute/value pairs;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Track Dataset -SR'20", |
| "sec_num": "2" |
| }, |
| { |
| "text": "\u2022 edge labels are generalised into predicate-argument labels in the PropBank/NomBank fashion;",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Track Dataset -SR'20", |
| "sec_num": "2" |
| }, |
| { |
| "text": "\u2022 morphological information coming from the syntactic structure or from agreement is removed;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Track Dataset -SR'20", |
| "sec_num": "2" |
| }, |
| { |
| "text": "\u2022 only coarse-grained part-of-speech tags are kept. Figures 1 and 2 show the CoNLL and graphic representations for the sentence \"Two of them were being run by 2 officials of the Ministry of the Interior!\". In Figure 1 , we may see \"idX\" and \"original id\" attributes, where \"X\" is a number. These attributes relate to track 1 and to the original ids (positions) of the tokens in the sentence, and they are removed from the test set. The corresponding source code is available at https://github.com/msobrevillac/pretrained-amr-to-text Finally, the dataset contains subsets from different domains. For English (our target language in this task), there are 4 files in each of the training and development (dev) sets, 7 files for the test set from the previous edition (Mille et al., 2019) , and 1 file for the test set of this edition.",
| "cite_spans": [ |
| { |
| "start": 766, |
| "end": 786, |
| "text": "(Mille et al., 2019)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 52, |
| "end": 60, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| }, |
| { |
| "start": 207, |
| "end": 215, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Track Dataset -SR'20", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In the first edition of this shared task, we submitted a system that used a data augmentation strategy (Sobrevilla Cabezudo and Pardo, 2018) to deal with track 1. For this edition, however, we focus on track 2 and use only the resources allowed by the shared task (closed sub-track). Unlike most of the work found in the literature, we propose an end-to-end approach that jointly learns the inflection generation and word ordering tasks.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "System Description", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Inspired by the work of Mager et al. (2020), we use GPT-2 and fine-tune it on the joint distribution of UD structure and text. Given a tokenised sentence w_1 ... w_N and the sequential UD structure u_1 ... u_M, we maximise the joint probability.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "GPT-2 for UD structures", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "p_{GPT-2}(w, u) = \\prod_{j=1}^{N} p_{GPT-2}(w_j \\mid w_{1:j-1}, u_{1:M}) \\prod_{i=1}^{M} p_{GPT-2}(u_i \\mid u_{1:i-1}) \\quad (1)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "GPT-2 for UD structures", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "A special separator token is added to mark the end of the sequential UD structure. Relations that should not be interpreted literally are assigned tokens from the GPT-2 unused token list (adding a \":\" to mark the token as a relation). Furthermore, in the case of morphological information, the values in feature name-value pairs are treated as common tokens and the feature names as relations. For example, in Figure 1 , the token \"run\" has \"Tense=Pres\" as a feature name-value pair; thus, \"Pres\" is treated as a common token and \"Tense\" as a relation. At test time, we provide the UD structure as context, as in conventional conditional text generation.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 413, |
| "end": 421, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "GPT-2 for UD structures", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "It is worth noting that we explore different ways to build the sequential (linearised) UD structure, all of which are derived from the PENMAN notation. We explore the following linearised versions: (A) PENMAN format: the same format used in Abstract Meaning Representation (AMR);",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "GPT-2 for UD structures", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "(B) PENMAN format without morphological relations: the same format, but removing all morphological relations (and others in the same column), such as \"Tense\" and \"Aspect\", among others;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "GPT-2 for UD structures", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "(C) PENMAN format without morphological relations and parentheses;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "GPT-2 for UD structures", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "(D) PENMAN format without parentheses: the same as the first one but removing the parentheses;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "GPT-2 for UD structures", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "(E) PENMAN format without relations: the same as the first one but removing all the relations;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "GPT-2 for UD structures", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "(F) PENMAN format without relations and parentheses: the same as the first one but removing all the relations and parentheses. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "GPT-2 for UD structures", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "We use the small GPT-2 model provided by HuggingFace (Wolf et al., 2019) 3 . The model is trained on the union of all training subsets. Fine-tuning is run for 7 epochs (the point at which the model converges), using a batch size of 8, the AdamW optimizer with a learning rate of 6.25e-5, a maximum length of 350 for both source and target, and frozen embeddings. For decoding, we use a beam size of 15.",
| "cite_spans": [ |
| { |
| "start": 53, |
| "end": 72, |
| "text": "(Wolf et al., 2019)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Settings", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "At test time, the model outputs tokenised sentences, which we then post-process using the Moses detokeniser 4 .",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Settings", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "At the shared task, the performance of the different submissions is automatically evaluated with the following measures:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results and Discussion", |
| "sec_num": "4" |
| }, |
| { |
| "text": "\u2022 BLEU (Papineni et al., 2002) : precision metric that computes the geometric mean of the n-gram precisions between the generated and reference texts, adding a brevity penalty for shorter sentences (we use the smoothed version and report results for n = 1, 2, 3, and 4);", |
| "cite_spans": [ |
| { |
| "start": 7, |
| "end": 30, |
| "text": "(Papineni et al., 2002)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results and Discussion", |
| "sec_num": "4" |
| }, |
| { |
| "text": "\u2022 NIST (Doddington, 2002) : a related n-gram similarity metric weighted in favor of less frequent n-grams, which are taken to be more informative;",
| "cite_spans": [ |
| { |
| "start": 7, |
| "end": 25, |
| "text": "(Doddington, 2002)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results and Discussion", |
| "sec_num": "4" |
| }, |
| { |
| "text": "\u2022 Normalized edit distance (DIST): inverse, normalized, character-based string-edit distance that starts by computing the minimum number of character insertions, deletions and substitutions (all at cost 1) required to turn the system output into the (single) reference text;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results and Discussion", |
| "sec_num": "4" |
| }, |
| { |
| "text": "\u2022 BertScore (Zhang et al., 2020) : leverages pre-trained contextual embeddings from BERT and matches words in candidate and reference sentences by cosine similarity. Table 1 shows the results of the different linearised versions of the UD structures (described in Section 3.1) on the development set. The results are those obtained on the union of all dev subsets provided by the shared task.",
| "cite_spans": [ |
| { |
| "start": 12, |
| "end": 32, |
| "text": "(Zhang et al., 2020)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 170, |
| "end": 177, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results and Discussion", |
| "sec_num": "4" |
| }, |
| { |
| "text": "In general, morphological relations do not seem to be necessary, as performance improves when they are removed (B in Table 1 ). However, this analysis could be affected by the input length: including morphological relations (linearised version A in Table 1 ) makes the input longer, and the maximum length parameter could then truncate some important tokens, resulting in lower performance in comparison with linearised version B.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 121, |
| "end": 128, |
| "text": "Table 1", |
| "ref_id": null |
| }, |
| { |
| "start": 279, |
| "end": 286, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results and Discussion", |
| "sec_num": "4" |
| }, |
| { |
| "text": "To make the analysis of omitting morphological relations clearer, we may compare versions C (without parentheses and morphological relations) and D (without parentheses). Both versions contain fewer tokens (relative to versions A and B), and one may see that disregarding morphological relations produces improvements (the results for version C are better than those for D).",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results and Discussion", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Another point to note is that parentheses are the most important tokens, as they represent the structure of the input. Therefore, removing them from the input leads to a significant drop in performance (D). Furthermore, this drop is bigger than the one obtained by leaving out all the relations (E), showing that parentheses could encode some information about the relations among the nodes. Finally, the results of version F suggest that, although parentheses encode some information about the relations, there is more information that is not encoded, making the use of relations necessary. Table 2 shows the results of the automatic evaluation on several test sets. The model used to obtain these results is the one that achieved the best results on the dev set. In general, Table 2 shows that all values are close to the average (except for the test set introduced in this edition). This could suggest that GPT-2 keeps a similar performance across domains, i.e., it generalises well. Finally, our approach achieved lower performance than the other approaches for the same track. However, we expect that these differences could be reduced by using a bigger model, such as the medium or large GPT-2 (in line with the results of Mager et al. (2020) ). Table 3 shows the results of the human evaluation on two test sets predefined by the organising committee. Specifically, Direct Assessment (Graham et al., 2017) is applied to conduct this evaluation. Candidate outputs are presented to human assessors, who rate their (i) meaning similarity (relative to a human-authored reference sentence) and (ii) readability (no reference sentence) on a 0-100 rating scale. The metric used for ranking the different systems is the average standardised score (avg. z in Table 3 ). We may see that our approach still has problems representing the correct reference, as it obtains the worst performance according to meaning similarity (last cluster). However, when readability is evaluated, we obtain the second-best results (second cluster), even compared with approaches in the open sub-track (20b), in which all kinds of resources are allowed. These results are expected: the automatic evaluation shows low performance for our approach, which is reflected in the meaning similarity evaluation, while GPT-2 is a robust language model that knows how to build coherent sentences (we stress that readability is evaluated without references). More experiments could be done to explore how to improve meaning similarity. Experiments performed by Mager et al. (2020) show that performance improves significantly when a bigger version of GPT-2 is used. Besides, we may see that performance varies widely according to the linearisation strategy, which would be an interesting research line to explore in the future.",
| "cite_spans": [ |
| { |
| "start": 1251, |
| "end": 1270, |
| "text": "Mager et al. (2020)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 1413, |
| "end": 1434, |
| "text": "(Graham et al., 2017)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 2582, |
| "end": 2601, |
| "text": "Mager et al. (2020)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 619, |
| "end": 626, |
| "text": "Table 2", |
| "ref_id": "TABREF2" |
| }, |
| { |
| "start": 803, |
| "end": 810, |
| "text": "Table 2", |
| "ref_id": "TABREF2" |
| }, |
| { |
| "start": 1274, |
| "end": 1281, |
| "text": "Table 3", |
| "ref_id": "TABREF3" |
| }, |
| { |
| "start": 1779, |
| "end": 1786, |
| "text": "Table 3", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results and Discussion", |
| "sec_num": "4" |
| }, |
| { |
| "text": "This paper describes the application of a pre-trained model, GPT-2, to the UD-to-text generation task in the context of the Surface Realisation shared task. Results show that the way in which the UD structures are linearised is important for the model in this task. Thus, an interesting research line for future work is to investigate other ways to represent/linearise UD structures and to introduce structural knowledge into this kind of model. As future work, we also plan to apply this approach to other languages and to use a bigger version of GPT-2.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion and Future Work", |
| "sec_num": "5" |
| }, |
| { |
| "text": "We use only the small GPT-2 version, as we could not run larger versions on our current server. 4 We use the Perl code available at https://github.com/moses-smt/mosesdecoder/tree/master/scripts/tokenizer",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "This work was financed in part by the Coordena\u00e7\u00e3o de Aperfei\u00e7oamento de Pessoal de N\u00edvel Superior -Brasil (CAPES) -Finance Code 88882.328822/2019-01. The authors are also grateful to USP Research Office (PRP 668) for supporting this work, and would like to thank NVIDIA for donating the GPU. This work is part of the OPINANDO project (https://sites.google.com/icmc.usp.br/ opinando/) and the USP/FAPESP/IBM Center for Artificial Intelligence (C4AIhttp://c4ai. inova.usp.br/). Finally, this research is carried out using the computational resources of the Center for Mathematical Sciences Applied to Industry (CeMEAI) funded by FAPESP (grant 2013/07375-0).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgements", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
| "authors": [ |
| { |
| "first": "Jacob", |
| "middle": [], |
| "last": "Devlin", |
| "suffix": "" |
| }, |
| { |
| "first": "Ming-Wei", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenton", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Kristina", |
| "middle": [], |
| "last": "Toutanova", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "1", |
| "issue": "", |
| "pages": "4171--4186", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirec- tional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota, June. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Automatic evaluation of machine translation quality using n-gram co-occurrence statistics", |
| "authors": [ |
| { |
| "first": "George", |
| "middle": [], |
| "last": "Doddington", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the Second International Conference on Human Language Technology Research, HLT '02", |
| "volume": "", |
| "issue": "", |
| "pages": "138--145", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "George Doddington. 2002. Automatic evaluation of machine translation quality using n-gram co-occurrence statistics. In Proceedings of the Second International Conference on Human Language Technology Research, HLT '02, pages 138-145, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Can machine translation systems be evaluated by the crowd alone", |
| "authors": [ |
| { |
| "first": "Yvette", |
| "middle": [], |
| "last": "Graham", |
| "suffix": "" |
| }, |
| { |
| "first": "Timothy", |
| "middle": [], |
| "last": "Baldwin", |
| "suffix": "" |
| }, |
| { |
| "first": "Alistair", |
| "middle": [], |
| "last": "Moffat", |
| "suffix": "" |
| }, |
| { |
| "first": "Justin", |
| "middle": [], |
| "last": "Zobel", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Natural Language Engineering", |
| "volume": "23", |
| "issue": "1", |
| "pages": "3--30", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yvette Graham, Timothy Baldwin, Alistair Moffat, and Justin Zobel. 2017. Can machine translation systems be evaluated by the crowd alone. Natural Language Engineering, 23(1):3-30.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "GPT-too: A language-model-first approach for AMR-to-text generation", |
| "authors": [ |
| { |
| "first": "Manuel", |
| "middle": [], |
| "last": "Mager", |
| "suffix": "" |
| }, |
| { |
| "first": "Ram\u00f3n", |
| "middle": [], |
| "last": "Fernandez Astudillo", |
| "suffix": "" |
| }, |
| { |
| "first": "Tahira", |
| "middle": [], |
| "last": "Naseem", |
| "suffix": "" |
| }, |
| { |
| "first": "Arafat", |
| "middle": [], |
| "last": "Md", |
| "suffix": "" |
| }, |
| { |
| "first": "Young-Suk", |
| "middle": [], |
| "last": "Sultan", |
| "suffix": "" |
| }, |
| { |
| "first": "Radu", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Salim", |
| "middle": [], |
| "last": "Florian", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Roukos", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "1846--1852", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Manuel Mager, Ram\u00f3n Fernandez Astudillo, Tahira Naseem, Md Arafat Sultan, Young-Suk Lee, Radu Florian, and Salim Roukos. 2020. GPT-too: A language-model-first approach for AMR-to-text generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1846-1852, Online, July. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "The first multilingual surface realisation shared task (SR'18): Overview and evaluation results", |
| "authors": [ |
| { |
| "first": "Simon", |
| "middle": [], |
| "last": "Mille", |
| "suffix": "" |
| }, |
| { |
| "first": "Anja", |
| "middle": [], |
| "last": "Belz", |
| "suffix": "" |
| }, |
| { |
| "first": "Bernd", |
| "middle": [], |
| "last": "Bohnet", |
| "suffix": "" |
| }, |
| { |
| "first": "Yvette", |
| "middle": [], |
| "last": "Graham", |
| "suffix": "" |
| }, |
| { |
| "first": "Emily", |
| "middle": [], |
| "last": "Pitler", |
| "suffix": "" |
| }, |
| { |
| "first": "Leo", |
| "middle": [], |
| "last": "Wanner", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the First Workshop on Multilingual Surface Realisation", |
| "volume": "", |
| "issue": "", |
| "pages": "1--12", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Simon Mille, Anja Belz, Bernd Bohnet, Yvette Graham, Emily Pitler, and Leo Wanner. 2018. The first multi- lingual surface realisation shared task (SR'18): Overview and evaluation results. In Proceedings of the First Workshop on Multilingual Surface Realisation, pages 1-12, Melbourne, Australia, July. Association for Com- putational Linguistics.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "The second multilingual surface realisation shared task (SR'19): Overview and evaluation results", |
| "authors": [ |
| { |
| "first": "Simon", |
| "middle": [], |
| "last": "Mille", |
| "suffix": "" |
| }, |
| { |
| "first": "Anja", |
| "middle": [], |
| "last": "Belz", |
| "suffix": "" |
| }, |
| { |
| "first": "Bernd", |
| "middle": [], |
| "last": "Bohnet", |
| "suffix": "" |
| }, |
| { |
| "first": "Yvette", |
| "middle": [], |
| "last": "Graham", |
| "suffix": "" |
| }, |
| { |
| "first": "Leo", |
| "middle": [], |
| "last": "Wanner", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 2nd Workshop on Multilingual Surface Realisation (MSR 2019)", |
| "volume": "", |
| "issue": "", |
| "pages": "1--17", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Simon Mille, Anja Belz, Bernd Bohnet, Yvette Graham, and Leo Wanner. 2019. The second multilingual surface realisation shared task (SR'19): Overview and evaluation results. In Proceedings of the 2nd Workshop on Multilingual Surface Realisation (MSR 2019), pages 1-17, Hong Kong, China, November. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "The third multilingual surface realisation shared task (SR'20): Overview and evaluation results", |
| "authors": [ |
| { |
| "first": "Simon", |
| "middle": [], |
| "last": "Mille", |
| "suffix": "" |
| }, |
| { |
| "first": "Anya", |
| "middle": [], |
| "last": "Belz", |
| "suffix": "" |
| }, |
| { |
| "first": "Bernd", |
| "middle": [], |
| "last": "Bohnet", |
| "suffix": "" |
| }, |
| { |
| "first": "Thiago", |
| "middle": [], |
| "last": "Castro Ferreira", |
| "suffix": "" |
| }, |
| { |
| "first": "Yvette", |
| "middle": [], |
| "last": "Graham", |
| "suffix": "" |
| }, |
| { |
| "first": "Leo", |
| "middle": [], |
| "last": "Wanner", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the 3rd Workshop on Multilingual Surface Realisation (MSR 2020)",
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Simon Mille, Anya Belz, Bernd Bohnet, Thiago Castro Ferreira, Yvette Graham, and Leo Wanner. 2020. The third multilingual surface realisation shared task (SR'20): Overview and evaluation results. In Proceedings of the 3rd Workshop on Multilingual Surface Realisation (MSR 2020), Dublin, Ireland, December. Association for Computational Linguistics.",
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Bleu: a method for automatic evaluation of machine translation", |
| "authors": [ |
| { |
| "first": "Kishore", |
| "middle": [], |
| "last": "Papineni", |
| "suffix": "" |
| }, |
| { |
| "first": "Salim", |
| "middle": [], |
| "last": "Roukos", |
| "suffix": "" |
| }, |
| { |
| "first": "Todd", |
| "middle": [], |
| "last": "Ward", |
| "suffix": "" |
| }, |
| { |
| "first": "Wei-Jing", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "311--318", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evalua- tion of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Deep contextualized word representations", |
| "authors": [ |
| { |
| "first": "Matthew", |
| "middle": [], |
| "last": "Peters", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Neumann", |
| "suffix": "" |
| }, |
| { |
| "first": "Mohit", |
| "middle": [], |
| "last": "Iyyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Matt", |
| "middle": [], |
| "last": "Gardner", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenton", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "1", |
| "issue": "", |
| "pages": "2227--2237", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettle- moyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana, June. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Language models are unsupervised multitask learners", |
| "authors": [ |
| { |
| "first": "Alec", |
| "middle": [], |
| "last": "Radford", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeff", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Rewon", |
| "middle": [], |
| "last": "Child", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Luan", |
| "suffix": "" |
| }, |
| { |
| "first": "Dario", |
| "middle": [], |
| "last": "Amodei", |
| "suffix": "" |
| }, |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "NILC-SWORNEMO at the surface realization shared task: Exploring syntax-based word ordering using neural models", |
| "authors": [ |
| { |
| "first": "Marco", |
| "middle": [ |
| "Antonio" |
| ], |
| "last": "Sobrevilla Cabezudo", |
| "suffix": "" |
| }, |
| { |
| "first": "Thiago", |
| "middle": [], |
| "last": "Pardo", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the First Workshop on Multilingual Surface Realisation", |
| "volume": "", |
| "issue": "", |
| "pages": "58--64", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marco Antonio Sobrevilla Cabezudo and Thiago Pardo. 2018. NILC-SWORNEMO at the surface realization shared task: Exploring syntax-based word ordering using neural models. In Proceedings of the First Workshop on Multilingual Surface Realisation, pages 58-64, Melbourne, Australia, July. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Huggingface's transformers: State-of-theart natural language processing", |
| "authors": [ |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Wolf", |
| "suffix": "" |
| }, |
| { |
| "first": "Lysandre", |
| "middle": [], |
| "last": "Debut", |
| "suffix": "" |
| }, |
| { |
| "first": "Victor", |
| "middle": [], |
| "last": "Sanh", |
| "suffix": "" |
| }, |
| { |
| "first": "Julien", |
| "middle": [], |
| "last": "Chaumond", |
| "suffix": "" |
| }, |
| { |
| "first": "Clement", |
| "middle": [], |
| "last": "Delangue", |
| "suffix": "" |
| }, |
| { |
| "first": "Anthony", |
| "middle": [], |
| "last": "Moi", |
| "suffix": "" |
| }, |
| { |
| "first": "Pierric", |
| "middle": [], |
| "last": "Cistac", |
| "suffix": "" |
| }, |
| { |
| "first": "Tim", |
| "middle": [], |
| "last": "Rault", |
| "suffix": "" |
| }, |
| { |
| "first": "R'emi", |
| "middle": [], |
| "last": "Louf", |
| "suffix": "" |
| }, |
| { |
| "first": "Morgan", |
| "middle": [], |
| "last": "Funtowicz", |
| "suffix": "" |
| }, |
| { |
| "first": "Jamie", |
| "middle": [], |
| "last": "Brew", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "ArXiv", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R'emi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface's transformers: State-of-the- art natural language processing. ArXiv, abs/1910.03771.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Bertscore: Evaluating text generation with bert", |
| "authors": [ |
| { |
| "first": "Tianyi", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Varsha", |
| "middle": [], |
| "last": "Kishore", |
| "suffix": "" |
| }, |
| { |
| "first": "Felix", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Kilian", |
| "middle": [ |
| "Q" |
| ], |
| "last": "Weinberger", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Artzi", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "International Conference on Learning Representations", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tianyi Zhang, Varsha Kishore*, Felix Wu*, Kilian Q. Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with bert. In International Conference on Learning Representations.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "type_str": "figure", |
| "text": "Deep track example (\"Two of them were being run by 2 officials of the Ministry of the Interior!\") in CoNLL format.", |
| "uris": null |
| }, |
| "FIGREF1": { |
| "num": null, |
| "type_str": "figure", |
| "text": "Representation of the example in graphic format.", |
| "uris": null |
| }, |
| "FIGREF2": { |
| "num": null, |
| "type_str": "figure", |
| "text": "2", |
| "uris": null |
| }, |
| "FIGREF3": { |
| "num": null, |
| "type_str": "figure", |
| "text": "shows three representations for the example inFigure 1: (A) PENMAN notation, (B) PEN-MAN notation without morphological relations, and (C) PENMAN notation without morphological relations and parentheses. It is interesting to add that the parentheses in PENMAN notation provide information about the graph structure of the input.", |
| "uris": null |
| }, |
| "FIGREF4": { |
| "num": null, |
| "type_str": "figure", |
| "text": "Sequential UD structures for the sentence \"Two of them were being run by 2 officials of the Ministry of the Interior!\". (A) PENMAN notation, (B) PENMAN notation without morphological relations, and (C) PENMAN notation without morphological relations and parentheses.", |
| "uris": null |
| }, |
| "TABREF2": { |
| "content": "<table><tr><td/><td/><td colspan=\"2\">Meaning Similarity</td><td/><td/><td/><td/><td/><td>Readability</td></tr><tr><td>Test set</td><td>Team</td><td colspan=\"3\">Sub-track Avg. Avg. z</td><td>n</td><td>N</td><td>Test set</td><td>Team</td><td colspan=\"2\">Sub-track Avg. Avg. z</td><td>n</td><td>N</td></tr><tr><td/><td>IMS</td><td>20b</td><td colspan=\"4\">85.1 0.272 1,667 1,927</td><td/><td colspan=\"2\">Concordia 20a</td><td>71.8 0.321 806 908</td></tr><tr><td/><td>IMS</td><td>20a</td><td colspan=\"4\">84.7 0.259 1,701 1,942</td><td/><td>Ours</td><td>20a</td><td>68.6 0.185 823 947</td></tr><tr><td>English (EWT)</td><td colspan=\"2\">Concordia 20a</td><td colspan=\"4\">84.7 0.245 1,675 1,897</td><td>English (EWT)</td><td>IMS</td><td>20b</td><td>67.3 0.159 807 936</td></tr><tr><td/><td>IMS</td><td>19</td><td colspan=\"4\">82.7 0.201 1,692 1,920</td><td/><td>IMS</td><td>20a</td><td>65.8 0.109 753 866</td></tr><tr><td/><td>Ours</td><td>20a</td><td colspan=\"4\">75.6 -0.079 1,657 1,892</td><td/><td>IMS</td><td>19</td><td>63.6 0.027 808 923</td></tr><tr><td/><td>IMS</td><td>20b</td><td colspan=\"2\">87.3 0.157</td><td colspan=\"2\">700 1,016</td><td/><td colspan=\"2\">Concordia 20a</td><td>80.6</td><td>0.37</td><td>952 1,283</td></tr><tr><td/><td>IMS</td><td>20a</td><td colspan=\"2\">85.6 0.057</td><td colspan=\"2\">755 1,078</td><td/><td>Ours</td><td>20a</td><td>75.4 0.213 930 1,273</td></tr><tr><td>English (Wiki)</td><td colspan=\"2\">IMS Concordia 20a 19</td><td colspan=\"2\">85.5 0.025 84.7 -0.029</td><td colspan=\"2\">698 1,023 715 1,036</td><td>English (Wiki)</td><td>IMS IMS</td><td>20b 20a</td><td>70.2 0.055 932 1,256 69 -0.03 963 1,284</td></tr><tr><td/><td>RALI</td><td>19</td><td>76</td><td>-0.463</td><td colspan=\"2\">720 1,044</td><td/><td>IMS</td><td>19</td><td>67.3 -0.095 932 1,233</td></tr><tr><td/><td>Ours</td><td>20a</td><td colspan=\"2\">76.6 -0.491</td><td colspan=\"2\">721 1,088</td><td/><td>RALI</td><td>19</td><td>56.1 -0.562 940 
1,329</td></tr></table>", |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "text": "Results of our system on the test set." |
| }, |
| "TABREF3": { |
| "content": "<table/>", |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "text": "Results of the human evaluation for the track 2. Meaning Similarity and Readability are computed. Avg. is the average 0-100% received by systems. Avg. z is the corresponding average standardised scores. \"n\" is the total number of distinct test sentences assessed, and N is the total number of human judgments. The results are sorted by avg. z. and horizontal lines indicate clusters, such that systems in a cluster all significantly outperform all the systems in lower ranked clusters." |
| } |
| } |
| } |
| } |