{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:13:27.039688Z"
},
"title": "ADAPT at SR'20: How Preprocessing and Data Augmentation Help to Improve Surface Realization",
"authors": [
{
"first": "Henry",
"middle": [],
"last": "Elder",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "ADAPT Centre Dublin City University",
"location": {}
},
"email": "henry.elder@adaptcentre.ie"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we describe the ADAPT submission to the Surface Realization Shared Task 2020. We present a neural-based system trained on the English Web Treebank and an augmented dataset, automatically created from existing text corpora.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we describe the ADAPT submission to the Surface Realization Shared Task 2020. We present a neural-based system trained on the English Web Treebank and an augmented dataset, automatically created from existing text corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Surface realization is the final step of an NLG system (Reiter and Dale, 2000). The prior steps provide guidance on the content and structure of a sentence that is to be generated. The goal of this shared task is to generate sentences from structured data with high accuracy (Mille et al., 2020). Once this goal has been achieved, we believe neural-based surface realization systems could be incorporated into real-world NLG systems, such as task-oriented dialogue systems (Balakrishnan et al., 2019) and personalised marketing systems.1",
"cite_spans": [
{
"start": 55,
"end": 78,
"text": "(Reiter and Dale, 2000)",
"ref_id": "BIBREF17"
},
{
"start": 275,
"end": 295,
"text": "(Mille et al., 2020)",
"ref_id": "BIBREF15"
},
{
"start": 473,
"end": 500,
"text": "(Balakrishnan et al., 2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We made a submission to the Surface Realization Shared Task 2020, for the English language dataset: Universal Dependencies English Web Treebank (Silveira et al., 2014). We use a neural-based system: a sequence-to-sequence model trained on linearized trees. We submitted test outputs to both the open and closed tracks. For the open track, we trained the same system on a large augmented dataset.",
"cite_spans": [
{
"start": 144,
"end": 167,
"text": "(Silveira et al., 2014)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System Description",
"sec_num": "2"
},
{
"text": "Ablation analysis performed on our previous systems (Elder and Hokamp, 2018; Elder et al., 2020) showed that much of the performance comes from different preprocessing steps we apply to the original CoNLLU formatted data. Figure 1 contains a formatted example of a linearized tree that is used as the input sequence when training the model. The output sequence is the tokenized form of the original sentence. Below, we discuss the four key preprocessing features we use. More details can be found in the Python module used for preprocessing;2 for each feature, we point to the relevant lines of code in footnotes.",
"cite_spans": [
{
"start": 52,
"end": 76,
"text": "(Elder and Hokamp, 2018;",
"ref_id": "BIBREF4"
},
{
"start": 77,
"end": 96,
"text": "Elder et al., 2020)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 222,
"end": 230,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data Preprocessing",
"sec_num": "2.1"
},
{
"text": "Depth First Linearizations To get the input sequence from a tree, we perform a depth-first search of the tree.3 This provides us with a linear sequence of tokens. Where a parent token has multiple child tokens, we choose randomly between the children. To ensure our system is robust to the order of the linearization, we obtain multiple random linearizations of each sentence on which to train the system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preprocessing",
"sec_num": "2.1"
},
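The depth-first linearization with random sibling order can be sketched as follows. This is a minimal illustration under assumed field names (`id`, `lemma`, `head`), not the shared task's exact code or schema; the toy tree is invented for the example.

```python
import random

# Toy dependency tree for "From the AP comes this story:";
# head 0 marks the root, as in CoNLL-U.
tokens = [
    {"id": 1, "lemma": "come", "head": 0},
    {"id": 2, "lemma": "ap", "head": 1},
    {"id": 3, "lemma": "from", "head": 2},
    {"id": 4, "lemma": "the", "head": 2},
    {"id": 5, "lemma": "story", "head": 1},
    {"id": 6, "lemma": "this", "head": 5},
]

def linearize(tokens, rng=random):
    """Depth-first traversal from the root, picking randomly among children."""
    children = {}
    root = None
    for tok in tokens:
        if tok["head"] == 0:
            root = tok
        else:
            children.setdefault(tok["head"], []).append(tok)
    sequence = []
    def visit(tok):
        sequence.append(tok["lemma"])
        kids = list(children.get(tok["id"], []))
        rng.shuffle(kids)  # random order among siblings
        for kid in kids:
            visit(kid)
    visit(root)
    return sequence

# Multiple random linearizations of the same sentence are used for training.
linearizations = [linearize(tokens) for _ in range(3)]
```

Each call may produce a different sibling order, but every parent still precedes its descendants in the output sequence.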
{
"text": "1 For example, https://phrasee.co/ and https://www.persado.com/. This work is licensed under a Creative Commons Attribution 4.0 International License.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preprocessing",
"sec_num": "2.1"
},
{
"text": "License details: http://creativecommons.org/licenses/by/4.0/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preprocessing",
"sec_num": "2.1"
},
{
"text": "2 https://github.com/Henry-E/surface-realization-shallow-task/blob/master/modules/ create_source_and_target.py 3 https://github.com/Henry-E/surface-realization-shallow-task/blob/master/modules/ create_source_and_target.py\\#L12-L36",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preprocessing",
"sec_num": "2.1"
},
{
"text": "come\uffe8VBZ\uffe81\uffe80\uffe8root\uffe8_ _(\uffe8_\uffe8_\uffe8_\uffe8_\uffe8_ ap\uffe8NNP\uffe82\uffe81\uffe8obl\uffe8_ _(\uffe8_\uffe8_\uffe8_\uffe8_\uffe8_ from\uffe8IN\uffe83\uffe82\uffe8case\uffe8_ the\uffe8DT\uffe84\uffe82\uffe8det\uffe8_ )_\uffe8_\uffe8_\uffe8_\uffe8_\uffe8_ story\uffe8NN\uffe85\uffe81\uffe8nsubj\uffe8_ _(\uffe8_\uffe8_\uffe8_\uffe8_\uffe8_ this\uffe8DT\uffe86\uffe85\uffe8det\uffe8_ )_\uffe8_\uffe8_\uffe8_\uffe8_\uffe8_ :\uffe8:\uffe87\uffe81\uffe8punct\uffe8+1 )_\uffe8_\uffe8_\uffe8_\uffe8_\uffe8_ _form_suggestions_\uffe8_\uffe8_\uffe8_\uffe8_\uffe8_ comes\uffe8VBZ\uffe81\uffe8_\uffe8_\uffe8_ aps\uffe8NNP\uffe82\uffe8_\uffe8_\uffe8_",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preprocessing",
"sec_num": "2.1"
},
{
"text": "From the AP comes this story:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preprocessing",
"sec_num": "2.1"
},
{
"text": "Figure 1: A formatted example of the linear input sequence for the sentence From the AP comes this story:. Token level features appear in the order: Lemma, XPOS, ID, Head, DepRel, Lin.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preprocessing",
"sec_num": "2.1"
},
{
"text": "Scoping Brackets Similar to Konstas et al. (2017), we apply scoping brackets around child nodes. This provides further indication of the tree structure to the model, despite using a linear sequence as input.4",
"cite_spans": [
{
"start": 28,
"end": 49,
"text": "Konstas et al. (2017)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preprocessing",
"sec_num": "2.1"
},
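Scoping brackets can be added during the same depth-first traversal, as in this minimal sketch. The plain `(` and `)` tokens and the field names are illustrative; the actual system uses marked bracket tokens (cf. the `_(` and `)_` tokens visible in Figure 1).

```python
# Toy subtree with assumed fields; head 0 marks the root.
tokens = [
    {"id": 1, "lemma": "come", "head": 0},
    {"id": 2, "lemma": "story", "head": 1},
    {"id": 3, "lemma": "this", "head": 2},
]

def linearize_with_scopes(tokens):
    """Depth-first linearization that wraps each token's children in
    brackets, signalling the tree structure inside a flat sequence."""
    children = {}
    root = None
    for tok in tokens:
        if tok["head"] == 0:
            root = tok
        else:
            children.setdefault(tok["head"], []).append(tok)
    out = []
    def visit(tok):
        out.append(tok["lemma"])
        kids = children.get(tok["id"], [])
        if kids:
            out.append("(")
            for kid in kids:
                visit(kid)
            out.append(")")
    visit(root)
    return out

scoped = linearize_with_scopes(tokens)
```

For this toy tree the output is `come ( story ( this ) )`: the brackets let the model recover which tokens form a subtree even though the input is a flat sequence.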
{
"text": "We append a number of features to each token: XPOS, ID, Head, DepRel, Lin. This enables us to use factored sequence models (Sennrich and Haddow, 2016), which we discuss in Section 2.3. To use this modelling feature, a special pipe symbol, |, is required between each of the token's features.5",
"cite_spans": [
{
"start": 123,
"end": 150,
"text": "(Sennrich and Haddow, 2016)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Token Level Features",
"sec_num": null
},
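The pipe-delimited token format can be sketched as below. The helper name and feature values are invented for illustration; only the separator convention and the feature order (Lemma, XPOS, ID, Head, DepRel, Lin) come from the paper.

```python
def format_token(lemma, xpos, idx, head, deprel, lin="_"):
    """Join one token's features with the pipe separator used by
    factored sequence models; '_' stands in for an empty feature."""
    return "|".join(str(f) for f in (lemma, xpos, idx, head, deprel, lin))

# Two tokens of a source line, in the feature order used in Figure 1.
source_line = " ".join([
    format_token("come", "VBZ", 1, 0, "root"),
    format_token("story", "NN", 5, 1, "nsubj"),
])
```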
{
"text": "Form Suggestions Finally, we address the problem of generating a token's form when only given its lemma. To do this, we provide the model with form suggestions.6 Form suggestions are a list of possible forms that other lemmas with the same XPOS tag were observed to take. To obtain the form suggestions, we use the automatically parsed corpus, discussed in Section 2.2, to create a dictionary.7 The dictionary is structured as follows: each key is a concatenated lemma and XPOS tag, and the value is a list of possible forms observed in the automatically parsed corpus.8 For example: {\"VBN bootstrap\": \"bootstrapped\"}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Token Level Features",
"sec_num": null
},
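Building the form-suggestion dictionary can be sketched as follows. The `"XPOS lemma"` key format follows the paper's `{"VBN bootstrap": ...}` example; the triples and helper name are invented for illustration.

```python
from collections import defaultdict

# Toy (lemma, XPOS, form) triples as might be read off a parsed corpus.
observed = [
    ("bootstrap", "VBN", "bootstrapped"),
    ("come", "VBZ", "comes"),
    ("come", "VBZ", "comes"),  # duplicates collapse to one entry
    ("come", "VBD", "came"),
]

def build_form_dict(observed):
    """Map a concatenated 'XPOS lemma' key to the distinct forms seen."""
    forms = defaultdict(list)
    for lemma, xpos, form in observed:
        key = f"{xpos} {lemma}"
        if form not in forms[key]:
            forms[key].append(form)
    return dict(forms)

form_dict = build_form_dict(observed)
```

At generation time, looking up a token's lemma and XPOS tag yields the candidate forms to suggest to the model.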
{
"text": "To augment the existing training data we create a dataset by parsing sentences from publicly available corpora. The two corpora we investigated are Wikitext 103 (Merity et al., 2017) and the CNN stories portion of the DeepMind Q&A dataset (Hermann et al., 2015). Each corpus requires some cleaning and formatting, after which it can be sentence tokenized using CoreNLP. Sentences are filtered by length (minimum 5 tokens, maximum 50) and for vocabulary overlap with the original training data (at least 80% of the tokens in a sentence must appear in the original vocabulary). These sentences are then parsed using the StanfordNLP UD parser (Qi et al., 2018). This leaves us with 2.4 million parsed sentences from the CNN stories corpus and 2.1 million from Wikitext. To convert a parse tree into the shared task format, word order information is removed by shuffling the IDs of the parse tree, and tokens are lemmatised by removing the form column.",
"cite_spans": [
{
"start": 161,
"end": 182,
"text": "(Merity et al., 2017)",
"ref_id": "BIBREF13"
},
{
"start": 239,
"end": 261,
"text": "(Hermann et al., 2015)",
"ref_id": "BIBREF6"
},
{
"start": 640,
"end": 657,
"text": "(Qi et al., 2018)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Augmented Dataset",
"sec_num": "2.2"
},
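The filtering step can be sketched with the thresholds given above (5-50 tokens, at least 80% vocabulary overlap). The helper name and toy vocabulary are invented for illustration.

```python
def keep_sentence(tokens, train_vocab, min_len=5, max_len=50, min_overlap=0.8):
    """Length filter (5-50 tokens) plus vocabulary-overlap filter: at least
    80% of a sentence's tokens must appear in the training vocabulary."""
    if not min_len <= len(tokens) <= max_len:
        return False
    in_vocab = sum(1 for tok in tokens if tok in train_vocab)
    return in_vocab / len(tokens) >= min_overlap

# Toy vocabulary standing in for the EWT training vocabulary.
vocab = {"from", "the", "ap", "comes", "this", "story", "."}

keeps = keep_sentence(["from", "the", "ap", "comes", "this", "story"], vocab)
drops_short = keep_sentence(["comes", "this"], vocab)
drops_oov = keep_sentence(["zyxgh", "qwert", "plonk", "vexed", "crumb"], vocab)
```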
{
"text": "While it has been noted that the use of automatically created data is problematic in NLG tasks -WeatherGov (Liang et al., 2009) being the notable example -our data is created differently. The WeatherGov dataset is constructed by pairing a table with the output of a rule-based NLG system. This means any system trained on WeatherGov simply re-learns the rules used to generate the text. Our approach is the reverse: we parse an existing, naturally occurring sentence, and the model must therefore learn to reverse the parsing algorithm.",
"cite_spans": [
{
"start": 107,
"end": 127,
"text": "(Liang et al., 2009)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Augmented Dataset",
"sec_num": "2.2"
},
{
"text": "The system is trained using a custom fork 9 of the OpenNMT-py framework (Klein et al., 2017) ; the only change made was to the beam search decoding code. The model used is a bidirectional recurrent neural network (BRNN) (Schuster and Paliwal, 1997) with long short-term memory (LSTM) cells (Hochreiter and Schmidhuber, 1997) . We trained two systems: one with the EWT dataset 10 and one with both the EWT dataset and our augmented dataset 11. Hyperparameter details and replication instructions are available in our project's repository,12 in particular in the config directory. All hyperparameters stayed the same when training with the augmented dataset, except for vocabulary size and training time. Vocabulary size varies based on the datasets in use; it is determined by taking any token which appears 10 times or more. When training on the EWT dataset, the vocabulary size is 2,193 tokens; training runs for 38 epochs and takes about 1 hour on two Nvidia 1080 Ti GPUs. For the combined EWT, Wikitext, and CNN datasets, the vocabulary size is 89,233, training time increases to around 2 days, and training uses 60 random linearizations of the EWT dataset and 8 of the Wikitext and CNN datasets. The best performing checkpoint on the development set is chosen for testing.",
"cite_spans": [
{
"start": 72,
"end": 92,
"text": "(Klein et al., 2017)",
"ref_id": "BIBREF9"
},
{
"start": 220,
"end": 248,
"text": "(Schuster and Paliwal, 1997)",
"ref_id": "BIBREF18"
},
{
"start": 290,
"end": 324,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF7"
},
{
"start": 376,
"end": 378,
"text": "10",
"ref_id": null
},
{
"start": 439,
"end": 441,
"text": "11",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "2.3"
},
{
"text": "Our system uses three non-standard modelling features, each of which performs a key function for the task:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "2.3"
},
{
"text": "Copy Attention Copy attention (Vinyals et al., 2015; See et al., 2017) gives models the ability to copy a token directly from the source sequence to the generated text, even if that token does not appear in the source vocabulary. Vocabularies are usually limited based on available data or computational constraints, so it is likely that some words the model sees during testing were not added to the vocabulary during training.",
"cite_spans": [
{
"start": 30,
"end": 52,
"text": "(Vinyals et al., 2015;",
"ref_id": "BIBREF22"
},
{
"start": 53,
"end": 70,
"text": "See et al., 2017)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "2.3"
},
{
"text": "Factored Sequence Models Factored sequence models (Sennrich and Haddow, 2016) permit token level features to be used as part of training. The key idea is to create a separate embedding representation for each feature type, and to concatenate the embeddings to each token embedding to create a dense representation 13 .",
"cite_spans": [
{
"start": 50,
"end": 77,
"text": "(Sennrich and Haddow, 2016)",
"ref_id": "BIBREF20"
},
{
"start": 314,
"end": 316,
"text": "13",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "2.3"
},
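The factored-embedding idea can be sketched numerically: one embedding table per feature type, with the per-feature vectors concatenated into a single dense input per token. Embedding sizes, values, and helper names here are illustrative, not the system's actual configuration.

```python
import random

random.seed(0)  # deterministic toy vectors

def embedding_table(vocab, dim):
    """One randomly initialised embedding table per feature type."""
    return {entry: [random.uniform(-1, 1) for _ in range(dim)] for entry in vocab}

# Separate tables for each feature type (sizes are illustrative).
lemma_table = embedding_table(["come", "story", "this"], 4)
xpos_table = embedding_table(["VBZ", "NN", "DT"], 2)
deprel_table = embedding_table(["root", "nsubj", "det"], 2)

def embed_token(lemma, xpos, deprel):
    """Concatenate the per-feature embeddings into one dense input vector."""
    return lemma_table[lemma] + xpos_table[xpos] + deprel_table[deprel]

vector = embed_token("come", "VBZ", "root")
```

The resulting vector has the summed dimensionality of its parts (here 4 + 2 + 2 = 8), and each feature type's embeddings are trained jointly with the rest of the model.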
{
"text": "In an attempt to reduce unnecessary errors during decoding, our beam search restricts the available vocabulary at each step to tokens from the input sequence which have not yet appeared in the output sequence. This is similar to the approach used by King and White (2018).",
"cite_spans": [
{
"start": 250,
"end": 271,
"text": "King and White (2018)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Restricted Beam Search",
"sec_num": null
},
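A simplified sketch of this vocabulary restriction: at each decoding step, only input tokens not yet emitted remain available. The real system applies this as a mask inside OpenNMT-py's beam search scores rather than as a Python set; the function name and example tokens are illustrative.

```python
from collections import Counter

def allowed_next_tokens(input_tokens, output_so_far):
    """Tokens from the input sequence that have not yet appeared in the
    output (multiset semantics, so repeated input tokens may repeat)."""
    remaining = Counter(input_tokens) - Counter(output_so_far)
    return set(remaining)

source = ["from", "the", "ap", "comes", "this", "story"]

# After emitting "from the", only the remaining input tokens are allowed.
allowed = allowed_next_tokens(source, ["from", "the"])
```

Once every input token has been emitted, the allowed set is empty, which naturally terminates generation of surplus tokens.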
{
"text": "In this section we report our results on the shared task. An explanation of the evaluation methodology, as well as a comparison with other participants, can be found in the shared task description paper (Mille et al., 2020) . Table 2 contains human evaluation results for the readability metric. Rather surprisingly, the readability for our system with the augmented corpora is almost equivalent to the readability of the original human text. However, the readability metric only reflects how well written the annotators deemed a sentence to be. Readability scores don't take into account whether the generated sentence has managed to capture the meaning of the original sentence. Table 3 contains human evaluation results for the meaning similarity metric. This metric describes how successful the system has been at generating sentences with the same meaning as the original sentence. Sentences generated by the system trained on the augmented corpora are on average 92.6% similar in meaning to the original sentence. While this may seem like a strong result,14 ultimately we are aiming for 100% meaning similarity in order to have a system that is reliable enough to be used with real-world NLG systems.",
"cite_spans": [
{
"start": 203,
"end": 223,
"text": "(Mille et al., 2020)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 226,
"end": 233,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 681,
"end": 688,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3"
},
{
"text": "4 https://github.com/Henry-E/surface-realization-shallow-task/blob/master/modules/create_source_and_target.py\\#L23-L28 5 https://github.com/Henry-E/surface-realization-shallow-task/blob/master/modules/create_source_and_target.py\\#L79-L95 6 https://github.com/Henry-E/surface-realization-shallow-task/blob/master/modules/create_source_and_target.py\\#L101-L123 7 https://github.com/Henry-E/surface-realization-shallow-task/blob/master/modules/get_form_suggestions.py 8 Dictionary: https://github.com/Henry-E/surface-realization-shallow-task/blob/master/inflection_dicts/18th_october_tests/lemma_form_dict_sorted.json",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The highest recorded meaning similarity on the same test set in last year's shared task was 86.6% (Mille et al., 2019).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF3": {
"ref_id": "b3",
"title": "Constrained Decoding for Neural NLG from Compositional Representations in Task-Oriented Dialogue",
"authors": [
{
"first": "Anusha",
"middle": [],
"last": "Balakrishnan",
"suffix": ""
},
{
"first": "Jinfeng",
"middle": [],
"last": "Rao",
"suffix": ""
},
{
"first": "Kartikeya",
"middle": [],
"last": "Upasani",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "White",
"suffix": ""
},
{
"first": "Rajen",
"middle": [],
"last": "Subba",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "831--844",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anusha Balakrishnan, Jinfeng Rao, Kartikeya Upasani, Michael White, and Rajen Subba. 2019. Constrained Decoding for Neural NLG from Compositional Representations in Task-Oriented Dialogue. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 831-844, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Generating High-Quality Surface Realizations Using Data Augmentation and Factored Sequence Models",
"authors": [
{
"first": "Henry",
"middle": [],
"last": "Elder",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Hokamp",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Multilingual Surface Realisation",
"volume": "",
"issue": "",
"pages": "49--53",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Henry Elder and Chris Hokamp. 2018. Generating High-Quality Surface Realizations Using Data Augmentation and Factored Sequence Models. In Proceedings of the First Workshop on Multilingual Surface Realisation, pages 49-53, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Shape of Synth to Come: Why We Should Use Synthetic Data for English Surface Realization",
"authors": [
{
"first": "Henry",
"middle": [],
"last": "Elder",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Burke",
"suffix": ""
},
{
"first": "Alexander O'",
"middle": [],
"last": "Connor",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [],
"last": "Foster",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "7465--7471",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Henry Elder, Robert Burke, Alexander O'Connor, and Jennifer Foster. 2020. Shape of Synth to Come: Why We Should Use Synthetic Data for English Surface Realization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7465-7471, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Teaching Machines to Read and Comprehend",
"authors": [
{
"first": "Karl",
"middle": [],
"last": "Moritz Hermann",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Kocisky",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Lasse",
"middle": [],
"last": "Espeholt",
"suffix": ""
},
{
"first": "Will",
"middle": [],
"last": "Kay",
"suffix": ""
},
{
"first": "Mustafa",
"middle": [],
"last": "Suleyman",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
},
{
"first": "; C",
"middle": [],
"last": "Cortes",
"suffix": ""
},
{
"first": "N D",
"middle": [],
"last": "Lawrence",
"suffix": ""
},
{
"first": "D D",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Sugiyama",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Garnett",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in Neural Information Processing Systems",
"volume": "28",
"issue": "",
"pages": "1693--1701",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching Machines to Read and Comprehend. In C Cortes, N D Lawrence, D D Lee, M Sugiyama, and R Garnett, editors, Advances in Neural Information Processing Systems 28, pages 1693-1701. Curran Associates, Inc.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Long Short-Term Memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "Jurgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and Jurgen Schmidhuber. 1997. Long Short-Term Memory. Neural Computation, 9(8):1735- 1780.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The OSU Realizer for SRST '18: Neural Sequence-to-Sequence Inflection and Incremental Locality-Based Linearization",
"authors": [
{
"first": "David",
"middle": [],
"last": "King",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "White",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the First Workshop on Multilingual Surface Realisation",
"volume": "",
"issue": "",
"pages": "39--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David King and Michael White. 2018. The OSU Realizer for SRST '18: Neural Sequence-to-Sequence Inflection and Incremental Locality-Based Linearization. In Proceedings of the First Workshop on Multilingual Surface Realisation, number 2009, pages 39-48, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "OpenNMT: Open-Source Toolkit for Neural Machine Translation",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Yuntian",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Senellart",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ACL 2017, System Demonstrations",
"volume": "",
"issue": "",
"pages": "67--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander Rush. 2017. OpenNMT: Open-Source Toolkit for Neural Machine Translation. In Proceedings of ACL 2017, System Demonstrations, pages 67-72, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Neural AMR: Sequenceto-Sequence Models for Parsing and Generation",
"authors": [
{
"first": "Ioannis",
"middle": [],
"last": "Konstas",
"suffix": ""
},
{
"first": "Srinivasan",
"middle": [],
"last": "Iyer",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Yatskar",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "146--157",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ioannis Konstas, Srinivasan Iyer, Mark Yatskar, Yejin Choi, and Luke Zettlemoyer. 2017. Neural AMR: Sequence-to-Sequence Models for Parsing and Generation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 146-157, Stroudsburg, PA, USA, 4. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Learning Semantic Correspondences with Less Supervision",
"authors": [
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "",
"issue": "",
"pages": "91--99",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Percy Liang, Michael I. Jordan, and Dan Klein. 2009. Learning Semantic Correspondences with Less Supervision. Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, (August):91-99.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The {Stanford} {CoreNLP} Natural Language Processing Toolkit",
"authors": [
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Jenny",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Finkel",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Steven",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Bethard",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mcclosky",
"suffix": ""
}
],
"year": 2014,
"venue": "Association for Computational Linguistics (ACL) System Demonstrations",
"volume": "",
"issue": "",
"pages": "55--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J Bethard, and David McClosky. 2014. The {Stanford} {CoreNLP} Natural Language Processing Toolkit. In Association for Computational Linguistics (ACL) System Demonstrations, pages 55-60.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Pointer Sentinel Mixture Models",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Merity",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2017,
"venue": "5th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2017. Pointer Sentinel Mixture Models. In 5th International Conference on Learning Representations, {ICLR} 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The Second Multilingual Surface Realisation Shared Task (SR'19): Overview and Evaluation Results",
"authors": [
{
"first": "Simon",
"middle": [],
"last": "Mille",
"suffix": ""
},
{
"first": "Anja",
"middle": [],
"last": "Belz",
"suffix": ""
},
{
"first": "Bernd",
"middle": [],
"last": "Bohnet",
"suffix": ""
},
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Leo",
"middle": [],
"last": "Wanner",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2nd Workshop on Multilingual Surface Realisation (MSR 2019), number Msr",
"volume": "",
"issue": "",
"pages": "1--17",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simon Mille, Anja Belz, Bernd Bohnet, Yvette Graham, and Leo Wanner. 2019. The Second Multilingual Surface Realisation Shared Task (SR'19): Overview and Evaluation Results. In Proceedings of the 2nd Workshop on Multilingual Surface Realisation (MSR 2019), number Msr, pages 1-17, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The Third Multilingual Surface Realisation Shared Task ({SR}{'}20): Overview and Evaluation Results",
"authors": [
{
"first": "Simon",
"middle": [],
"last": "Mille",
"suffix": ""
},
{
"first": "Anya",
"middle": [],
"last": "Belz",
"suffix": ""
},
{
"first": "Bernd",
"middle": [],
"last": "Bohnet",
"suffix": ""
},
{
"first": "Thiago",
"middle": [],
"last": "Castro Ferreira",
"suffix": ""
},
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Leo",
"middle": [],
"last": "Wanner",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 3rd Workshop on Multilingual Surface Realisation (MSR 2020)",
"volume": "12",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simon Mille, Anya Belz, Bernd Bohnet, Thiago Castro Ferreira, Yvette Graham, and Leo Wanner. 2020. The Third Multilingual Surface Realisation Shared Task ({SR}{'}20): Overview and Evaluation Results. In Proceedings of the 3rd Workshop on Multilingual Surface Realisation (MSR 2020), Dublin, Ireland, 12. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Universal Dependency Parsing from Scratch",
"authors": [
{
"first": "Peng",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Dozat",
"suffix": ""
},
{
"first": "Yuhao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies",
"volume": "",
"issue": "",
"pages": "160--170",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peng Qi, Timothy Dozat, Yuhao Zhang, and Christopher D Manning. 2018. Universal Dependency Parsing from Scratch. In Proceedings of the (CoNLL) 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 160-170, Brussels, Belgium, 10. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Building Natural Language Generation Systems",
"authors": [
{
"first": "Ehud",
"middle": [],
"last": "Reiter",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Dale",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ehud Reiter and Robert Dale. 2000. Building Natural Language Generation Systems. Cambridge University Press, New York, NY, USA.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Bidirectional recurrent neural networks",
"authors": [
{
"first": "M",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "K",
"middle": [
"K"
],
"last": "Paliwal",
"suffix": ""
}
],
"year": 1997,
"venue": "IEEE Transactions on Signal Processing",
"volume": "45",
"issue": "11",
"pages": "2673--2681",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Schuster and K.K. Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673-2681.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Get To The Point: Summarization with Pointer-Generator Networks",
"authors": [
{
"first": "Abigail",
"middle": [],
"last": "See",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Liu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1073--1083",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get To The Point: Summarization with Pointer-Generator Networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073-1083, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Linguistic Input Features Improve Neural Machine Translation",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the First Conference on Machine Translation",
"volume": "1",
"issue": "",
"pages": "83--91",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich and Barry Haddow. 2016. Linguistic Input Features Improve Neural Machine Translation. In Proceedings of the First Conference on Machine Translation: Volume 1, Research Papers, pages 83-91, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A Gold Standard Dependency Corpus for {E}nglish",
"authors": [
{
"first": "Natalia",
"middle": [],
"last": "Silveira",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Dozat",
"suffix": ""
},
{
"first": "Marie-Catherine",
"middle": [],
"last": "De Marneffe",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
},
{
"first": "Miriam",
"middle": [],
"last": "Connor",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}'14)",
"volume": "",
"issue": "",
"pages": "2897--2904",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Natalia Silveira, Timothy Dozat, Marie-Catherine de Marneffe, Samuel Bowman, Miriam Connor, John Bauer, and Chris Manning. 2014. A Gold Standard Dependency Corpus for {E}nglish. In Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}'14), pages 2897-2904, Reykjavik, Iceland, 5. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Pointer Networks",
"authors": [
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Meire",
"middle": [],
"last": "Fortunato",
"suffix": ""
},
{
"first": "Navdeep",
"middle": [],
"last": "Jaitly",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in Neural Information Processing Systems",
"volume": "28",
"issue": "",
"pages": "2692--2700",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer Networks. In Advances in Neural Information Processing Systems 28, pages 2692-2700, 6.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"html": null,
"type_str": "table",
"num": null,
"content": "<table><tr><td>System</td><td colspan=\"2\">Ave. Ave. z</td><td>n</td><td>N</td></tr><tr><td colspan=\"5\">EWT + Augmented Corpora 75.7 0.426 797 913</td></tr><tr><td>HUMAN</td><td colspan=\"4\">75.7 0.417 669 1,402</td></tr><tr><td>EWT</td><td>72.5</td><td>0.32</td><td colspan=\"2\">830 953</td></tr></table>",
"text": "Table 1 contains automated evaluation metrics on the EWT test set. As in previous experiments (Elder et al., 2020), we find that the augmented dataset greatly improves the performance of our system."
},
"TABREF2": {
"html": null,
"type_str": "table",
"num": null,
"content": "<table><tr><td colspan=\"5\">: SR'20 Test set results -Human Evaluation: Readability</td></tr><tr><td>System</td><td colspan=\"2\">Ave. Ave. z</td><td>n</td><td>N</td></tr><tr><td colspan=\"2\">EWT + Augmented Corpora 92.6</td><td>0.54</td><td colspan=\"2\">1,698 1,931</td></tr><tr><td>EWT</td><td colspan=\"4\">90.7 0.476 1,685 1,914</td></tr></table>",
"text": ""
},
"TABREF3": {
"html": null,
"type_str": "table",
"num": null,
"content": "<table/>",
"text": ""
}
}
}
}