{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T06:06:34.904269Z"
},
"title": "The UPC RDF-to-Text System at WebNLG Challenge 2020",
"authors": [
{
"first": "David",
"middle": [],
"last": "Berg\u00e9s",
"suffix": "",
"affiliation": {},
"email": "david.berges@estudiantat.upc.edu"
},
{
"first": "Roser",
"middle": [],
"last": "Cantenys",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Roger",
"middle": [],
"last": "Creus",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Domingo",
"suffix": "",
"affiliation": {},
"email": "oriol.domingo.roig@estudiantat.upc.edu"
},
{
"first": "Jos\u00e9",
"middle": [
"A R"
],
"last": "Fonollosa",
"suffix": "",
"affiliation": {},
"email": "jose.fonollosa@upc.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This work describes the end-to-end system architecture presented at WebNLG Challenge 2020. The system follows the traditional Machine Translation (MT) pipeline, based on the Transformer model, applied in most text-totext problems. Our solution is enriched by means of a Back Translation step over the original corpus. Thus, the system directly relies on lexicalise format since the synthetic data limits the use of delexicalisation.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "This work describes the end-to-end system architecture presented at WebNLG Challenge 2020. The system follows the traditional Machine Translation (MT) pipeline, based on the Transformer model, applied in most text-totext problems. Our solution is enriched by means of a Back Translation step over the original corpus. Thus, the system directly relies on lexicalise format since the synthetic data limits the use of delexicalisation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Natural Language Generation (NLG) can be divided into: text-to-text generation or data-to-text generation, according to Gatt and Krahmer (2017) . The WebNLG Challenge 2020 consists in mapping data-to-text. More specifically, the data is a set of Resource Description Framework (RDF) triples extracted from DBpedia and the corresponding text is a verbalisation of these triples 1 .",
"cite_spans": [
{
"start": 120,
"end": 143,
"text": "Gatt and Krahmer (2017)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The information structure is based on RDF, which consist of three elements: subject, predicate, object . Thus, it establishes relations (predicate) between entities (subject, object). This can be appreciated in Figure 1 . However, this information structure is not easy readable neither understandable, hence, it is hard for people to comprehend the meaning of such data.",
"cite_spans": [],
"ref_spans": [
{
"start": 211,
"end": 219,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There has been a lot of previous work in the NLG domain (Reiter and Dale, 2000) in the past two decades. Bontcheva et al. (2004) work in the medical domain, where they use a traditional NLG approach to generate sentences from RDF data filtering repetitive RDF, and then group coherent triples aggregating the generated sentences in order to produce the final ones. (Cimiano et al., 2013) generate cooking recipes from semantic web 1 https://webnlg-challenge.loria.fr/ challenge_2020/ data, using a large corpus to extract lexicon in the cooking domain, which is then used in conjunction with a traditional NLG approach to generate cooking receipts. (Duma and Klein, 2013) use a method which works well on RDF triples in a seen domain but fails with unseen ones. Their aim is to learn sentence templates from parallel RDF data and text corpora by means of aligning entities in RDF triples with entities mentioned in sentences, and then extracting these templates from the aligned sentences by replacing the entity men-tion with a unique token.",
"cite_spans": [
{
"start": 56,
"end": 79,
"text": "(Reiter and Dale, 2000)",
"ref_id": "BIBREF7"
},
{
"start": 105,
"end": 128,
"text": "Bontcheva et al. (2004)",
"ref_id": "BIBREF0"
},
{
"start": 365,
"end": 387,
"text": "(Cimiano et al., 2013)",
"ref_id": "BIBREF2"
},
{
"start": 649,
"end": 671,
"text": "(Duma and Klein, 2013)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We decided to only participate in the English version of the RDF-to-Text challenge (Castro Ferreira et al., 2020) . We used a model based on the Transformer encoded-decoder architecture (Vaswani et al., 2017) . Moreover, inspired by previous work in the MT field, we enlarged the original corpus by means of Back Translation (BT) (Sennrich et al., 2016) .",
"cite_spans": [
{
"start": 83,
"end": 113,
"text": "(Castro Ferreira et al., 2020)",
"ref_id": "BIBREF1"
},
{
"start": 186,
"end": 208,
"text": "(Vaswani et al., 2017)",
"ref_id": null
},
{
"start": 330,
"end": 353,
"text": "(Sennrich et al., 2016)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of the document is organised as follows. First, in Section 2 we take a deeper dive into the task formulation. Next, in Section 3 the preprocessing plan is explained. Then, in Section 4 we depict the Transformer model architecture adapted to our problem. Thereafter, we briefly describe postprocessing in Section 5. Finally, in Section 6 we summarize the implementation of BT over the original challenge followed by brief results and conclusions in Sections 7 and 8 respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The goal of the RDF-to-Text task is to generate text from a set of triples, which are words establishing relations between them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Formulation",
"sec_num": "2"
},
{
"text": "The input to our system is data in the form of triples that can be denoted as a set of RDF, i.e. K := {r 1 , ..., r n }. Each RDF r i can be defined as s i , p i , o i , these elements stand for subject, pred- icate and object, respectively. Notice that each element can contain more than one word, there is no prior restriction in that sense. For instance, the subject 'Barack Obama' would be encoded as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Formulation",
"sec_num": "2"
},
{
"text": "s i = [Barack, Obama] = s ij = [s i1 , s i2 ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Formulation",
"sec_num": "2"
},
{
"text": ", so i indicates the RDF in K and index j denotes the word position in each subject, predicate or object element.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Formulation",
"sec_num": "2"
},
{
"text": "Finally, we aim to generate a discourse S, which consists of a sequence of words [w 1 , ..., w m ]. The resulting discourse in S should be grammatically correct and should also contain all the information present in the triples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Formulation",
"sec_num": "2"
},
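{
"text": "As an illustrative sketch (not part of the original formulation), the input and output of the task can be written down in Python; the triples and the verbalisation below are hypothetical examples, and the names K and S follow the notation above.\n\n# Illustrative sketch of the task's input/output structures.\n# K is a set of RDF triples (subject, predicate, object); each\n# element may span more than one word.\nK = [\n    ('Barack Obama', 'birthPlace', 'Honolulu'),\n    ('Honolulu', 'country', 'United States'),\n]\n\n# The target is a discourse S, i.e. a sequence of words\n# [w_1, ..., w_m] that verbalises every triple in K.\nS = 'Barack Obama was born in Honolulu , a city in the United States .'.split()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Formulation",
"sec_num": "2"
},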
{
"text": "In this section we describe the first steps performed on data. There is a common step, delexicalisation, that has been performed since the very beginning of this challenge, back to 2017 2 . We decided to avoid this step due to the implementation of our BT method that does not contain the required mapping: from individual entities to generic words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "3"
},
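{
"text": "For reference, the following minimal sketch illustrates what the skipped delexicalisation step would look like; the entity-to-placeholder mapping and the placeholder names are hypothetical, not the challenge's official scheme.\n\n# Hypothetical delexicalisation: concrete entities are replaced by\n# generic placeholder tokens so that a model can learn\n# entity-independent patterns.\nmapping = {'Barack Obama': 'AGENT-1', 'Honolulu': 'PATIENT-1'}\n\ndef delexicalise(text, mapping):\n    for entity, placeholder in mapping.items():\n        text = text.replace(entity, placeholder)\n    return text\n\nprint(delexicalise('Barack Obama was born in Honolulu .', mapping))\n# -> AGENT-1 was born in PATIENT-1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "3"
},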
{
"text": "The very first data processing guide is defined as follows, and it is exemplified in Table 1 . First of all, we linearise the RDF input and split the camel-Case notation. Then, Moses Tokenizer (Koehn et al., 2007) is applied to separate punctuation from words, preserving special tokens such as dates, and normalize characters. Finally, Byte Pair Encoding (BPE) (Sennrich et al., 2015) is applied to enable the model to be more robust to unseen data. This is a traditional technique that increases the translation quality of models. BPE is learned with the training plus validation procedure and is used for the source and target vocabulary. This way, the model is trained for both receiving and predicting BPE encoded vocabulary, also for the test set. Finally,",
"cite_spans": [
{
"start": 193,
"end": 213,
"text": "(Koehn et al., 2007)",
"ref_id": "BIBREF6"
},
{
"start": 362,
"end": 385,
"text": "(Sennrich et al., 2015)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 85,
"end": 92,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "3"
},
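{
"text": "The linearisation and camelCase-splitting steps can be sketched as follows; this is our assumption of a minimal implementation, with Moses tokenisation and BPE applied afterwards (e.g. via the sacremoses and subword-nmt packages, not shown here).\n\nimport re\n\ndef split_camel_case(token):\n    # Insert a space before every inner capital: 'nativeName' -> 'native Name'.\n    return re.sub(r'(?<=[a-z])(?=[A-Z])', ' ', token)\n\ndef linearise(triples):\n    # Flatten every (subject, predicate, object) triple into one\n    # source sequence, splitting camelCase predicates.\n    parts = []\n    for s, p, o in triples:\n        parts.extend([s, split_camel_case(p), o])\n    return ' '.join(parts)\n\n# Simplified version of the Table 1 instance (diacritics omitted).\ntriples = [('Baku Turkish Martyrs Memorial', 'nativeName', 'Turk Sehitleri Aniti')]\nprint(linearise(triples))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "3"
},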
{
"text": "The Transformer (Vaswani et al., 2017) is considered a state-of-the-art encoder-decoder architecture, with great success in a vast field of applications such as MT. Following this success and taking advantage of its simpler architecture, we proposed a simple transformer approach trained in an end-toend fashion. One of the main traits that enables these models to attain such surprising results, is the attention mechanism, that allows to model dependencies regardless their distance in the input or output sequences. This capability is a fundamental feature for RDF-to-Text, as automated generation of text takes into account the relationship between words that may not appear consecutively.",
"cite_spans": [
{
"start": 16,
"end": 38,
"text": "(Vaswani et al., 2017)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Transformer Model",
"sec_num": "4"
},
{
"text": "For the model's architecture, we used a total of 3 layers with 1,024-dimensional Feed Forward Networks (FFN) and 8 attention heads, performing cross + self attention at each layer. We used 256dimensional embeddings with fixed sinusoidal positional encodings, shared across the entire network.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Parameters and Optimization",
"sec_num": "4.1"
},
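{
"text": "These hyper-parameters correspond, for instance, to the following PyTorch configuration; this is an illustrative sketch rather than the exact training code, and the embedding and positional-encoding layers would be defined separately.\n\nimport torch.nn as nn\n\n# Sketch matching the reported hyper-parameters: 3 encoder and\n# 3 decoder layers, 8 attention heads, 1,024-dimensional FFNs and\n# a 256-dimensional model (embedding) size.\nmodel = nn.Transformer(\n    d_model=256,\n    nhead=8,\n    num_encoder_layers=3,\n    num_decoder_layers=3,\n    dim_feedforward=1024,\n)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Parameters and Optimization",
"sec_num": "4.1"
},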
{
"text": "We used the Adam optimizer with b1 = 0.9, b2 = 0.98 and = 10 \u2212 9. We increased the learning rate linearly for a total of 4,000 warming steps to 1e-03, and decreased it following an inverse square root formula from thereon. Additionally, we applied several dropout techniques such as dropout, gradient clipping and label smoothing for our loss formula.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Parameters and Optimization",
"sec_num": "4.1"
},
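{
"text": "Our reading of this schedule, a linear warm-up to 1e-03 over 4,000 steps followed by inverse square-root decay, can be sketched as follows.\n\nimport math\n\nPEAK_LR = 1e-3\nWARMUP_STEPS = 4000\n\ndef learning_rate(step):\n    # Linear warm-up to PEAK_LR, then inverse square-root decay;\n    # the two branches meet exactly at step == WARMUP_STEPS.\n    step = max(step, 1)\n    if step < WARMUP_STEPS:\n        return PEAK_LR * step / WARMUP_STEPS\n    return PEAK_LR * math.sqrt(WARMUP_STEPS / step)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Parameters and Optimization",
"sec_num": "4.1"
},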
{
"text": "With this model configuration, the performed experiments concluded that the best choice was to use 7,000 subwords for the BPE encoding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Parameters and Optimization",
"sec_num": "4.1"
},
{
"text": "Baku Turkish Martyrs ' Memorial, nativeName, \" T\u00fcrk \u015e ehitleri An\u0131t\u0131 \" Baku Turkish Martyrs ' Memorial, location, Azerbaijan Linearise Baku Turkish Martyrs ' Memorial nativeName \" T\u00fcrk \u015e ehitleri An\u0131t\u0131 \" Baku Turkish Martyrs ' Memorial location Azerbaijan camelCase Removal Baku Turkish Martyrs ' Memorial native Name \" T\u00fcrk \u015e ehitleri An\u0131t\u0131 \" Baku Turkish Martyrs ' Memorial location Azerbaijan BPE & Tokenization Baku Turkish Mart@@ yrs ' Memorial native Name \" T@@\u00fcrk \u015e @@ eh@@ it@@ l@@ er@@ i An@@ \u0131@@ t@@ \u0131 \" Baku Turkish Mart@@ yrs ' Memorial location Azerbaijan Transformer",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RDF Input",
"sec_num": null
},
{
"text": "The Baku Turkish Mart@@ yrs ' Memorial is located in Azerbaijan . The native name of the Baku Turkish Mart@@ yrs ' \" T@@ urk \u015e @@ eh@@ it@@ l@@ er@@ i An@@ \u0131@@ t@@ \u0131 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RDF Input",
"sec_num": null
},
{
"text": "The Baku Turkish Martyrs ' Memorial is located in Azerbaijan . The native name of the Baku Turkish Martyrs ' Memorial is Turk Sehitleri An\u0131t\u0131 . Table 1 : Exemplification of each step in the system architecture using a test instance.",
"cite_spans": [],
"ref_spans": [
{
"start": 144,
"end": 151,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "System Output",
"sec_num": null
},
{
"text": "The Transformer model outputs a sequence of predicted words, then, the system removes the tokenization as well as BPE. One example of the output of the system is shown in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 171,
"end": 178,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Postprocessing",
"sec_num": "5"
},
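{
"text": "A minimal sketch of this postprocessing, assuming the standard '@@ ' BPE continuation marker and the sacremoses Moses detokeniser:\n\nfrom sacremoses import MosesDetokenizer\n\ndef postprocess(output):\n    # Undo BPE by joining subword pieces marked with '@@ ', then\n    # reverse the Moses tokenisation.\n    text = output.replace('@@ ', '')\n    return MosesDetokenizer(lang='en').detokenize(text.split())\n\nprint(postprocess('The Baku Turkish Mart@@ yrs Memorial is located in Azerbaijan .'))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Postprocessing",
"sec_num": "5"
},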
{
"text": "BT runs in a semi-supervised environment where both parallel corpora and monolingual data in the target language are available (Sennrich et al., 2015) . First, BT trains an intermediate system on the parallel data which is used to translate the target monolingual data into the source language, i.e. text-to-RDF. The latter, results in a parallel corpus where the source is synthetic MT output while the target is genuine text written by humans. Afterwards, the generated synthetic parallel corpus is added to the real bitext in order to train a final model that will translate from the source to the target language, equivalently RDF-to-text. The parallel dataset was already provided by the challenge and contains the translation from RDFto-text and vice-versa. Hence, we just needed an external monolingual corpus of the target language to perform augmentation of the source data. In order to do so, we implemented a distance-based approach to the training data since entities appearing in the corpus were annotated. Taking this into account, not only was the team capable of scrapping Wikipedia pages of most similar entities to the ones in the original corpus, but we were also able to limit scrapping to the first three paragraph in each page without loss of quality. In order to determine the most similar entities, an embedding distance was computed regarding Wikipedia2Vec (Yamada et al., 2020 ) that allows to query for entities rather than words.",
"cite_spans": [
{
"start": 127,
"end": 150,
"text": "(Sennrich et al., 2015)",
"ref_id": "BIBREF8"
},
{
"start": 1382,
"end": 1402,
"text": "(Yamada et al., 2020",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Back Translation",
"sec_num": "6"
},
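{
"text": "The entity-similarity query can be sketched with the Wikipedia2Vec toolkit as follows; the pretrained model file and the query entity are placeholders, not the exact resources used in our experiments.\n\nfrom wikipedia2vec import Wikipedia2Vec\n\n# Load a pretrained model (placeholder file name) and retrieve the\n# entities closest to a corpus entity in embedding space.\nmodel = Wikipedia2Vec.load('enwiki_20180420_300d.pkl')\nentity = model.get_entity('Barack Obama')\nfor item, score in model.most_similar(entity, 10):\n    print(item, score)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Back Translation",
"sec_num": "6"
},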
{
"text": "The current approach to solve the Back-Translation, text-to-RDF, implies using parsing trees that guarantee that elements in the RDF appear in the text. Consequently, this implementation generated coherent data with respect to text. The final dataset, which integrated the real corpus and the synthetic one, has around 340,000 instances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Back Translation",
"sec_num": "6"
},
{
"text": "A detailed description of this experiment can be found in (Domingo et al., 2020) .",
"cite_spans": [
{
"start": 58,
"end": 80,
"text": "(Domingo et al., 2020)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Back Translation",
"sec_num": "6"
},
{
"text": "In Table 2 , we show the results obtained in the test set for the competition. One remarkable aspect is that there is not a significant difference between the performance in the seen and unseen domain regarding the METEOR, TER and chrF++ metric. On the other hand, there exists a performance drop based on BLEU score in the unseen data with respect to the seen one. ",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "7"
},
{
"text": "An interesting capability that was not implemented would have been to obtain delexicalised synthetic data, so the model learned more generic representations. Furthermore, it would be interesting to enlarge the synthetic corpus and use this synthetic corpus to train more relevant models in the RDFto-Text domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Open discussion",
"sec_num": "8"
}
],
"back_matter": [
{
"text": "We want to thank anonymous reviewers for their comments on the paper. This work was supported by the project ADAVOICE, PID2019-107579RB-I00 / AEI / 10.13039/501100011033.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgment",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Automatic report generation from ontologies: The miakt approach",
"authors": [
{
"first": "Kalina",
"middle": [],
"last": "Bontcheva",
"suffix": ""
},
{
"first": "Yorick",
"middle": [],
"last": "Wilks",
"suffix": ""
}
],
"year": 2004,
"venue": "Natural Language Processing and Information Systems",
"volume": "",
"issue": "",
"pages": "324--335",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kalina Bontcheva and Yorick Wilks. 2004. Automatic report generation from ontologies: The miakt ap- proach. In Natural Language Processing and Infor- mation Systems, pages 324-335, Berlin, Heidelberg. Springer Berlin Heidelberg.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Diego Moussalem, and Anastasia Shimorina. 2020. The 2020 bilingual, bi-directional webnlg+ shared task overview and evaluation results (webnlg+ 2020)",
"authors": [
{
"first": "Claire",
"middle": [],
"last": "Thiago Castro Ferreira",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Gardent",
"suffix": ""
},
{
"first": "Nikolai",
"middle": [],
"last": "Van Der Lee",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Ilinykh",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mille",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 3rd WebNLG Workshop on Natural Language Generation from the Semantic Web",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thiago Castro Ferreira, Claire Gardent, Chris van der Lee, Nikolai Ilinykh, Simon Mille, Diego Mous- salem, and Anastasia Shimorina. 2020. The 2020 bilingual, bi-directional webnlg+ shared task overview and evaluation results (webnlg+ 2020). In Proceedings of the 3rd WebNLG Workshop on Nat- ural Language Generation from the Semantic Web (WebNLG+ 2020), Dublin, Ireland (Virtual). Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Exploiting ontology lexica for generating natural language texts from RDF data",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Cimiano",
"suffix": ""
},
{
"first": "Janna",
"middle": [],
"last": "L\u00fcker",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Nagel",
"suffix": ""
},
{
"first": "Christina",
"middle": [],
"last": "Unger",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 14th European Workshop on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "10--19",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Cimiano, Janna L\u00fcker, David Nagel, and Christina Unger. 2013. Exploiting ontology lex- ica for generating natural language texts from RDF data. In Proceedings of the 14th European Work- shop on Natural Language Generation, pages 10- 19, Sofia, Bulgaria. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Enhancing sequence-to-sequence modelling for RDF triples to natural text",
"authors": [
{
"first": "Oriol",
"middle": [],
"last": "Domingo",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Berg\u00e9s",
"suffix": ""
},
{
"first": "Roser",
"middle": [],
"last": "Cantenys",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Creus",
"suffix": ""
},
{
"first": "Jos\u00e9",
"middle": [
"A R"
],
"last": "Fonollosa",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 3rd WebNLG Workshop on Natural Language Generation from the Semantic Web",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oriol Domingo, David Berg\u00e9s, Roser Cantenys, Roger Creus, and Jos\u00e9 A.R. Fonollosa. 2020. Enhancing sequence-to-sequence modelling for RDF triples to natural text. In Proceedings of the 3rd WebNLG Workshop on Natural Language Generation from the Semantic Web (WebNLG+ 2020), Dublin, Ireland (Virtual). \"Association for Computational Linguis- tics\".",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Generating natural language from linked data: Unsupervised template extraction",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Duma",
"suffix": ""
},
{
"first": "Ewan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 10th International Conference on Computational Semantics (IWCS 2013) -Long Papers",
"volume": "",
"issue": "",
"pages": "83--94",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Duma and Ewan Klein. 2013. Generating nat- ural language from linked data: Unsupervised tem- plate extraction. In Proceedings of the 10th Inter- national Conference on Computational Semantics (IWCS 2013) -Long Papers, pages 83-94, Potsdam, Germany. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Survey of the state of the art in natural language generation: Core tasks, applications and evaluation",
"authors": [
{
"first": "Albert",
"middle": [],
"last": "Gatt",
"suffix": ""
},
{
"first": "Emiel",
"middle": [],
"last": "Krahmer",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Albert Gatt and Emiel Krahmer. 2017. Survey of the state of the art in natural language generation: Core tasks, applications and evaluation. CoRR, abs/1703.09902.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Moses: Open source toolkit for statistical machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "Nicola",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "Brooke",
"middle": [],
"last": "Cowan",
"suffix": ""
},
{
"first": "Wade",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Moran",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Constantin",
"suffix": ""
},
{
"first": "Evan",
"middle": [],
"last": "Herbst",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions",
"volume": "",
"issue": "",
"pages": "177--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ond\u0159ej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the As- sociation for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Ses- sions, pages 177-180, Prague, Czech Republic. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Building Natural Language Generation Systems",
"authors": [
{
"first": "Ehud",
"middle": [],
"last": "Reiter",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Dale",
"suffix": ""
}
],
"year": 2000,
"venue": "Studies in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1017/CBO9780511519857"
]
},
"num": null,
"urls": [],
"raw_text": "Ehud Reiter and Robert Dale. 2000. Building Natural Language Generation Systems. Studies in Natural Language Processing. Cambridge University Press.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. CoRR, abs/1508.07909.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Improving neural machine translation models with monolingual data",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "86--96",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1009"
]
},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation mod- els with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 86-96, Berlin, Germany. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Wikipedia2vec: An efficient toolkit for learning and visualizing the embeddings of words and entities from wikipedia",
"authors": [
{
"first": "Ikuya",
"middle": [],
"last": "Yamada",
"suffix": ""
},
{
"first": "Akari",
"middle": [],
"last": "Asai",
"suffix": ""
},
{
"first": "Jin",
"middle": [],
"last": "Sakuma",
"suffix": ""
},
{
"first": "Hiroyuki",
"middle": [],
"last": "Shindo",
"suffix": ""
},
{
"first": "Hideaki",
"middle": [],
"last": "Takeda",
"suffix": ""
},
{
"first": "Yoshiyasu",
"middle": [],
"last": "Takefuji",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ikuya Yamada, Akari Asai, Jin Sakuma, Hiroyuki Shindo, Hideaki Takeda, Yoshiyasu Takefuji, and Yuji Matsumoto. 2020. Wikipedia2vec: An efficient toolkit for learning and visualizing the embeddings of words and entities from wikipedia. arXiv preprint 1812.06280v3.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Example of a knowledge graph (a) with its corresponding RDF triples (b) and its natural language description (c)."
},
"TABREF1": {
"content": "<table/>",
"text": "Performance of the system regarding different data partitions in the test set.",
"type_str": "table",
"num": null,
"html": null
}
}
}
}