{
"paper_id": "N18-1020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:52:49.128337Z"
},
"title": "Zero-Shot Question Generation from Knowledge Graphs for Unseen Predicates and Entity Types",
"authors": [
{
"first": "Hady",
"middle": [],
"last": "Elsahar",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universit\u00e9 de Lyon, Laboratoire Hubert Curien, Saint-\u00c9tienne",
"location": {
"country": "France"
}
},
"email": ""
},
{
"first": "Christophe",
"middle": [],
"last": "Gravier",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universit\u00e9 de Lyon, Laboratoire Hubert Curien, Saint-\u00c9tienne",
"location": {
"country": "France"
}
},
"email": ""
},
{
"first": "Frederique",
"middle": [],
"last": "Laforest",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universit\u00e9 de Lyon, Laboratoire Hubert Curien, Saint-\u00c9tienne",
"location": {
"country": "France"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present a neural model for question generation from knowledge base triples in a \"Zero-Shot\" setup, that is, generating questions for triples containing predicates, subject types or object types that were not seen at training time. Our model leverages triple occurrences in a natural language corpus in an encoder-decoder architecture, paired with an original part-of-speech copy action mechanism to generate questions. Benchmark and human evaluation show that our model sets a new state-of-the-art for zero-shot QG.",
"pdf_parse": {
"paper_id": "N18-1020",
"_pdf_hash": "",
"abstract": [
{
"text": "We present a neural model for question generation from knowledge base triples in a \"Zero-Shot\" setup, that is, generating questions for triples containing predicates, subject types or object types that were not seen at training time. Our model leverages triple occurrences in a natural language corpus in an encoder-decoder architecture, paired with an original part-of-speech copy action mechanism to generate questions. Benchmark and human evaluation show that our model sets a new state-of-the-art for zero-shot QG.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Question Generation (QG) from knowledge graphs is the task of generating natural language questions given an input knowledge base (KB) triple (Serban et al., 2016). QG from knowledge graphs has been shown to improve the performance of existing factoid question answering (QA) systems, either by dual training or by augmenting existing training datasets (Dong et al., 2017; Khapra et al., 2017). Those methods rely on large-scale annotated datasets such as SimpleQuestions (Bordes et al., 2015). Building such datasets is a tedious task in practice, especially to obtain an unbiased dataset -i.e. a dataset that equally covers a large number of triples in the KB. In practice, many of the predicates and entity types in the KB are not covered by those annotated datasets. For example, 75.6% of Freebase predicates are not covered by the SimpleQuestions dataset 1 . Among those we can find important missing predicates such as: fb:food/beer/country, fb:location/country/national anthem, fb:astronomy/star system/stars.",
"cite_spans": [
{
"start": 154,
"end": 175,
"text": "(Serban et al., 2016)",
"ref_id": null
},
{
"start": 360,
"end": 379,
"text": "(Dong et al., 2017;",
"ref_id": "BIBREF6"
},
{
"start": 380,
"end": 400,
"text": "Khapra et al., 2017)",
"ref_id": "BIBREF16"
},
{
"start": 481,
"end": 502,
"text": "(Bordes et al., 2015)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "One challenge for QG from knowledge graphs is to adapt to predicates and entity types that (footnote 1: to replicate the observation, see http://bit.ly/2GvVHae)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "were not seen at training time (Zero-Shot Question Generation). Since state-of-the-art systems in factoid QA rely on the tremendous efforts made to create SimpleQuestions, these systems can only process questions on the subset of 24.4% of Freebase predicates defined in SimpleQuestions. Previous work for factoid QG (Serban et al., 2016) claims to solve the issue of small-size QA datasets. However, encountering an unseen predicate or entity type results in questions made of random text, since a QG system has never seen those out-of-vocabulary predicates. We go beyond this state of the art by providing an original and non-trivial solution for creating a much broader set of questions for unseen predicates and entity types. Ultimately, generating questions for predicates and entity types unseen at training time will allow QA systems to cover predicates and entity types that would not have been used for QA otherwise.",
"cite_spans": [
{
"start": 314,
"end": 338,
"text": "QG (Serban et al., 2016)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Intuitively, a human who is given the task to write a question on a fact offered by a KB would read natural language sentences where the entity or the predicate of the fact occur, and build up questions that are aligned with what they read from both a lexical and grammatical standpoint. In this paper, we propose a model for Zero-Shot Question Generation that follows this intuitive process. In addition to the input KB triple, we feed our model with a set of textual contexts paired with the input KB triple through distant supervision. Our model derives an encoder-decoder architecture, in which the encoder encodes the input KB triple, along with a set of textual contexts, into hidden representations. Those hidden representations are fed to a decoder equipped with an attention mechanism to generate an output question. In the Zero-Shot setup, the emergence of new predicates and new class types during test time requires new lexicalizations to express these predicates and classes in the output question. These lexicalizations might not be encountered by the model during training time and hence do not exist in the model vocabulary, or have been seen only a few times, not enough for the model to learn a good representation for them. Recent works on Text Generation tackle the rare/unknown words problem using copy actions (Luong et al., 2015; G\u00fcl\u00e7ehre et al., 2016): words with a specific position are copied from the source text to the output text -although this process is blind to the role and nature of the word in the source text. Inspired by research in open information extraction (Fader et al., 2011) and structure-content neural language models (Kiros et al., 2014), in which part-of-speech tags represent a distinctive feature when representing relations in text, we extend these positional copy actions. Instead of copying a word in a specific position in the source text, our model copies a word with a specific part-of-speech tag from the input text -we refer to those as part-of-speech copy actions. Experiments show that our model using contexts through distant supervision significantly outperforms the strongest baseline among six (+2.04 BLEU-4 score). Adding our copy action mechanism further increases this improvement (+2.39). Additionally, a human evaluation complements the comprehension of our model for edge cases; it supports the claim that the improvement brought by our copy action mechanism is even more significant than what the BLEU score suggests.",
"cite_spans": [
{
"start": 1334,
"end": 1354,
"text": "(Luong et al., 2015;",
"ref_id": "BIBREF22"
},
{
"start": 1355,
"end": 1377,
"text": "G\u00fcl\u00e7ehre et al., 2016)",
"ref_id": "BIBREF12"
},
{
"start": 1601,
"end": 1621,
"text": "(Fader et al., 2011)",
"ref_id": "BIBREF9"
},
{
"start": 1667,
"end": 1687,
"text": "(Kiros et al., 2014)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "QG became an essential component in many applications such as education (Heilman and Smith, 2010), tutoring (Graesser et al., 2004; Evens and Michael, 2006) and dialogue systems (Shang et al., 2015). In our paper we focus on the problem of QG from structured KBs and how we can generalize it to unseen predicates and entity types. (Seyler et al., 2015) generate quiz questions from KB triples; verbalization of entities and predicates relies on their existing labels in the KB and a dictionary. (Serban et al., 2016) use an encoder-decoder architecture with an attention mechanism trained on the SimpleQuestions dataset (Bordes et al., 2015). (Dong et al., 2017) generate paraphrases of given questions to increase the performance of QA systems; paraphrases are generated relying on paraphrase datasets, neural machine translation and rule mining. (Khapra et al., 2017) generate a set of QA pairs given a KB entity; they model the problem of QG as a sequence-to-sequence problem by converting all the KB entities to a set of keywords. None of the previous work in QG from KBs addresses the question of generalizing to unseen predicates and entity types. Textual information has been used before in Zero-Shot learning: (Socher et al., 2013) use information in pretrained word vectors for Zero-Shot visual object recognition, and (Levy et al., 2017) incorporate a natural language question into the relation query to tackle the Zero-Shot relation extraction problem.",
"cite_spans": [
{
"start": 72,
"end": 84,
"text": "(Heilman and",
"ref_id": "BIBREF14"
},
{
"start": 85,
"end": 131,
"text": "Smith, 2010), tutoring (Graesser et al., 2004;",
"ref_id": null
},
{
"start": 132,
"end": 156,
"text": "Evens and Michael, 2006)",
"ref_id": "BIBREF8"
},
{
"start": 178,
"end": 198,
"text": "(Shang et al., 2015)",
"ref_id": "BIBREF33"
},
{
"start": 331,
"end": 352,
"text": "(Seyler et al., 2015)",
"ref_id": "BIBREF32"
},
{
"start": 616,
"end": 637,
"text": "(Bordes et al., 2015)",
"ref_id": "BIBREF3"
},
{
"start": 640,
"end": 659,
"text": "(Dong et al., 2017)",
"ref_id": "BIBREF6"
},
{
"start": 1218,
"end": 1239,
"text": "(Socher et al., 2013)",
"ref_id": "BIBREF34"
},
{
"start": 1324,
"end": 1343,
"text": "(Levy et al., 2017)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Previous work in machine translation dealt with the rare or unseen word problem for translating names and numbers in text. (Luong et al., 2015) propose a model that generates positional placeholders pointing to some words in the source sentence, which are copied to the target sentence (copy actions). (G\u00fcl\u00e7ehre et al., 2016; Gu et al., 2016) introduce separate trainable modules for copy actions to adapt to highly variable input sequences, for text summarization. For text generation from tables, (Lebret et al., 2016) extend positional copy actions to copy values from fields in the given table. For QG, (Serban et al., 2016) use a placeholder for the subject entity in the question to generalize to unseen entities. Their work is limited to unseen entities and does not study how they can generalize to unseen predicates and entity types.",
"cite_spans": [
{
"start": 127,
"end": 147,
"text": "(Luong et al., 2015)",
"ref_id": "BIBREF22"
},
{
"start": 292,
"end": 315,
"text": "(G\u00fcl\u00e7ehre et al., 2016;",
"ref_id": "BIBREF12"
},
{
"start": 316,
"end": 332,
"text": "Gu et al., 2016)",
"ref_id": "BIBREF11"
},
{
"start": 489,
"end": 510,
"text": "(Lebret et al., 2016)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Let F = {s, p, o} be the input fact provided to our model, consisting of a subject s, a predicate p and an object o, and let C be the set of textual contexts associated with this fact. Our goal is to learn a model that generates a sequence of T tokens Y = y_1, y_2, ..., y_T representing a question about the subject s, where the object o is the correct answer. Our model approximates the conditional probability of the output question given an input fact, p(Y|F), by the probability of the output question given the input fact and the additional textual contexts C, modelled as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(Y|F) = \u220f_{t=1}^{T} p(y_t | y_{<t}, F, C)",
"eq_num": "(1)"
}
],
"section": "Model",
"sec_num": "3"
},
{
"text": "where y_{<t} represents all previously generated tokens until time step t. Additional textual contexts are natural language representations of the triples [Figure 1: The proposed model for Question Generation. The model consists of a single fact encoder and n textual context encoders, each consisting of a separate GRU. At each time step t, two attention vectors generated from the two attention modules are fed to the decoder to generate the next word in the output question.]",
"cite_spans": [],
"ref_spans": [
{
"start": 152,
"end": 160,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "that can be drawn from a corpus -our model is generic to any textual contexts that can be additionally provided, though we describe in Section 4.1 how to create such texts from Wikipedia. Our model derives the encoder-decoder architecture of (Sutskever et al., 2014; Bahdanau et al., 2014) with two encoding modules: a feed forward architecture encodes the input triple (sec. 3.1) and a set of recurrent neural network (RNN) to encode each textual context (sec. 3.2). Our model has two attention modules (Bahdanau et al., 2014) : one acts over the input triple and another acts over the input textual contexts (sec. 3.4). The decoder (sec. 3.3) is another RNN that generates the output question. At each time step, the decoder chooses to output either a word from the vocabulary or a special token indicating a copy action (sec. 3.5) from any of the textual contexts.",
"cite_spans": [
{
"start": 242,
"end": 266,
"text": "(Sutskever et al., 2014;",
"ref_id": "BIBREF35"
},
{
"start": 267,
"end": 289,
"text": "Bahdanau et al., 2014)",
"ref_id": "BIBREF1"
},
{
"start": 504,
"end": 527,
"text": "(Bahdanau et al., 2014)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "Given an input fact F = {s, p, o}, let each of e_s, e_p and e_o be a 1-hot vector of size K. The fact encoder encodes each 1-hot vector into a fixed-size vector h_s = E_f e_s, h_p = E_f e_p and h_o = E_f e_o, where E_f \u2208 R^{H_k \u00d7 K} is the KB embedding matrix, H_k is the size of the KB embedding and K is the size of the KB vocabulary. The encoded fact h_f \u2208 R^{3H_k} represents the concatenation of those three vectors, and we use it to initialize the decoder.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fact Encoder",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h_f = [h_s ; h_p ; h_o]",
"eq_num": "(2)"
}
],
"section": "Fact Encoder",
"sec_num": "3.1"
},
{
"text": "Following (Serban et al., 2016), we learn E_f using TransE (Bordes et al., 2015). We fix its weights and do not update them during training.",
"cite_spans": [
{
"start": 59,
"end": 80,
"text": "(Bordes et al., 2015)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fact Encoder",
"sec_num": "3.1"
},
{
"text": "Given a set of n textual contexts C = {c_1, c_2, ..., c_n : c_j = (x^j_1, x^j_2, ..., x^j_{|c_j|})}, where x^j_i represents the 1-hot vector of the i-th token in the j-th textual context c_j, and |c_j| is the length of the j-th context. We use a set of n Gated Recurrent Unit (GRU) networks to encode each of the textual contexts separately:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Textual Context Encoder",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h^{c_j}_i = GRU_j(E_c x^j_i, h^{c_j}_{i-1})",
"eq_num": "(3)"
}
],
"section": "Textual Context Encoder",
"sec_num": "3.2"
},
{
"text": "where h^{c_j}_i \u2208 R^{H_c} is the hidden state of the GRU corresponding to x^j_i, of size H_c, and E_c is the input word embedding matrix. The encoded context represents the encoding of all the textual contexts; it is calculated as the concatenation of the final states of all the encoded contexts:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Textual Context Encoder",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h_c = [h^{c_1}_{|c_1|} ; h^{c_2}_{|c_2|} ; ... ; h^{c_n}_{|c_n|}].",
"eq_num": "(4)"
}
],
"section": "Textual Context Encoder",
"sec_num": "3.2"
},
{
"text": "For the decoder we use another GRU with an attention mechanism (Bahdanau et al., 2014), in which the decoder hidden state s_t \u2208 R^{H_d} at each time step t is calculated as:",
"cite_spans": [
{
"start": 63,
"end": 86,
"text": "(Bahdanau et al., 2014)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s_t = z_t \u2022 s_{t-1} + (1 \u2212 z_t) \u2022 s\u0303_t,",
"eq_num": "(5)"
}
],
"section": "Decoder",
"sec_num": "3.3"
},
{
"text": "where E_w \u2208 R^{m \u00d7 V} is the word embedding matrix, m is the word embedding size and H_d is the size of the decoder hidden state. a^f_t, a^c_t are the outputs of the fact attention and the context attention modules respectively, detailed in the following subsection. In order to encourage the model to pair output words with words from the textual inputs, we couple the word embedding matrices of both the decoder E_w and the textual context encoder E_c (eq. (3)). We initialize them with GloVe embeddings (Pennington et al., 2014) and allow the network to tune them. The first hidden state of the decoder",
"cite_spans": [
{
"start": 501,
"end": 526,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s\u0303_t = tanh(W E_w y_{t-1} + U [r_t \u2022 s_{t-1}] + A [a^f_t ; a^c_t]) (6) z_t = \u03c3(W_z E_w y_{t-1} + U_z s_{t-1} + A_z [a^f_t ; a^c_t])",
"eq_num": "(7)"
}
],
"section": "Decoder",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "r_t = \u03c3(W_r E_w y_{t-1} + U_r s_{t-1} + A_r [a^f_t ; a^c_t])",
"eq_num": "(8)"
}
],
"section": "Decoder",
"sec_num": "3.3"
},
{
"text": "W, W_z, W_r \u2208 R^{m \u00d7 H_d}, U, U_z, U_r, A, A_z, A_r \u2208 R^{H_d \u00d7 H}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder",
"sec_num": "3.3"
},
{
"text": "s_0 = [h_f ; h_c]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder",
"sec_num": "3.3"
},
{
"text": "is initialized using a concatenation of the encoded fact (eq. (2)) and the encoded context (eq. (4)). At each time step t, after calculating the hidden state of the decoder, the conditional probability distribution over each token y_t of the generated question is computed as softmax(W_o s_t) over all the entries in the output vocabulary, where W_o \u2208 R^{H_d \u00d7 V} is the weight matrix of the output layer of the decoder.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder",
"sec_num": "3.3"
},
{
"text": "Our model has two attention modules. Triple attention over the input triple determines at each time step t an attention-based encoding of the input fact a^f_t \u2208 R^{H_k}:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "a^f_t = \u03b1_{s,t} h_s + \u03b1_{p,t} h_p + \u03b1_{o,t} h_o,",
"eq_num": "(9)"
}
],
"section": "Attention",
"sec_num": "3.4"
},
{
"text": "where \u03b1_{s,t}, \u03b1_{p,t}, \u03b1_{o,t} are scalar values calculated by the attention mechanism to determine, at each time step, which of the encoded subject, predicate, or object the decoder should attend to.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention",
"sec_num": "3.4"
},
{
"text": "Textual contexts attention over all the hidden states of all the textual contexts a^c_t \u2208 R^{H_c}:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "a^c_t = \u2211_{i=1}^{|C|} \u2211_{j=1}^{|c_i|} \u03b1^{c_i}_{t,j} h^{c_i}_j,",
"eq_num": "(10)"
}
],
"section": "Attention",
"sec_num": "3.4"
},
{
"text": "where \u03b1^{c_i}_{t,j} is a scalar value determining the weight of the j-th word in the i-th context c_i at time step t.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention",
"sec_num": "3.4"
},
{
"text": "Given a set of encoded input vectors I = {h_1, h_2, ..., h_k} and the decoder's previous hidden state s_{t-1}, the attention mechanism calculates \u03b1_t = \u03b1_{1,t}, ..., \u03b1_{k,t} as a vector of scalar weights, where each \u03b1_{i,t} determines the weight of its corresponding encoded input vector h_i:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "e_{i,t} = v_a tanh(W_a s_{t-1} + U_a h_i) (11) \u03b1_{i,t} = exp(e_{i,t}) / \u2211_{j=1}^{k} exp(e_{j,t}),",
"eq_num": "(12)"
}
],
"section": "Attention",
"sec_num": "3.4"
},
{
"text": "where v_a, W_a, U_a are trainable weights of the attention modules. It is important to note here that we encode each textual context separately using a different GRU, but we calculate an overall attention over all tokens in all textual contexts: at each time step the decoder should ideally attend to only one word from all the input contexts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention",
"sec_num": "3.4"
},
{
"text": "We use the method of (Luong et al., 2015), modeling all the copy actions on the data level through an annotation scheme. This method treats the model as a black box, which makes it adaptable to any text generation model. Instead of using positional copy actions, we use part-of-speech information to decide the alignment between the input and output texts of the model. Each word in every input textual context is replaced by a special token containing a combination of its context id (e.g. C1) and its POS tag (e.g. NOUN). Then, if a word in the output question matches a word in a textual context, it is replaced with its corresponding tag, as shown in Table 1. Unlike (Serban et al., 2016; Lebret et al., 2016), we model the copy actions at both the input and the output levels. Our model does not have the drawback of losing semantic information when replacing words with generic placeholders, since we provide the model with the input triple through the fact encoder. During inference the model chooses either to output words from the vocabulary or special tokens to copy from the textual contexts. In a post-processing step those special tokens are replaced with their original words from the textual contexts.",
"cite_spans": [
{
"start": 21,
"end": 41,
"text": "(Luong et al., 2015)",
"ref_id": "BIBREF22"
},
{
"start": 678,
"end": 706,
"text": "Unlike (Serban et al., 2016;",
"ref_id": null
},
{
"start": 707,
"end": 727,
"text": "Lebret et al., 2016)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 668,
"end": 675,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Part-Of-Speech Copy Actions",
"sec_num": "3.5"
},
{
"text": "As a source of questions paired with KB triples we use the SimpleQuestions dataset (Bordes et al., 2015). It consists of 100K questions with their corresponding triples from Freebase, and was created manually through crowdsourcing. When asked to form a question from an input triple, human annotators usually tend to focus mainly on expressing the predicate of the input triple. For example, given a triple with the predicate fb:spacecraft/manufacturer the user may ask \"What is the manufacturer of [S]?\". Annotators may specify the entity type of the subject or the object of the triple: \"What is the manufacturer of the spacecraft [S]?\" or \"Which company manufactures [S]?\". Motivated by this example, we chose to associate each input triple with three textual contexts of three different types.",
"cite_spans": [
{
"start": 82,
"end": 103,
"text": "(Bordes et al., 2015)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Textual contexts dataset",
"sec_num": "4"
},
{
"text": "The first is a phrase containing a lexicalization of the predicate of the triple. The second and the third are two phrases containing the entity types of the subject and the object of the triple. In what follows we show the process of collecting and preprocessing those textual contexts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Textual contexts dataset",
"sec_num": "4"
},
{
"text": "We extend the set of triples given in the SimpleQuestions dataset by using the FB5M (Bordes et al., 2015) subset of Freebase. As a source of text documents, we rely on Wikipedia articles.",
"cite_spans": [
{
"start": 85,
"end": 105,
"text": "(Bordes et al., 2015",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Collection of Textual Contexts",
"sec_num": "4.1"
},
{
"text": "Predicate textual contexts: In order to collect textual contexts associated with the SimpleQuestions triples, we follow the distant supervision setup for relation extraction (Mintz et al., 2009). The distant supervision assumption has been effective in creating training data for relation extraction and was shown to be 87% correct (Riedel et al., 2010) on Wikipedia text. First, we align each triple in the FB5M KB to sentences in Wikipedia if the subject and the object of this triple co-occur in the same sentence. We use a simple string matching heuristic to find entity mentions in text 2 . Afterwards we reduce the sentence to the set of words that appear on the dependency path between the subject and the object mentions in the sentence. We replace the positions of the subject and the object mentions with [S] and [O] to keep track of the information about the direction of the relation. The top occurring pattern for each predicate is associated with this predicate as its textual context. Table 2 shows examples of predicates and their corresponding textual contexts.",
"cite_spans": [
{
"start": 174,
"end": 194,
"text": "(Mintz et al., 2009)",
"ref_id": "BIBREF23"
},
{
"start": 329,
"end": 350,
"text": "(Riedel et al., 2010)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [
{
"start": 999,
"end": 1006,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Collection of Textual Contexts",
"sec_num": "4.1"
},
{
"text": "We use the labels of the entity types as the sub-type and obj-type textual contexts. We collect the list of entity types of each entity in the FB5M through the predicate fb:type/instance. If an entity has multiple entity types we pick the entity type that is mentioned the most in the first sentence of each Wikipedia article. Thus the textual contexts will opt for entity types that are more natural to appear in free text, and therefore in questions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sub-Type and Obj-Type textual contexts:",
"sec_num": null
},
{
"text": "To generate the special tokens for copy actions (sec. 3.5) we run POS tagging on each of the input textual contexts 3 . We replace every word in each textual context with a combination of its context id (e.g. C1) and its POS tag (e.g. NOUN). If the same POS tag appears multiple times in the textual context, it is given an additional id (e.g. C1 NOUN 2). If a word in the output question overlaps with a word in the input textual context, this word is replaced by its corresponding tag. For sentence and word tokenization we use the Regex tokenizer from the NLTK toolkit (Bird, 2006), and for POS tagging and dependency parsing we use the Spacy 4 implementation. [Table 3: Dataset statistics across 10 folds for each experiment.]",
"cite_spans": [
{
"start": 572,
"end": 584,
"text": "(Bird, 2006)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 628,
"end": 635,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Generation of Special tokens",
"sec_num": "4.2"
},
{
"text": "We develop three setups that follow the same procedure as (Levy et al., 2017) for Zero-Shot relation extraction, to evaluate how our model generalizes to: 1) unseen predicates, 2) unseen sub-types and 3) unseen obj-types. For the unseen predicates setup we group all the samples in SimpleQuestions by the predicate of the input triple, and keep groups that contain at least 50 samples. Afterwards we randomly split those groups into 70% train, 10% valid and 20% test mutually exclusive sets respectively. This guarantees that if the predicate fb:person/place of birth, for example, shows up at test time, the training and validation sets will not contain any input triples having this predicate. We repeat this process to create 10 cross-validation folds; in our evaluation we report the mean and standard deviation results across those 10 folds. While doing this we make sure that the number of samples in each fold -not only unique predicates -follows the same 70%, 10%, 20% distribution. We repeat the same process for the subject entity types and object entity types (answer types) individually. Similarly, for example in the unseen obj-type setup, the question \"Which artist was born in Berlin?\" appearing in the test set means that there is no question in the training set having an entity of type artist. Table 3 shows the mean number of samples, predicates, sub-types and obj-types across the 10 folds for each experiment setup.",
"cite_spans": [
{
"start": 58,
"end": 77,
"text": "(Levy et al., 2017)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 1308,
"end": 1315,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Zero-Shot Setups",
"sec_num": "5.1"
},
{
"text": "4 https://spacy.io/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Zero-Shot Setups",
"sec_num": "5.1"
},
{
"text": "SELECT is a baseline built from (Serban et al., 2016) and adapted to the Zero-Shot setup. At test time, given a fact F, this baseline picks a fact F_c from the training set and outputs the question that corresponds to it. When evaluating unseen predicates, F_c has the same answer type (obj-type) as F; when evaluating unseen sub-types or obj-types, F_c and F have the same predicate.",
"cite_spans": [
{
"start": 32,
"end": 53,
"text": "(Serban et al., 2016)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "5.2"
},
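A toy sketch of this retrieval rule for the unseen-predicates case (hypothetical field names; the real baseline operates on SimpleQuestions facts):

```python
# SELECT for unseen predicates: return the question of a random training
# fact that shares the answer (object) type with the test fact.
import random

train = [
    {"obj_type": "person", "question": "who directed <sub>?"},
    {"obj_type": "location", "question": "where was <sub> born?"},
]

def select_baseline(test_fact, train_facts, seed=0):
    candidates = [f for f in train_facts
                  if f["obj_type"] == test_fact["obj_type"]]
    return random.Random(seed).choice(candidates)["question"]

assert select_baseline({"obj_type": "location"}, train) == "where was <sub> born?"
```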
{
"text": "is an extension that we propose for SELECT. The input triple is encoded as the concatenation of the TransE embeddings of its subject, predicate and object. At test time, R-TRANSE picks the fact from the training set that is closest to the input fact by cosine similarity and outputs the question that corresponds to it. We provide two versions of this baseline: R-TRANSE, which indexes and retrieves raw questions with only a single placeholder for the subject label, as in (Serban et al., 2016), and R-TRANSE copy, which indexes and retrieves questions using our copy actions mechanism (sec. 3.5).",
"cite_spans": [
{
"start": 486,
"end": 507,
"text": "(Serban et al., 2016)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "R-TRANSE",
"sec_num": null
},
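The R-TRANSE retrieval step can be sketched as follows (our own illustration with random stand-in embeddings, not trained TransE vectors):

```python
# Each fact is encoded as the concatenation of the (TransE) embeddings of
# its subject, predicate and object; the nearest training fact by cosine
# similarity supplies the output question.
import numpy as np

def encode_fact(fact, emb):
    # fact vector = [emb(sub); emb(pred); emb(obj)]
    return np.concatenate([emb[fact["sub"]], emb[fact["pred"]], emb[fact["obj"]]])

def r_transe(test_fact, train_facts, emb):
    q = encode_fact(test_fact, emb)
    q = q / np.linalg.norm(q)
    best, best_sim = None, -2.0
    for f in train_facts:
        v = encode_fact(f, emb)
        sim = float(q @ (v / np.linalg.norm(v)))
        if sim > best_sim:
            best, best_sim = f, sim
    return best["question"]

rng = np.random.default_rng(0)
emb = {k: rng.normal(size=200) for k in ["e1", "e2", "e3", "p1", "p2"]}
train_facts = [
    {"sub": "e1", "pred": "p1", "obj": "e2", "question": "where was <sub> born?"},
    {"sub": "e1", "pred": "p2", "obj": "e3", "question": "who wrote <sub>?"},
]
test_fact = {"sub": "e1", "pred": "p1", "obj": "e2"}  # identical to fact 0
assert r_transe(test_fact, train_facts, emb) == "where was <sub> born?"
```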
{
"text": "IR is an information retrieval baseline. Information retrieval has been used before as a baseline for QG from text input (Rush et al., 2015; Du et al., 2017). We rely on the textual context of each input triple as the search keyword for retrieval. First, the IR baseline encodes each question in the training set as a vector of TF-IDF weights (Joachims, 1997) and then applies dimensionality reduction through LSA (Halko et al., 2011). At test time, the textual context of the input triple is converted into a dense vector using the same process, and the question with the highest cosine similarity to the input is retrieved. We provide two versions of this baseline: IR, on raw text, and IR copy, on text with our placeholders for copy actions.",
"cite_spans": [
{
"start": 119,
"end": 138,
"text": "(Rush et al., 2015;",
"ref_id": "BIBREF29"
},
{
"start": 139,
"end": 155,
"text": "Du et al., 2017)",
"ref_id": "BIBREF7"
},
{
"start": 342,
"end": 358,
"text": "(Joachims, 1997)",
"ref_id": "BIBREF15"
},
{
"start": 410,
"end": 430,
"text": "(Halko et al., 2011)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "R-TRANSE",
"sec_num": null
},
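A simplified sketch of this retrieval step (our own illustration; the LSA dimensionality reduction described above is omitted here for brevity, and the tiny question index is made up):

```python
# TF-IDF retrieval: embed training questions as TF-IDF vectors and return
# the one with the highest cosine similarity to the test triple's context.
import math
from collections import Counter

def tfidf(tokens, idf):
    tf = Counter(tokens)
    return {t: c * idf.get(t, 0.0) for t, c in tf.items()}

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

questions = [
    "where was <sub> born",
    "what is the genre of <sub>",
    "who directed <sub>",
]
docs = [q.split() for q in questions]
vocab = {t for d in docs for t in d}
idf = {t: math.log(len(docs) / sum(t in d for d in docs)) for t in vocab}
index = [tfidf(d, idf) for d in docs]

def ir_retrieve(context):
    q = tfidf(context.split(), idf)
    sims = [cosine(q, v) for v in index]
    return questions[sims.index(max(sims))]

assert ir_retrieve("<sub> was born in berlin") == "where was <sub> born"
```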
{
"text": "Encoder-Decoder. Finally, we compare our model to the Encoder-Decoder model with a single placeholder, the best performing model from (Serban et al., 2016). We initialize the encoder with TransE embeddings and the decoder with GloVe word embeddings. Although this model was not originally built to generalize to unseen predicates and entity types, it has some generalization ability through the information encoded in its pre-trained embeddings: pre-trained KB and word embeddings encode relations between entities or between words as translations in the vector space, so the model may be able to map new classes or predicates in the input fact to new words in the output question.",
"cite_spans": [
{
"start": 134,
"end": 155,
"text": "(Serban et al., 2016)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "R-TRANSE",
"sec_num": null
},
{
"text": "To train the neural network models we optimize the negative log-likelihood of the training data with respect to all the model parameters, using the RMSProp optimization algorithm with a decreasing learning rate starting at 0.001, a mini-batch size of 200, and clipping of gradients with norms larger than 0.1. We use the same vocabulary for both the textual context encoders and the decoder outputs, limited to the top 30,000 words including the special tokens. For the word embeddings we chose GloVe (Pennington et al., 2014) pre-trained embeddings of size 100. We train TransE embeddings of size H_k = 200 on the FB5M dataset (Bordes et al., 2015) using the TransE implementation from (Lin et al., 2015). We set the GRU hidden size of the decoder to H_d = 500 and of the textual encoder to H_c = 200. The networks' hyperparameters are set with respect to the final BLEU-4 score over the validation set. All neural networks are implemented using Tensorflow (Abadi et al., 2015). All experiments and models' source code are publicly available 5 for the sake of reproducibility.",
"cite_spans": [
{
"start": 513,
"end": 538,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF27"
},
{
"start": 640,
"end": 660,
"text": "(Bordes et al., 2015",
"ref_id": "BIBREF3"
},
{
"start": 706,
"end": 724,
"text": "(Lin et al., 2015)",
"ref_id": "BIBREF21"
},
{
"start": 967,
"end": 987,
"text": "(Abadi et al., 2015)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training & Implementation Details",
"sec_num": "5.3"
},
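The optimisation recipe above (RMSProp, learning rate 0.001, gradient-norm clipping at 0.1) can be illustrated with a minimal NumPy sketch; this is our own illustration, not the paper's TensorFlow code:

```python
# One RMSProp update with gradient clipping by global L2 norm.
import numpy as np

def clip_by_norm(g, max_norm=0.1):
    # rescale the gradient if its L2 norm exceeds max_norm
    norm = np.linalg.norm(g)
    return g if norm <= max_norm else g * (max_norm / norm)

def rmsprop_step(w, g, cache, lr=0.001, decay=0.9, eps=1e-8):
    g = clip_by_norm(g)
    cache = decay * cache + (1.0 - decay) * g ** 2
    return w - lr * g / (np.sqrt(cache) + eps), cache

w, cache = np.zeros(3), np.zeros(3)
g = np.array([1.0, -2.0, 0.5])  # raw gradient, norm > 0.1, will be clipped
w, cache = rmsprop_step(w, g, cache)
```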
{
"text": "To evaluate the quality of the generated questions, we compare the original questions labeled by human annotators to the ones generated by each variation of our model and by the baselines. We rely on a set of well-established evaluation metrics for text generation: BLEU-1, BLEU-2, BLEU-3, BLEU-4 (Papineni et al., 2002), METEOR (Denkowski and Lavie, 2014) and ROUGE-L (Lin, 2004).",
"cite_spans": [
{
"start": 293,
"end": 316,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF26"
},
{
"start": 366,
"end": 377,
"text": "(Lin, 2004)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Evaluation Metrics",
"sec_num": "5.4"
},
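The core of the BLEU metrics listed above is clipped (modified) n-gram precision combined with a brevity penalty; a minimal sketch follows (real evaluations should use a standard implementation such as NLTK's, and corpus-level BLEU additionally pools counts over all sentences):

```python
# Sentence-level BLEU against a single reference: geometric mean of
# modified n-gram precisions (n = 1..4) times a brevity penalty.
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def modified_precision(cand, ref, n):
    cand_counts = Counter(ngrams(cand, n))
    ref_counts = Counter(ngrams(ref, n))
    clipped = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
    total = sum(cand_counts.values())
    return clipped / total if total else 0.0

def bleu(cand, ref, max_n=4):
    precisions = [modified_precision(cand, ref, n) for n in range(1, max_n + 1)]
    if min(precisions) == 0.0:
        return 0.0
    bp = 1.0 if len(cand) > len(ref) else math.exp(1.0 - len(ref) / len(cand))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

ref = "what is the genre of the film".split()
assert abs(bleu(ref, ref) - 1.0) < 1e-9  # identical sentences score 1.0
```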
{
"text": "Automatic metrics for evaluating text generation such as BLEU and METEOR give a measure of how close the generated questions are to the correct target labels. However, they still suffer from many limitations (Novikova et al., 2017).",
"cite_spans": [
{
"start": 209,
"end": 232,
"text": "(Novikova et al., 2017)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Human Evaluation",
"sec_num": "5.5"
},
{
"text": "In particular, automatic metrics might not be able to directly evaluate whether a specific predicate is expressed in the generated text or not.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Evaluation",
"sec_num": "5.5"
},
{
"text": "As an example, consider a target question and two corresponding generated questions A and B: sentence A can obtain a better BLEU score than B even though it fails to express the correct target predicate (film genre). For that reason we run two further human evaluations to directly measure the following. Predicate identification: annotators were asked to indicate whether the generated question contains the given predicate of the fact, either directly or implicitly. Naturalness: following (Ngomo et al., 2013), we measure the comprehensibility and readability of the generated questions. Each annotator was asked to rate each generated question on a scale from 1 to 5, where (5) is perfectly clear and natural, (3) is artificial but understandable, and (1) is completely not understandable. We run our studies on 100 randomly sampled input facts, together with the questions generated for them by each of the systems, with the help of 4 annotators.",
"cite_spans": [
{
"start": 527,
"end": 547,
"text": "(Ngomo et al., 2013)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Human Evaluation",
"sec_num": "5.5"
},
{
"text": "Automatic Evaluation. Table 4 shows the results of our model compared to all other baselines across all evaluation metrics. Our model that encodes the KB fact and textual contexts achieves a significant improvement over all the baselines in all evaluation metrics, with +2.04 BLEU-4 points over the Encoder-Decoder baseline. Incorporating the part-of-speech copy actions further improves this to +2.39 BLEU-4 points. Among all baselines, the Encoder-Decoder baseline and the R-TRANSE baseline performed best. This shows that TransE embeddings encode intra-predicate and intra-class-type information to a great extent and can generalize, to some degree, to unseen predicates and class types. Similar patterns can be seen in the evaluation on unseen sub-types and obj-types (Table 5). Our model with copy actions was able to outperform all the other systems. The majority of systems report significantly higher BLEU-4 scores in these two tasks than when generalizing to unseen predicates (+12 and +8 BLEU-4 points respectively), indicating that these tasks are relatively easier; hence our models achieve relatively smaller improvements over the baselines. Table 6 shows how different variations of our system express the unseen predicate in the target question, in comparison to the Encoder-Decoder baseline. Our proposed copy actions yield a significant improvement in the identification of unseen predicates, up to +40% over the best performing baseline and over our model version without the copy actions.",
"cite_spans": [],
"ref_spans": [
{
"start": 10,
"end": 28,
"text": "Evaluation Table 4",
"ref_id": null
},
{
"start": 794,
"end": 803,
"text": "(Table 5)",
"ref_id": null
},
{
"start": 1227,
"end": 1234,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results & Discussion",
"sec_num": "6"
},
{
"text": "By examining some of the generated questions (Table 7) we see that models without copy actions can generalize only to unseen predicates that have a very similar Freebase predicate in the training set.",
"cite_spans": [],
"ref_spans": [
{
"start": 47,
"end": 55,
"text": "Table 7)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Human Evaluation",
"sec_num": null
},
{
"text": "Take, for example, fb:tv_program/language and fb:film/language: if one of those predicates exists in the training set, the model can use the same questions for the other at test time. Copy actions from the sub-type and obj-type textual contexts generalize to unseen predicates to a great extent because of the overlap between the predicate and the object type in many questions (Example 2, Table 7). Adding the predicate context to our model improved its performance in expressing unseen predicates by +9% (Table 6). However, this affected the naturalness of the questions: the post-processing step does not take into consideration that some verbs and prepositions do not fit the sentence structure, or that some words already exist among the question words (Example 4, Table 7). This happens less with copy actions from the sub-type and obj-type contexts, because those contexts consist mainly of nouns, which are more interchangeable than verbs or prepositions. A post-processing step that reforms the question, instead of copying directly from the input source, is left for future work.",
"cite_spans": [],
"ref_spans": [
{
"start": 393,
"end": 400,
"text": "Table 7",
"ref_id": null
},
{
"start": 517,
"end": 525,
"text": "(Table 6",
"ref_id": null
},
{
"start": 804,
"end": 812,
"text": "Table 7)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Human Evaluation",
"sec_num": null
},
{
"text": "In this paper we presented a new neural model for question generation from knowledge bases, with a main focus on predicates, subject types and object types that were not seen at training time (Zero-Shot Question Generation). Our model is based on an encoder-decoder architecture that leverages the textual contexts of triples, two attention layers for triples and textual contexts, and finally a part-of-speech copy action mechanism. (Table 7: Examples of generated questions from different systems, in comparison.) Our method exhibits significantly better results for Zero-Shot QG than a set of strong baselines, including the state-of-the-art question generation from KB. Additionally, a complementary human evaluation shows that the improvement brought by our part-of-speech copy action mechanism is even more significant than what the automatic evaluation suggests. The source code and the collected textual contexts are provided for the community 6",
"cite_spans": [],
"ref_spans": [
{
"start": 385,
"end": 392,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "We map Freebase entities to Wikidata through the Wikidata property P646, then we extract their labels and aliases. We use the Wikidata truthy dump: https://dumps.wikimedia.org/wikidatawiki/entities/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For the predicate textual contexts we run POS tagging on the original text, not on the lexicalized dependency path",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/hadyelsahar/ Zeroshot-QuestionGeneration",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/hadyelsahar/ Zeroshot-QuestionGeneration",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research is partially supported by the Answering Questions using Web Data (WDAqua) project, a Marie Sk\u0142odowska-Curie Innovative Training Network under grant agreement No 642795, part of the Horizon 2020 programme.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
},
{
"text": "Table 6: Results of human evaluation on the % of predicates identified and naturalness (0-5).",
"cite_spans": [],
"ref_spans": [
{
"start": 1,
"end": 8,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "annex",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "TensorFlow: Largescale machine learning on heterogeneous systems",
"authors": [
{
"first": "Mart\u00edn",
"middle": [],
"last": "Abadi",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Agarwal",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Barham",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Brevdo",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Craig",
"middle": [],
"last": "Citro",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Davis",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
},
{
"first": "Matthieu",
"middle": [],
"last": "Devin",
"suffix": ""
},
{
"first": "Sanjay",
"middle": [],
"last": "Ghemawat",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Goodfellow",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Harp",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Irving",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Isard",
"suffix": ""
},
{
"first": "Yangqing",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Rafal",
"middle": [],
"last": "Jozefowicz",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Manjunath",
"middle": [],
"last": "Kudlur",
"suffix": ""
},
{
"first": "Josh",
"middle": [],
"last": "Levenberg",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mart\u00edn Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Cor- rado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Man\u00e9, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Tal- war, Paul Tucker, Vincent Vanhoucke, Vijay Va- sudevan, Fernanda Vi\u00e9gas, Oriol Vinyals, Pete War- den, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. 2015. TensorFlow: Large- scale machine learning on heterogeneous systems. Software available from tensorflow.org. https: //www.tensorflow.org/.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR abs/1409.0473. http://arxiv.org/abs/",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "NLTK: the natural language toolkit",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
}
],
"year": 2006,
"venue": "ACL 2006, 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Bird. 2006. NLTK: the natural language toolkit. In ACL 2006, 21st International Conference on Computational Linguistics and 44th Annual Meet- ing of the Association for Computational Linguis- tics, Proceedings of the Conference, Sydney, Aus- tralia, 17-21 July 2006. http://aclweb.org/ anthology/P06-4018.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Large-scale simple question answering with memory networks",
"authors": [
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Usunier",
"suffix": ""
},
{
"first": "Sumit",
"middle": [],
"last": "Chopra",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. 2015. Large-scale simple ques- tion answering with memory networks. CoRR abs/1506.02075. http://arxiv.org/abs/ 1506.02075.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Learning phrase representations using RNN encoder-decoder for statistical machine translation",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merrienboer",
"suffix": ""
},
{
"first": "Fethi",
"middle": [],
"last": "Aglar G\u00fcl\u00e7ehre",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Bougares",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart van Merrienboer, \u00c7 aglar G\u00fcl\u00e7ehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representa- tions using RNN encoder-decoder for statistical ma- chine translation. CoRR abs/1406.1078.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Meteor universal: Language specific translation evaluation for any target language",
"authors": [
{
"first": "J",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Denkowski",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lavie",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Ninth Workshop on Statistical Machine Translation, WMT@ACL 2014",
"volume": "",
"issue": "",
"pages": "376--380",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael J. Denkowski and Alon Lavie. 2014. Me- teor universal: Language specific translation eval- uation for any target language. In Proceed- ings of the Ninth Workshop on Statistical Ma- chine Translation, WMT@ACL 2014, June 26- 27, 2014, Baltimore, Maryland, USA. pages 376- 380. http://aclweb.org/anthology/W/ W14/W14-3348.pdf.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Learning to paraphrase for question answering",
"authors": [
{
"first": "Li",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Mallinson",
"suffix": ""
},
{
"first": "Siva",
"middle": [],
"last": "Reddy",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "2017",
"issue": "",
"pages": "875--886",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li Dong, Jonathan Mallinson, Siva Reddy, and Mirella Lapata. 2017. Learning to paraphrase for question answering. In Proceedings of the 2017 Conference on Empirical Methods in Natu- ral Language Processing, EMNLP 2017, Copen- hagen, Denmark, September 9-11, 2017. pages 875-886. https://aclanthology.info/ papers/D17-1091/d17-1091.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Learning to ask: Neural question generation for reading comprehension",
"authors": [
{
"first": "Xinya",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Junru",
"middle": [],
"last": "Shao",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "17--1123",
"other_ids": {
"DOI": [
"10.18653/v1"
]
},
"num": null,
"urls": [],
"raw_text": "Xinya Du, Junru Shao, and Claire Cardie. 2017. Learn- ing to ask: Neural question generation for read- ing comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computa- tional Linguistics, ACL 2017, Vancouver, Canada, July 30 -August 4, Volume 1: Long Papers. pages 1342-1352. https://doi.org/10.18653/ v1/P17-1123.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "One-on-one tutoring by humans and machines",
"authors": [
{
"first": "Martha",
"middle": [],
"last": "Evens",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Michael",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martha Evens and Joel Michael. 2006. One-on-one tu- toring by humans and machines. Computer Science Department, Illinois Institute of Technology .",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Identifying relations for open information extraction",
"authors": [
{
"first": "Anthony",
"middle": [],
"last": "Fader",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Soderland",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1535--1545",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying relations for open information ex- traction. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Process- ing, EMNLP 2011, 27-31 July 2011, John McIn- tyre Conference Centre, Edinburgh, UK, A meeting of SIGDAT, a Special Interest Group of the ACL. pages 1535-1545. http://www.aclweb.org/ anthology/D11-1142.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Autotutor: A tutor with dialogue in natural language",
"authors": [
{
"first": "C",
"middle": [],
"last": "Arthur",
"suffix": ""
},
{
"first": "Shulan",
"middle": [],
"last": "Graesser",
"suffix": ""
},
{
"first": "George",
"middle": [
"Tanner"
],
"last": "Lu",
"suffix": ""
},
{
"first": "Heather",
"middle": [
"Hite"
],
"last": "Jackson",
"suffix": ""
},
{
"first": "Mathew",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Ventura",
"suffix": ""
},
{
"first": "Max",
"middle": [
"M"
],
"last": "Olney",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Louwerse",
"suffix": ""
}
],
"year": 2004,
"venue": "Behavior Research Methods",
"volume": "36",
"issue": "2",
"pages": "180--192",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arthur C Graesser, Shulan Lu, George Tanner Jack- son, Heather Hite Mitchell, Mathew Ventura, An- drew Olney, and Max M Louwerse. 2004. Autotu- tor: A tutor with dialogue in natural language. Be- havior Research Methods 36(2):180-192.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Incorporating copying mechanism in sequence-to-sequence learning",
"authors": [
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Zhengdong",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "O",
"middle": [
"K"
],
"last": "Victor",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O. K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7- 12, 2016, Berlin, Germany, Volume 1: Long Pa- pers. http://aclweb.org/anthology/P/ P16/P16-1154.pdf.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Pointing the unknown words",
"authors": [
{
"first": "Sungjin",
"middle": [],
"last": "\u00c7 Aglar G\u00fcl\u00e7ehre",
"suffix": ""
},
{
"first": "Ramesh",
"middle": [],
"last": "Ahn",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Nallapati",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "\u00c7 aglar G\u00fcl\u00e7ehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. 2016. Point- ing the unknown words. In Proceedings of the 54th Annual Meeting of the Association for Com- putational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Pa- pers. http://aclweb.org/anthology/P/ P16/P16-1014.pdf.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions",
"authors": [
{
"first": "Nathan",
"middle": [],
"last": "Halko",
"suffix": ""
},
{
"first": "Joel",
"middle": [
"A"
],
"last": "Per-Gunnar Martinsson",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Tropp",
"suffix": ""
}
],
"year": 2011,
"venue": "SIAM Review",
"volume": "53",
"issue": "2",
"pages": "217--288",
"other_ids": {
"DOI": [
"10.1137/090771806"
]
},
"num": null,
"urls": [],
"raw_text": "Nathan Halko, Per-Gunnar Martinsson, and Joel A. Tropp. 2011. Finding structure with random- ness: Probabilistic algorithms for constructing ap- proximate matrix decompositions. SIAM Re- view 53(2):217-288. https://doi.org/10. 1137/090771806.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Good question! statistical ranking for question generation",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Heilman",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Noah",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2010,
"venue": "Human Language Technologies: Conference of the North American Chapter of the Association of Computational Linguistics, Proceedings",
"volume": "",
"issue": "",
"pages": "609--617",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Heilman and Noah A. Smith. 2010. Good question! statistical ranking for question genera- tion. In Human Language Technologies: Confer- ence of the North American Chapter of the As- sociation of Computational Linguistics, Proceed- ings, June 2-4, 2010, Los Angeles, California, USA. pages 609-617. http://www.aclweb.org/ anthology/N10-1086.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A probabilistic analysis of the rocchio algorithm with TFIDF for text categorization",
"authors": [
{
"first": "Thorsten",
"middle": [],
"last": "Joachims",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the Fourteenth International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "143--151",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thorsten Joachims. 1997. A probabilistic analysis of the rocchio algorithm with TFIDF for text catego- rization. In Proceedings of the Fourteenth Inter- national Conference on Machine Learning (ICML 1997), Nashville, Tennessee, USA, July 8-12, 1997. pages 143-151.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Generating natural language question-answer pairs from a knowledge graph using a RNN based question generation model",
"authors": [
{
"first": "M",
"middle": [],
"last": "Mitesh",
"suffix": ""
},
{
"first": "Dinesh",
"middle": [],
"last": "Khapra",
"suffix": ""
},
{
"first": "Sachindra",
"middle": [],
"last": "Raghu",
"suffix": ""
},
{
"first": "Sathish",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Reddy",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "376--385",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitesh M. Khapra, Dinesh Raghu, Sachindra Joshi, and Sathish Reddy. 2017. Generating natural lan- guage question-answer pairs from a knowledge graph using a RNN based question generation model. In Proceedings of the 15th Conference of the European Chapter of the Association for Com- putational Linguistics, EACL 2017, Valencia, Spain, April 3-7, 2017, Volume 1: Long Papers. pages 376- 385. https://aclanthology.info/pdf/ E/E17/E17-1036.pdf.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Unifying visual-semantic embeddings with multimodal neural language models",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Kiros",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"S"
],
"last": "Zemel",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan Kiros, Ruslan Salakhutdinov, and Richard S. Zemel. 2014. Unifying visual-semantic embed- dings with multimodal neural language models. CoRR abs/1411.2539. http://arxiv.org/ abs/1411.2539.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Neural text generation from structured data with application to the biography domain",
"authors": [
{
"first": "R\u00e9mi",
"middle": [],
"last": "Lebret",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1203--1213",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R\u00e9mi Lebret, David Grangier, and Michael Auli. 2016. Neural text generation from structured data with application to the biography domain. In Proceedings of the 2016 Conference on Em- pirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1- 4, 2016. pages 1203-1213. http://aclweb. org/anthology/D/D16/D16-1128.pdf.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Zero-shot relation extraction via reading comprehension",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Minjoon",
"middle": [],
"last": "Seo",
"suffix": ""
},
{
"first": "Eunsol",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 21st Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "333--342",
"other_ids": {
"DOI": [
"10.18653/v1/K17-1034"
]
},
"num": null,
"urls": [],
"raw_text": "Omer Levy, Minjoon Seo, Eunsol Choi, and Luke Zettlemoyer. 2017. Zero-shot relation extraction via reading comprehension. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), Vancouver, Canada, Au- gust 3-4, 2017. pages 333-342. https://doi. org/10.18653/v1/K17-1034.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Rouge: A package for automatic evaluation of summaries",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "Text summarization branches out: Proceedings of the ACL-04 workshop",
"volume": "8",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin. 2004. Rouge: A package for auto- matic evaluation of summaries. In Text summariza- tion branches out: Proceedings of the ACL-04 work- shop. Barcelona, Spain, volume 8.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Learning entity and relation embeddings for knowledge graph completion",
"authors": [
{
"first": "Yankai",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xuan",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "2181--2187",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015. Learning entity and re- lation embeddings for knowledge graph comple- tion. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, January 25- 30, 2015, Austin, Texas, USA.. pages 2181- 2187. http://www.aaai.org/ocs/index. php/AAAI/AAAI15/paper/view/9571.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Addressing the rare word problem in neural machine translation",
"authors": [
{
"first": "Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Wojciech",
"middle": [],
"last": "Zaremba",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "11--19",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thang Luong, Ilya Sutskever, Quoc V. Le, Oriol Vinyals, and Wojciech Zaremba. 2015. Address- ing the rare word problem in neural machine trans- lation. In Proceedings of the 53rd Annual Meet- ing of the Association for Computational Linguis- tics and the 7th International Joint Conference on Natural Language Processing of the Asian Feder- ation of Natural Language Processing, ACL 2015, July 26-31, 2015, Beijing, China, Volume 1: Long Papers. pages 11-19. http://aclweb.org/ anthology/P/P15/P15-1002.pdf.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Distant supervision for relation extraction without labeled data",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Mintz",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bills",
"suffix": ""
},
{
"first": "Rion",
"middle": [],
"last": "Snow",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2009,
"venue": "ACL 2009, Proceedings of the 47th Annual Meeting of the Association for Computational Linguistics and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "",
"issue": "",
"pages": "1003--1011",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Mintz, Steven Bills, Rion Snow, and Daniel Ju- rafsky. 2009. Distant supervision for relation extrac- tion without labeled data. In ACL 2009, Proceed- ings of the 47th Annual Meeting of the Association for Computational Linguistics and the 4th Interna- tional Joint Conference on Natural Language Pro- cessing of the AFNLP, 2-7 August 2009, Singapore. pages 1003-1011. http://www.aclweb.org/ anthology/P09-1113.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Sorry, i don't speak SPARQL: translating SPARQL queries into natural language",
"authors": [
{
"first": "Axel-Cyrille Ngonga",
"middle": [],
"last": "Ngomo",
"suffix": ""
},
{
"first": "Lorenz",
"middle": [],
"last": "B\u00fchmann",
"suffix": ""
},
{
"first": "Christina",
"middle": [],
"last": "Unger",
"suffix": ""
},
{
"first": "Jens",
"middle": [],
"last": "Lehmann",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gerber",
"suffix": ""
}
],
"year": 2013,
"venue": "22nd International World Wide Web Conference, WWW '13",
"volume": "",
"issue": "",
"pages": "977--988",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Axel-Cyrille Ngonga Ngomo, Lorenz B\u00fchmann, Christina Unger, Jens Lehmann, and Daniel Ger- ber. 2013. Sorry, i don't speak SPARQL: translating SPARQL queries into natural language. In 22nd In- ternational World Wide Web Conference, WWW '13, Rio de Janeiro, Brazil, May 13-17, 2013. pages 977- 988. http://dl.acm.org/citation.cfm? id=2488473.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Why we need new evaluation metrics for NLG",
"authors": [
{
"first": "Jekaterina",
"middle": [],
"last": "Novikova",
"suffix": ""
},
{
"first": "Ondrej",
"middle": [],
"last": "Dusek",
"suffix": ""
},
{
"first": "Amanda",
"middle": [
"Cercas"
],
"last": "Curry",
"suffix": ""
},
{
"first": "Verena",
"middle": [],
"last": "Rieser",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "2017",
"issue": "",
"pages": "2241--2252",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jekaterina Novikova, Ondrej Dusek, Amanda Cercas Curry, and Verena Rieser. 2017. Why we need new evaluation metrics for NLG. In Proceedings of the 2017 Conference on Empirical Methods in Nat- ural Language Processing, EMNLP 2017, Copen- hagen, Denmark, September 9-11, 2017. pages 2241-2252. https://aclanthology.info/ papers/D17-1238/d17-1238.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Com- putational Linguistics, July 6-12, 2002, Philadel- phia, PA, USA.. pages 311-318. http://www. aclweb.org/anthology/P02-1040.pdf.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christo- pher D. Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natu- ral Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL. pages 1532- 1543. http://aclweb.org/anthology/D/ D14/D14-1162.pdf.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Modeling relations and their mentions without labeled text",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Limin",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2010,
"venue": "Machine Learning and Knowledge Discovery in Databases, European Conference, ECML PKDD 2010",
"volume": "",
"issue": "",
"pages": "148--163",
"other_ids": {
"DOI": [
"10.1007/978-3-642-15939-8_10"
]
},
"num": null,
"urls": [],
"raw_text": "Sebastian Riedel, Limin Yao, and Andrew McCal- lum. 2010. Modeling relations and their men- tions without labeled text. In Machine Learn- ing and Knowledge Discovery in Databases, Euro- pean Conference, ECML PKDD 2010, Barcelona, Spain, September 20-24, 2010, Proceedings, Part III. pages 148-163. https://doi.org/10. 1007/978-3-642-15939-8_10.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "A neural attention model for abstractive sentence summarization",
"authors": [
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
},
{
"first": "Sumit",
"middle": [],
"last": "Chopra",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "2015",
"issue": "",
"pages": "379--389",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander M. Rush, Sumit Chopra, and Jason We- ston. 2015. A neural attention model for ab- stractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lis- bon, Portugal, September 17-21, 2015. pages 379- 389. http://aclweb.org/anthology/D/ D15/D15-1044.pdf.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Generating factoid questions with recurrent neural networks: The 30m factoid question-answer corpus",
"authors": [
{
"first": "",
"middle": [],
"last": "Courville",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Courville, and Yoshua Bengio. 2016. Generating factoid questions with recurrent neural networks: The 30m factoid question-answer corpus. In Pro- ceedings of the 54th Annual Meeting of the Associ- ation for Computational Linguistics, ACL 2016, Au- gust 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. http://aclweb.org/anthology/ P/P16/P16-1056.pdf.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Generating quiz questions from knowledge graphs",
"authors": [
{
"first": "Dominic",
"middle": [],
"last": "Seyler",
"suffix": ""
},
{
"first": "Mohamed",
"middle": [],
"last": "Yahya",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Berberich",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 24th International Conference on World Wide Web Companion, WWW 2015",
"volume": "",
"issue": "",
"pages": "113--114",
"other_ids": {
"DOI": [
"10.1145/2740908.2742722"
]
},
"num": null,
"urls": [],
"raw_text": "Dominic Seyler, Mohamed Yahya, and Klaus Berberich. 2015. Generating quiz questions from knowledge graphs. In Proceedings of the 24th Inter- national Conference on World Wide Web Compan- ion, WWW 2015, Florence, Italy, May 18-22, 2015 -Companion Volume. pages 113-114. https: //doi.org/10.1145/2740908.2742722.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Neural responding machine for short-text conversation",
"authors": [
{
"first": "Lifeng",
"middle": [],
"last": "Shang",
"suffix": ""
},
{
"first": "Zhengdong",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "1577--1586",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conver- sation. In Proceedings of the 53rd Annual Meet- ing of the Association for Computational Linguistics and the 7th International Joint Conference on Nat- ural Language Processing of the Asian Federation of Natural Language Processing, ACL 2015, July 26-31, 2015, Beijing, China, Volume 1: Long Pa- pers. pages 1577-1586. http://aclweb.org/ anthology/P/P15/P15-1152.pdf.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Zero-shot learning through cross-modal transfer",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Milind",
"middle": [],
"last": "Ganjoo",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8",
"volume": "",
"issue": "",
"pages": "935--943",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Milind Ganjoo, Christopher D. Man- ning, and Andrew Y. Ng. 2013. Zero-shot learning through cross-modal transfer. In Advances in Neu- ral Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Sys- tems 2013. Proceedings of a meeting held Decem- ber 5-8, 2013, Lake Tahoe, Nevada, United States.. pages 935-943.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "3104--3112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural net- works. In Advances in Neural Information Process- ing Systems 27: Annual Conference on Neural In- formation Processing Systems 2014, December 8- 13 2014, Montreal, Quebec, Canada. pages 3104- 3112.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"text": "d are learnable parameters of the GRU.",
"type_str": "figure"
},
"TABREF0": {
"num": null,
"content": "<table><tr><td>C1</td><td>[S] death by [O] [S] [C1 NOUN] [C1 ADP] [O]</td></tr><tr><td>C2</td><td>Disease</td></tr><tr><td/><td>[C2 NOUN]</td></tr><tr><td>C3</td><td>Musical artist [C3 ADJ] [C3 NOUN]</td></tr></table>",
"type_str": "table",
"text": "What caused the [C1 NOUN] of the [C3 NOUN] [S] ?",
"html": null
},
"TABREF1": {
"num": null,
"content": "<table/>",
"type_str": "table",
"text": "An annotated example of part-of-speech copy actions from several input textual contexts (C1, C2, C3), the words or placeholders in bold are copied in the generated question ing encoded input vector h i .",
"html": null
},
"TABREF3": {
"num": null,
"content": "<table/>",
"type_str": "table",
"text": "Table showing an example of textual contexts extracted for freebase predicates",
"html": null
},
"TABREF4": {
"num": null,
"content": "<table><tr><td>Train</td><td>Valid</td><td>Test</td></tr><tr><td># pred # samples # types # samples % samples 70.0 sub-types 169.4 55566.7 pred 112.7 60002.6 % samples 70.0 \u00b1 7.9 # types 521.6 # samples 57878.1 obj-types % samples 70.0 \u00b1 4.7</td><td>24.2 7938.1 16.1 8571.8 10.0 \u00b1 3.6 189.9 8268.3 10.0 \u00b1 2.5</td><td>48.4 15876.2 32.2 17143.6 20.0 \u00b1 6.2 282.2 16536.6 20.0 \u00b1 3.8</td></tr></table>",
"type_str": "table",
"text": "\u00b1 2.77 10.0 \u00b1 1.236 20.0 \u00b1 2.12",
"html": null
},
"TABREF6": {
"num": null,
"content": "<table><tr><td>2 Reference</td><td>how is roosevelt in Africa clas-sified?</td></tr><tr><td>Enc-Dec.</td><td>what is the name of a roosevelt in Africa?</td></tr><tr><td>Our-Model</td><td>what is the name of the movie roosevelt in Africa?</td></tr><tr><td colspan=\"2\">Our-Model Copy what is a genre of roosevelt in Africa?</td></tr><tr><td>3 Reference</td><td>where can 5260 philvron be found?</td></tr><tr><td>Enc-Dec.</td><td>what is a release some that 5260 philvron wrote?</td></tr><tr><td>Our-Model</td><td>what is the name of an artist 5260 philvron?</td></tr><tr><td colspan=\"2\">Our-Model Copy which star system contains the star system body 5260 philvron?</td></tr><tr><td>4 Reference</td><td>which university did ezra cor-nell create?</td></tr><tr><td>Enc-Dec.</td><td>which films are part of ezra cor-nell?</td></tr><tr><td>Our-Model</td><td>what is a position of ezra cornell?</td></tr><tr><td>5 Reference</td><td>who founded snocap , inc .?</td></tr><tr><td>Enc-Dec.</td><td>which asian snocap is most as?</td></tr><tr><td>Our model</td><td>what is the name of a person of snocap?</td></tr><tr><td>Our-</td><td/></tr></table>",
"type_str": "table",
"text": "Reference what language is spoken in the tv show three sheets? Enc-Dec. in what language is three sheets in? Our-Model what the the player is the three sheets? Our-Model Copy what is the language of three sheets? Our-Model Copy what founded the name of a university that ezra cornell founded? Model Copy who is the person behind snocap?",
"html": null
}
}
}
}