{
"paper_id": "E17-1038",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:53:35.961737Z"
},
"title": "Neural Semantic Encoders",
"authors": [
{
"first": "Tsendsuren",
"middle": [],
"last": "Munkhdalai",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Massachusetts",
"location": {
"region": "MA",
"country": "USA"
}
},
"email": "tsendsuren.munkhdalai@umassmed.edu"
},
{
"first": "Hong",
"middle": [],
"last": "Yu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Massachusetts",
"location": {
"region": "MA",
"country": "USA"
}
},
"email": "hong.yu@umassmed.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present a memory augmented neural network for natural language understanding: Neural Semantic Encoders. NSE is equipped with a novel memory update rule and has a variable sized encoding memory that evolves over time and maintains the understanding of input sequences through read, compose and write operations. NSE can also access multiple and shared memories. In this paper, we demonstrate the effectiveness and the flexibility of NSE on five different natural language tasks: natural language inference, question answering, sentence classification, document sentiment analysis and machine translation, where NSE achieved state-of-the-art performance when evaluated on publicly available benchmarks. For example, our shared-memory model showed an encouraging result on neural machine translation, improving an attention-based baseline by approximately 1.0 BLEU.",
"pdf_parse": {
"paper_id": "E17-1038",
"_pdf_hash": "",
"abstract": [
{
"text": "We present a memory augmented neural network for natural language understanding: Neural Semantic Encoders. NSE is equipped with a novel memory update rule and has a variable sized encoding memory that evolves over time and maintains the understanding of input sequences through read, compose and write operations. NSE can also access multiple and shared memories. In this paper, we demonstrate the effectiveness and the flexibility of NSE on five different natural language tasks: natural language inference, question answering, sentence classification, document sentiment analysis and machine translation, where NSE achieved state-of-the-art performance when evaluated on publicly available benchmarks. For example, our shared-memory model showed an encouraging result on neural machine translation, improving an attention-based baseline by approximately 1.0 BLEU.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Recurrent neural networks (RNNs) have been successful for modeling sequences (Elman, 1990) . Particularly, RNNs equipped with internal short memories, such as long short-term memories (LSTM) (Hochreiter and Schmidhuber, 1997) have achieved a notable success in sequential tasks (Cho et al., 2014; Vinyals et al., 2015) . LSTM is powerful because it learns to control its short term memories. However, the short term memories in LSTM are a part of the training parameters. This imposes some practical difficulties in training and modeling long sequences with LSTM.",
"cite_spans": [
{
"start": 77,
"end": 90,
"text": "(Elman, 1990)",
"ref_id": "BIBREF7"
},
{
"start": 191,
"end": 225,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF10"
},
{
"start": 278,
"end": 296,
"text": "(Cho et al., 2014;",
"ref_id": "BIBREF6"
},
{
"start": 297,
"end": 318,
"text": "Vinyals et al., 2015)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recently several studies have explored ways of extending neural networks with an external memory (Graves et al., 2014; Grefenstette et al., 2015) . Unlike in LSTM, the short term memories and the training parameters of such a neural network are no longer coupled and can be adapted. In this paper we propose a novel class of memory augmented neural networks called Neural Semantic Encoders (NSE) for natural language understanding. NSE offers several desirable properties. NSE has a variable sized encoding memory which allows the model to access the entire input sequence during the reading process, therefore efficiently delivering long-term dependencies over time. The encoding memory evolves over time and maintains the memory of the input sequence through read, compose and write operations. NSE sequentially processes the input and supports word compositionality, inheriting both the temporal and the hierarchical nature of human language. NSE can read from and write to a set of relevant encoding memories simultaneously, or multiple NSEs can access a shared encoding memory, effectively supporting knowledge and representation sharing. NSE is flexible, robust and suitable for practical NLU tasks, and can be trained easily by any gradient descent optimizer.",
"cite_spans": [
{
"start": 101,
"end": 122,
"text": "(Graves et al., 2014;",
"ref_id": "BIBREF8"
},
{
"start": 123,
"end": 149,
"text": "Grefenstette et al., 2015)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We evaluate NSE on five different real tasks. For four of them, our models set new state-of-the-art results. Our results suggest that an NN model with a shared memory between the encoder and decoder is a promising approach for sequence transduction problems such as machine translation and abstractive summarization. In particular, we observe that attention-based neural machine translation can be further improved by shared-memory models. We also analyze the memory access patterns and compositionality in NSE and show that our model captures semantic and syntactic structures of the input sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Figure 1 : High-level architectures of the Neural Semantic Encoders. NSE reads and writes its own encoding memory in each time step (a). MMA-NSE accesses multiple relevant memories simultaneously (b).",
"cite_spans": [
{
"start": 210,
"end": 213,
"text": "(b)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 14,
"end": 22,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Output",
"sec_num": null
},
{
"text": "One of the pioneering works that attempt to extend deep neural networks with an external memory is Neural Turing Machines (NTM) (Graves et al., 2014) . NTM implements a centralized controller and a fixed-size random access memory. The NTM memory is addressable by both content-based (i.e. soft attention) and location-based access mechanisms. The authors evaluated NTM on algorithmic tasks such as copying and sorting sequences.",
"cite_spans": [
{
"start": 128,
"end": 149,
"text": "(Graves et al., 2014)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Comparison with Neural Turing Machines: NSE addresses certain drawbacks of NTM. NTM has a single centralized controller, which is usually an MLP or RNN, while NSE takes a modular approach. The main controller in NSE is decomposed into three separate modules, each of which performs the read, compose or write operation. In NSE, the compose module is introduced in addition to the standard memory update operations (i.e. read-write) in order to process the memory entries and input information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The main advantage of NSE over NTM is in its memory update. Despite its sophisticated addressing mechanism, the NTM controller does not have a mechanism to avoid information collision in the memory. In particular, the NTM controller emits two separate sets of access weights (i.e. read weights and erase and write weights) that do not explicitly encode the knowledge about where information is read from and written to. Moreover, the fixed-size memory in NTM has no memory allocation or de-allocation protocol. Therefore, unless the controller is intelligent enough to track the previous read/write information, which is hard for an RNN when processing long sequences, the memory content overlaps and information is over-written across different time scales. We think this is a potential reason that NTM is hard to train and its training is unstable. We also note that the effectiveness of the location-based addressing introduced in NTM is unclear. In NSE, we introduce a novel and systematic memory update approach based on the soft attention mechanism. NSE writes new information to the most recently read memory locations. This is accomplished by sharing the same memory key vector between the read and write modules. The NSE memory update is scalable and potentially more robust to train. NSE is provided with a variable sized memory, and thus, unlike in NTM, the size of the NSE memory is more relaxed. The novel memory update mechanism and the variable sized memory together prevent the information collision issue in NSE and avoid the need for memory allocation and de-allocation protocols. Each location of the NSE memory stores a token representation of the input sequence during encoding. This provides NSE with anytime access to the entire input sequence, including tokens from future time steps, which is not permitted in NTM, RNN and attention-based encoders.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Lastly, NTM addresses small algorithmic problems while NSE focuses on a set of large-scale language understanding tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The RNNSearch model proposed in (Bahdanau et al., 2015) can be seen as a variation of memory augmented networks due to its ability to read the historic output states of RNNs with soft attention. The work of Sukhbaatar et al. (2015) combines the soft attention with Memory Networks (MemNNs). Similar to RNNSearch, MemNNs are designed with non-writable memories. This model constructs layered memory representations and showed promising results on both artificial and real question answering tasks. We note that RNNSearch and MemNNs avoid the memory update and management overhead by simply using a non-writable memory storage. Another variation of MemNNs is the Dynamic Memory Network (Kumar et al., 2016) , which is equipped with an episodic memory and seems to be flexible in different settings.",
"cite_spans": [
{
"start": 32,
"end": 55,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF0"
},
{
"start": 207,
"end": 231,
"text": "Sukhbaatar et al. (2015)",
"ref_id": "BIBREF27"
},
{
"start": 674,
"end": 694,
"text": "(Kumar et al., 2016)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Although NSE differs from other memory-augmented NN models in many aspects, they all use a soft attention mechanism with some type of similarity measure to retrieve relevant information from the external memory. For example, NTM implements cosine similarity and MemNNs use the vector dot product. NSE uses the vector dot product as the similarity measure because it is faster to compute.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Other related work includes Neural Programmer-Interpreters (Reed and de Freitas, 2016) , which learn to run sub-programs and to compose them into high-level programs. This model uses execution traces to provide full supervision. Researchers have also explored ways to add unbounded memory to LSTM (Grefenstette et al., 2015) using particular data structures. Although this type of architecture provides a flexible capacity to store information, the memory access is constrained by the data structure used for the memory bank, such as a stack or queue.",
"cite_spans": [
{
"start": 56,
"end": 83,
"text": "(Reed and de Freitas, 2016)",
"ref_id": "BIBREF24"
},
{
"start": 290,
"end": 316,
"text": "(Grefenstette et al., 2015",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Overall, it is expensive to train and to scale the previously proposed memory-based models. Most models required a set of clever engineering tricks to work successfully. Moreover, most of the aforementioned memory augmented neural networks have been tested on synthetic tasks, whereas in this paper we evaluate NSE on a wide range of real and large-scale natural language applications.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our training set consists of N examples {X_i, Y_i}_{i=1}^{N}, where the input X_i is a sequence w_1^i, w_2^i, .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Approach",
"sec_num": "3"
},
{
"text": ". . , w i T i of tokens while the output Y i can be either a single target or a sequence. We transform each input token w t to its word embedding x t .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Approach",
"sec_num": "3"
},
{
"text": "Our Neural Semantic Encoders (NSE) model has four main components: read, compose and write modules and an encoding memory M \u2208 R^{k\u00d7l} with a variable number of slots, where k is the embedding dimension and l is the length of the input sequence. Each memory slot vector m_t \u2208 R^k corresponds to the vector representation of information about word w_t in memory. In particular, the memory is initialized with the embedding vectors {x_t}_{t=1}^{l} and evolves over time through read, compose and write operations. Figure 1 (a) illustrates the architecture of NSE.",
"cite_spans": [],
"ref_spans": [
{
"start": 508,
"end": 516,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Proposed Approach",
"sec_num": "3"
},
{
"text": "NSE performs three main operations in every time step. After initializing the memory slots with the corresponding input representations, NSE processes an embedding vector x_t and retrieves a memory slot m_{r,t} that is expected to be associatively coherent (i.e. semantically associated) with the current input word w_t. 2 The slot location r (ranging from 1 to l) is defined by a key vector z_t, which the read module emits by attending over the memory slots. The compose module implements a composition operation that combines the memory slot with the current input. The write module then transforms the composition output to the encoding memory space and writes the resulting new representation into the slot location of the memory. Instead of composing the raw embedding vector x_t, we use the hidden state o_t produced by the read module at time t. Concretely, let e_l \u2208 R^l and e_k \u2208 R^k be vectors of ones. Given a read function",
"cite_spans": [
{
"start": 320,
"end": 321,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Read, Compose and Write",
"sec_num": "3.1"
},
{
"text": "f_r^{LSTM}, a composition f_c^{MLP}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Read, Compose and Write",
"sec_num": "3.1"
},
{
"text": "and a write function f_w^{LSTM}, NSE in Figure 1 (a) computes the key vector z_t, the output state h_t, and the encoding memory M_t at time step t as",
"cite_spans": [],
"ref_spans": [
{
"start": 29,
"end": 38,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Read, Compose and Write",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "o_t = f_r^{LSTM}(x_t) (1); z_t = softmax(o_t M_{t-1}) (2); m_{r,t} = z_t M_{t-1} (3); c_t = f_c^{MLP}(o_t, m_{r,t}) (4); h_t = f_w^{LSTM}(c_t) (5); M_t = M_{t-1}(1 \u2212 (z_t \u2297 e_k)^T) + (h_t \u2297 e_l)(z_t \u2297 e_k)^T",
"eq_num": "(6)"
}
],
"section": "Read, Compose and Write",
"sec_num": "3.1"
},
{
"text": "where 1 is a matrix of ones and \u2297 denotes the outer product, which duplicates its left vector l or k times to form a matrix. The read function f_r^{LSTM} sequentially maps the word embeddings to the internal space of the memory M_{t-1}. Equation 2 then looks for the slots related to the input by computing the association degree between each memory slot and the hidden state o_t. We calculate the association degree by the dot product and transform these scores into the fuzzy key vector z_t by normalizing with the softmax function. Since our key vector is fuzzy, the slot to be composed is retrieved by taking the weighted sum of all slots as in Equation 3. This process can also be seen as the soft attention mechanism (Bahdanau et al., 2015) . In Equations 4 and 5, we compose and process the retrieved slot with the current hidden state and map the resulting vector to the encoder output space. Finally, we write the new representation to the memory location pointed to by the key vector in Equation 6, where the key vector z_t emitted by the read module is reused to inform the write module of the most recently read slots. The retrieved slot information is first erased and the new representation is then written. NSE performs this iterative process until all words in the input sequence are read. The encoding memories {M_t}_{t=1}^{T} and output states {h_t}_{t=1}^{T} are further used for the tasks. Although NSE reads a single word at a time, it has anytime access to the entire sequence stored in the encoding memory. With the encoding memory, NSE maintains a mental image of the input sequence. The memory is initialized with the raw embedding vectors at time t = 0. We term such a freshly initialized memory a baby memory. As NSE reads more input content over time, the baby memory evolves and refines the encoded mental image.",
"cite_spans": [
{
"start": 708,
"end": 731,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Read, Compose and Write",
"sec_num": "3.1"
},
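The read-compose-write step in Equations 1-6 can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the read/write LSTMs and the compose MLP are replaced by simple tanh projections with hypothetical parameters W_r, W_c and W_w, and only the attention-based addressing and the erase-then-write memory update follow the paper.

```python
# Minimal NumPy sketch of one NSE encoding step (Equations 1-6).
# f_r^{LSTM}, f_c^{MLP} and f_w^{LSTM} are replaced by stand-in tanh
# projections; W_r, W_c, W_w are hypothetical parameters.
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def nse_step(x_t, M, W_r, W_c, W_w):
    """One step over memory M of shape (l, k): one k-dim slot per token."""
    o_t = np.tanh(W_r @ x_t)                           # (1) read (LSTM stand-in)
    z_t = softmax(M @ o_t)                             # (2) fuzzy key over slots
    m_rt = z_t @ M                                     # (3) weighted slot retrieval
    c_t = np.tanh(W_c @ np.concatenate([o_t, m_rt]))   # (4) compose (MLP stand-in)
    h_t = np.tanh(W_w @ c_t)                           # (5) write transform
    M = M * (1.0 - z_t[:, None]) + np.outer(z_t, h_t)  # (6) erase, then write
    return h_t, M

# Usage: encode a toy 5-token sequence with 8-dim embeddings.
k, l = 8, 5
rng = np.random.default_rng(0)
X = rng.standard_normal((l, k))   # word embeddings x_1..x_l
M = X.copy()                      # memory initialized with the embeddings
W_r = rng.standard_normal((k, k))
W_c = rng.standard_normal((k, 2 * k))
W_w = rng.standard_normal((k, k))
for t in range(l):
    h_t, M = nse_step(X[t], M, W_r, W_c, W_w)
```

Note how sharing the key z_t between retrieval (3) and the update (6) realizes the paper's rule of writing to the most recently read locations.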
{
"text": "The read f_r^{LSTM}, composition f_c^{MLP} and write f_w^{LSTM} functions are neural networks and are the training parameters of our NSE. As the names suggest, we use an LSTM and a multi-layer perceptron (MLP) in this paper. Since NSE is fully differentiable, it can be trained with any gradient descent optimizer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Read, Compose and Write",
"sec_num": "3.1"
},
{
"text": "For sequence-to-sequence transduction tasks like question answering, natural language inference and machine translation, it is beneficial to access other relevant memories in addition to the model's own. Shared or multiple memory access allows a set of NSEs to exchange knowledge representations and to communicate with each other through the encoding memory to accomplish a particular task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shared and Multiple Memory Accesses",
"sec_num": "3.2"
},
{
"text": "NSE can be extended easily so that it is able to read from and write to multiple memories simultaneously, or so that multiple NSEs are able to access a shared memory. Figure 1 (b) depicts a high-level architectural diagram of a multiple memory access-NSE (MMA-NSE). The first memory (in green) is the shared memory accessed by more than one NSE. Given a shared memory M^n \u2208 R^{k\u00d7n} that has been encoded by processing a relevant sequence of length n, MMA-NSE with access to one relevant memory is defined as",
"cite_spans": [],
"ref_spans": [
{
"start": 160,
"end": 168,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Shared and Multiple Memory Accesses",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "o_t = f_r^{LSTM}(x_t)",
"eq_num": "(7)"
}
],
"section": "Shared and Multiple Memory Accesses",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "z_t = softmax(o_t M_{t-1}) (8); m_{r,t} = z_t M_{t-1} (9); z_t^n = softmax(o_t M_{t-1}^n) (10); m_{r,t}^n = z_t^n M_{t-1}^n (11); c_t = f_c^{MLP}(o_t, m_{r,t}, m_{r,t}^n)",
"eq_num": "(12)"
}
],
"section": "Shared and Multiple Memory Accesses",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h_t = f_w^{LSTM}(c_t) (13); M_t = M_{t-1}(1 \u2212 (z_t \u2297 e_k)^T) + (h_t \u2297 e_l)(z_t \u2297 e_k)^T (14); M_t^n = M_{t-1}^n(1 \u2212 (z_t^n \u2297 e_k)^T) + (h_t \u2297 e_n)(z_t^n \u2297 e_k)^T",
"eq_num": "(15)"
}
],
"section": "Shared and Multiple Memory Accesses",
"sec_num": "3.2"
},
{
"text": "and this is almost the same as standard NSE. The read module now emits the additional key vector z_t^n for the shared memory, and the composition function f_c^{MLP} combines more than one slot. In MMA-NSE, different memory slots are retrieved from the shared memories depending on their encoded semantic representations. They are then composed together with the current input and written back to their corresponding slots. Note that MMA-NSE is capable of accessing a variable number of relevant shared memories once a composition function that accepts a variable number of inputs is chosen.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shared and Multiple Memory Accesses",
"sec_num": "3.2"
},
{
"text": "We describe in this section experiments on five different tasks, in order to show that NSE can be effective and flexible in different settings. 3 We report results on natural language inference, question answering (QA), sentence classification, document sentiment analysis and machine translation. All five tasks challenge a model in terms of language understanding and semantic reasoning.",
"cite_spans": [
{
"start": 144,
"end": 145,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "The models are trained using Adam (Kingma and Ba, 2014) and the hyperparameters are tuned on the development set. We chose two one-layer LSTMs for the read/write modules on the tasks other than QA, for which we used two-layer LSTMs. The pre-trained 300-D Glove 840B vectors and 100-D Glove 6B vectors (Pennington et al., 2014) were obtained for the word embeddings. 4 The word embeddings are fixed during training. The embeddings for out-of-vocabulary words were set to the zero vector. We crop or pad each input sequence to a fixed length; a padding vector is inserted when padding. The models were regularized by using dropouts and an l_2 weight decay. 5",
"cite_spans": [
{
"start": 34,
"end": 55,
"text": "(Kingma and Ba, 2014)",
"ref_id": "BIBREF14"
},
{
"start": 56,
"end": 82,
"text": "(Rockt\u00e4schel et al., 2016)",
"ref_id": "BIBREF25"
},
{
"start": 130,
"end": 156,
"text": "(Rockt\u00e4schel et al., 2016)",
"ref_id": "BIBREF25"
},
{
"start": 356,
"end": 381,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF23"
},
{
"start": 421,
"end": 422,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Natural language inference is one of the main tasks in language understanding. This task tests the ability of a model to reason about the semantic relationship between two sentences. In order to perform well on the task, NSE should be able to capture sentence semantics and to reason about the relation between a sentence pair, i.e., whether a premise-hypothesis pair is entailing, contradictory or neutral. We conducted experiments on the Stanford Natural Language Inference (SNLI) dataset (Bowman et al., 2015), which consists of 549,367/9,842/9,824 premise-hypothesis pairs for the train/dev/test sets, each with a target label indicating their relation. Following the setting in (Mou et al., 2016; Bowman et al., 2016) , the sentence representations are passed to an MLP with ReLU activation and a softmax layer. We set the batch size to 128, the initial learning rate to 3e-4 and the l_2 regularizer strength to 3e-5, and train each model for 40 epochs. The write/read neural nets and the last linear layer were regularized by using 30% dropouts. We evaluated three different variations of NSE, shown in Table 1 . The NSE model encodes each sentence simultaneously, using a separate memory for each sentence. The second model, MMA-NSE, first encodes the premise and then the hypothesis sentence, accessing the premise-encoded memory in addition to the hypothesis memory. For the third model, we use inter-sentence attention, which selectively reconstructs the premise representation. Table 1 shows the results of our models along with the results of published methods for the task. The classifier with handcrafted features extracts a set of lexical features. The next group of models is based on sentence encoding. While most of the sentence encoder models rely solely on word embeddings, the dependency tree CNN and the SPINN-PI models make use of sentence parser output. The SPINN-PI model is similar to NSE in spirit in that it also explicitly computes word composition. However, the composition in SPINN-PI is guided by supervision from a dependency parser. NSE outperformed the previous sentence encoders on this task. MMA-NSE further slightly improved the result, indicating that reading the premise memory is helpful while encoding the hypothesis.",
"cite_spans": [
{
"start": 676,
"end": 694,
"text": "(Mou et al., 2016;",
"ref_id": "BIBREF20"
},
{
"start": 695,
"end": 715,
"text": "Bowman et al., 2016)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 1040,
"end": 1047,
"text": "Table 1",
"ref_id": "TABREF2"
},
{
"start": 1420,
"end": 1427,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Natural Language Inference",
"sec_num": "4.1"
},
{
"text": "The last set of methods models inter-sentence relations with a parameterized soft attention (Bahdanau et al., 2015). Our MMA-NSE attention model is similar to the LSTM attention model. In particular, it attends over the premise encoder outputs {h_t^p}_{t=1}^{T} with respect to the final hypothesis representation h_l^h and constructs an attentively blended vector of the premise. This model obtained an 85.4% accuracy score. The best performing model for this task performs tree matching with an attention mechanism and LSTM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Natural Language Inference",
"sec_num": "4.1"
},
{
"text": "Answer sentence selection is an integral part of open-domain question answering. For this task, a model is trained to identify, from a set of candidate sentences, the correct sentences that answer a factual question. We experiment on the WikiQA dataset constructed from Wikipedia (Yang et al., 2015) . The dataset contains 20,360/2,733/6,165 QA pairs for the train/dev/test sets.",
"cite_spans": [
{
"start": 279,
"end": 298,
"text": "(Yang et al., 2015)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Answer Sentence Selection",
"sec_num": "4.2"
},
{
"text": "The MLP setup used in the language inference task is kept the same, except that we now replace the softmax layer with a sigmoid layer and model the following conditional probability distribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Answer Sentence Selection",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p_\u03b8(y = 1 | h_l^q, h_l^a) = sigmoid(o_{QA})",
"eq_num": "(16)"
}
],
"section": "Answer Sentence Selection",
"sec_num": "4.2"
},
{
"text": "where h_l^q and h_l^a are the question and answer encoded vectors and o_{QA} denotes the output of the hidden layer of the MLP. We trained the MMA-NSE attention model to minimize the sigmoid cross-entropy loss. MMA-NSE first encodes the answers and then the questions, accessing its own and the answer-encoding memories. In our preliminary experiments, we found that the multiple memory access and the attention over the answer encoder outputs {h_t^a}_{t=1}^{T} are crucial for this problem. Following previous work, we adopt MAP and MRR as the evaluation metrics for this task. 6 (We used the trec_eval script to calculate the evaluation metrics.) We set the batch size to 4 and the initial learning rate to 1e-5, and train the model for 10 epochs. We used 40% dropouts after word embeddings and no l_2 weight decay. The word embeddings are the pre-trained 300-D Glove 840B vectors. For this task, a linear mapping layer transforms the 300-D word embeddings to the 512-D LSTM inputs. Table 2 presents the results of our model and the previous models for the task. 7 The classifier with handcrafted features is an SVM model trained with a set of features. The Bigram-CNN model is a simple convolutional neural net. While the LSTM and LSTM attention models outperform the previous best result by nearly 5-6% by implementing a deep LSTM with three hidden layers, NASM improves it further and sets a strong baseline by combining a variational auto-encoder (Kingma and Welling, 2014) with soft attention. Our MMA-NSE attention model exceeds the NASM by approximately 1% on MAP and 0.8% on MRR for this task.",
"cite_spans": [
{
"start": 572,
"end": 573,
"text": "6",
"ref_id": null
},
{
"start": 725,
"end": 726,
"text": "6",
"ref_id": null
},
{
"start": 1431,
"end": 1457,
"text": "(Kingma and Welling, 2014)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 968,
"end": 975,
"text": "Table 2",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Answer Sentence Selection",
"sec_num": "4.2"
},
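The scoring head in Equation 16 is a small MLP over the two encoded vectors followed by a sigmoid. A minimal sketch, assuming a single ReLU hidden layer and hypothetical parameters W_h and w_o (in the real model, h_q and h_a come from the MMA-NSE encoders):

```python
# Minimal sketch of the answer-selection scorer: p(y=1 | h_q, h_a) = sigmoid(o_QA),
# where o_QA is the output of a one-hidden-layer MLP (W_h, w_o are hypothetical).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def answer_probability(h_q, h_a, W_h, w_o):
    hidden = np.maximum(0.0, W_h @ np.concatenate([h_q, h_a]))  # ReLU hidden layer
    return sigmoid(w_o @ hidden)                                # scalar probability

# Usage with toy 8-dim question/answer encodings.
k = 8
rng = np.random.default_rng(2)
h_q, h_a = rng.standard_normal(k), rng.standard_normal(k)
W_h = rng.standard_normal((16, 2 * k))
w_o = rng.standard_normal(16)
p = answer_probability(h_q, h_a, W_h, w_o)
```

Training then minimizes the sigmoid cross-entropy between p and the 0/1 relevance label of each candidate sentence.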
{
"text": "We evaluated NSE on the Stanford Sentiment Treebank (SST) (Socher et al., 2013) . This dataset comes with standard train/dev/test sets and two subtasks: binary sentence classification or fine-grained classification over five classes. We trained our model on the text spans corresponding to labeled phrases in the training set and evaluated the model on the full sentences.",
"cite_spans": [
{
"start": 58,
"end": 79,
"text": "(Socher et al., 2013)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Classification",
"sec_num": "4.3"
},
{
"text": "The sentence representations were passed to a two-layer MLP for classification. The first layer of the MLP has ReLU activation and 1024 or 300 units for the binary or fine-grained setting, respectively. The second layer is a softmax layer. The read/write modules are two one-layer LSTMs with 300 hidden units and the word embeddings are the pre-trained 300-D Glove 840B vectors. We set the batch size to 64, the initial learning rate to 3e-4 and the l_2 regularizer strength to 3e-5, and train each model for 25 epochs. The write/read neural nets and the last linear layer were regularized by 50% dropouts. Table 3 compares the result of our model with the state-of-the-art methods on the two subtasks. Most of the best performing methods exploited the parse trees provided in the treebank, with the exception of the DMN. The Dynamic Memory Network (DMN) model is a memory-augmented network. Our model outperformed the DMN and set the state-of-the-art results on both subtasks. Model (Bin / FG): RNTN (Socher et al., 2013) 85.4 / 45.7; Paragraph Vector (Le and Mikolov, 2014) 87.8 / 48.7; CNN-MC (Kim, 2014) 88.1 / 47.4; DRNN (Irsoy and Cardie, 2015) 86.6 / 49.8; 2-layer LSTM (Tai et al., 2015) 86.3 / 46.0; Bi-LSTM (Tai et al., 2015) 87.5 / 49.1; CT-LSTM (Tai et al., 2015) 88.0 / 51.0; DMN (Kumar et al., 2016) 88.6 / 52.1; NSE 89.7 / 52.8. Table 3 : Test accuracy for sentence classification. Bin: binary, FG: fine-grained 5 classes.",
"cite_spans": [
{
"start": 978,
"end": 999,
"text": "(Socher et al., 2013)",
"ref_id": "BIBREF26"
},
{
"start": 1067,
"end": 1078,
"text": "(Kim, 2014)",
"ref_id": "BIBREF13"
},
{
"start": 1094,
"end": 1118,
"text": "(Irsoy and Cardie, 2015)",
"ref_id": "BIBREF11"
},
{
"start": 1142,
"end": 1160,
"text": "(Tai et al., 2015)",
"ref_id": "BIBREF29"
},
{
"start": 1179,
"end": 1197,
"text": "(Tai et al., 2015)",
"ref_id": "BIBREF29"
},
{
"start": 1216,
"end": 1234,
"text": "(Tai et al., 2015)",
"ref_id": "BIBREF29"
},
{
"start": 1249,
"end": 1269,
"text": "(Kumar et al., 2016)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 585,
"end": 592,
"text": "Table 3",
"ref_id": null
},
{
"start": 1294,
"end": 1301,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Sentence Classification",
"sec_num": "4.3"
},
{
"text": "We evaluated our models for document-level sentiment analysis on two publically available largescale datasets: the IMDB consisting of 335,018 movie reviews and 10 different classes and Yelp 13 consisting of 348,415 restaurant reviews and 5 different classes. Each document in the datasets is associated with human ratings and we used these ratings as gold labels for sentiment classification. Particularly, we used the pre-split datasets of (Tang et al., 2015) . We stack a NSE or LSTM on the top of another NSE for document modeling. The first NSE encodes the sentences and the second NSE or LSTM takes sentence encoded outputs and constructs document representations. The document representation is given to a output sof tmax layer. The whole network is trained jointly by backpropagating the cross entropy loss. We used one-layer LSTM with 100 hidden units for the read/write modules and the pre-trained 100-D Glove 6B vectors for this task. We set the batch size to 32, the initial learning rate to 3e-4 and l 2 regularizer strength to 1e-5, and trained each model for 50 epochs. The write/read neural nets and the document-level NSE/LSTM were regularized by 15% dropouts and the softmax layer by 20% dropouts. In order to speedup the training, we created document buckets by considering the number of sentences per document, i.e., documents with the same number of sentences were put together in the same bucket. The buckets were shuffled and updated per epoch. We did not use curriculum scheduling (Bengio et al., 2009) , although it is observed to help sequence training. Table 4 shows our results. We report two performance metrics: accuracy and MSE. The best results on the task were previously obtained by Conv-GRNN and LSTM-GRNN, which are also stacked models. These models first learn the sentence representations with a CNN or LSTM and then combine them for document representation using a gated recurrent neural network (GRNN). 
Our NSE models outperformed the previous stateof-the-art models in terms of both accuracy and MSE, by approximately 2-3%. On the other hand, all systems tend to show poor results on the IMDB dataset. That is, the IMDB dataset contains longer documents than the Yelp 13 and it has 10 classes while the Yelp 13 dataset has five classes to distinguish. 8 The stacked NSEs (NSE-NSE) performed slightly better than the NSE-LSTM on the IMDB dataset. This is possibly due to the encoding memory of the document level NSE that preserves the long dependency in documents with a large number of sentences.",
"cite_spans": [
{
"start": 441,
"end": 460,
"text": "(Tang et al., 2015)",
"ref_id": "BIBREF30"
},
{
"start": 1504,
"end": 1525,
"text": "(Bengio et al., 2009)",
"ref_id": "BIBREF1"
},
{
"start": 2292,
"end": 2293,
"text": "8",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1579,
"end": 1586,
"text": "Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Document Sentiment Analysis",
"sec_num": "4.4"
},
{
"text": "Lastly, we conducted an experiment on neural machine translation (NMT). The NMT problem is mostly defined within the encoder-decoder framework (Kalchbrenner and Blunsom, 2013; Cho et al., 2014; Sutskever et al., 2014) . The encoder provides the semantic and syntactic information about the source sentences to the decoder and the decoder generates the target sentences by conditioning on this information and its partially produced translation. For an efficient encoding, the attention-based NTM was introduced (Bahdanau et al., 2015) .",
"cite_spans": [
{
"start": 143,
"end": 175,
"text": "(Kalchbrenner and Blunsom, 2013;",
"ref_id": "BIBREF12"
},
{
"start": 176,
"end": 193,
"text": "Cho et al., 2014;",
"ref_id": "BIBREF6"
},
{
"start": 194,
"end": 217,
"text": "Sutskever et al., 2014)",
"ref_id": "BIBREF28"
},
{
"start": 511,
"end": 534,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Machine Translation",
"sec_num": "4.5"
},
{
"text": "For NTM, we implemented three different models. The first model is a baseline model and is similar to the one proposed in (Bahdanau et al., 2015) (RNNSearch) . This model (LSTM-LSTM) has two LSTM for the encoder/decoder and has the soft attention neural net, which attends over the source sentence and constructs a focused encoding vector for each target word. The second model is an NSE-LSTM encoder-decoder which encodes the source sentence with NSE and generates the targets with the LSTM network by using the NSE output states and the attention network. The last model is an NSE-NSE setup, where the encoding part is the same as the NSE-LSTM while the decoder NSE now uses the output state and has an access to the encoder memory, i.e., the encoder and the decoder NSEs access a shared memory. The memory is encoded by the first NSEs and then read/written by the decoder NSEs. We used the English-German translation corpus from the IWSLT 2014 evaluation campaign (Cettolo et al., 2012) . The corpus consists of sentence-aligned translation of TED talks. The data was pre-processed and lowercased with the Moses toolkit. 9 We merged the dev2010 and dev2012 sets for development and the tst2010, tst2011 and tst2012 sets for test data 10 . Sentence pairs with length longer than 25 words were filtered out. This resulted in 110,439/4,998/4,793 pairs for train/dev/test sets. We kept the most frequent 25,000 words for the German dictionary. The English dictionary has 51,821 words. The 300-D Glove 840B vectors were used for embedding the words in the source sentence whereas a lookup embedding layer was used for the target German words. Note that the word embeddings are usually optimized along with the NMT models. However, for the evaluation purpose we in this experiment do not optimize the English word embeddings. Besides, we do not use a beam search to generate the target sentences.",
"cite_spans": [
{
"start": 122,
"end": 157,
"text": "(Bahdanau et al., 2015) (RNNSearch)",
"ref_id": null
},
{
"start": 967,
"end": 989,
"text": "(Cettolo et al., 2012)",
"ref_id": "BIBREF4"
},
{
"start": 1124,
"end": 1125,
"text": "9",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Machine Translation",
"sec_num": "4.5"
},
{
"text": "The LSTM encoder/decoders have two layers with 300 units. The NSE read/write modules are two one-layer LSTM with the same number of units as the LSTM encoder/decoders. This ensures that the number of parameters of the models is roughly the equal. The models were trained to minimize word-level cross entropy loss and were regularized by 20% input dropouts and the 9 https://github.com/moses-smt/mosesdecoder 10 We modified prepareData.sh script: https://github.com/facebookresearch/MIXER 30% output dropouts. We set the batch size to 128, the initial learning rate to 1e-3 for LSTM-LSTM and 3e-4 for the other models and l 2 regularizer strength to 3e-5, and train each model for 40 epochs. We report BLEU score for each models. 11 Table 5 reports our results. The baseline LSTM-LSTM encoder-decoder (with attention) obtained 17.02 BLEU on the test set. The NSE-LSTM improved the baseline slightly. Given this very small improvement of the NSE-LSTM, it is unclear whether the NSE encoder is helpful in NMT. However, if we replace the LSTM decoder with another NSE and introduce the shared memory access to the encoder-decoder model (NSE-NSE), we improve the baseline result by almost 1.0 BLEU. The NSE-NSE model also yields an increasing BLEU score on dev set. The result demonstrates that the attention-based NMT systems can be improved by a shared-memory encoder-decoder model. In addition, memory-based NMT systems should perform well on translation of long sequences by preserving long term dependencies.",
"cite_spans": [],
"ref_spans": [
{
"start": 732,
"end": 739,
"text": "Table 5",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Machine Translation",
"sec_num": "4.5"
},
{
"text": "NSE is capabable of performing multiscale composition by retrieving associative slots for a particular input at a time step. We analyzed the memory access order and the compositionality of memory slot and the input word in the NSE model trained on the SNLI data. Figure 2 shows the word association graphs for the two sentence picked from SNLI test set. The association graph was constructed by inspecting the key vector z. For an input word, we connect it to the most active slot pointed by z 12 .",
"cite_spans": [],
"ref_spans": [
{
"start": 263,
"end": 271,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Memory Access and Compositionality",
"sec_num": "5.1"
},
{
"text": "Note the graph components clustered around the semantically rich words: \"sits\", \"wall\" and \"autumn\" (a) and \"Three\", \"puppies\", \"tub\" and \"vet\" (b). The memory slots corresponding to words that are semantically rich in the current context are the most frequently accessed. The graph is able to capture certain syntactic structures including phrases (e.g., \"hand built rock wall\") and modifier relations (between \"sits\" and \"quietly\" and Figure 2 : Word association or composition graphs produced by NSE memory access. The directed arcs connect the words that are composed via compose module. The source nodes are input words and the destination nodes (pointed by the arrows) correspond to the accessed memory slots. < S > denotes the beginning of sequence.",
"cite_spans": [],
"ref_spans": [
{
"start": 437,
"end": 445,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Memory Access and Compositionality",
"sec_num": "5.1"
},
{
"text": "between \"tub\" and \"sprayed with water\"). Another interesting property is that the model tends to perform sensible compositions while processing the input sentence. For example, NSE retrieved the memory slot corresponding to \"wall\" or \"Three\" when reading the input \"rock\" or \"are\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Memory Access and Compositionality",
"sec_num": "5.1"
},
{
"text": "In Appendix A, we show a step-by-step visualization of NSE memory states for the first sentence. Note how the encoding memory is evolved over time. In time step four (t = 4), the memory slot for \"quietly\" encodes information about \"quiet(ly) little child\". When t = 6, the model forms another composition involving \"quietly\", \"quietly sits\". In the last time step, we are able to find the most or the least frequently accessed slots in the memory. The least accessed slots correspond to function words while the frequently accessed slots are content words and tend to carry out rich semantics and intrinsic compositions found in the input sentence. Overall the model is less constrained and is able to compose multiword expressions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Memory Access and Compositionality",
"sec_num": "5.1"
},
{
"text": "Our proposed memory augmented neural networks have achieved the state-of-the-art results when evaluated on five representative NLP tasks. NSE is capable of building an efficient architecture of the single, shared and multiple memory accesses for a specific NLP task. For example, for the NLI task NSE accesses premise encoded memory when processing hypothesis. For the QA task, NSE accesses answer encoded memory when reading question for QA. In machine translation, NSE shares a single encoded memory between encoder and decoder. Such flexibility in the architectural choice of the NSE memory access allows for the robust models for a better performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "The initial state of the NSE memory stores information about each word in the input sequence. We in this paper used word embeddings to represent the words in the memory. Different variations of word representations such as characterbased models are left to be evaluated for memory initialization in the future. We plan to extend NSE so that it learns to select and access a relevant subset from a memory set. One could also explore unsupervised variations of NSE, for example, to train them to produce encoding memory and representation vector of entire sentences or documents using either new or existing models such as the skip-gram model (Mikolov et al., 2013) .",
"cite_spans": [
{
"start": 641,
"end": 663,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "By access we mean changing the memory states by the read, compose and write operations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Such a coherence is calculated by a soft attention with dot product similarity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Code for the experiments and NSEs is available at https://bitbucket.org/tsendeemts/nse.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Inclusion of simple word count feature improves the performance by around 0.15-0.3 across the board",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The average number of sentences and words in a document for IMDB: 14, 152 and Yelp 13: 9, 326",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We computed the BLEU score with multi-bleu.perl script of the Moses toolkit 12 Since z is fuzzy, we visualize the highest scoring slot. For a few inputs, z pointed to a slot corresponding to the same word. In this case, we masked out those slots and showed the second best scoring slot.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank Abhyuday Jagannatha and the anonymous reviewers for their insightful comments and suggestions. This work was supported in part by the grant HL125089 from the National Institutes of Health (NIH). Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsor.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "A Step-by-step visualization of memory states in NSEEach small table represents the memory state at a single time step. The current time step and input token are listed on the top of the table. The memory slots pointed by the query vector is highlighted in red color. The brackets represent the word composition order in each slot. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "annex",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Curriculum learning",
"authors": [
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "J\u00e9r\u00f4me",
"middle": [],
"last": "Louradour",
"suffix": ""
},
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 26th annual international conference on machine learning",
"volume": "",
"issue": "",
"pages": "41--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshua Bengio, J\u00e9r\u00f4me Louradour, Ronan Collobert, and Jason Weston. 2009. Curriculum learning. In Proceedings of the 26th annual international confer- ence on machine learning, pages 41-48. ACM.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A large annotated corpus for learning natural language inference",
"authors": [
{
"first": "R",
"middle": [],
"last": "Samuel",
"suffix": ""
},
{
"first": "Gabor",
"middle": [],
"last": "Bowman",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Potts",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large anno- tated corpus for learning natural language inference. In EMNLP.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A fast unified model for parsing and sentence understanding",
"authors": [
{
"first": "R",
"middle": [],
"last": "Samuel",
"suffix": ""
},
{
"first": "Jon",
"middle": [],
"last": "Bowman",
"suffix": ""
},
{
"first": "Abhinav",
"middle": [],
"last": "Gauthier",
"suffix": ""
},
{
"first": "Raghav",
"middle": [],
"last": "Rastogi",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Gupta",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel R. Bowman, Jon Gauthier, Abhinav Ras- togi, Raghav Gupta, Christopher D. Manning, and Christopher Potts. 2016. A fast unified model for parsing and sentence understanding. CoRR, abs/1603.06021.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Wit 3 : Web inventory of transcribed and translated talks",
"authors": [
{
"first": "Mauro",
"middle": [],
"last": "Cettolo",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Girardi",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
}
],
"year": 2012,
"venue": "EAMT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mauro Cettolo, Christian Girardi, and Marcello Fed- erico. 2012. Wit 3 : Web inventory of transcribed and translated talks. In EAMT, Trento, Italy.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Long short-term memory-networks for machine reading",
"authors": [
{
"first": "Jianpeng",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jianpeng Cheng, Li Dong, and Mirella Lapata. 2016. Long short-term memory-networks for machine reading. CoRR, abs/1601.06733.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merri\u00ebnboer",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Fethi",
"middle": [],
"last": "Bougares",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1406.1078"
]
},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart Van Merri\u00ebnboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Finding structure in time",
"authors": [
{
"first": "Jeffrey",
"middle": [
"L"
],
"last": "Elman",
"suffix": ""
}
],
"year": 1990,
"venue": "Cognitive science",
"volume": "14",
"issue": "2",
"pages": "179--211",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey L. Elman. 1990. Finding structure in time. Cognitive science, 14(2):179-211.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Neural turing machines",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Wayne",
"suffix": ""
},
{
"first": "Ivo",
"middle": [],
"last": "Danihelka",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1410.5401"
]
},
"num": null,
"urls": [],
"raw_text": "Alex Graves, Greg Wayne, and Ivo Danihelka. 2014. Neural turing machines. arXiv preprint arXiv:1410.5401.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Learning to transduce with unbounded memory",
"authors": [
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Karl",
"middle": [
"Moritz"
],
"last": "Hermann",
"suffix": ""
},
{
"first": "Mustafa",
"middle": [],
"last": "Suleyman",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2015,
"venue": "NIPS 2015",
"volume": "",
"issue": "",
"pages": "1819--1827",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edward Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, and Phil Blunsom. 2015. Learning to transduce with unbounded memory. In NIPS 2015, pages 1819-1827.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Modeling compositionality with multiplicative recurrent neural networks",
"authors": [
{
"first": "Ozan",
"middle": [],
"last": "Irsoy",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ozan Irsoy and Claire Cardie. 2015. Modeling compo- sitionality with multiplicative recurrent neural net- works. In ICLR 2015.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Recurrent continuous translation models",
"authors": [
{
"first": "Nal",
"middle": [],
"last": "Kalchbrenner",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2013,
"venue": "EMNLP",
"volume": "3",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nal Kalchbrenner and Phil Blunsom. 2013. Recur- rent continuous translation models. In EMNLP, vol- ume 3, page 413.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Convolutional neural networks for sentence classification",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2014,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoon Kim. 2014. Convolutional neural networks for sentence classification. In EMNLP.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. In ICLR.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Autoencoding variational bayes",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Welling",
"suffix": ""
}
],
"year": 2014,
"venue": "ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Max Welling. 2014. Auto- encoding variational bayes. In ICLR 2014.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Ask me anything: Dynamic memory networks for natural language processing",
"authors": [
{
"first": "Ankit",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Ozan",
"middle": [],
"last": "Irsoy",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "English",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Pierce",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Ondruska",
"suffix": ""
},
{
"first": "Ishaan",
"middle": [],
"last": "Gulrajani",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ankit Kumar, Ozan Irsoy, Jonathan Su, James Brad- bury, Robert English, Brian Pierce, Peter Ondruska, Ishaan Gulrajani, and Richard Socher. 2016. Ask me anything: Dynamic memory networks for natu- ral language processing. CoRR, abs/1506.07285.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Distributed representations of sentences and documents",
"authors": [
{
"first": "V",
"middle": [],
"last": "Quoc",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2014,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quoc V. Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In ICML 2014.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Neural variational inference for text processing",
"authors": [
{
"first": "Yishu",
"middle": [],
"last": "Miao",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2016,
"venue": "ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yishu Miao, Lei Yu, and Phil Blunsom. 2016. Neural variational inference for text processing. In ICLR 2016.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In NIPS, pages 3111-3119.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Recognizing entailment and contradiction by tree-based convolution",
"authors": [
{
"first": "Lili",
"middle": [],
"last": "Mou",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Men",
"suffix": ""
},
{
"first": "Ge",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Zhi",
"middle": [],
"last": "Jin",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lili Mou, Rui Men, Ge Li, Yan Xu, Lu Zhang, Rui Yan, and Zhi Jin. 2016. Recognizing entailment and contradiction by tree-based convolution. In ACL 2016.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Neural tree indexers for text understanding",
"authors": [
{
"first": "Tsendsuren",
"middle": [],
"last": "Munkhdalai",
"suffix": ""
},
{
"first": "Hong",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tsendsuren Munkhdalai and Hong Yu. 2017. Neural tree indexers for text understanding.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A decomposable attention model for natural language inference",
"authors": [
{
"first": "P",
"middle": [],
"last": "Ankur",
"suffix": ""
},
{
"first": "Oscar",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "T\u00e4ckstr\u00f6m",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1606.01933"
]
},
"num": null,
"urls": [],
"raw_text": "Ankur P Parikh, Oscar T\u00e4ckstr\u00f6m, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. arXiv preprint arXiv:1606.01933.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "EMNLP",
"volume": "14",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christo- pher D. Manning. 2014. Glove: Global vectors for word representation. In EMNLP, volume 14, pages 1532-1543.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Neural programmer-interpreters",
"authors": [
{
"first": "Scott",
"middle": [],
"last": "Reed",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Nando De Freitas",
"suffix": ""
}
],
"year": 2016,
"venue": "ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Scott Reed and Nando de Freitas. 2016. Neural programmer-interpreters. In ICLR 2016.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Reasoning about entailment with neural attention",
"authors": [
{
"first": "Tim",
"middle": [],
"last": "Rockt\u00e4schel",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Karl",
"middle": [
"Moritz"
],
"last": "Hermann",
"suffix": ""
},
{
"first": "Tom\u00e1\u0161",
"middle": [],
"last": "Ko\u010disk\u1ef3",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2016,
"venue": "ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tim Rockt\u00e4schel, Edward Grefenstette, Karl Moritz Hermann, Tom\u00e1\u0161 Ko\u010disk\u1ef3, and Phil Blunsom. 2016. Reasoning about entailment with neural attention. In ICLR 2016.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Recursive deep models for semantic compositionality over a sentiment treebank",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Perelygin",
"suffix": ""
},
{
"first": "Jean",
"middle": [
"Y"
],
"last": "Wu",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Chuang",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2013,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2013. Recursive deep mod- els for semantic compositionality over a sentiment treebank. In EMNLP 2013.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "End-to-end memory networks",
"authors": [
{
"first": "Sainbayar",
"middle": [],
"last": "Sukhbaatar",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Fergus",
"suffix": ""
}
],
"year": 2015,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "2431--2439",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. 2015. End-to-end memory networks. In NIPS 2015, pages 2431-2439.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "3104--3112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural net- works. In NIPS, pages 3104-3112.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Improved semantic representations from tree-structured long short-term memory networks",
"authors": [
{
"first": "Kai Sheng",
"middle": [],
"last": "Tai",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representa- tions from tree-structured long short-term memory networks. In ACL 2015.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Document modeling with gated recurrent neural network for sentiment classification",
"authors": [
{
"first": "Duyu",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2015,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Duyu Tang, Bing Qin, and Ting Liu. 2015. Docu- ment modeling with gated recurrent neural network for sentiment classification. In EMNLP.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Question answering using enhanced lexical semantic models",
"authors": [
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Meek",
"suffix": ""
},
{
"first": "Andrzej",
"middle": [],
"last": "Pastusiak",
"suffix": ""
}
],
"year": 2013,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wen tau Yih, Ming-Wei Chang, Christopher Meek, and Andrzej Pastusiak. 2013. Question answering using enhanced lexical semantic models. In ACL 2013.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Grammar as a foreign language",
"authors": [
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Terry",
"middle": [],
"last": "Koo",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2015,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oriol Vinyals, \u0141ukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015. Gram- mar as a foreign language. In NIPS.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Learning natural language inference with LSTM",
"authors": [
{
"first": "Shuohang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Jiang",
"suffix": ""
}
],
"year": 2015,
"venue": "CoRR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shuohang Wang and Jing Jiang. 2015. Learning natural language inference with LSTM. CoRR, abs/1512.08849.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Memory networks",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Sumit",
"middle": [],
"last": "Chopra",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
}
],
"year": 2015,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Weston, Sumit Chopra, and Antoine Bordes. 2015. Memory networks. In ICML 2015.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Wikiqa: A challenge dataset for open-domain question answering",
"authors": [
{
"first": "Yi",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Meek",
"suffix": ""
}
],
"year": 2015,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yi Yang, Wen-tau Yih, and Christopher Meek. 2015. Wikiqa: A challenge dataset for open-domain ques- tion answering. In EMNLP 2015.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Deep learning for answer sentence selection",
"authors": [
{
"first": "Lei",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Karl",
"middle": [
"Moritz"
],
"last": "Hermann",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Pulman",
"suffix": ""
}
],
"year": 2014,
"venue": "NIPS Deep Learning Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lei Yu, Karl Moritz Hermann, Phil Blunsom, and Stephen Pulman. 2014. Deep learning for answer sentence selection. In NIPS Deep Learning Work- shop 2014.",
"links": null
}
},
"ref_entries": {
"TABREF2": {
"num": null,
"html": null,
"type_str": "table",
"content": "<table/>",
"text": "Training and test accuracy on natural language inference task. d is the word embedding size and |\u03b8| M the number of model parameters."
},
"TABREF5": {
"num": null,
"html": null,
"type_str": "table",
"content": "<table/>",
"text": "Experiment results on answer sentence selection."
},
"TABREF7": {
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"4\">: Results of document-level sentiment clas-</td></tr><tr><td colspan=\"4\">sification. PV: paragraph vector, Acc: accuracy,</td></tr><tr><td colspan=\"2\">and MSE: mean squared error.</td><td/><td/></tr><tr><td>Model</td><td>Train</td><td>Dev</td><td>Test</td></tr><tr><td colspan=\"4\">Baseline LSTM-LSTM 28.06 17.96 17.02</td></tr><tr><td>NSE-LSTM</td><td colspan=\"3\">28.73 17.67 17.13</td></tr><tr><td>NSE-NSE</td><td colspan=\"3\">29.89 18.53 17.93</td></tr></table>",
"text": ""
},
"TABREF8": {
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td>: BLEU scores for English-German trans-</td></tr><tr><td>lation task.</td></tr></table>",
"text": ""
}
}
}
}