{ "paper_id": "D16-1012", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:36:56.979805Z" }, "title": "Deep Multi-Task Learning with Shared Memory", "authors": [ { "first": "Pengfei", "middle": [], "last": "Liu", "suffix": "", "affiliation": { "laboratory": "Shanghai Key Laboratory of Intelligent Information Processing", "institution": "Fudan University", "location": { "addrLine": "825 Zhangheng Road", "settlement": "Shanghai", "country": "China" } }, "email": "" }, { "first": "Xipeng", "middle": [], "last": "Qiu", "suffix": "", "affiliation": { "laboratory": "Shanghai Key Laboratory of Intelligent Information Processing", "institution": "Fudan University", "location": { "addrLine": "825 Zhangheng Road", "settlement": "Shanghai", "country": "China" } }, "email": "xpqiu@fudan.edu.cn" }, { "first": "Xuanjing", "middle": [], "last": "Huang", "suffix": "", "affiliation": { "laboratory": "Shanghai Key Laboratory of Intelligent Information Processing", "institution": "Fudan University", "location": { "addrLine": "825 Zhangheng Road", "settlement": "Shanghai", "country": "China" } }, "email": "xjhuang@fudan.edu.cn" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Neural network based models have achieved impressive results on various specific tasks. However, in previous works, most models are learned separately based on single-task supervised objectives, which often suffer from insufficient training data. In this paper, we propose two deep architectures which can be trained jointly on multiple related tasks. More specifically, we augment neural model with an external memory, which is shared by several tasks. 
Experiments on two groups of text classification tasks show that our proposed architectures can improve the performance of a task with the help of other related tasks.", "pdf_parse": { "paper_id": "D16-1012", "_pdf_hash": "", "abstract": [ { "text": "Neural network based models have achieved impressive results on various specific tasks. However, in previous works, most models are learned separately based on single-task supervised objectives, which often suffer from insufficient training data. In this paper, we propose two deep architectures which can be trained jointly on multiple related tasks. More specifically, we augment the neural model with an external memory, which is shared by several tasks. Experiments on two groups of text classification tasks show that our proposed architectures can improve the performance of a task with the help of other related tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Neural network based models have achieved impressive results on various NLP tasks, rivaling or in some cases surpassing traditional models, in areas such as text classification (Socher et al., 2013; Liu et al., 2015a) , semantic matching (Hu et al., 2014; Liu et al., 2016a) , parsing (Chen and Manning, 2014) and machine translation (Bahdanau et al., 2014) .", "cite_spans": [ { "start": 181, "end": 201, "text": "Socher et al., 2013;", "ref_id": "BIBREF31" }, { "start": 202, "end": 220, "text": "Liu et al., 2015a)", "ref_id": null }, { "start": 241, "end": 258, "text": "(Hu et al., 2014;", "ref_id": "BIBREF17" }, { "start": 259, "end": 277, "text": "Liu et al., 2016a)", "ref_id": "BIBREF22" }, { "start": 287, "end": 311, "text": "(Chen and Manning, 2014)", "ref_id": "BIBREF4" }, { "start": 336, "end": 359, "text": "(Bahdanau et al., 2014)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Usually, due to the large number of 
parameters, these neural models need a large-scale corpus. It is hard to train a deep neural model that generalizes well with size-limited data, while building large-scale resources for some NLP tasks is also a challenge. To overcome this problem, these models often involve an unsupervised pre-training phase. The final model is fine-tuned on the specific task with respect (* Corresponding author.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "to a supervised training criterion. However, most pre-training methods are based on unsupervised objectives (Collobert et al., 2011; Turian et al., 2010; Mikolov et al., 2013) , which are effective in improving the final performance but do not directly optimize for the desired task.", "cite_spans": [ { "start": 108, "end": 132, "text": "(Collobert et al., 2011;", "ref_id": "BIBREF6" }, { "start": 133, "end": 153, "text": "Turian et al., 2010;", "ref_id": "BIBREF34" }, { "start": 154, "end": 175, "text": "Mikolov et al., 2013)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Multi-task learning is an approach that learns multiple related tasks simultaneously to significantly improve performance relative to learning each task independently. Inspired by the success of multi-task learning (Caruana, 1997) , several neural network based models (Collobert and Weston, 2008; Liu et al., 2015b) have been proposed for NLP tasks, which utilize multi-task learning to jointly learn several tasks with the aim of mutual benefit. The characteristic of these multi-task architectures is that they share some lower layers to determine common features. 
After the shared layers, the remaining layers are split into multiple task-specific parts.", "cite_spans": [ { "start": 212, "end": 227, "text": "(Caruana, 1997)", "ref_id": "BIBREF3" }, { "start": 266, "end": 294, "text": "(Collobert and Weston, 2008;", "ref_id": "BIBREF5" }, { "start": 295, "end": 313, "text": "Liu et al., 2015b)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we propose two deep architectures for sharing information among several tasks in a multi-task learning framework. All the related tasks are integrated into a single system which is trained jointly. More specifically, inspired by the Neural Turing Machine (NTM) (Graves et al., 2014) and the memory network (Sukhbaatar et al., 2015) , we equip a task-specific long short-term memory (LSTM) neural network (Hochreiter and Schmidhuber, 1997) with an external shared memory. The external memory has the capability to store long-term information and knowledge shared by several related tasks. Different from NTM, we use a deep fusion strategy to integrate the information from the external memory into the task-specific LSTM, in which a fusion gate flexibly controls the information flow and enables the model to selectively utilize the shared information.", "cite_spans": [ { "start": 268, "end": 289, "text": "(Graves et al., 2014)", "ref_id": "BIBREF14" }, { "start": 309, "end": 334, "text": "(Sukhbaatar et al., 2015)", "ref_id": "BIBREF32" }, { "start": 404, "end": 438, "text": "(Hochreiter and Schmidhuber, 1997)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We demonstrate the effectiveness of our architectures on two groups of text classification tasks. 
Experimental results show that joint learning of multiple related tasks can improve the performance of each task relative to learning them independently.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our contributions are three-fold:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We propose a generic multi-task framework, in which different tasks can share information through an external memory and communicate through a reading/writing mechanism. The two proposed models are complementary to prior multi-task neural networks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Different from the Neural Turing Machine and the memory network, we introduce a deep fusion mechanism between internal and external memories, which keeps them interacting closely without being conflated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 As a by-product, the fusion gate enables us to better understand how the external shared memory helps a specific task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this section, we briefly describe the LSTM model, and then propose an external-memory-enhanced LSTM with deep fusion.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neural Memory Models for Specific Task", "sec_num": "2" }, { "text": "Long short-term memory network (LSTM) (Hochreiter and Schmidhuber, 1997) is a type of recurrent neural network (RNN) (Elman, 1990) , and specifically addresses the issue of learning long-term dependencies. LSTM maintains an internal memory cell that updates and exposes its content only when deemed necessary. 
Architecturally speaking, the memory state and output state are explicitly separated by activation gates (Wang and Cho, 2015) . However, a limitation of LSTM is that it lacks a mechanism to index its memory while writing and reading (Danihelka et al., 2016) .", "cite_spans": [ { "start": 117, "end": 130, "text": "(Elman, 1990)", "ref_id": "BIBREF11" }, { "start": 415, "end": 435, "text": "(Wang and Cho, 2015)", "ref_id": "BIBREF35" }, { "start": 545, "end": 569, "text": "(Danihelka et al., 2016)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Long Short-term Memory", "sec_num": "2.1" }, { "text": "While there are numerous LSTM variants, here we use the LSTM architecture used by (Jozefowicz et al., 2015) , which is similar to the architecture of (Graves, 2013) but without peep-hole connections.", "cite_spans": [ { "start": 82, "end": 107, "text": "(Jozefowicz et al., 2015)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Long Short-term Memory", "sec_num": "2.1" }, { "text": "We define the LSTM units at each time step t to be a collection of vectors in R d : an input gate i t , a forget gate f t , an output gate o t , a memory cell c t and a hidden state h t . d is the number of LSTM units. 
The elements of the gating vectors i t , f t and o t are in [0, 1] .", "cite_spans": [ { "start": 283, "end": 286, "text": "[0,", "ref_id": null }, { "start": 287, "end": 289, "text": "1]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Long Short-term Memory", "sec_num": "2.1" }, { "text": "The LSTM is precisely specified as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Long Short-term Memory", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\begin{bmatrix} \\tilde{c}_t \\\\ o_t \\\\ i_t \\\\ f_t \\end{bmatrix} = \\begin{bmatrix} \\tanh \\\\ \\sigma \\\\ \\sigma \\\\ \\sigma \\end{bmatrix} \\left( W_p \\begin{bmatrix} x_t \\\\ h_{t-1} \\end{bmatrix} + b_p \\right), \\quad (1) \\qquad c_t = \\tilde{c}_t \\odot i_t + c_{t-1} \\odot f_t,", "eq_num": "(2)" } ], "section": "Long Short-term Memory", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h_t = o_t \\odot \\tanh(c_t),", "eq_num": "(3)" } ], "section": "Long Short-term Memory", "sec_num": "2.1" }, { "text": "where x t \u2208 R m is the input at the current time step; W p \u2208 R 4d\u00d7(d+m) and b p \u2208 R 4d are parameters of the affine transformation; \u03c3 denotes the logistic sigmoid function and \u2299 denotes elementwise multiplication. The update of each LSTM unit can be written precisely as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Long Short-term Memory", "sec_num": "2.1" }, { "text": "(h t , c t ) = LSTM(h t\u22121 , c t\u22121 , x t , \u03b8 p ). (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Long Short-term Memory", "sec_num": "2.1" }, { "text": "Here, the function LSTM(\u2022, \u2022, \u2022, \u2022) is a shorthand for Eq. 
(1-3), and \u03b8 p represents all the parameters of LSTM.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Long Short-term Memory", "sec_num": "2.1" }, { "text": "LSTM has an internal memory to keep useful information for a specific task, some of which may be beneficial to other tasks. However, it is non-trivial to share information stored in the internal memory.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Memory Enhanced LSTM", "sec_num": "2.2" }, { "text": "Recently, some works have augmented LSTM with an external memory, such as the neural Turing machine (Graves et al., 2014) and the memory network (Sukhbaatar et al., 2015) , called memory enhanced LSTM (ME-LSTM). These models enhance the low-capacity internal memory with the capability of modelling long pieces of text (Andrychowicz and Kurach, 2016) .", "cite_spans": [ { "start": 102, "end": 123, "text": "(Graves et al., 2014)", "ref_id": "BIBREF14" }, { "start": 143, "end": 168, "text": "(Sukhbaatar et al., 2015)", "ref_id": "BIBREF32" }, { "start": 318, "end": 349, "text": "(Andrychowicz and Kurach, 2016)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Memory Enhanced LSTM", "sec_num": "2.2" }, { "text": "Inspired by these models, we introduce an external memory to share information among several tasks. To better control the shared information and understand how it is utilized from the external memory, we propose a deep fusion strategy for ME-LSTM. 
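Before turning to the memory-enhanced variant, the vanilla LSTM update of Eqs. (1)-(3) can be sketched in NumPy as follows (a minimal sketch; the function and variable names are illustrative, not from the paper):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W_p, b_p):
    # Eq. (1): a single affine map yields the candidate cell and the o, i, f gates
    d = h_prev.shape[0]
    z = W_p @ np.concatenate([x_t, h_prev]) + b_p   # shape (4d,)
    c_tilde = np.tanh(z[:d])
    o_t = sigmoid(z[d:2 * d])
    i_t = sigmoid(z[2 * d:3 * d])
    f_t = sigmoid(z[3 * d:])
    c_t = c_tilde * i_t + c_prev * f_t              # Eq. (2): gated cell update
    h_t = o_t * np.tanh(c_t)                        # Eq. (3): hidden state
    return h_t, c_t
```

This is the function abbreviated as LSTM(h, c, x, theta_p) in Eq. (4).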
Figure 1 : Graphical illustration of the proposed ME-LSTM unit with deep fusion of internal and external memories.", "cite_spans": [], "ref_spans": [ { "start": 240, "end": 248, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Memory Enhanced LSTM", "sec_num": "2.2" }, { "text": "As shown in Figure 1 , ME-LSTM consists of the original LSTM and an external memory which is maintained by reading and writing operations. The LSTM not only interacts with the input and output information but also accesses the external memory using selective read and write operations.", "cite_spans": [], "ref_spans": [ { "start": 12, "end": 20, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Memory Enhanced LSTM", "sec_num": "2.2" }, { "text": "The external memory and corresponding operations will be discussed in detail below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Memory Enhanced LSTM", "sec_num": "2.2" }, { "text": "External Memory The form of the external memory is defined as a matrix M \u2208 R K\u00d7M , where K is the number of memory segments, and M is the size of each segment. K and M are generally instance-independent and pre-defined as hyperparameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Memory Enhanced LSTM", "sec_num": "2.2" }, { "text": "At each step t, LSTM emits output h t and three key vectors k t , e t and a t simultaneously. 
k t , e t and a t can be computed as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Memory Enhanced LSTM", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\begin{bmatrix} k_t \\\\ e_t \\\\ a_t \\end{bmatrix} = \\begin{bmatrix} \\tanh \\\\ \\sigma \\\\ \\tanh \\end{bmatrix} (W_m h_t + b_m)", "eq_num": "(5)" } ], "section": "Memory Enhanced LSTM", "sec_num": "2.2" }, { "text": "where W m and b m are parameters of the affine transformation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Memory Enhanced LSTM", "sec_num": "2.2" }, { "text": "Reading The read operation is to read information r t \u2208 R M from memory M t\u22121 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Memory Enhanced LSTM", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "r_t = \\alpha_t M_{t-1},", "eq_num": "(6)" } ], "section": "Memory Enhanced LSTM", "sec_num": "2.2" }, { "text": "where r t denotes the reading vector and \u03b1 t \u2208 R K represents a distribution over the set of segments of memory M t\u22121 , which controls the amount of information to be read from and written to the memory.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Memory Enhanced LSTM", "sec_num": "2.2" }, { "text": "Each scalar \u03b1 t,k in the attention distribution \u03b1 t can be obtained as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Memory Enhanced LSTM", "sec_num": "2.2" }, { "text": "\u03b1 t,k = softmax(g(M t\u22121,k , k t\u22121 )) (7)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Memory Enhanced LSTM", "sec_num": "2.2" }, { "text": "where M t\u22121,k represents the k-th row memory vector, and k t\u22121 is a key vector emitted by LSTM.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Memory Enhanced LSTM", "sec_num": "2.2" }, { "text": "Here", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Memory Enhanced LSTM", "sec_num": "2.2" }, { "text": "g(x, y) (x \u2208 R M , y \u2208 R M )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Memory Enhanced LSTM", "sec_num": "2.2" }, { "text": "is an alignment function for which we consider two different alternatives:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Memory Enhanced LSTM", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "g(x, y) = \\begin{cases} v^T \\tanh(W_a [x; y]) \\\\ \\mathrm{cosine}(x, y) \\end{cases}", "eq_num": "(8)" } ], "section": "Memory Enhanced LSTM", "sec_num": "2.2" }, { "text": "where v \u2208 R M is a parameter vector. 
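To make the read operation concrete, here is a minimal NumPy sketch of Eqs. (6)-(8) using the cosine variant of g (the function and variable names are illustrative, not from the paper):

```python
import numpy as np

def softmax(s):
    e = np.exp(s - s.max())
    return e / e.sum()

def read_memory(M, k):
    # Eq. (7)-(8): cosine alignment between each memory row and the key vector
    scores = (M @ k) / (np.linalg.norm(M, axis=1) * np.linalg.norm(k) + 1e-8)
    alpha = softmax(scores)   # attention distribution over the K segments
    r = alpha @ M             # Eq. (6): weighted sum of memory rows
    return r, alpha
```

Because alpha is a softmax distribution, the read vector r is a convex combination of the memory segments.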
In our current implementation, the similarity measure is cosine similarity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Memory Enhanced LSTM", "sec_num": "2.2" }, { "text": "Writing The memory can be written by two operations: erase and add.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Memory Enhanced LSTM", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "M_t = M_{t-1} \\odot (1 - \\alpha_t e_t^T) + \\alpha_t a_t^T,", "eq_num": "(9)" } ], "section": "Memory Enhanced LSTM", "sec_num": "2.2" }, { "text": "where e t , a t \u2208 R M represent the erase and add vectors respectively. To facilitate the following statements, we re-write the writing equation as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Memory Enhanced LSTM", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "M_t = f_{write}(M_{t-1}, \\alpha_t, h_t).", "eq_num": "(10)" } ], "section": "Memory Enhanced LSTM", "sec_num": "2.2" }, { "text": "Deep Fusion between External and Internal Memories After we obtain the information from the external memory, we need a strategy to comprehensively utilize information from both the external and internal memory.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Memory Enhanced LSTM", "sec_num": "2.2" }, { "text": "To better control signals flowing from the external memory, inspired by (Wang and Cho, 2015) , we propose a deep fusion strategy to keep internal and external memories interacting closely without being conflated.", "cite_spans": [ { "start": 68, "end": 88, "text": "(Wang and Cho, 2015)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Memory Enhanced LSTM", "sec_num": "2.2" }, { "text": "In detail, the state h t of LSTM at step t depends on both 
the read vector r t from external memory, and the internal memory c t , which is computed by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Memory Enhanced LSTM", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h t = o t tanh(c t + g t (W f r t )),", "eq_num": "(11)" } ], "section": "Memory Enhanced LSTM", "sec_num": "2.2" }, { "text": "where W f is a parameter matrix, and g t is a fusion gate to select information from the external memory, which is computed by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Memory Enhanced LSTM", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "g t = \u03c3(W r r t + W c c t ),", "eq_num": "(12)" } ], "section": "Memory Enhanced LSTM", "sec_num": "2.2" }, { "text": "where W r and W c are parameter matrices. 
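A minimal NumPy sketch of the fusion step in Eqs. (11)-(12), assuming the output gate o_t, internal cell c_t and read vector r_t have already been computed (names are illustrative, not from the paper):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fuse(o_t, c_t, r_t, W_f, W_r, W_c):
    # Eq. (12): fusion gate computed from the read vector and the internal cell
    g_t = sigmoid(W_r @ r_t + W_c @ c_t)
    # Eq. (11): the gate scales how much external-memory signal enters the hidden state
    h_t = o_t * np.tanh(c_t + g_t * (W_f @ r_t))
    return h_t, g_t
```

When g_t is near zero the unit falls back to the plain LSTM output; when it is near one the external read vector contributes fully.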
Finally, the update of the external-memory-enhanced LSTM unit can be written precisely as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Memory Enhanced LSTM", "sec_num": "2.2" }, { "text": "Figure 2 : (a) Global Memory Architecture; (b) Local-Global Hybrid Memory Architecture.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Memory Enhanced LSTM", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "(h t , M t , c t ) = ME-LSTM(h t\u22121 , M t\u22121 , c t\u22121 , x t , \u03b8 p , \u03b8 q ),", "eq_num": "(13)" } ], "section": "Memory Enhanced LSTM", "sec_num": "2.2" }, { "text": "where \u03b8 p represents all the parameters of the LSTM internal structure and \u03b8 q represents all the parameters used to maintain the external memory. Overall, the external memory enables ME-LSTM to store more information, thereby increasing its capacity. The read and write operations allow ME-LSTM to capture complex sentence patterns.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Memory Enhanced LSTM", "sec_num": "2.2" }, { "text": "Most existing neural network methods are based on supervised training objectives on a single task (Collobert et al., 2011; Socher et al., 2013) . These methods often suffer from the limited amounts of training data. To deal with this problem, these models often involve an unsupervised pre-training phase. 
This unsupervised pre-training is effective for improving the final performance, but it does not directly optimize for the desired task. Motivated by the success of multi-task learning (Caruana, 1997) , we propose two deep architectures with shared external memory to leverage supervised data from many related tasks. Deep neural models are well suited for multi-task learning since the features learned for one task may be useful for other tasks. Figure 2 gives an illustration of our proposed architectures.", "cite_spans": [ { "start": 98, "end": 122, "text": "(Collobert et al., 2011;", "ref_id": "BIBREF6" }, { "start": 123, "end": 143, "text": "Socher et al., 2013;", "ref_id": "BIBREF31" }, { "start": 484, "end": 499, "text": "(Caruana, 1997)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 744, "end": 752, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Deep Architectures with Shared Memory for Multi-task Learning", "sec_num": "3" }, { "text": "In ARC-I, the input is modelled by a task-specific LSTM and an external shared memory. 
More formally, given an input text x, the task-specific output h (m) t of task m at step t is defined as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ARC-I: Global Shared Memory", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "(h_t^{(m)}, M_t^{(s)}, c_t^{(m)}) = \\text{ME-LSTM}(h_{t-1}^{(m)}, M_{t-1}^{(s)}, c_{t-1}^{(m)}, x_t, \\theta_p^{(m)}, \\theta_q^{(s)}),", "eq_num": "(14)" } ], "section": "ARC-I: Global Shared Memory", "sec_num": null }, { "text": "where x t represents the word embedding of word x t ; the superscript s represents that the parameters are shared across different tasks; the superscript m represents that the parameters or variables are task-specific for task m.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ARC-I: Global Shared Memory", "sec_num": null }, { "text": "Here all tasks share a single global memory M (s) , meaning that all tasks can read information from it and have the duty to write their shared or task-specific information into the memory.", "cite_spans": [ { "start": 44, "end": 47, "text": "(s)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "ARC-I: Global Shared Memory", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "M_t^{(s)} = f_{write}(M_{t-1}^{(s)}, \\alpha_t^{(s)}, h_t^{(m)})", "eq_num": "(15)" } ], "section": "ARC-I: Global Shared Memory", "sec_num": null }, { "text": "After calculating the task-specific representation of text h (m) T for task m, we can predict the probability distribution over classes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ARC-I: Global Shared Memory", "sec_num": null }, { "text": "In ARC-I, all tasks share a single global memory, which must also record task-specific information besides the shared information. 
To address this, we allocate each task a local task-specific external memory, which can further write shared information to a global memory for all tasks. More generally, for task m, we equip each task-specific LSTM with a local memory M (m) , followed by a global memory M (s) , which is shared across different tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ARC-II: Local-Global Hybrid Memory", "sec_num": null }, { "text": "The read and write operations of the local and global memory are defined as ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ARC-II: Local-Global Hybrid Memory", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "r_t^{(m)} = \\alpha_t^{(m)} M_t^{(m)},", "eq_num": "(16)" } ], "section": "ARC-II: Local-Global Hybrid Memory", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "M_t^{(m)} = f_{write}(M_{t-1}^{(m)}, \\alpha_t^{(m)}, h_t^{(m)}),", "eq_num": "(17)" } ], "section": "ARC-II: Local-Global Hybrid Memory", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "r_t^{(s)} = \\alpha_{t-1}^{(s)} M_{t-1}^{(s)},", "eq_num": "(18)" } ], "section": "ARC-II: Local-Global Hybrid Memory", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "M_t^{(s)} = f_{write}(M_{t-1}^{(s)}, \\alpha_t^{(s)}, r_t^{(m)}),", "eq_num": "(19)" } ], "section": "ARC-II: Local-Global Hybrid Memory", "sec_num": null }, { "text": "where the superscript s represents that the parameters are shared across different tasks; the superscript m represents that the parameters or variables are task-specific for task m.", 
"cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ARC-II: Local-Global Hybrid Memory", "sec_num": null }, { "text": "In ARC-II, the local memories enhance the capacity of memorizing, while the global memory enables the information from different tasks to interact sufficiently.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ARC-II: Local-Global Hybrid Memory", "sec_num": null }, { "text": "The task-specific representation h (m) , emitted by the deep multi-task architectures, is ultimately fed into the corresponding task-specific output layers.", "cite_spans": [ { "start": 35, "end": 38, "text": "(m)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\hat{y}^{(m)} = \\mathrm{softmax}(W^{(m)} h^{(m)} + b^{(m)}),", "eq_num": "(20)" } ], "section": "Training", "sec_num": "4" }, { "text": "where \u0177 (m) denotes the prediction probabilities for task m. Given M related tasks, our global cost function is the linear combination of the cost functions for all tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\phi = \\sum_{m=1}^{M} \\lambda_m L(\\hat{y}^{(m)}, y^{(m)})", "eq_num": "(21)" } ], "section": "Training", "sec_num": "4" }, { "text": "where \u03bb m is the weight for task m.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4" }, { "text": "Computational Cost Compared with the vanilla LSTM, our two proposed models do not incur much extra computational cost and converge faster. In our experiments, the most complex model, ARC-II, costs about twice as much time as the vanilla LSTM. 
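The global objective of Eq. (21) is simply a weighted linear combination of the per-task losses; a minimal sketch (the helper names are illustrative, and the per-task loss is shown as a plain cross-entropy):

```python
import numpy as np

def cross_entropy(y_hat, y_true):
    # y_hat: predicted class probabilities; y_true: one-hot label vector
    return -float(np.log(y_hat[np.argmax(y_true)] + 1e-12))

def joint_cost(task_losses, lambdas):
    # Eq. (21): linear combination of the task-specific losses, weighted by lambda_m
    return sum(lam * loss for lam, loss in zip(lambdas, task_losses))
```

In joint training, each mini-batch is drawn from one task and contributes its loss scaled by the corresponding lambda_m.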
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4" }, { "text": "In this section, we investigate the empirical performances of our proposed architectures on two multitask datasets. Each dataset contains several related tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment", "sec_num": "5" }, { "text": "The used multi-task datasets are briefly described as follows. The detailed statistics are listed in Table 1 .", "cite_spans": [], "ref_spans": [ { "start": 101, "end": 108, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Datasets", "sec_num": "5.1" }, { "text": "Movie Reviews The movie reviews dataset consists of four sub-datasets about movie reviews.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "5.1" }, { "text": "\u2022 SST-1 The movie reviews with five classes in the Stanford Sentiment Treebank 1 (Socher et al., 2013) . \u2022 SST-2 The movie reviews with binary classes.", "cite_spans": [ { "start": 81, "end": 102, "text": "(Socher et al., 2013)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "5.1" }, { "text": "It is also from the Stanford Sentiment Treebank. \u2022 SUBJ The movie reviews with labels of subjective or objective (Pang and Lee, 2004) . \u2022 IMDB The IMDB dataset 2 consists of 100,000 movie reviews with binary classes (Maas et al., 2011) . One key aspect of this dataset is that each movie review has several sentences. (Socher et al., 2011) 43.2 82.4 ---MV-RNN (Socher et al., 2012) 44.4 82.9 ---RNTN (Socher et al., 2013) 45.7 85.4 ---DCNN 48.5 86.8 -89.3 -CNN-multichannel (Kim, 2014) 47.4 88.1 93.2 --Tree-LSTM (Tai et al., 2015) 50.6 86.9 --- Table 3 : Accuracies of our models on movie reviews tasks against state-of-the-art neural models. The last column gives the improvements relative to LSTM and ME-LSTM respectively. 
NBOW: Sums up the word vectors and applies a non-linearity followed by a softmax classification layer. RAE: Recursive Autoencoders with pre-trained word vectors from Wikipedia (Socher et al., 2011) . MV-RNN: Matrix-Vector Recursive Neural Network with parse trees (Socher et al., 2012) . RNTN: Recursive Neural Tensor Network with tensor-based feature function and parse trees (Socher et al., 2013) . DCNN: Dynamic Convolutional Neural Network with dynamic k-max pooling (Denil et al., 2014) . CNN-multichannel: Convolutional Neural Network (Kim, 2014) . Tree-LSTM: A generalization of LSTMs to tree-structured network topologies (Tai et al., 2015) .", "cite_spans": [ { "start": 113, "end": 133, "text": "(Pang and Lee, 2004)", "ref_id": "BIBREF27" }, { "start": 216, "end": 235, "text": "(Maas et al., 2011)", "ref_id": "BIBREF25" }, { "start": 318, "end": 339, "text": "(Socher et al., 2011)", "ref_id": "BIBREF29" }, { "start": 360, "end": 381, "text": "(Socher et al., 2012)", "ref_id": "BIBREF30" }, { "start": 400, "end": 421, "text": "(Socher et al., 2013)", "ref_id": "BIBREF31" }, { "start": 474, "end": 485, "text": "(Kim, 2014)", "ref_id": "BIBREF20" }, { "start": 513, "end": 531, "text": "(Tai et al., 2015)", "ref_id": "BIBREF33" }, { "start": 902, "end": 923, "text": "(Socher et al., 2011)", "ref_id": "BIBREF29" }, { "start": 990, "end": 1011, "text": "(Socher et al., 2012)", "ref_id": "BIBREF30" }, { "start": 1103, "end": 1124, "text": "(Socher et al., 2013)", "ref_id": "BIBREF31" }, { "start": 1197, "end": 1216, "text": "(Denil et al., 2014)", "ref_id": "BIBREF8" }, { "start": 1266, "end": 1277, "text": "(Kim, 2014)", "ref_id": "BIBREF20" }, { "start": 1355, "end": 1373, "text": "(Tai et al., 2015)", "ref_id": "BIBREF33" } ], "ref_spans": [ { "start": 546, "end": 553, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Datasets", "sec_num": "5.1" }, { "text": "Product Reviews This dataset 3 , constructed by Blitzer et al. 
(2007) , contains Amazon product reviews from four different domains: Books, DVDs, Electronics and Kitchen appliances. The goal in each domain is to classify a product review as either positive or negative. The datasets in each domain are partitioned randomly into training data, development data and testing data in the proportions of 70%, 20% and 10%, respectively.", "cite_spans": [ { "start": 48, "end": 69, "text": "Blitzer et al. (2007)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "5.1" }, { "text": "Previous works have proposed various multi-task frameworks, but not all of them can be applied to the tasks we focus on. Nevertheless, we chose the two most related neural models for multi-task learning and implemented them as strong competitors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Competitor Methods for Multi-task Learning", "sec_num": "5.2" }, { "text": "\u2022 MT-CNN: This model is proposed by Collobert and Weston (2008) with a convolutional layer, in which the lookup tables are partially shared while the other layers are task-specific.", "cite_spans": [ { "start": 36, "end": 63, "text": "Collobert and Weston (2008)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Competitor Methods for Multi-task Learning", "sec_num": "5.2" }, { "text": "\u2022 MT-DNN: This model is proposed by Liu et al. (2015b) with bag-of-words input and multilayer perceptrons, in which a hidden layer is shared.", "cite_spans": [ { "start": 35, "end": 53, "text": "Liu et al. 
(2015b)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Competitor Methods for Multi-task Learning", "sec_num": "5.2" }, { "text": "The networks are trained with backpropagation and the gradient-based optimization is performed using the Adagrad update rule (Duchi et al., 2011) .", "cite_spans": [ { "start": 125, "end": 145, "text": "(Duchi et al., 2011)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Hyperparameters and Training", "sec_num": "5.3" }, { "text": "The word embeddings for all of the models are initialized with the 100d GloVe vectors (840B token version, (Pennington et al., 2014) ) and fine-tuned during training to improve the performance. The other parameters are initialized by randomly sampling from uniform distribution in [\u22120.1, 0.1]. The mini-batch size is set to 16.", "cite_spans": [ { "start": 107, "end": 132, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Hyperparameters and Training", "sec_num": "5.3" }, { "text": "For each task, we take the hyperparameters which achieve the best performance on the development set via an small grid search over combinations of the initial learning rate [0.1, 0.01], l 2 regularization [0.0, 5E\u22125, 1E\u22125]. For datasets without development set, we use 10-fold cross-validation (CV) instead. The final hyper-parameters are set as Table 2 .", "cite_spans": [], "ref_spans": [ { "start": 346, "end": 353, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Hyperparameters and Training", "sec_num": "5.3" }, { "text": "We first compare our proposed models with the baseline system for single task classification. Table 3 shows the classification accuracies on the movie reviews dataset. The row of \"Single Task\" shows the results of LSTM and ME-LSTM for each individual task. 
With the help of multi-task learning, the performances of these four tasks are improved by 1.8% (ARC-I) and 2.9% (ARC-II) on average relative to LSTM. We find that the local-global hybrid external memory architecture performs better. The reason is that the global memory in ARC-I can store some task-specific information besides the shared information, which may be noisy to other tasks. Moreover, both of our proposed models outperform MT-CNN and MT-DNN, which indicates the effectiveness of our proposed sharing mechanism. To give an intuitive evaluation of these results, we also list the following state-of-the-art neural models. By utilizing the information shared among several related tasks, our results outperform most state-of-the-art models. Although Tree-LSTM outperforms our method on SST-1, it requires an external parser to obtain the topological structure of the sentence. It is worth noting that our models are generic and compatible with other LSTM-based models. For example, we can easily extend our models to incorporate the Tree-LSTM model. Table 4 shows the classification accuracies on the tasks of product reviews. The row of \"Single Task\" shows the results of the baseline for each individual task. With the help of the global shared memory (ARC-I), the performances of these four tasks are improved by an average of 2.9% (2.6%) compared with LSTM (ME-LSTM). ARC-II achieves the best performance on three sub-tasks, and its average improvement is 3.7% (3.5%). Compared with MT-CNN and MT-DNN, our models achieve better performance. We think the reason is that our models can not only share lexical information but also share complicated sentence patterns via the read/write operations of the external memory. 
Furthermore, these results on product reviews are consistent with those on movie reviews, which shows that our architectures are robust.", "cite_spans": [], "ref_spans": [ { "start": 94, "end": 102, "text": "Table 3", "ref_id": null }, { "start": 1337, "end": 1344, "text": "Table 4", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Multi-task Learning of Movie Reviews", "sec_num": "5.4" }, { "text": "To get an intuitive understanding of what happens when we use the shared memory to predict the class of a text, we design an experiment to compare and analyze the difference between our models and the vanilla LSTM, thereby demonstrating the effectiveness of our proposed architectures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Case Study", "sec_num": "5.6" }, { "text": "We sample two sentences from the SST-2 validation dataset, and the changes of the predicted sentiment score at different time steps are shown in Figure 3 , which were obtained by the vanilla LSTM and ARC-I respectively. Additionally, both models are bidirectional for better visualization. 
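The fusion-gate activations visualized in this case study can be illustrated with a generic gated-fusion sketch. This is our own simplified formulation (a sigmoid gate interpolating between the task-specific LSTM hidden state and the shared-memory read vector), not the paper's exact equations, and all parameter names are hypothetical:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fuse(h_t, r_t, W_g, U_g, b_g):
    # g_t is the fusion gate: high activations let more information
    # flow from the shared-memory read vector r_t into the output.
    g_t = sigmoid(h_t @ W_g + r_t @ U_g + b_g)
    # Gated interpolation between memory read and LSTM hidden state.
    return g_t * r_t + (1.0 - g_t) * h_t
```

Plotting g_t over time steps, as in the figure, shows which input words trigger reads from the shared memory.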
To gain more insight into how the shared external memory influences a specific task, we plot the evolving activations of the fusion gates through time, which control the signals flowing from the shared external memory to the task-specific output, in order to understand the behaviour of the neurons.", "cite_spans": [], "ref_spans": [ { "start": 145, "end": 153, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Case Study", "sec_num": "5.6" }, { "text": "The sentence \"It is a cookie-cutter movie, a cut-and-paste job.\" has a negative sentiment, but the standard LSTM gives a wrong prediction because it does not understand the informative words \"cookie-cutter\" and \"cut-and-paste\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Case Study", "sec_num": "5.6" }, { "text": "In contrast, our model makes a correct prediction, and the reason can be inferred from the activations of the fusion gates. As shown in Figure 3 -(c), we can clearly see that the neurons are strongly activated when they take \"cookie-cutter\" and \"cut-and-paste\" as input, which indicates that much information in the shared memory has been passed into the LSTM, thereby enabling the model to give a correct prediction.", "cite_spans": [], "ref_spans": [ { "start": 130, "end": 138, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Case Study", "sec_num": "5.6" }, { "text": "Another case, \"If you were not nearly moved to tears by a couple of scenes, you've got ice water in your veins\", a subjunctive clause introduced by \"if\", has a positive sentiment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Case Study", "sec_num": "5.6" }, { "text": "As shown in Figure 3-(b,d) , the vanilla LSTM failed to capture the implicit meaning behind the sentence, while our model is sensitive to the pattern \"If ... 
were not ...\" and has an accurate understanding of the sentence. This indicates that the shared memory mechanism can not only enrich the meanings of certain words, but also convey information about sentence structure to a specific task. ", "cite_spans": [], "ref_spans": [ { "start": 12, "end": 26, "text": "Figure 3-(b,d)", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Case Study", "sec_num": "5.6" }, { "text": "Neural network based multi-task learning has proven effective for many NLP problems (Collobert and Weston, 2008; Glorot et al., 2011; Liu et al., 2015b; Liu et al., 2016b) . In most of these models, the lower layers are shared across all tasks, while the top layers are task-specific. Collobert and Weston (2008) used a shared representation for input words and solved different traditional NLP tasks within one framework. However, only one lookup table is shared, and the other lookup tables and layers are task-specific.", "cite_spans": [ { "start": 89, "end": 117, "text": "(Collobert and Weston, 2008;", "ref_id": "BIBREF5" }, { "start": 118, "end": 138, "text": "Glorot et al., 2011;", "ref_id": "BIBREF13" }, { "start": 139, "end": 157, "text": "Liu et al., 2015b;", "ref_id": "BIBREF21" }, { "start": 158, "end": 176, "text": "Liu et al., 2016b)", "ref_id": "BIBREF23" }, { "start": 286, "end": 313, "text": "Collobert and Weston (2008)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "Liu et al. (2015b) developed a multi-task DNN for learning representations across multiple tasks. Their multi-task DNN approach combines the tasks of query classification and ranking for web search. 
However, the input of the model is a bag-of-words representation, which loses word order information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "More recently, several multi-task encoder-decoder networks were also proposed for neural machine translation (Dong et al., 2015; Luong et al., 2015; Firat et al., 2016) , which can make use of cross-lingual information. Unlike these works, in this paper we design two neural architectures with shared memory for multi-task learning, which can store useful information across the tasks. Our architectures are relatively loosely coupled, and therefore more flexible to extend. With the help of the shared memory, we can obtain better task-specific sentence representations by utilizing the knowledge acquired from other related tasks.", "cite_spans": [ { "start": 109, "end": 128, "text": "(Dong et al., 2015;", "ref_id": "BIBREF9" }, { "start": 129, "end": 148, "text": "Luong et al., 2015;", "ref_id": "BIBREF24" }, { "start": 149, "end": 168, "text": "Firat et al., 2016)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "In this paper, we introduce two deep architectures for multi-task learning. The difference from previous models lies in the mechanism of sharing information among several tasks. We design an external memory to store the knowledge shared by several related tasks. 
Experimental results show that our models can improve the performance of several related tasks by exploiting common features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "7" }, { "text": "In addition, we also propose a deep fusion strategy to integrate the information from the external memory into the task-specific LSTM via a fusion gate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "7" }, { "text": "In future work, we would like to investigate other sharing mechanisms for neural network based multi-task learning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "7" }, { "text": "http://nlp.stanford.edu/sentiment. 2 http://ai.stanford.edu/\u02dcamaas/data/sentiment/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://www.cs.jhu.edu/\u02dcmdredze/datasets/sentiment/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would like to thank the anonymous reviewers for their valuable comments. This work was partially funded by the National Natural Science Foundation of China (Nos. 61532011 and 61672162) and the National High Technology Research and Development Program of China (No. 2015AA015408).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Learning efficient algorithms with hierarchical attentive memory", "authors": [ { "first": "Marcin", "middle": [], "last": "Andrychowicz", "suffix": "" }, { "first": "Karol", "middle": [], "last": "Kurach", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1602.03218" ] }, "num": null, "urls": [], "raw_text": "Marcin Andrychowicz and Karol Kurach. 2016. 
Learning efficient algorithms with hierarchical attentive memory. arXiv preprint arXiv:1602.03218.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Neural machine translation by jointly learning to align and translate", "authors": [ { "first": "D", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "K", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Y", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Bahdanau, K. Cho, and Y. Bengio. 2014. Neural machine translation by jointly learning to align and translate. ArXiv e-prints, September.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification", "authors": [ { "first": "John", "middle": [], "last": "Blitzer", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Dredze", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2007, "venue": "ACL", "volume": "7", "issue": "", "pages": "440--447", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Blitzer, Mark Dredze, Fernando Pereira, et al. 2007. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In ACL, volume 7, pages 440-447.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Multitask learning. Machine learning", "authors": [ { "first": "Rich", "middle": [], "last": "Caruana", "suffix": "" } ], "year": 1997, "venue": "", "volume": "28", "issue": "", "pages": "41--75", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rich Caruana. 1997. Multitask learning. 
Machine learning, 28(1):41-75.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A fast and accurate dependency parser using neural networks", "authors": [ { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "740--750", "other_ids": {}, "num": null, "urls": [], "raw_text": "Danqi Chen and Christopher D Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 740-750.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A unified architecture for natural language processing: Deep neural networks with multitask learning", "authors": [ { "first": "Ronan", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. 
In Proceedings of ICML.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Natural language processing (almost) from scratch", "authors": [ { "first": "Ronan", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "L\u00e9on", "middle": [], "last": "Bottou", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Karlen", "suffix": "" }, { "first": "Koray", "middle": [], "last": "Kavukcuoglu", "suffix": "" }, { "first": "Pavel", "middle": [], "last": "Kuksa", "suffix": "" } ], "year": 2011, "venue": "The Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "2493--2537", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. The Journal of Machine Learning Research, 12:2493-2537.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Associative long short-term memory", "authors": [ { "first": "Ivo", "middle": [], "last": "Danihelka", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Wayne", "suffix": "" }, { "first": "Benigno", "middle": [], "last": "Uria", "suffix": "" }, { "first": "Nal", "middle": [], "last": "Kalchbrenner", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Graves", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ivo Danihelka, Greg Wayne, Benigno Uria, Nal Kalchbrenner, and Alex Graves. 2016. Associative long short-term memory. 
CoRR, abs/1602.03032.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Modelling, visualising and summarising documents with a single convolutional neural network", "authors": [ { "first": "Misha", "middle": [], "last": "Denil", "suffix": "" }, { "first": "Alban", "middle": [], "last": "Demiraj", "suffix": "" }, { "first": "Nal", "middle": [], "last": "Kalchbrenner", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Blunsom", "suffix": "" }, { "first": "Nando", "middle": [], "last": "De Freitas", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1406.3830" ] }, "num": null, "urls": [], "raw_text": "Misha Denil, Alban Demiraj, Nal Kalchbrenner, Phil Blunsom, and Nando de Freitas. 2014. Modelling, visualising and summarising documents with a single convolutional neural network. arXiv preprint arXiv:1406.3830.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Multi-task learning for multiple language translation", "authors": [ { "first": "Daxiang", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Hua", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Wei", "middle": [], "last": "He", "suffix": "" }, { "first": "Dianhai", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Haifeng", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daxiang Dong, Hua Wu, Wei He, Dianhai Yu, and Haifeng Wang. 2015. Multi-task learning for multiple language translation. 
In Proceedings of the ACL.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Adaptive subgradient methods for online learning and stochastic optimization", "authors": [ { "first": "John", "middle": [], "last": "Duchi", "suffix": "" }, { "first": "Elad", "middle": [], "last": "Hazan", "suffix": "" }, { "first": "Yoram", "middle": [], "last": "Singer", "suffix": "" } ], "year": 2011, "venue": "The Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "2121--2159", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121-2159.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Finding structure in time", "authors": [ { "first": "", "middle": [], "last": "Jeffrey L Elman", "suffix": "" } ], "year": 1990, "venue": "Cognitive science", "volume": "14", "issue": "", "pages": "179--211", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey L Elman. 1990. Finding structure in time. Cognitive science, 14(2):179-211.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Multi-way, multilingual neural machine translation with a shared attention mechanism", "authors": [ { "first": "Orhan", "middle": [], "last": "Firat", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1601.01073" ] }, "num": null, "urls": [], "raw_text": "Orhan Firat, Kyunghyun Cho, and Yoshua Bengio. 2016. Multi-way, multilingual neural machine translation with a shared attention mechanism. 
arXiv preprint arXiv:1601.01073.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Domain adaptation for large-scale sentiment classification: A deep learning approach", "authors": [ { "first": "Xavier", "middle": [], "last": "Glorot", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 28th International Conference on Machine Learning (ICML-11)", "volume": "", "issue": "", "pages": "513--520", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Domain adaptation for large-scale sentiment classification: A deep learning approach. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 513-520.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Neural turing machines", "authors": [ { "first": "Alex", "middle": [], "last": "Graves", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Wayne", "suffix": "" }, { "first": "Ivo", "middle": [], "last": "Danihelka", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1410.5401" ] }, "num": null, "urls": [], "raw_text": "Alex Graves, Greg Wayne, and Ivo Danihelka. 2014. Neural turing machines. arXiv preprint arXiv:1410.5401.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Generating sequences with recurrent neural networks", "authors": [ { "first": "Alex", "middle": [], "last": "Graves", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1308.0850" ] }, "num": null, "urls": [], "raw_text": "Alex Graves. 2013. Generating sequences with recurrent neural networks. 
arXiv preprint arXiv:1308.0850.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Long short-term memory", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural computation", "volume": "9", "issue": "8", "pages": "1735--1780", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Convolutional neural network architectures for matching natural language sentences", "authors": [ { "first": "Baotian", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Zhengdong", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Hang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Qingcai", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2014, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Baotian Hu, Zhengdong Lu, Hang Li, and Qingcai Chen. 2014. Convolutional neural network architectures for matching natural language sentences. In Advances in Neural Information Processing Systems.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "An empirical exploration of recurrent network architectures", "authors": [ { "first": "Rafal", "middle": [], "last": "Jozefowicz", "suffix": "" }, { "first": "Wojciech", "middle": [], "last": "Zaremba", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2015, "venue": "Proceedings of The 32nd International Conference on Machine Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rafal Jozefowicz, Wojciech Zaremba, and Ilya Sutskever. 2015. 
An empirical exploration of recurrent network architectures. In Proceedings of The 32nd International Conference on Machine Learning.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "A convolutional neural network for modelling sentences", "authors": [ { "first": "Nal", "middle": [], "last": "Kalchbrenner", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Grefenstette", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Blunsom", "suffix": "" } ], "year": 2014, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A convolutional neural network for modelling sentences. In Proceedings of ACL.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Multi-timescale long short-term memory neural network for modelling sentences and documents", "authors": [ { "first": "Yoon", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2014, "venue": "Convolutional neural networks for sentence classification", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1408.5882" ] }, "num": null, "urls": [], "raw_text": "Yoon Kim. 2014. Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882. PengFei Liu, Xipeng Qiu, Xinchi Chen, Shiyu Wu, and Xuanjing Huang. 2015a. Multi-timescale long short-term memory neural network for modelling sentences and documents. 
In Proceedings of the Conference on EMNLP.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Representation learning using multi-task deep neural networks for semantic classification and information retrieval", "authors": [ { "first": "Xiaodong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "He", "suffix": "" }, { "first": "Li", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Duh", "suffix": "" }, { "first": "Ye-Yi", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2015, "venue": "NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaodong Liu, Jianfeng Gao, Xiaodong He, Li Deng, Kevin Duh, and Ye-Yi Wang. 2015b. Representation learning using multi-task deep neural networks for semantic classification and information retrieval. In NAACL.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Deep fusion LSTMs for text semantic matching", "authors": [ { "first": "Pengfei", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Xipeng", "middle": [], "last": "Qiu", "suffix": "" }, { "first": "Jifan", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Xuanjing", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2016, "venue": "Proceedings of Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pengfei Liu, Xipeng Qiu, Jifan Chen, and Xuanjing Huang. 2016a. Deep fusion LSTMs for text semantic matching. 
In Proceedings of Annual Meeting of the Association for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Recurrent neural network for text classification with multi-task learning", "authors": [ { "first": "Pengfei", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Xipeng", "middle": [], "last": "Qiu", "suffix": "" }, { "first": "Xuanjing", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2016, "venue": "Proceedings of International Joint Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2016b. Recurrent neural network for text classification with multi-task learning. In Proceedings of International Joint Conference on Artificial Intelligence.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Multi-task sequence to sequence learning", "authors": [ { "first": "Minh-Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "V", "middle": [], "last": "Quoc", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Le", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Lukasz", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "", "middle": [], "last": "Kaiser", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1511.06114" ] }, "num": null, "urls": [], "raw_text": "Minh-Thang Luong, Quoc V Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2015. Multi-task sequence to sequence learning. 
arXiv preprint arXiv:1511.06114.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Learning word vectors for sentiment analysis", "authors": [ { "first": "L", "middle": [], "last": "Andrew", "suffix": "" }, { "first": "Raymond", "middle": [ "E" ], "last": "Maas", "suffix": "" }, { "first": "", "middle": [], "last": "Daly", "suffix": "" }, { "first": "T", "middle": [], "last": "Peter", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Pham", "suffix": "" }, { "first": "", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Y", "middle": [], "last": "Andrew", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Ng", "suffix": "" }, { "first": "", "middle": [], "last": "Potts", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the ACL", "volume": "", "issue": "", "pages": "142--150", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew L Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the ACL, pages 142-150.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1301.3781" ] }, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space.
arXiv preprint arXiv:1301.3781.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts", "authors": [ { "first": "Bo", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Lillian", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2004, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bo Pang and Lillian Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proceedings of ACL.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Glove: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Empirical Methods in Natural Language Processing", "volume": "12", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation.
Proceedings of the Empirical Methods in Natural Language Processing (EMNLP 2014), 12:1532-1543.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Semi-supervised recursive autoencoders for predicting sentiment distributions", "authors": [ { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "H", "middle": [], "last": "Eric", "suffix": "" }, { "first": "Andrew", "middle": [ "Y" ], "last": "Huang", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Ng", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2011, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Socher, Jeffrey Pennington, Eric H Huang, Andrew Y Ng, and Christopher D Manning. 2011. Semi-supervised recursive autoencoders for predicting sentiment distributions. In Proceedings of EMNLP.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Semantic compositionality through recursive matrix-vector spaces", "authors": [ { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Brody", "middle": [], "last": "Huval", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "Andrew Y", "middle": [], "last": "Manning", "suffix": "" }, { "first": "", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2012, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "1201--1211", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Socher, Brody Huval, Christopher D Manning, and Andrew Y Ng. 2012. Semantic compositionality through recursive matrix-vector spaces.
In Proceedings of EMNLP, pages 1201-1211.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Recursive deep models for semantic compositionality over a sentiment treebank", "authors": [ { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Perelygin", "suffix": "" }, { "first": "Y", "middle": [], "last": "Jean", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Wu", "suffix": "" }, { "first": "", "middle": [], "last": "Chuang", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" }, { "first": "Y", "middle": [], "last": "Andrew", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Ng", "suffix": "" }, { "first": "", "middle": [], "last": "Potts", "suffix": "" } ], "year": 2013, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of EMNLP.
In Advances in Neural Information Processing Systems, pages 2431-2439.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Improved semantic representations from tree-structured long short-term memory networks", "authors": [ { "first": "Kai Sheng", "middle": [], "last": "Tai", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1503.00075" ] }, "num": null, "urls": [], "raw_text": "Kai Sheng Tai, Richard Socher, and Christopher D Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. arXiv preprint arXiv:1503.00075.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Word representations: a simple and general method for semi-supervised learning", "authors": [ { "first": "Joseph", "middle": [], "last": "Turian", "suffix": "" }, { "first": "Lev", "middle": [], "last": "Ratinov", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2010, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: a simple and general method for semi-supervised learning. In Proceedings of ACL.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Larger-context language modelling", "authors": [ { "first": "Tian", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1511.03729" ] }, "num": null, "urls": [], "raw_text": "Tian Wang and Kyunghyun Cho. 2015. Larger-context language modelling.
arXiv preprint arXiv:1511.03729.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "Local-Global Hybrid Memory Architecture", "type_str": "figure", "uris": null }, "FIGREF1": { "num": null, "text": "Two architectures for modelling text with multi-task learning.", "type_str": "figure", "uris": null }, "FIGREF2": { "num": null, "text": "(a)(b) The change of the predicted sentiment score at different time steps. Y-axis represents the sentiment score, while X-axis represents the input words in chronological order. The red horizontal line gives a border between the positive and negative sentiments. (c)(d) Visualization of the fusion gate's activation.", "type_str": "figure", "uris": null }, "TABREF1": { "num": null, "text": "Statistics of two multi-task datasets. Each dataset consists of four related tasks.", "content": "