{
"paper_id": "Q19-1008",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:09:31.850353Z"
},
"title": "Rotational Unit of Memory: A Novel Representation Unit for RNNs with Scalable Applications",
"authors": [
{
"first": "Rumen",
"middle": [],
"last": "Dangovski",
"suffix": "",
"affiliation": {},
"email": "rumenrd@mit.edu"
},
{
"first": "Li",
"middle": [],
"last": "Jing",
"suffix": "",
"affiliation": {},
"email": "ljing@mit.edu"
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "HBKU",
"location": {}
},
"email": "pnakov@qf.org.qa"
},
{
"first": "Mi\u0107o",
"middle": [],
"last": "Tatalovi\u0107",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Massachusetts Institute of Technology",
"location": {}
},
"email": ""
},
{
"first": "Marin",
"middle": [],
"last": "Solja\u010di\u0107",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Massachusetts Institute of Technology",
"location": {}
},
"email": "soljacic@mit.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Stacking long short-term memory (LSTM) cells or gated recurrent units (GRUs) as part of a recurrent neural network (RNN) has become a standard approach to solving a number of tasks ranging from language modeling to text summarization. Although LSTMs and GRUs were designed to model long-range dependencies more accurately than conventional RNNs, they nevertheless have problems copying or recalling information from the long distant past. Here, we derive a phase-coded representation of the memory state, Rotational Unit of Memory (RUM), that unifies the concepts of unitary learning and associative memory. We show experimentally that RNNs based on RUMs can solve basic sequential tasks such as memory copying and memory recall much better than LSTMs/GRUs. We further demonstrate that by replacing LSTM/GRU with RUM units we can apply neural networks to real-world problems such as language modeling and text summarization, yielding results comparable to the state of the art.",
"pdf_parse": {
"paper_id": "Q19-1008",
"_pdf_hash": "",
"abstract": [
{
"text": "Stacking long short-term memory (LSTM) cells or gated recurrent units (GRUs) as part of a recurrent neural network (RNN) has become a standard approach to solving a number of tasks ranging from language modeling to text summarization. Although LSTMs and GRUs were designed to model long-range dependencies more accurately than conventional RNNs, they nevertheless have problems copying or recalling information from the long distant past. Here, we derive a phase-coded representation of the memory state, Rotational Unit of Memory (RUM), that unifies the concepts of unitary learning and associative memory. We show experimentally that RNNs based on RUMs can solve basic sequential tasks such as memory copying and memory recall much better than LSTMs/GRUs. We further demonstrate that by replacing LSTM/GRU with RUM units we can apply neural networks to real-world problems such as language modeling and text summarization, yielding results comparable to the state of the art.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "An important element of the ongoing neural revolution in Natural Language Processing (NLP) is the rise of Recurrent Neural Networks (RNNs), which have become a standard tool for addressing a number of tasks ranging from language modeling, part-of-speech tagging and named entity recognition to neural machine translation, text summarization, question answering, and building chatbots/ dialog systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As standard RNNs suffer from exploding/ vanishing gradient problems, alternatives such as long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) or gated recurrent units (GRUs) (Cho et al., 2014) have been proposed and have now become standard.",
"cite_spans": [
{
"start": 120,
"end": 154,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF23"
},
{
"start": 187,
"end": 205,
"text": "(Cho et al., 2014)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Nevertheless, LSTMs and GRUs fail to demonstrate really long-term memory capabilities or efficient recall on synthetic tasks (see Figure 1 ). Figure 1 shows that when RNN units are fed a long string (e.g., emojis in Figure 1 (a)), they struggle to represent the input in their memory, which results in recall or copy mistakes. The origins of these issues are two-fold: (i) a single hidden state cannot memorize complicated sequential dynamics and (ii) the hidden state is not manipulated well, resulting in information loss. Typically, these are addressed separately: by using external memory for (i), and gated mechanisms for (ii).",
"cite_spans": [],
"ref_spans": [
{
"start": 130,
"end": 138,
"text": "Figure 1",
"ref_id": "FIGREF1"
},
{
"start": 142,
"end": 150,
"text": "Figure 1",
"ref_id": "FIGREF1"
},
{
"start": 216,
"end": 224,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Here, we solve (i) and (ii) jointly by proposing a novel RNN unit, Rotational Unit of Memory (RUM), that manipulates the hidden state by rotating it in an Euclidean space, resulting in a better information flow. This remedy to (ii) affects (i) to the extent that the external memory is less needed. As a proof of concept, in Figure 1(a) , RUM recalls correctly a faraway emoji.",
"cite_spans": [],
"ref_spans": [
{
"start": 325,
"end": 336,
"text": "Figure 1(a)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We further show that RUM is fit for real-world NLP tasks. In Figure 1(b) , a RUM-based seq2seq model produces a better summary than what a standard LSTM-based seq2seq model yields. In this particular example, LSTM falls into the wellknown trap of repeating information close to the end, whereas RUM avoids it. Thus, RUM can be seen as a more ''well-rounded'' alternative to LSTM.",
"cite_spans": [],
"ref_spans": [
{
"start": 61,
"end": 72,
"text": "Figure 1(b)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Given the example from Figure 1 , we ask the following questions: Does the long-term memory's 0 1 2 3 4 5 6 7 8 9 10 11 12 ? @ 1 RUM LSTM (a) (b) Story (abridged) The raccoon that topples your trashcan and pillages your garden may leave more than just a mess. More likely than not, it also contaminates your yard with parasites -most notably, raccoon roundworms baylisascaris procyonis (...) That is true in varying degrees throughout North America, where urban raccoons may infect people more than previously assumed. Led by Weinstein, the UCSB researchers wondered if most human infections went undetected\u2026 Their study, appearing in the CDC Journal Emerging Infectious Diseases, found that 7 percent of surveyed individuals tested positive for raccoon roundworm antibodies. That was news to Weinstein, who said the researchers wouldn't have been surprised if they'd found no evidence of human infection\u2026 Over 90 percent of raccoons in Santa Barbara play host to this parasite, which grows to about the size of a No. 2 pencil and can produce over 100,000 eggs per day (\u2026) Sometimes they reach the brain, with potentially devastating consequences. This infection, termed \"baylisascariasis,\" kills mice, has endangered the allegheny woodrat and has caused disease like blindness or severe brain damage in dozens of people, including a toddler in Santa Barbara back in 2002.",
"cite_spans": [],
"ref_spans": [
{
"start": 23,
"end": 31,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "LSTM generated summary \"baylisascariasis,\" kills mice, has endangered the allegheny woodrat and has caused disease like blindness or severe consequences. This infection, termed \"baylisascariasis,\" kills mice, has endangered the allegheny woodrat and has caused disease like blindness or severe consequences. This infection, termed \"baylisascariasis,\" kills mice, has endangered the allegheny woodrat. RUM (ours) generated summary Urban raccoons may infect people more than previously assumed. 7 percent of surveyed individuals tested positive for raccoon roundworm antibodies. Over 90 percent of raccoons in Santa Barbara play host to this parasite. advantage for synthetic tasks such as copying and recall translate to improvements for real-world NLP problems? Can RUM solve issues (i) and (ii) more efficiently? Does a theoretical advance improve real-world applications?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We propose RUM as the answer to these questions via experimental observations and mathematical intuition. We combine concepts from unitary learning and associative memory to utilize the theoretical advantages of rotations, and then we show promising applications to hard NLP tasks. Our evaluation of RUM is organized around a sequence of tests: (1) Pass a synthetic memory copying test; (2) Pass a synthetic associative recall test; (3) Show promising performance for question answering on the bAbI data set; (4) Improve the state-of-the-art for characterlevel language modeling on the Penn Treebank; (5) Perform effective seq2seq text summarization, training on the difficult CNN / Daily Mail summarization corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To the best of our knowledge, there is no previous work on RNN units that shows such promising performance, both theoretical and practical. Our contributions can be summarized as follows: (i) We propose a novel representation unit for RNNs based on an idea not previously explored in this context-rotations. (ii) We show theoretically and experimentally that our unit models much longer distance dependencies than LSTM and GRU, and can thus solve tasks such as memory recall and memory copying much better. (iii) We further demonstrate that RUM can be used as a replacement for LSTM/GRU in real-world problems such as language modeling, question answering, and text summarization, yielding results comparable to the state of the art. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our work rethinks the concept of gated models. LSTM and GRU are the most popular such models, and they learn to generate gates-such as input, reset, and update gates-that modify the hidden state through element-wise multiplication and addition. We manipulate the hidden state in a completely different way: Instead of gates, we learn directions in the hidden space towards which we rotate it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Moreover, because rotations are orthogonal, RUM is implicitly orthogonal, meaning that RUM does not parametrize the orthogonal operation, but rather extracts it from its own components. Thus, RUM is also different from explicitly orthogonal models such as uRNN, EURNN, GORU, and all other RNN units that parametrize their norm-preserving operation (see below).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Rotations have fundamental applications in mathematics (Artin, 2011; Hall, 2015) and physics (Sakurai and Napolitano, 2010) . In computer vision, rotational matrices and quaternions contain valuable information and have been used to estimate object poses (Katz, 2001; Shapiro and Stockman, 2001; Kuipers, 2002) . Recently, efficient, accurate and rotationally invariant representation units have been designed for convolutional neural networks (Worrall et al., 2017; Weiler et al., 2018) . Unlike that work, we use rotations to design a new RNN unit with application to NLP, rather than vision.",
"cite_spans": [
{
"start": 55,
"end": 68,
"text": "(Artin, 2011;",
"ref_id": null
},
{
"start": 69,
"end": 80,
"text": "Hall, 2015)",
"ref_id": "BIBREF19"
},
{
"start": 93,
"end": 123,
"text": "(Sakurai and Napolitano, 2010)",
"ref_id": "BIBREF46"
},
{
"start": 255,
"end": 267,
"text": "(Katz, 2001;",
"ref_id": "BIBREF26"
},
{
"start": 268,
"end": 295,
"text": "Shapiro and Stockman, 2001;",
"ref_id": "BIBREF49"
},
{
"start": 296,
"end": 310,
"text": "Kuipers, 2002)",
"ref_id": "BIBREF31"
},
{
"start": 444,
"end": 466,
"text": "(Worrall et al., 2017;",
"ref_id": "BIBREF61"
},
{
"start": 467,
"end": 487,
"text": "Weiler et al., 2018)",
"ref_id": "BIBREF58"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Unitary learning approaches the problem of vanishing and exploding gradients, which obstruct learning of really long-term dependencies (Bengio et al., 1994) . Theoretically, using unitary and orthogonal matrices will keep the norm of the gradient: the absolute value of their eigenvalues is raised to a high power in the gradient calculation, but it equals one, so it neither vanishes, nor explodes. Arjovsky et al. (2016) (unitary RNN, or uRNN) and Jing et al. (2017b) (efficient unitary RNN, or EURNN) used parameterizations to form the unitary spaces. Wisdom et al. (2016) applied gradient projection onto a unitary manifold. Vorontsov et al. (2017) used penalty terms as a regularization to restrict the matrices to be unitary. Unfortunately, such theoretical approaches struggle to perform outside of the domain of synthetic tasks, and fail at simple realworld tasks such as character-level language modeling (Jing et al., 2017a) . To alleviate this issue, Jing et al. (2017a) combined a unitary parametrization with gates, thus yielding a gated orthogonal recurrent unit (GORU), which provides a ''forgetting mechanism,'' required by NLP tasks.",
"cite_spans": [
{
"start": 135,
"end": 156,
"text": "(Bengio et al., 1994)",
"ref_id": "BIBREF7"
},
{
"start": 400,
"end": 445,
"text": "Arjovsky et al. (2016) (unitary RNN, or uRNN)",
"ref_id": null
},
{
"start": 450,
"end": 503,
"text": "Jing et al. (2017b) (efficient unitary RNN, or EURNN)",
"ref_id": null
},
{
"start": 555,
"end": 575,
"text": "Wisdom et al. (2016)",
"ref_id": "BIBREF60"
},
{
"start": 629,
"end": 652,
"text": "Vorontsov et al. (2017)",
"ref_id": "BIBREF57"
},
{
"start": 914,
"end": 934,
"text": "(Jing et al., 2017a)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Among pre-existing RNN units, RUM is most similar to GORU in spirit because both models transform (significantly) GRU. Note, however, that GORU parametrizes an orthogonal operation whereas RUM extracts an orthogonal operation in the form of a rotation. In this sense, to parallel our model's implicit orthogonality to the literature, RUM is a ''firmware'' structure rather than a ''learnware'' structure, as discussed in (Balduzzi and Ghifary, 2016) .",
"cite_spans": [
{
"start": 421,
"end": 449,
"text": "(Balduzzi and Ghifary, 2016)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Associative memory modeling provides a large variety of input encodings in a neural network for effective pattern recognition (Kohonen, 1974; Krotov and Hopfield, 2016) . It is particularly appealing for RNNs because their memory is in short supply. RNNs often circumvent this by using external memory in the form of an attention mechanism (Bahdanau et al., 2015; Hermann et al., 2015) . Another alternative is the use of neural Turing machines (Graves et al., 2014 . In either case, this yields an increase in the size of the model and makes training harder.",
"cite_spans": [
{
"start": 126,
"end": 141,
"text": "(Kohonen, 1974;",
"ref_id": "BIBREF28"
},
{
"start": 142,
"end": 168,
"text": "Krotov and Hopfield, 2016)",
"ref_id": "BIBREF29"
},
{
"start": 340,
"end": 363,
"text": "(Bahdanau et al., 2015;",
"ref_id": "BIBREF5"
},
{
"start": 364,
"end": 385,
"text": "Hermann et al., 2015)",
"ref_id": "BIBREF22"
},
{
"start": 445,
"end": 465,
"text": "(Graves et al., 2014",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Recent advances in associative memory (Plate, 2003; Ba et al., 2016a; Zhang and Zhou, 2017) suggest that its updates can be learned efficiently with backpropagation through time (Rumelhart et al., 1986) . For example, Zhang and Zhou (2017) used weights that are dynamically updated by the input sequence. By treating the RNN weights as memory determined by the current input data, a larger memory size is provided and fewer trainable parameters are required.",
"cite_spans": [
{
"start": 38,
"end": 51,
"text": "(Plate, 2003;",
"ref_id": "BIBREF44"
},
{
"start": 52,
"end": 69,
"text": "Ba et al., 2016a;",
"ref_id": "BIBREF3"
},
{
"start": 70,
"end": 91,
"text": "Zhang and Zhou, 2017)",
"ref_id": null
},
{
"start": 178,
"end": 202,
"text": "(Rumelhart et al., 1986)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Note that none of these methods used rotations to create the associative memory. The novelty of our model is that it exploits the simple and fundamental multiplicative closure of rotations to generate rotational associative memory for RNNs. As a result, an RNN that uses our RUM units yields state-of-the-art performance for synthetic associative recall while using very few parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Successful RNNs require well-engineered manipulations of the hidden state h t at time step t. We approach this mathematically, considering h t as a real vector in an N h -dimensional Euclidean space, where N h is the dimension of the ''hidden'' state R N h . Note that there is an angle between two vectors in R N h (the cosine of that angle can be calculated as a normalized dot product ''\u2022''). Moreover, we can associate a unique angle to h t with respect to some reference vector. Thus, a hidden state can be characterized by a magnitude, i.e., L 2 -norm '' . '', and a phase, i.e., angle with respect to the reference vector. Thus, if we devise a mechanism to generate reference vectors at any given time step, we would enable rotating the hidden state with respect to the generated reference.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "This rethinking of R N h allows us to propose a powerful learning representation: Instead of following the standard way of learning to modify the norm of h t through multiplication by gates and self-looping (as in LSTM), we learn to rotate the hidden state, thereby changing the phase, but preserving the magnitude. The benefits of using such phase-learning representation are two-fold: (i) the preserved magnitude yields stable gradients, which in turn enables really long-term memory, and (ii) there is always a sequence of rotations that can bring the current phase to a desired target one, thus enabling effective recall of information. In order to achieve this, we need a phaselearning transformation that is also differentiable. A simple and efficient approach is to compute the angle between two special vectors, and then to update the phase of the hidden state by rotating it with the computed angle.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "! h # h #$% x # h # + 1 \u2212 + * + ! # , # -\u0303# \u2299 ! #$% ! # -\u0303, h ! 0 h (a) (b) \u2191 2 \u00d7 \u2299 u # h 5 # 6",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "We let the RNN generate the special vectors at time step t (i) by linearly embedding the RNN input x t \u2208 R N x to an embedded input\u03b5 t \u2208 R N h , and (ii) by obtaining a target memory \u03c4 t as a linear combination of the current input x t (projected in the hidden space) and the previous history h t\u22121 (after a linear transformation).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "The Rotation Operation. We propose a function Rotation :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "R N h \u00d7 R N h \u2192 R N h \u00d7N h , which implements this idea.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "Rotation takes a pair of column vectors (a, b) and returns the rotational matrix R from a to b. If a and b have the same orientation, then R is the identity matrix; otherwise, the two vectors form a plane span(a, b). Our operation projects and rotates in that plane, leaving everything else intact, as shown in Figure 2 ",
"cite_spans": [],
"ref_spans": [
{
"start": 311,
"end": 319,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "u = a/ a v = (b \u2212 (u \u2022 b)u)/ b \u2212 (u \u2022 b)u",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "We can express the matrix operation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "R as [1 \u2212 uu \u2020 \u2212 vv \u2020 ] + (u, v)R(\u03b8)(u, v) \u2020 (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "where the bracketed term is the projection 2 and the second term is the 2D rotation in the plane, given by the following matrix:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "R(\u03b8) = cos \u03b8 \u2212 sin \u03b8 sin \u03b8 cos \u03b8",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "Finally, we define the rotation operation as follows: Rotation(a, b) \u2261 R. Note that R is differentiable by construction, and thus it is backpropagation-friendly. Moreover, we implement Rotation and its action on h t efficiently.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
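The construction above can be sketched in NumPy. This is an illustrative reimplementation of Equation (1), not the authors' released code; the function name `rotation` and the tolerance `eps` are our own choices.

```python
import numpy as np

def rotation(a, b, eps=1e-8):
    """Rotation(a, b): the N_h x N_h matrix R that rotates the direction of a
    onto the direction of b, acting only in the plane span(a, b)."""
    n = len(a)
    u = a / np.linalg.norm(a)
    proj = b - (u @ b) * u            # component of b orthogonal to u
    if np.linalg.norm(proj) < eps:    # (nearly) parallel vectors: R is the identity
        return np.eye(n)
    v = proj / np.linalg.norm(proj)   # (u, v): orthonormal basis of the plane
    cos_t = (u @ b) / np.linalg.norm(b)
    sin_t = np.sqrt(max(0.0, 1.0 - cos_t ** 2))
    U = np.stack([u, v], axis=1)                       # N_h x 2
    R2 = np.array([[cos_t, -sin_t], [sin_t, cos_t]])   # 2D rotation R(theta)
    # Equation (1): projection term plus in-plane rotation
    return np.eye(n) - U @ U.T + U @ R2 @ U.T
```

By construction the result is orthogonal with determinant +1, and it maps the direction of `a` exactly onto the direction of `b`.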
{
"text": "The key consideration is not to compute R explicitly. Instead, we follow Equation 1, which can be computed in linear memory O(N h ). Likewise, the time complexity is O(N 2 h ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
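Since R only mixes the two in-plane coordinates of a vector, its action can be computed without materializing the N_h x N_h matrix, as the text notes. A sketch under the same assumptions (the helper name is ours):

```python
import numpy as np

def rotate_vector(a, b, h, eps=1e-8):
    """Apply Rotation(a, b) to h using O(N_h) memory, never forming the matrix.
    Only the components of h inside span(a, b) change."""
    u = a / np.linalg.norm(a)
    proj = b - (u @ b) * u
    if np.linalg.norm(proj) < eps:    # degenerate plane: identity action
        return h.copy()
    v = proj / np.linalg.norm(proj)
    c = (u @ b) / np.linalg.norm(b)
    s = np.sqrt(max(0.0, 1.0 - c * c))
    hu, hv = u @ h, v @ h             # in-plane coordinates of h
    return (h - hu * u - hv * v       # out-of-plane part is untouched
            + (c * hu - s * hv) * u   # rotated in-plane component along u
            + (s * hu + c * hv) * v)  # rotated in-plane component along v
```

Because the operation is a rotation, the norm of `h` is preserved exactly.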
{
"text": "Associative memory. We find that, for some sequential tasks, it is useful to exploit the multiplicative structure of rotations to enable associative memory. This is based on the observation that just like the sum of two real numbers is also a real number, the product of two rotational matrices is another rotational matrix. 3 Therefore, we use a rotation R t as an additional memory state that accumulates phase as follows",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
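The closure property can be checked numerically. The random-rotation construction via QR below is our illustration device, not part of RUM itself.

```python
import numpy as np

def random_rotation(n, rng):
    """A random n x n rotation matrix, via QR of a Gaussian matrix."""
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    q = q * np.sign(np.diag(r))       # fix column signs for a unique factor
    if np.linalg.det(q) < 0:          # ensure determinant +1 (a rotation)
        q[:, 0] = -q[:, 0]
    return q

rng = np.random.default_rng(42)
R1 = random_rotation(5, rng)
R2 = random_rotation(5, rng)
P = R1 @ R2
# Closure: a product of rotations is again a rotation,
# so R_t in Equation (2) remains a rotation at every step.
assert np.allclose(P.T @ P, np.eye(5), atol=1e-10)
assert np.isclose(np.linalg.det(P), 1.0)
```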
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "R t = (R t\u22121 ) \u03bb Rotation(\u03b5 t , \u03c4 t )",
"eq_num": "(2)"
}
],
"section": "Model",
"sec_num": "3"
},
{
"text": "We make the associative memory from Equation (2) tunable through the parameter \u03bb \u2208 {0, 1}, which serves to switch the associative memory off and on. To the best of our knowledge, our model is the first RNN to explore such multiplicative associative memory.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "Note that even though R t acts as an additional memory state, there are no additional parameters in RUM: The parameters are only within the Rotation operation. As the same Rotation appears at each recursive step (2), the parameters are shared.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "The RUM cell. Figure 2 (b) shows a sketch of the connections in the RUM cell. RUM consists of an update gate u \u2208 R N h that has the same function as in GRU. However, instead of a reset gate, the model learns the memory target \u03c4 \u2208 R N h . RUM also learns to embed the input vector",
"cite_spans": [],
"ref_spans": [
{
"start": 14,
"end": 22,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "x \u2208 R N x into R N h to yield\u03b5 \u2208 R N h .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "Hence, Rotation encodes the rotation between the embedded input and the target, which is accumulated in the associative memory unit R t \u2208 R N h \u00d7N h (originally initialized to the identity matrix). Here, \u03bb is a non-negative integer that is a hyper-parameter of the model. The orthogonal matrix R t acts on the state h to produce an evolved hidden stateh. Finally, RUM calculates the new hidden state via u, just as in GRU. The RUM equations are given in Algorithm 1. The orthogonal matrix R(\u03b5 t , \u03c4 ) conceptually takes the place of a weight kernel acting on the hidden state in GRU.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "Non-linear activation for RUM. We motivate the choice of activations using analysis of the gradient updates. Let the cost function be C. For T steps, we compute the partial derivative via the chain rule:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "\u2202C \u2202h t = \u2202C \u2202h T T \u22121 k=t \u2202h k+1 \u2202h k = \u2202C \u2202h T T \u22121 k=t D (k) W \u2020 where D (k) = diag{f (W h k\u22121 + Ax k + b)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "} is the Jacobian matrix of the pointwise non-linearity f for a standard vanilla RNN. For the sake of clarity, let us consider a simplified version of RUM, where W \u2261 R k is a rotation matrix, and let us use spectral norm for matrices. By orthogonality, we have W \u2020 = 1. Then, the norm of the update \u2202C/\u2202h t is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "bounded by \u2202C/\u2202h T W \u2020 T \u2212t T \u22121 k=1 D (k) , which simplifies to \u2202C/\u2202h T T \u22121 k=1 D (k) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
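A quick numerical check of the orthogonality argument (the 64-dimensional example and the QR construction are our own illustration):

```python
import numpy as np

rng = np.random.default_rng(7)
# An orthogonal stand-in for the rotation factor W = R_k.
W, _ = np.linalg.qr(rng.standard_normal((64, 64)))

g = rng.standard_normal(64)  # a backpropagated gradient vector
# ||W^T g|| = ||g||: each orthogonal factor in the product leaves the
# gradient norm unchanged, so only the D^(k) terms can shrink or grow it.
assert np.isclose(np.linalg.norm(W.T @ g), np.linalg.norm(g))
# The spectral norm of W (its largest singular value) equals 1.
assert np.isclose(np.linalg.svd(W, compute_uv=False)[0], 1.0)
```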
{
"text": "Hence, if the norm of D (k) is strictly less than one, we would witness vanishing gradients (for large T ), which we aim to avoid by choosing a proper activation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "Hence, we compare four well-known activations f : ReLU, tanh, sigmoid, and softsign. Figure 3 shows their derivatives. As long as some value is positive, the ReLU derivative will be one, and thus D (k) = 1. This means that ReLU is potentially a good choice. Because RUM is closer to GRU, which makes the analysis more complicated, we conduct ablation studies on nonlinear activations and on the importance of the update gate throughout Section 4.",
"cite_spans": [],
"ref_spans": [
{
"start": 85,
"end": 93,
"text": "Figure 3",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "Time normalization (optional). Sometimes h t (in Algorithm 1) blows up, for example, when using ReLU activation or for heterogeneous architectures that use other types of units (e.g., LSTM/GRU) in addition to RUM or perform complex computations based on attention mechanism or pointers. In such cases, we suggest using L 2normalization of the hidden state h t to have a fixed norm \u03b7 along the time dimension.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "We call it time normalization, as we usually feed mini-batches to the RNN during learning that have the shape (N b , N T ), where N b is the size of the batch and N T is the length of the sequence. Empirically, fixing \u03b7 to be a small number stabilizes training, and we found that values centered around \u03b7 = 1.0 work well. This is an optional component in RUM, as typically h t does not blow up. In our experiments, we only needed it for our character-level language modeling, which mixes RUM and LSTM units.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "Algorithm 1 Rotational Unit of Memory (RUM) Input: dimensions N x , N h , T ; data x \u2208 R T \u00d7Nx",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "; type of cell \u03bb; norm \u03b7 for time-normalization; non-linear activation function f . Initialize:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "kernels W \u03c4 xh , W u xh \u2208 R Nx\u00d7N h , W \u03c4 hh , W u hh \u2208 R N h \u00d7N h andW xh \u2208 R Nx\u00d7N h ; biases b \u03c4 t , b u t ,b t \u2208 R N h",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "; hidden state h 0 ; orthogonal initialization for weights with gain 1.0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "for t = 1 to T do \u03c4 t = W \u03c4 xh x t + W \u03c4 hh h t\u22121 + b \u03c4 t //memory target u t = W u xh x t + W u hh h t\u22121 + b u t //update gate u t = sigmoid(u t ) //activation of the update gat\u1ebd \u03b5 t =W xh x t +b t //embedded input R t = (R t\u22121 ) \u03bb Rotation(\u03b5 t , \u03c4 t ) //associative memor\u1ef9 h t = f (\u03b5 t + R t h t\u22121 ) //hidden state evolution h t = u t h t\u22121 + (1 \u2212 u t ) h t //new state h t = \u03b7h t / h t //normalization N (optional) end for 4 Experiments",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
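Algorithm 1 can be sketched end to end in NumPy. This is our illustrative reimplementation, not the authors' code: parameter names follow the algorithm, ReLU stands in for f, and `rotation` implements Equation (1).

```python
import numpy as np

def rotation(a, b, eps=1e-8):
    """Rotation(a, b) of Equation (1); identity when a and b are parallel."""
    n = len(a)
    u = a / np.linalg.norm(a)
    proj = b - (u @ b) * u
    if np.linalg.norm(proj) < eps:
        return np.eye(n)
    v = proj / np.linalg.norm(proj)
    c = (u @ b) / np.linalg.norm(b)
    s = np.sqrt(max(0.0, 1.0 - c * c))
    U = np.stack([u, v], axis=1)
    return np.eye(n) - U @ U.T + U @ np.array([[c, -s], [s, c]]) @ U.T

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rum_step(x_t, h_prev, R_prev, p, lam=1, eta=None):
    """One time step of Algorithm 1. p maps kernel/bias names to arrays;
    lam in {0, 1} switches the associative memory; eta, when set, enables
    time normalization."""
    tau = p["W_tau_xh"].T @ x_t + p["W_tau_hh"].T @ h_prev + p["b_tau"]   # memory target
    u = sigmoid(p["W_u_xh"].T @ x_t + p["W_u_hh"].T @ h_prev + p["b_u"])  # update gate
    eps_t = p["W_xh"].T @ x_t + p["b"]                                    # embedded input
    R_t = np.linalg.matrix_power(R_prev, lam) @ rotation(eps_t, tau)      # Equation (2)
    h_tilde = np.maximum(eps_t + R_t @ h_prev, 0.0)                       # ReLU evolution
    h_t = u * h_prev + (1.0 - u) * h_tilde                                # new state
    if eta is not None:
        h_t = eta * h_t / np.linalg.norm(h_t)                             # time normalization
    return h_t, R_t
```

Note that R_t is never a learned parameter: all trainable weights sit in the linear maps that produce the embedded input and the memory target, so the rotation is extracted rather than parametrized.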
{
"text": "We now describe two kinds of experiments based (i) on synthetic and (ii) on real-world tasks. The former test the representational power of RUMs vs. LSTMs/GRUs, and the latter test whether RUMs also perform well for real-world NLP problems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "Copying memory task (A) is a standard testbed for the RNN's capability for long-term memory (Hochreiter and Schmidhuber, 1997; Arjovsky et al., 2016; Henaff et al., 2016) . Here, we follow the experimental set-up in Jing et al. (2017b) .",
"cite_spans": [
{
"start": 92,
"end": 126,
"text": "(Hochreiter and Schmidhuber, 1997;",
"ref_id": "BIBREF23"
},
{
"start": 127,
"end": 149,
"text": "Arjovsky et al., 2016;",
"ref_id": "BIBREF1"
},
{
"start": 150,
"end": 170,
"text": "Henaff et al., 2016)",
"ref_id": "BIBREF20"
},
{
"start": 216,
"end": 235,
"text": "Jing et al. (2017b)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic Tasks",
"sec_num": "4.1"
},
{
"text": "Data. The alphabet of the input consists of symbols",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic Tasks",
"sec_num": "4.1"
},
{
"text": "{a i }, i \u2208 {0, 1, \u2022 \u2022 \u2022 , n \u2212 1, n, n + 1}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic Tasks",
"sec_num": "4.1"
},
{
"text": ", the first n of which represent data for copying, and the remaining two forming ''blank'' and ''marker'' symbols, respectively. In our experiments, we set n = 8 and the data for copying is the first 10 symbols of the input. The RNN model is expected to output ''blank'' during T = 500 delay steps and, after the ''marker'' appears in the input, to output (copy) sequentially the first 10 input symbols. The train/test split is 50,000/500 examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic Tasks",
"sec_num": "4.1"
},
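The data described above can be sketched with a small generator. The encoding details (symbol indices for ''blank'' and ''marker'', and the exact placement of the marker) are our assumptions, not taken from the paper's code:

```python
import numpy as np

def copying_batch(batch_size=128, n=8, copy_len=10, delay=500, seed=0):
    """Generate one batch of the copying task: input = data, blanks,
    marker, blanks; target = blanks, then the data again at the end."""
    rng = np.random.RandomState(seed)
    blank, marker = n, n + 1                  # the two extra symbols
    seq_len = copy_len + delay + 1 + copy_len
    data = rng.randint(0, n, size=(batch_size, copy_len))
    x = np.full((batch_size, seq_len), blank, dtype=np.int64)
    x[:, :copy_len] = data                    # data to be memorized
    x[:, copy_len + delay] = marker           # signal: start copying now
    y = np.full((batch_size, seq_len), blank, dtype=np.int64)
    y[:, -copy_len:] = data                   # expected output after the marker
    return x, y
```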
{
"text": "Models. We test RNNs built using various types of units: LSTM (Hochreiter and Schmidhuber, 1997), GRU (Cho et al., 2014), uRNN (Wisdom et al., 2016), EURNN (Jing et al., 2017b), GORU (Jing et al., 2017a), and RUM (ours) with \u03bb \u2208 {0, 1} and \u03b7 \u2208 {1.0, N/A}. We train with a batch size of 128 and RMSProp with a 0.9 decay rate, and we try learning rates from {0.01, 0.001, 0.0001}. We found that LSTM and GRU fail for all learning rates, EURNN is unstable for large learning rates, and RUM is stable for all learning rates. Thus, we use 0.001 for all units except for EURNN, for which we use 0.0001. [Figure 4: Synthetic memory copying results, showing the cross-entropy loss. The number in a model's name indicates the size of the hidden state, \u03bb = 1 means tuning the associative memory, and \u03b7 = N/A means not using time normalization. The results for GRU 100 are not visible due to overlap with GRU 256.]",
"cite_spans": [
{
"start": 62,
"end": 96,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF23"
},
{
"start": 103,
"end": 121,
"text": "(Cho et al., 2014)",
"ref_id": "BIBREF10"
},
{
"start": 129,
"end": 150,
"text": "(Wisdom et al., 2016)",
"ref_id": "BIBREF60"
},
{
"start": 159,
"end": 179,
"text": "(Jing et al., 2017b)",
"ref_id": "BIBREF25"
},
{
"start": 187,
"end": 207,
"text": "(Jing et al., 2017a)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 371,
"end": 379,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Synthetic Tasks",
"sec_num": "4.1"
},
{
"text": "Results. In Figure 4 , we show the cross-entropy loss for delay time T = 500. Note that LSTM and GRU hit the predictable baseline of a memoryless strategy, equivalent to random guessing. 4 We can see that RUM improves over this baseline and converges to 100% accuracy. Among the explicit unitary models, EURNN and uRNN also solve the problem in just a few steps, and GORU converges slightly faster than RUM.",
"cite_spans": [
{
"start": 183,
"end": 184,
"text": "4",
"ref_id": null
}
],
"ref_spans": [
{
"start": 12,
"end": 20,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Synthetic Tasks",
"sec_num": "4.1"
},
{
"text": "Next, we study why RUM units can solve the task, whereas LSTM/GRU units cannot. In Figure 4 , we also test a RUM model (called RUM ) without a flexible target memory and embedded input, that is, with the weight kernels that produce \u03c4_t and \u03b5\u0303_t kept constant. We observe that this model does not learn well (it converges extremely slowly). This means that learning to rotate the hidden state, by having control over the angles used for rotations, is indeed needed.",
"cite_spans": [],
"ref_spans": [
{
"start": 83,
"end": 91,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Synthetic Tasks",
"sec_num": "4.1"
},
{
"text": "Controlling the norm of the hidden state is also important. The activations of LSTM and GRU are sigmoid and tanh, respectively, and both are bounded. RUM uses ReLU, which allows larger hidden states. [Table 1: Associative recall accuracy (%) for T = 30/50, with parameter counts: GRU (ours) 21.5/17.6, 14k; GORU (ours) 21.8/18.9, 13k; EURNN (ours) 24.5/18.5, 4k; LSTM (ours) 25.6/20.5, 17k; FW-LN (Ba et al., 2016a) 100.0/20.8, 9k; WeiNet (Zhang and Zhou, 2017) 100.0/100.0, 22k; RUM \u03bb = 0, \u03b7 = N/A (ours) 25.0/18.5, 13k; RUM \u03bb = 1, \u03b7 = 1.0 (ours) 100.0/83.7, 13k; RUM \u03bb = 1, \u03b7 = N/A (ours) 100.0/100.0, 13k.]",
"cite_spans": [
{
"start": 202,
"end": 212,
"text": "GRU (ours)",
"ref_id": null
},
{
"start": 284,
"end": 290,
"text": "(ours)",
"ref_id": null
},
{
"start": 311,
"end": 329,
"text": "(Ba et al., 2016a)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic Tasks",
"sec_num": "4.1"
},
{
"text": "(Nevertheless, note that RUM with the bounded tanh also yields 100% accuracy.) We observe that, when we remove the normalization, RUM converges faster than with \u03b7 = 1.0. Having no time normalization means larger spikes in the cross-entropy and an increased risk of exploding loss. EURNN and uRNN are exposed to this risk, while RUM can reduce it in a tunable way through time normalization. We also observe the benefits of tuning the associative rotational memory. Indeed, a RUM with \u03bb = 1 has a smaller hidden size, N_h = 100, but it learns much faster than a RUM with \u03bb = 0. It is possible that the accumulation of phase via \u03bb = 1 enables faster learning of really long-term dependencies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic Tasks",
"sec_num": "4.1"
},
{
"text": "Finally, we would like to note that removing the update gate, or using tanh or softsign activations, does not hurt performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic Tasks",
"sec_num": "4.1"
},
{
"text": "Associative recall task (B) is another testbed for long-term memory. We follow the settings in Ba et al. (2016a) and Zhang and Zhou (2017) .",
"cite_spans": [
{
"start": 95,
"end": 112,
"text": "Ba et al. (2016a)",
"ref_id": "BIBREF3"
},
{
"start": 117,
"end": 138,
"text": "Zhang and Zhou (2017)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic Tasks",
"sec_num": "4.1"
},
{
"text": "Data. The sequences for training are random and consist of pairs of letters and digits. We set the query key to always be a letter. We fix the size of the letter set to half the length of the sequence; the digits are from 0 to 9, and no letter is repeated. In particular, the RNN is fed a sequence of letter-digit pairs followed by the separation indicator ''??'' and a query letter (key), e.g., ''a1s2d3f4g5??d''. The RNN is supposed to output the digit that follows the query key (''d'' in this example): it needs to find the query key and then to output the digit that follows it (''3'' in this example). The train/dev/test split is 100k/10k/20k examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic Tasks",
"sec_num": "4.1"
},
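A sketch of a generator for such examples (the function name and the letter/digit encoding are our own):

```python
import random
import string

def recall_example(n_pairs=5, seed=None):
    """One associative-recall example such as 'a1s2d3f4g5??d' -> '3'."""
    rng = random.Random(seed)
    letters = rng.sample(string.ascii_lowercase, n_pairs)        # no letter repeats
    digits = [rng.choice(string.digits) for _ in range(n_pairs)] # digits 0-9
    key = rng.choice(letters)                                    # query key is a letter
    answer = digits[letters.index(key)]                          # digit after the key
    seq = "".join(l + d for l, d in zip(letters, digits)) + "??" + key
    return seq, answer
```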
{
"text": "[Figure 5, panels (a)-(c): (a) temperature map of a weight kernel; (b) distributions of cos \u03b8 across 10k-50k training steps for RUM with \u03bb = 0, \u03b7 = N/A; (c) the same for RUM with \u03bb = 1, \u03b7 = N/A.]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic Tasks",
"sec_num": "4.1"
},
{
"text": "[Figure 3: The kernel generating the target memory for RUM follows a diagonal pattern, which signifies the sequential learning of the model; this diagonal structure is not task specific, e.g., it also appears in the kernel W^(2)_hh for the target \u03c4 on the Penn Treebank task.] Models. We test LSTM, GRU, GORU, FW-LN (Ba et al., 2016a) , WeiNet (Zhang and Zhou, 2017), and RUM (\u03bb = 1, \u03b7 = 0). All models have the same hidden state size N_h = 50 for different lengths T . We train for 100k epochs with a batch size of 128, RMSProp as an optimizer, and a learning rate of 0.001 (selected using hyper-parameter search).",
"cite_spans": [
{
"start": 308,
"end": 326,
"text": "(Ba et al., 2016a)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 135,
"end": 143,
"text": "Figure 3",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Synthetic Tasks",
"sec_num": "4.1"
},
{
"text": "Results. Table 1 shows the results. We can see that LSTM and GRU are unable to recall the digit correctly. Even GORU, which learns the copying task, fails to solve the problem. FW-LN, WeiNet, and RUM can learn the task for T = 30. For RUM, it is necessary that \u03bb = 1, as for \u03bb = 0 its performance is similar to that of LSTM and GORU. WeiNet and RUM are the only known models that can learn the task for the challenging length of T = 50 input characters. Note that RUM yields 100% accuracy with 40% fewer parameters.",
"cite_spans": [],
"ref_spans": [
{
"start": 9,
"end": 16,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Synthetic Tasks",
"sec_num": "4.1"
},
{
"text": "The benefit of the associative memory is apparent from the temperature map in Figure 5(a) , where we can see that the weight kernel for the target memory has a clear diagonal activation. This suggests that the model learns how to rotate the hidden state in the Euclidean space by observing the sequence encoded in the hidden state. Note that none of our baseline models exhibit such a pattern for the weight kernels. Figure 5(b) shows the evolution of the rotational behavior during the 53 time steps for a model that does not learn the task. We can see that cos \u03b8 is small and biased towards 0.2. Figure 5(c) shows the evolution of a model with associative memory (\u03bb = 1) that does learn the task. Note that these distributions have a wider range that is more uniform.",
"cite_spans": [],
"ref_spans": [
{
"start": 78,
"end": 89,
"text": "Figure 5(a)",
"ref_id": "FIGREF8"
},
{
"start": 417,
"end": 428,
"text": "Figure 5(b)",
"ref_id": "FIGREF8"
},
{
"start": 598,
"end": 609,
"text": "Figure 5(c)",
"ref_id": "FIGREF8"
}
],
"eq_spans": [],
"section": "Synthetic Tasks",
"sec_num": "4.1"
},
{
"text": "Also, there are one or two cos \u03b8 instances close to 1.0 per distribution, that is, the angle is close to zero and the hidden state is rotated only marginally. The distributions in Figure 5 (c) yield more varied representations.",
"cite_spans": [],
"ref_spans": [
{
"start": 180,
"end": 188,
"text": "Figure 5",
"ref_id": "FIGREF8"
}
],
"eq_spans": [],
"section": "Synthetic Tasks",
"sec_num": "4.1"
},
{
"text": "Question answering (C) is typically done using neural networks with external memory, but here we use a vanilla RNN with and without attention.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Real-world NLP Tasks",
"sec_num": "4.2"
},
{
"text": "Data. We use the bAbI Question Answering data set (Weston et al., 2016) , which consists of 20 subtasks, with 9k/1k/1k examples for train/ dev/test per subtask. We train a separate model for each subtask. We tokenize the text (at the word and at the sentence level), and then we concatenate the story and the question.",
"cite_spans": [
{
"start": 50,
"end": 71,
"text": "(Weston et al., 2016)",
"ref_id": "BIBREF59"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Real-world NLP Tasks",
"sec_num": "4.2"
},
{
"text": "For the word level, we embed the words into dense vectors, and we feed them into the RNN. Hence, the input sequence can be labeled as {x^(s)_1, . . . , x^(s)_n, x^(q)_1, . . . , x^(q)_m}, where the story has n words and the question has m words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Real-world NLP Tasks",
"sec_num": "4.2"
},
{
"text": "For the sentence level, we generate sentence embeddings by averaging word embeddings. Thus, the input sequence for a story with n sentences is {x^(s)_1, . . . , x^(s)_n, x^(q)}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Real-world NLP Tasks",
"sec_num": "4.2"
},
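The sentence-level input described above is simply the mean of the sentence's word embeddings. As a sketch (the function name is ours):

```python
import numpy as np

def sentence_embedding(word_vectors):
    """Average the word embeddings of a sentence into one vector
    (the paper uses 64-dimensional embeddings)."""
    return np.asarray(word_vectors, dtype=float).mean(axis=0)
```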
{
"text": "Attention mechanism for sentence level. We use simple dot-product attention (Luong et al., 2015) :",
"cite_spans": [
{
"start": 76,
"end": 96,
"text": "(Luong et al., 2015)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Real-world NLP Tasks",
"sec_num": "4.2"
},
{
"text": "{p_t}_{0\u2264t\u2264n} := softmax({h^(q) \u00b7 h^(s)_t}_{0\u2264t\u2264n}). The context vector c := \u2211_{t=0}^{n} p_t h^(s)_t is then passed, together with the query vector, to a dense layer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Real-world NLP Tasks",
"sec_num": "4.2"
},
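The attention step above can be sketched directly (`dot_product_attention` is our name for it):

```python
import numpy as np

def softmax(z):
    z = z - z.max()               # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def dot_product_attention(h_q, H_s):
    """p_t = softmax over dot products of the query state h_q with each
    sentence state (rows of H_s); the context c is the p-weighted sum."""
    p = softmax(H_s @ h_q)
    c = p @ H_s
    return c, p
```

The context vector `c` would then be concatenated with the query state and fed to a dense layer, as described above.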
{
"text": "Models. We compare uRNN, EURNN, LSTM, GRU, GORU, and RUM (with \u03b7 = N/A in all experiments). The RNN model outputs the prediction at the end of the question through a softmax layer. We use a batch size of 32 for all 20 subtasks. We train the model using the Adam optimizer (Kingma and Ba, 2015) with a learning rate of 0.001. All embeddings (word and sentence) are 64-dimensional. For each subtask, we train until convergence on the dev set, without other regularization. For testing, we report the average accuracy over the 20 subtasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Real-world NLP Tasks",
"sec_num": "4.2"
},
{
"text": "Results. Table 2 shows the average accuracy on the 20 bAbI tasks. Without attention, RUM outperforms LSTM/GRU and all unitary baseline models by a sizable margin both at the word and",
"cite_spans": [],
"ref_spans": [
{
"start": 9,
"end": 16,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Real-world NLP Tasks",
"sec_num": "4.2"
},
{
"text": "[Table 2: Accuracy (%) on the 20 bAbI subtasks. Word level: 1 LSTM (Weston et al., 2016) 49.2; 2 uRNN (ours) 51.6; 3 EURNN (ours) 52.9; 4 LSTM (ours) 56.0; 5 GRU (ours) 58.2; 6 GORU (Jing et al., 2017a) 60.4; 7 RUM \u03bb = 0 (ours) 73.2; 8 DNC 96.2. Sentence level: 9 EUNN/attnEUNN (ours) 66.7/69.5; 10 LSTM/attnLSTM (ours) 67.2/80.1; 11 GRU/attnGRU (ours) 70.4/77.3; 12 GORU/attnGORU (ours) 71.3/76.4; 13 RUM/attnRUM \u03bb = 0 (ours) 75.1/74.3; 14 RUM/attnRUM \u03bb = 1 (ours) 79.0/80.1; 15 RUM/attnRUM \u03bb = 0 w/ tanh (ours) 70.5/72.9; 16 MemN2N (Sukhbaatar et al., 2015) 95.8; 17 GMemN2N (Perez and Liu, 2017) 96.3; 18 DMN+ (Xiong et al., 2016) 97.2; 19 EntNet (Henaff et al., 2017) 99.5; 20 QRN (Seo et al., 2017) 99.7.] at the sentence level. Moreover, RUM without attention (line 14) outperforms all RNN baselines except for attnLSTM. Furthermore, LSTM and GRU benefit the most from adding attention (lines 10-11), while the phase-coded models (lines 9, 12-15) obtain only a small boost in performance or even a decrease (e.g., in line 13). Although attnRUM (line 14) shares the best accuracy with attnLSTM (line 10), we hypothesize that a ''phase-inspired'' attention might further boost RUM's performance. 5 Language modeling [character-level] (D) is an important testbed for RNNs (Graves, 2013) .",
"cite_spans": [
{
"start": 26,
"end": 47,
"text": "(Weston et al., 2016)",
"ref_id": "BIBREF59"
},
{
"start": 136,
"end": 156,
"text": "(Jing et al., 2017a)",
"ref_id": "BIBREF24"
},
{
"start": 477,
"end": 502,
"text": "(Sukhbaatar et al., 2015)",
"ref_id": "BIBREF51"
},
{
"start": 519,
"end": 540,
"text": "(Perez and Liu, 2017)",
"ref_id": "BIBREF43"
},
{
"start": 554,
"end": 574,
"text": "(Xiong et al., 2016)",
"ref_id": "BIBREF62"
},
{
"start": 590,
"end": 611,
"text": "(Henaff et al., 2017)",
"ref_id": "BIBREF21"
},
{
"start": 624,
"end": 642,
"text": "(Seo et al., 2017)",
"ref_id": "BIBREF48"
},
{
"start": 1121,
"end": 1122,
"text": "5",
"ref_id": null
},
{
"start": 1196,
"end": 1210,
"text": "(Graves, 2013)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "Data. The Penn Treebank (PTB) corpus is a collection of articles from The Wall Street Journal (Marcus et al., 1993) , with a vocabulary of 10k words (using 50 different characters). We use a train/dev/test split of 5.1M/400k/450k tokens, and we replace rare words with <unk>. We feed 150 tokens at a time, and we use a batch size of 128.",
"cite_spans": [
{
"start": 94,
"end": 115,
"text": "(Marcus et al., 1993)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "Models. We incorporate RUM into a recent high-level model: Fast-Slow RNN (FS-RNN) (Mujika et al., 2017) . The FS-RNN-k architecture consists of two hierarchical layers: one of them is a ''fast'' layer that connects k RNN cells F 1 , . . . , F k in series; the other is a ''slow'' layer that consists of a single RNN cell S. The organization is roughly as follows: F 1 receives the input from the minibatch and feeds its state into S, S feeds its state into F 2 , and so on; finally, the output of F k is a probability distribution over characters. FS-RUM-2 uses fast cells (all LSTM) with hidden size of 700 and a slow cell (RUM) with a hidden state of size 1000, time normalization \u03b7 = 1.0, and \u03bb = 0. We also tried to use associative memory \u03bb = 1 or to avoid time normalization, but we encountered exploding loss at early training stages. We optimized all hyper-parameters on the dev set.",
"cite_spans": [
{
"start": 82,
"end": 103,
"text": "(Mujika et al., 2017)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
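The FS-RNN-2 wiring described above can be sketched with stand-in cells. We use a simple tanh cell purely as a placeholder (the paper's FS-RUM-2 uses LSTM fast cells and a RUM slow cell); the point here is the wiring, not the cell:

```python
import numpy as np

class TanhCell:
    """Minimal stand-in RNN cell; any cell with the same (input, state)
    interface, e.g. LSTM or RUM, could take its place."""
    def __init__(self, n_in, n_h, seed=0):
        rng = np.random.RandomState(seed)
        self.Wx = 0.1 * rng.randn(n_h, n_in)
        self.Wh = 0.1 * rng.randn(n_h, n_h)
        self.b = np.zeros(n_h)
    def __call__(self, x, h):
        return np.tanh(self.Wx @ x + self.Wh @ h + self.b)

def fs_rnn2_step(x, F1, S, F2, h_f, h_s):
    """One FS-RNN-2 step: F1 consumes the input, the slow cell S consumes
    F1's state, F2 consumes S's state; F2's state is the layer output."""
    h_f = F1(x, h_f)       # fast cell 1: reads the minibatch input
    h_s = S(h_f, h_s)      # slow cell: reads the fast state
    h_f = F2(h_s, h_f)     # fast cell 2: reads the slow state
    return h_f, h_s
```

In the full model, the output state `h_f` would be projected through a softmax layer over characters.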
{
"text": "Additionally, we tested FS-EURNN-2, i.e., the slow cell is EURNN with a hidden size of 2000, and FS-GORU-2 with a slow cell GORU with a hidden size of 800 (everything else remains as for FS-RUM-2). As the learning phases are periodic, there is no easy regularization for FS-EURNN-2 or FS-GORU-2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "For FS-RNN, we use the hyper-parameter values suggested in Mujika et al. (2017) . We further use layer normalization (Ba et al., 2016b ) on all states, on the LSTM gates, on the RUM update gate, and on the target memory. We also apply zoneout (Krueger et al., 2017) to the recurrent connections, as well as dropout (Srivastava et al., 2014) . We embed each character into a 128-dimensional space (without pre-training).",
"cite_spans": [
{
"start": 59,
"end": 79,
"text": "Mujika et al. (2017)",
"ref_id": "BIBREF37"
},
{
"start": 117,
"end": 134,
"text": "(Ba et al., 2016b",
"ref_id": "BIBREF4"
},
{
"start": 243,
"end": 265,
"text": "(Krueger et al., 2017)",
"ref_id": "BIBREF30"
},
{
"start": 315,
"end": 340,
"text": "(Srivastava et al., 2014)",
"ref_id": "BIBREF50"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "For training, we use the Adam optimizer with a learning rate of 0.002, we decay the learning rate for the last few training epochs, and we apply gradient clipping with a maximal norm of the gradients equal to 1.0. Finally, we pass the output through a softmax layer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "For testing, we report bits-per-character (BPC) loss on the test dataset, which is the cross-entropy loss but with a binary logarithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
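As a sketch, BPC over a sequence of predicted character distributions (the array shapes are our convention):

```python
import numpy as np

def bits_per_character(probs, targets):
    """Cross-entropy with a binary logarithm, averaged over time steps.
    probs: (T, vocab) predicted distributions; targets: (T,) true ids."""
    p_true = probs[np.arange(len(targets)), targets]
    return float(-np.mean(np.log2(p_true)))
```

For example, a uniform prediction over a 2-character alphabet gives exactly 1 bit per character.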
{
"text": "Our best FS-RUM-2 uses decaying learning rate: 180 epochs with a learning rate of 0.002, then 60 epochs with 0.0001, and finally 120 epochs with 0.00001.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "We also test a RUM with \u03b7 = 1.0, and a two-layer RUM with \u03b7 = 0.3. The cell zoneout/hidden zoneout/dropout probability is 0.5/0.9/0.35 for FS-RUM-2, and 0.5/0.1/0.65 for the vanilla versions. We train for 100 epochs with a 0.002 learning rate. These values were suggested by Mujika et al. (2017) , who used LSTM cells. Table 3, continued: (Chung et al., 2017) 1.240, -; 12 HyperLSTM (Ha et al., 2016) 1.219, 14.4M; 13 NASCell (Zoph and V. Le, 2017) 1.214, 16.3M; 14 FS-LSTM-4 (Mujika et al., 2017) 1.193, 6.5M; 15 FS-LSTM-2 (Mujika et al., 2017) 1.190, 7.2M; 16 FS-RUM-2 (ours)",
"cite_spans": [
{
"start": 274,
"end": 294,
"text": "Mujika et al. (2017)",
"ref_id": "BIBREF37"
},
{
"start": 318,
"end": 338,
"text": "(Chung et al., 2017)",
"ref_id": "BIBREF11"
},
{
"start": 359,
"end": 376,
"text": "(Ha et al., 2016)",
"ref_id": "BIBREF18"
},
{
"start": 413,
"end": 422,
"text": "Le, 2017)",
"ref_id": null
},
{
"start": 448,
"end": 469,
"text": "(Mujika et al., 2017)",
"ref_id": "BIBREF37"
},
{
"start": 494,
"end": 515,
"text": "(Mujika et al., 2017)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "1.189, 11.2M; 17 6lyr-QRNN (Merity et al., 2018) 1.187, 13.8M; 18 3lyr-LSTM (Merity et al., 2018) 1.175, 13.8M. Table 3 caption: Character-level language modeling results: BPC score on the PTB test split. Using tanh is slightly better than ReLU (lines 2-3). Removing the update gate in line 1 is worse than line 2. Phase-inspired regularization may improve lines 1-3, 6-8, 9-10, and 16.",
"cite_spans": [
{
"start": 25,
"end": 46,
"text": "(Merity et al., 2018)",
"ref_id": "BIBREF36"
},
{
"start": 72,
"end": 93,
"text": "(Merity et al., 2018)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [
{
"start": 106,
"end": 113,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "Results. In Table 3 , we report the BPC loss for character-level language modeling on PTB. For the test split, FS-RUM-2 reduces the BPC for Fast-Slow models by 0.001 points absolute. Moreover, we achieved a decrease of 0.002 BPC points for the validation split using an FS-RUM-2 model with a hidden size of 800 for the slow cell (RUM) and a hidden size of 1100 for the fast cells (LSTM). Our results support a conjecture from the conclusions of Mujika et al. (2017) , which states that models with long-term memory, when used as the slow cell, may enhance performance.",
"cite_spans": [
{
"start": 445,
"end": 465,
"text": "Mujika et al. (2017)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [
{
"start": 12,
"end": 19,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "Text summarization (E) is the task of reducing long pieces of text to short summaries without losing much information. It is one of the most challenging tasks in NLP (Nenkova and McKeown, 2011) , with a number of applications ranging from question answering to journalism (Tatalovi\u0107, 2018) . Text summarization can be abstractive (Nallapati et al., 2016) , extractive (Nallapati et al., 2017) , or hybrid (See et al., 2017) . Advances in encoder-decoder/seq2seq models (Cho et al., 2014; established models based on RNNs as powerful tools for text summarization. Having accumulated knowledge from the ablation studies and the preparatory tasks above, we now test RUM on this hard real-world NLP task.",
"cite_spans": [
{
"start": 166,
"end": 193,
"text": "(Nenkova and McKeown, 2011)",
"ref_id": "BIBREF41"
},
{
"start": 272,
"end": 289,
"text": "(Tatalovi\u0107, 2018)",
"ref_id": "BIBREF53"
},
{
"start": 330,
"end": 354,
"text": "(Nallapati et al., 2016)",
"ref_id": "BIBREF39"
},
{
"start": 368,
"end": 392,
"text": "(Nallapati et al., 2017)",
"ref_id": "BIBREF38"
},
{
"start": 405,
"end": 423,
"text": "(See et al., 2017)",
"ref_id": "BIBREF47"
},
{
"start": 469,
"end": 487,
"text": "(Cho et al., 2014;",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "Data. We follow the set-up in See et al. (2017) and we use the CNN/ Daily Mail corpus (Hermann et al., 2015; Nallapati et al., 2016) , which consists of news stories with reference summaries. On average, there are 781 tokens per story and 56 tokens per summary. The train/dev/test datasets contain 287,226/13,368/11,490 text-summary pairs.",
"cite_spans": [
{
"start": 86,
"end": 108,
"text": "(Hermann et al., 2015;",
"ref_id": "BIBREF22"
},
{
"start": 109,
"end": 132,
"text": "Nallapati et al., 2016)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "We further experimented with a new data set, which we crawled from the Science Daily Web site by iterating over certain date/time patterns. We successfully extracted 60,900 Web pages, each containing a public story about a recent scientific paper. We extracted the main content, a short summary, and a title from the HTML pages using Beautiful Soup. The input story length is 488.42 \u00b1 219.47 tokens, the target summary length is 45.21 \u00b1 18.60 tokens, and the title length is 9.35 \u00b1 2.84 tokens. In our experiments, we set the vocabulary size to 50k.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "We defined four tasks on this data: (i) s2s, story to summary, (ii) sh2s, shuffled story to summary (we put the paragraphs in the story in a random order); (iii) s2t, story to title; and (iv) oods2s, outof-domain testing for s2s (i.e., training on CNN / Daily Mail and testing on Science Daily).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "Models. We use a pointer-generator network (See et al., 2017) , which is a combination of a seq2seq model (Nallapati et al., 2016) with attention (Bahdanau et al., 2015) and a pointer network (Vinyals et al., 2015) . We believe the pointer-generator network architecture to be a good testbed for experiments with a new RNN unit because it enables both abstractive and extractive summarization.",
"cite_spans": [
{
"start": 43,
"end": 61,
"text": "(See et al., 2017)",
"ref_id": "BIBREF47"
},
{
"start": 106,
"end": 130,
"text": "(Nallapati et al., 2016)",
"ref_id": "BIBREF39"
},
{
"start": 146,
"end": 169,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF5"
},
{
"start": 192,
"end": 214,
"text": "(Vinyals et al., 2015)",
"ref_id": "BIBREF56"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "We adopt the model from See et al. (2017) as our LEAD baseline. This model uses a bi-directional LSTM encoder (400 steps) with attention distribution and an LSTM decoder (100 steps for training and 120 steps for testing), with all hidden states being 256-dimensional, and 128-dimensional word embeddings trained from scratch during training. For training, we use the cross-entropy loss for the seq2seq model. For evaluation, we use ROUGE (Lin and Hovy, 2003) . We also allow the coverage mechanism proposed in the original paper, which penalizes repetitions and improves the quality of the summaries (marked as ''cov.'' in Table 4 ). Following the original paper, we train LEAD for 270k iterations and we turn on the coverage for about 3k iterations at the end to get LEAD cov. We use an Adagrad optimizer with a learning rate of 0.15, an accumulator value of 0.1, and a batch size of 16. For decoding, we use a beam of size 4.",
"cite_spans": [
{
"start": 438,
"end": 458,
"text": "(Lin and Hovy, 2003)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [
{
"start": 623,
"end": 630,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "The only component in LEAD that our proposed models change is the type of the RNN unit for the encoder/decoder. [Table 4, lines 8-13 (ROUGE 1/2/L): 8 (Nallapati et al., 2016) 35.46 13.30 32.65; 9 (Nallapati et al., 2017) 39.60 16.20 35.30; 10 (See et al., 2017) 36.44 15.66 33.42; 11 (See et al., 2017) cov. 39.53 17.28 36.38; 12 (Narayan et al., 2018) 40.0 18.20 36.60; 13 (Celikyilmaz et al., 2018).] Namely, encRUM is a LEAD with a bidirectional RUM as an encoder (but with an LSTM decoder), decRUM is LEAD with a RUM as a decoder (but with a bi-LSTM encoder), and allRUM is LEAD with all LSTM units replaced by RUM ones. We train these models as LEAD, by minimizing the validation cross-entropy. We found that encRUM and allRUM take about 100k training steps to converge, while decRUM takes about 270k steps. Then, we turn on coverage training, as advised by See et al. (2017) , and we train for a few thousand steps {2k, 3k, 4k, 5k, 8k}. The best ROUGE on dev was achieved with 2k steps, and this is what we used ultimately. We did not use time normalization, as training was stable without it. We used the same hidden sizes for the LSTM, the RUM, and the mixed models. For the size of the hidden units, we tried {256, 360, 400, 512} on the dev set, and we found that 256 worked best overall.",
"cite_spans": [
{
"start": 95,
"end": 119,
"text": "(Nallapati et al., 2016)",
"ref_id": "BIBREF39"
},
{
"start": 140,
"end": 163,
"text": "(Nallapati et al., 2017",
"ref_id": "BIBREF38"
},
{
"start": 164,
"end": 204,
"text": ") 39.60 16.20 35.30 10 (See et al., 2017",
"ref_id": null
},
{
"start": 226,
"end": 243,
"text": "(See et al., 2017",
"ref_id": "BIBREF47"
},
{
"start": 244,
"end": 293,
"text": ") cov. 39.53 17.28 36.38 12 (Narayan et al., 2018",
"ref_id": null
},
{
"start": 314,
"end": 340,
"text": "(Celikyilmaz et al., 2018)",
"ref_id": "BIBREF8"
},
{
"start": 817,
"end": 834,
"text": "See et al. (2017)",
"ref_id": "BIBREF47"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "Results. Table 4 shows ROUGE scores for the CNN / Daily Mail and the Science Daily test splits. We can see that RUM can easily replace LSTM in the pointer-generator network. We found that the best place to use RUM is in the decoder of the seq2seq model, since decRUM is better than encRUM and allRUM. Overall, we obtained the best results with decRUM 256 (lines 2 and 7), and we observed slight improvements for some ROUGE variants over previous work (i.e., with respect to lines 10-11).",
"cite_spans": [],
"ref_spans": [
{
"start": 9,
"end": 16,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "We further trained decRUM with coverage for about 2,000 additional steps, which yielded 0.01 points of increase for ROUGE 1 (but with reduced ROUGE 2/L). We can conclude that here, as in the language modeling study (D), a combination of LSTM and RUM is better than using LSTM-only or RUM-only seq2seq models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "We conjecture that using RUM in the decoder is better because the encoder already has an attention mechanism and thus does not need much long-term memory, and would better focus on a more local context (as in LSTM). However, long-term memory is crucial for the decoder as it has to generate fluent output, and the attention mechanism cannot help it (i.e., better to use RUM). This is in line with our attention experiments on question answering. In future work, we plan to investigate combinations of LSTM and RUM units in more detail to identify optimal phase-coded attention.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "Incorporating RUM into the seq2seq model yields larger gradients, compatible with stable training. Figure 6(a) shows the global norm of the gradients for our baseline models. Because of the tanh activation, LSTM's gradients hit the 1.0 baseline even though gradient clipping is 2.0. All RUM-based models have larger global norm. decRUM 360 sustains a slightly higher norm than LEAD, which might be beneficial. Panel 6(b), a consequence of 6(a), demonstrates that the RUM decoder sustains hidden states of higher norm throughout training. Panel 6(c) shows the contribution of the output at each encoder step to the gradient updates of the model. We observe that an LSTM encoder (in LEAD and decRUM) yields slightly higher gradient updates to the model, which is in line with our conjecture that it is better to use an LSTM encoder. Finally, panel 6(d) shows the gradient updates at each decoder step. Although the overall performance of LEAD and decRUM is similar, we note that the last few gradient updates from a RUM decoder are zero, while they are slightly above zero for LSTM. This happens because the target summaries for a minibatch are actually shorter than 100 tokens. Here, RUM exhibits an interesting property: It identifies that the target summary has ended, and thus for the subsequent extra steps, our model stops the gradients from updating the weights. An LSTM decoder keeps updating during the extra steps, which might indicate that it does not identify the end of the target summary properly. We also compare our best decRUM 256 model to LEAD on the Science Daily data (lines 15-18). In Table 4 , lines 15-17, we retrain the models from scratch. We can see that LEAD has a clear advantage on the easiest task (line 15), which generally requires copying the first few sentences of the Science Daily article.",
"cite_spans": [],
"ref_spans": [
{
"start": 99,
"end": 110,
"text": "Figure 6(a)",
"ref_id": "FIGREF9"
},
{
"start": 1604,
"end": 1611,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
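{
"text": "The global norm plotted in Figure 6(a) is the quantity that gradient clipping monitors: the L2 norm of all gradient tensors taken together. A minimal NumPy sketch of the computation (mirroring the behavior of TensorFlow's tf.clip_by_global_norm; the toy gradient values are ours):

```python
import numpy as np

def clip_by_global_norm(grads, clip_norm):
    # Global norm: L2 norm over the concatenation of all gradient tensors.
    global_norm = float(np.sqrt(sum(np.sum(g ** 2) for g in grads)))
    # Rescale all gradients by a common factor only when the norm exceeds
    # the threshold; the direction of the update is preserved.
    scale = min(1.0, clip_norm / global_norm)
    return [g * scale for g in grads], global_norm

grads = [np.full((2, 2), 1.5), np.full(3, -0.5)]   # toy per-weight gradients
clipped, norm = clip_by_global_norm(grads, clip_norm=2.0)
clipped_norm = float(np.sqrt(sum(np.sum(g ** 2) for g in clipped)))
```

Because every tensor is scaled by the same factor, clipping caps the global norm at the threshold without changing the direction of the update.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},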
{
"text": "In line 16, this advantage decreases, as shuffling the paragraphs makes the task harder. We further observe that our RUM-based model demonstrates better performance on ROUGE F-2/L in line 17, where the task is highly abstractive.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "Out-of-domain performance. In line 18, decRUM 256 and LEAD are pretrained on CNN / Daily Mail (models from lines 1-2), and our RUM-based model shows clear advantage on all ROUGE metrics. We also observe examples that are better than the ones coming from LEAD (see for example the story 6 in Figure 1) . We hypothesize that RUM is better on out-of-domain data due to its associative nature, as can be seen in Equation 2: At inference, the weight matrix updates for the hidden state depend explicitly on the current input.",
"cite_spans": [],
"ref_spans": [
{
"start": 291,
"end": 300,
"text": "Figure 1)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "Automating Science Journalism. We further test decRUM 256 and LEAD on the challenging task of producing popular summaries for research articles. The abundance of such articles online and the popular coverage of many of them (e.g., on Science Daily) provides an opportunity to develop models for automating science journalism.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "The only directly related work 7 is that of Vadapalli et al. (2018) , who used research papers with corresponding popular style blog posts from Science Daily and phys.org, and aimed at generating the blog title. In their work, (i) they fed the paper title and its abstract into a heuristic function to extract relevant information, then (ii) they fed the output of this function into a pointer-generator network to produce a candidate title for the blog post.",
"cite_spans": [
{
"start": 44,
"end": 67,
"text": "Vadapalli et al. (2018)",
"ref_id": "BIBREF55"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "Although we also use Science Daily and pointergenerator networks, we differ from the above work in a number of aspects. First, we focus on generating highlights, which are longer, more informative, and more complex than titles. Moreover, we feed the model a richer input, which includes not only the title and the abstract, but also the full text of the research paper. 8 Finally, we skip (i), 6 http://www.sciencedaily.com/releases/ 2017/07/170724142035.htm.",
"cite_spans": [
{
"start": 394,
"end": 395,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "7 Other summarization work preserved the original scientific style (Teufel and Moens, 2002; Nikolov et al., 2018) .",
"cite_spans": [
{
"start": 67,
"end": 91,
"text": "(Teufel and Moens, 2002;",
"ref_id": "BIBREF54"
},
{
"start": 92,
"end": 113,
"text": "Nikolov et al., 2018)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "8 As the full text for research papers is typically only available in PDF format (sometimes also in HTML and/or XML), it is generally hard to convert to text format. Thus, we focus on publications by just a few well-known publishers, which cover a sizable proportion of the research papers discussed in Science Daily, and for which we developed parsers: American Association for the Advancement of Science (AAAS), Elsevier, Public Library of Science (PLOS), Proceedings of the National Academy of Sciences (PNAS), Springer, and Wiley. Ultimately, we ended up with 50,308 full text articles, each paired with a corresponding Science Daily blog post.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "Science Daily reference Researchers are collecting and harvesting enzymes while maintaining the enzyme's bioactivity. The new model system may impact cancer research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "LEAD generated highlight Scientists have developed a new method that could make it possible to develop drugs and vaccines. The new method could be used to develop new drugs to treat cancer and other diseases such as cancer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "decRUM generated highlight Researchers have developed a method that can be used to predict the isolation of nanoparticles in the presence of a complex mixture. The method, which uses nanoparticles to map the enzyme, can be used to detect and monitor enzymes, which can be used to treat metabolic diseases such as cancer. and in (ii) we encode for 1,000 steps (i.e., input words) and we decode for 100 steps. We observed that reading the first 1,000 words from the research paper is generally enough to generate a meaningful Science Daily-style highlight. Overall, we encode much more content from the research paper and we generate much longer highlights. To the best of our knowledge, our model is the only successful one in the domain of automatic science journalism that takes such a long input. Figure 7 shows some highlights generated by our models, trained for 35k steps for decRUM and for 50k steps for LEAD. The highlights are grammatical, abstractive, and follow the Science Daily-style of reporting. The pointergenerator framework also allows for copying scientific terminology, which allows it to handle simultaneously domains ranging from computer science, to physics, to medicine. Interestingly, the words cancer and diseases are not mentioned in the research paper's title or abstract, not even on the entire first page; yet, our models manage to extract them. See a demo and more examples in the link at footnote 1.",
"cite_spans": [],
"ref_spans": [
{
"start": 799,
"end": 807,
"text": "Figure 7",
"ref_id": "FIGREF10"
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "RUM vs. GORU. Here, we study the energy landscape of the loss function in order to give some intuition about why RUM's choice of rotation is more appealing than what was used in previous phase-coded models. For simplicity, we only compare to GORU (Jing et al., 2017a) GORU's gated mechanism is most similar to that of RUM, and its orthogonal parametrization, given by Clements et al. (2016) , is similar to that for the other orthogonal models in Section 2. Given a batch B = {b i } i , weights W = {w j } j , and a model F , the loss L(W, B) is defined as",
"cite_spans": [
{
"start": 247,
"end": 267,
"text": "(Jing et al., 2017a)",
"ref_id": "BIBREF24"
},
{
"start": 368,
"end": 390,
"text": "Clements et al. (2016)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "j F (W, b j ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "In GORU, the weights are defined to be angles of rotations, and thus the summand is F (W, b j ) \u2261 GORU(. . . , cos(w i ), sin(w i ), . . . , b j ). The arguments w i of the trigonometric functions are independent of the batch element b j , and all summands are in phase. Thus, the more trigonometric functions appear in F (W, b j ), the more local minima we expect to observe in L.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "In contrast, for RUM we can write F (W, b j ) \u2261 RUM(. . . , cos(g(w i , b j )), sin(g(w i , b j )), . . . , b j ), where g is the arccos function that was used in defining the operation Rotation in Section 3. Because g depends on the input b j , the summands F (W, b j ) are generally out of phase. As a result, L will not be close to periodic, which reduces the risk of falling into local minima.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
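{
"text": "This phase intuition can be checked on a toy one-parameter landscape (our construction for illustration, not the models' actual losses): each summand contributes a smooth term plus one trigonometric term, and we compare a shared phase (GORU-like) with input-dependent phases (RUM-like, modeled here as evenly spread angles):

```python
import numpy as np

# Toy version of the phase argument: each batch element b_j contributes
# a smooth term plus one trigonometric term to L(w) = sum_j F(w, b_j).
B = 64
w = np.linspace(-np.pi, np.pi, 4001)     # 1D slice of weight space
c = np.linspace(-1.0, 1.0, B)            # per-example smooth part

def landscape(phases):
    return sum((w - cj) ** 2 / 2 + np.cos(6 * w + pj)
               for cj, pj in zip(c, phases))

# GORU-like: the angle w_i is independent of b_j, so all summands
# oscillate in phase and their wiggles add up.
in_phase = landscape(np.zeros(B))

# RUM-like: the effective angle g(w, b_j) varies with b_j; modeled here
# by evenly spread phases, whose oscillations cancel.
out_of_phase = landscape(2 * np.pi * np.arange(B) / B)

def n_local_minima(y):
    # Count strict interior local minima on the grid.
    return int(np.sum((y[1:-1] < y[:-2]) & (y[1:-1] < y[2:])))
```

On this toy landscape the in-phase sum retains several local minima, whereas the evenly spread phases cancel and leave essentially the smooth convex part, mirroring the reduced risk of falling into local minima.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},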
{
"text": "We test our intuition by comparing the energy landscapes of RUM and GORU in Figure 8 , following techniques similar to those in Li et al. (2018) . For each model, we vary the weights in the orthogonal transformations: the Rotation operation for RUM, and the phase-coded kernel in GORU. Figure 8 (a) and 8(c) show a 1D slice of the energy landscape. Note that 8(a) has less local minima than 8(c), which is also seen in Figures 8(b) and 8(d) for a 2D slice of the energy landscape.",
"cite_spans": [
{
"start": 128,
"end": 144,
"text": "Li et al. (2018)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [
{
"start": 76,
"end": 84,
"text": "Figure 8",
"ref_id": "FIGREF11"
},
{
"start": 286,
"end": 294,
"text": "Figure 8",
"ref_id": "FIGREF11"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "Note of caution. We should be careful when using long-term memory RNN units if they are embedded in more complex networks (not just vanilla RNNs), such as stacked RNNs or seq2seq models with attention: Because such networks use unbounded activations (such as ReLU), the gradients could blow up in training. This is despite the unitary mechanism that stabilizes the vanilla RNN units. Along with the unitary models, RUM is also susceptible to blow-ups (as LSTM/GRU are), but it has a tunable mechanism solving this problem: time normalization. We end this section with Table 5 , which lists the best ingredients for successful RUM models.",
"cite_spans": [],
"ref_spans": [
{
"start": 568,
"end": 575,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "We have proposed a representation unit for RNNs that combines properties of unitary learning and associative memory and enables really long-term memory modeling. We have further demonstrated that our model outperforms conventional RNNs on synthetic and on some real-world NLP tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "In future work, we plan to expand the representational power of our model by allowing \u03bb in Equation (2) to be not only zero or one, but any real number. 9 Second, we speculate that because our rotational matrix is a function of the RNN input (rather than being fixed after training, as in LSTM/GRU), RUM has a lot of potential for transfer learning. Finally, we would like to explore novel dataflows for RNN accelerators, which can run RUM efficiently.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "Our TensorFlow(Abadi et al., 2015) code, visualizations, and summaries can be found at http://github.com/ rdangovs/rotational-unit-of-memory.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "1 is the identity matrix, \u2020 is the transpose of a vector/ matrix and (u, v) is the concatenation of the two vectors.3 This reflects the fact that the set of orthogonal matrices O(N h ) forms a group under the multiplication operation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Calculated as follows: C = (M log n)/(T +2M ), where C is cross-entropy, T = 500 is delay time, n = 8 is the size of the alphabet, M = 10 is the length of the string to memorize.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
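{
"text": "For concreteness, plugging the stated values into this formula (assuming the natural logarithm, so that the cross-entropy is measured in nats) gives approximately 0.04:

```python
import math

# Values as stated in the footnote: delay time, alphabet size, string length.
T, n, M = 500, 8, 10
# Baseline cross-entropy, averaged over the T + 2M time steps.
C = (M * math.log(n)) / (T + 2 * M)
```

Only the M memorized symbols carry log n nats of information each, spread over the whole sequence of T + 2M steps.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},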
{
"text": "RUM's associative memory, Equation(2), is similar to attention because it accumulates phase (i.e., forms a context). We plan to investigate phase-coded attention in future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For a rotational matrix R and a real number \u03bb, we define the power R \u03bb through the matrix exponential and the logarithm of R. Since R is orthogonal, its logarithm is a skewsymmetric matrix A, and we define R \u03bb := (e A ) \u03bb = e \u03bbA . Note that \u03bbA is also skew-symmetric, and thus R \u03bb is another orthogonal matrix. For computer implementation, we can truncate the series expansion e \u03bbA = \u221e k=0 (1/k!)(\u03bbA) k at some late point.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
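,
{
"text": "This construction can be sketched numerically. A minimal NumPy example for a 2D rotation, assuming the skew-symmetric generator A is given directly so that no matrix logarithm is needed (function names are ours):

```python
import numpy as np

def expm_series(A, terms=30):
    # Truncated matrix exponential: sum_{k=0}^{terms-1} A^k / k!
    out = np.zeros_like(A)
    term = np.eye(A.shape[0])
    for k in range(terms):
        out = out + term
        term = term @ A / (k + 1)
    return out

def rotation_power(A, lam, terms=30):
    # R = exp(A) for skew-symmetric A; R**lam = exp(lam * A), which is
    # again orthogonal because lam * A is also skew-symmetric.
    return expm_series(lam * A, terms)

theta = 0.7
A = np.array([[0.0, -theta], [theta, 0.0]])  # skew-symmetric generator
R = rotation_power(A, 1.0)                   # rotation by theta
half = rotation_power(A, 0.5)                # its square root R**0.5
```

Because the generator commutes with itself, the half power squared recovers the full rotation, and every fractional power remains orthogonal.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}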
],
"back_matter": [
{
"text": "We are very grateful to Dan Hogan from Science Daily for his help, to Daniel Dardani and Matthew Fucci for their advice, and to Thomas Frerix for the fruitful discussions.This work was partially supported by the Army Research Office through the Institute for Soldier Nanotechnologies under contract W911NF-18-2-0048; the National Science Foundation under grant no. CCF-1640012; and by the Semiconductor Research Corporation under grant no. 2016-EP-2693-B. This research is also supported in part by the MIT-SenseTime Alliance on Artificial Intelligence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "2015. TensorFlow: Large-scale machine learning on heterogeneous systems",
"authors": [
{
"first": "Mart\u00edn",
"middle": [],
"last": "Abadi",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Agarwal",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Barham",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Brevdo",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Craig",
"middle": [],
"last": "Citro",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Davis",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
},
{
"first": "Matthieu",
"middle": [],
"last": "Devin",
"suffix": ""
},
{
"first": "Sanjay",
"middle": [],
"last": "Ghemawat",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Goodfellow",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Harp",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Irving",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Isard",
"suffix": ""
},
{
"first": "Yangqing",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Rafal",
"middle": [],
"last": "Jozefowicz",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Manjunath",
"middle": [],
"last": "Kudlur",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1603.04467"
]
},
"num": null,
"urls": [],
"raw_text": "Mart\u00edn Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dandelion Man\u00e9, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar,. 2015. TensorFlow: Large-scale machine learning on heterogeneous systems. arXiv preprint arXiv:1603.04467.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Unitary evolution recurrent neural networks",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Arjovsky",
"suffix": ""
},
{
"first": "Amar",
"middle": [],
"last": "Shah",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 33rd International Conference on International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "1120--1128",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Arjovsky, Amar Shah, and Yoshua Bengio. 2016. Unitary evolution recurrent neural networks. In Proceedings of the 33rd Interna- tional Conference on International Conference on Machine Learning, pages 1120-1128. New York, NY.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Using fast weights to attend to the recent past",
"authors": [
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
},
{
"first": "Volodymyr",
"middle": [],
"last": "Mnih",
"suffix": ""
},
{
"first": "Joel",
"middle": [
"Z"
],
"last": "Leibo",
"suffix": ""
},
{
"first": "Catalin",
"middle": [],
"last": "Ionescu",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Annual Conference on Neural Information Processing Systems: Advances in Neural Information Processing Systems",
"volume": "29",
"issue": "",
"pages": "4331--4339",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jimmy Ba, Geoffrey E. Hinton, Volodymyr Mnih, Joel Z. Leibo, and Catalin Ionescu. 2016a. Using fast weights to attend to the recent past. In Proceedings of the Annual Conference on Neural Information Processing Systems: Advances in Neural Information Processing Systems 29, pages 4331-4339. Barcelona.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Layer normalization",
"authors": [
{
"first": "Jimmy",
"middle": [
"Lei"
],
"last": "Ba",
"suffix": ""
},
{
"first": "Jamie",
"middle": [
"Ryan"
],
"last": "Kiros",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Annual Conference on Neural Information Processing Systems: Advances in Neural Information Processing Systems",
"volume": "29",
"issue": "",
"pages": "4880--4888",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. 2016b. Layer normalization. In Pro- ceedings of the Annual Conference on Neural Information Processing Systems: Advances in Neural Information Processing Systems 29, pages 4880-4888. Barcelona.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 3rd International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, KyungHyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Pro- ceedings of the 3rd International Conference on Learning Representations. San Diego, CA.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Strongly-typed recurrent neural networks",
"authors": [
{
"first": "David",
"middle": [],
"last": "Balduzzi",
"suffix": ""
},
{
"first": "Muhammad",
"middle": [],
"last": "Ghifary",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 33rd International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "1292--1300",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Balduzzi and Muhammad Ghifary. 2016. Strongly-typed recurrent neural networks. In Proceedings of the 33rd International Confer- ence on Machine Learning, pages 1292-1300. New York, NY.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Learning long-term dependencies with gradient descent is difficult",
"authors": [
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Patrice",
"middle": [],
"last": "Simard",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Frasconi",
"suffix": ""
}
],
"year": 1994,
"venue": "IEEE Transactions on Neural Networks",
"volume": "5",
"issue": "2",
"pages": "157--166",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshua Bengio, Patrice Simard, and Paolo Frasconi. 1994. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157-166.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Deep communicating agents for abstractive summarization",
"authors": [
{
"first": "Asli",
"middle": [],
"last": "Celikyilmaz",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bosselut",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 16th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1662--1675",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Asli Celikyilmaz, Antoine Bosselut, Xiaodong He, and Yejin Choi. 2018. Deep communicating agents for abstractive summarization. In Pro- ceedings of the 16th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1662-1675, New Orleans, LA.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Fast abstractive summarization with reinforceselected sentence rewriting",
"authors": [
{
"first": "Yen-Chun",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "675--686",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yen-Chun Chen and Mohit Bansal. 2018. Fast abstractive summarization with reinforce- selected sentence rewriting. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 675-686, Melbourne.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Learning phrase representations using RNN encoder-decoder for statistical machine translation",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merri\u00ebnboer",
"suffix": ""
},
{
"first": "\u00c7aglar",
"middle": [],
"last": "G\u00fcl\u00e7ehre",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Fethi",
"middle": [],
"last": "Bougares",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1724--1734",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart van Merri\u00ebnboer, \u00c7 aglar G\u00fcl\u00e7ehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine trans- lation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1724-1734, Doha.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Hierarchical multiscale recurrent neural networks",
"authors": [
{
"first": "Junyoung",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "Sungjin",
"middle": [],
"last": "Ahn",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 5th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. 2017. Hierarchical multiscale recurrent neural networks. In Proceedings of the 5th International Conference on Learning Repre- sentations, Toulon.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Optimal design for universal multiport interferometers",
"authors": [
{
"first": "William",
"middle": [
"R"
],
"last": "Clements",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"C"
],
"last": "Humphreys",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [
"J"
],
"last": "Metcalf",
"suffix": ""
},
{
"first": "W",
"middle": [
"Steven"
],
"last": "Kolthammer",
"suffix": ""
},
{
"first": "Ian",
"middle": [
"A"
],
"last": "Walmsley",
"suffix": ""
}
],
"year": 2016,
"venue": "Optica",
"volume": "3",
"issue": "",
"pages": "1460--1465",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William R. Clements, Peter C. Humphreys, Benjamin J. Metcalf, W. Steven Kolthammer, and Ian A. Walmsley. 2016. Optimal design for universal multiport interferometers. Optica, 3:1460-1465.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Spherical CNNs",
"authors": [
{
"first": "Taco",
"middle": [
"S."
],
"last": "Cohen",
"suffix": ""
},
{
"first": "Mario",
"middle": [],
"last": "Geiger",
"suffix": ""
},
{
"first": "Jonas",
"middle": [],
"last": "K\u00f6hler",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Welling",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 6th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taco S. Cohen, Mario Geiger, Jonas K\u00f6hler, and Max Welling. 2018. Spherical CNNs. In Proceedings of the 6th International Conference on Learning Representations, Vancouver.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Associative long short-term memory",
"authors": [
{
"first": "Ivo",
"middle": [],
"last": "Danihelka",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Wayne",
"suffix": ""
},
{
"first": "Benigno",
"middle": [],
"last": "Uria",
"suffix": ""
},
{
"first": "Nal",
"middle": [],
"last": "Kalchbrenner",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 33rd International Conference on Machine Learning, ICMvL '16",
"volume": "",
"issue": "",
"pages": "1986--1994",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivo Danihelka, Greg Wayne, Benigno Uria, Nal Kalchbrenner, and Alex Graves. 2016. Asso- ciative long short-term memory. In Proceed- ings of the 33rd International Conference on Machine Learning, ICMvL '16, pages 1986-1994. New York, NY.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Generating sequences with recurrent neural networks",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1308.0850"
]
},
"num": null,
"urls": [],
"raw_text": "Alex Graves. 2013. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Neural Turing machines",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Wayne",
"suffix": ""
},
{
"first": "Ivo",
"middle": [],
"last": "Danihelka",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1410.5401"
]
},
"num": null,
"urls": [],
"raw_text": "Alex Graves, Greg Wayne, and Ivo Danihelka. 2014. Neural Turing machines. arXiv preprint arXiv:1410.5401.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Hybrid computing using a neural network with dynamic external memory",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Wayne",
"suffix": ""
},
{
"first": "Malcolm",
"middle": [],
"last": "Reynolds",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Harley",
"suffix": ""
},
{
"first": "Ivo",
"middle": [],
"last": "Danihelka",
"suffix": ""
},
{
"first": "Agnieszka",
"middle": [],
"last": "Grabska-Barwi\u0144ska",
"suffix": ""
},
{
"first": "Sergio",
"middle": [
"G\u00f3mez"
],
"last": "Colmenarejo",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Tiago",
"middle": [],
"last": "Ramalho",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Agapiou",
"suffix": ""
},
{
"first": "Adri\u00e1",
"middle": [
"Puigdom\u00e9nech"
],
"last": "Badia",
"suffix": ""
},
{
"first": "Karl",
"middle": [
"Mortiz"
],
"last": "Hermann",
"suffix": ""
},
{
"first": "Yori",
"middle": [],
"last": "Zwols",
"suffix": ""
},
{
"first": "Georg",
"middle": [],
"last": "Ostrovski",
"suffix": ""
}
],
"year": 2016,
"venue": "Nature",
"volume": "538",
"issue": "",
"pages": "471--476",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska- Barwi\u0144ska, Sergio G\u00f3mez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, Adri\u00e1 Puigdom\u00e9nech Badia, Karl Mortiz Hermann, Yori Zwols, Georg Ostrovski, Adam Cain, Helen King, Christopher Summerfield, Phil Blunsom, Koray Kavukcuoglu, and Demis Hassabis. 2016. Hybrid computing using a neural network with dynamic external memory. Nature, 538:471-476.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Hypernetworks",
"authors": [
{
"first": "David",
"middle": [],
"last": "Ha",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V."
],
"last": "Le",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 4th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Ha, Andrew Dai, and Quoc V. Le. 2016. Hypernetworks. In Proceedings of the 4th International Conference on Learning Representations, San Juan, Puerto Rico.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Lie Groups, Lie Algebras, and Representations",
"authors": [
{
"first": "Brian",
"middle": [
"C"
],
"last": "Hall",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brian C. Hall. 2015. Lie Groups, Lie Algebras, and Representations, Springer.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Recurrent orthogonal networks and longmemory tasks",
"authors": [
{
"first": "Mikael",
"middle": [],
"last": "Henaff",
"suffix": ""
},
{
"first": "Arthur",
"middle": [],
"last": "Szlam",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 33rd International Conference on International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "2034--2042",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikael Henaff, Arthur Szlam, and Yann LeCun. 2016. Recurrent orthogonal networks and long- memory tasks. In Proceedings of the 33rd Inter- national Conference on International Confer- ence on Machine Learning, pages 2034-2042, New York, NY.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Tracking the world state with recurrent entity networks",
"authors": [
{
"first": "Mikael",
"middle": [],
"last": "Henaff",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Arthur",
"middle": [],
"last": "Szlam",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 5th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, and Yann LeCun. 2017. Tracking the world state with recurrent entity networks. In Proceedings of the 5th International Conference on Learning Representations, Toulon.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Teaching machines to read and comprehend",
"authors": [
{
"first": "Karl",
"middle": [
"Moritz"
],
"last": "Hermann",
"suffix": ""
},
{
"first": "Tom\u00e1\u0161",
"middle": [],
"last": "Ko\u010disk\u00fd",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Lasse",
"middle": [],
"last": "Espeholt",
"suffix": ""
},
{
"first": "Will",
"middle": [],
"last": "Kay",
"suffix": ""
},
{
"first": "Mustafa",
"middle": [],
"last": "Suleyman",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Annual Conference on Neural Information Processing Systems: Advances in Neural Information Processing Systems",
"volume": "28",
"issue": "",
"pages": "1693--1701",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karl Moritz Hermann, Tom\u00e1\u0161 Ko\u010disk\u00fd, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Proceedings of the Annual Conference on Neural Information Processing Systems: Advances in Neural Information Processing Systems 28, pages 1693-1701.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Gated orthogonal recurrent units: On learning to forget",
"authors": [
{
"first": "Li",
"middle": [],
"last": "Jing",
"suffix": ""
},
{
"first": "\u00c7a\u011flar",
"middle": [],
"last": "G\u00fcl\u00e7ehre",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Peurifoy",
"suffix": ""
},
{
"first": "Yichen",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Tegmark",
"suffix": ""
},
{
"first": "Marin",
"middle": [],
"last": "Solja\u010di\u0107",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1706.02761"
]
},
"num": null,
"urls": [],
"raw_text": "Li Jing, \u00c7a\u011flar G\u00fcl\u00e7ehre, John Peurifoy, Yichen Shen, Max Tegmark, Marin Solja\u010di\u0107, and Yoshua Bengio. 2017a. Gated orthogonal recurrent units: On learning to forget. arXiv preprint arXiv:1706.02761.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Tunable efficient unitary neural networks (EUNN) and their application to RNNs",
"authors": [
{
"first": "Li",
"middle": [],
"last": "Jing",
"suffix": ""
},
{
"first": "Yichen",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Tena",
"middle": [],
"last": "Dubcek",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Peurifoy",
"suffix": ""
},
{
"first": "Scott",
"middle": [],
"last": "Skirlo",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Tegmark",
"suffix": ""
},
{
"first": "Marin",
"middle": [],
"last": "Solja\u010di\u0107",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 34th International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "1733--1741",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li Jing, Yichen Shen, Tena Dubcek, John Peurifoy, Scott Skirlo, Yann LeCun, Max Tegmark, and Marin Solja\u010di\u0107. 2017b. Tunable efficient unitary neural networks (EUNN) and their application to RNNs. In Proceedings of the 34th International Conference on Machine Learning, pages 1733-1741, Sydney.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Computational Rigid Vehicle Dynamics",
"authors": [
{
"first": "Amnon",
"middle": [],
"last": "Katz",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amnon Katz. 2001. Computational Rigid Vehicle Dynamics, Krieger Publishing Co.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 3rd International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations, San Diego, CA.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "An adaptive associative memory principle",
"authors": [
{
"first": "Teuvo",
"middle": [],
"last": "Kohonen",
"suffix": ""
}
],
"year": 1974,
"venue": "IEEE Transactions on Computers",
"volume": "C-23",
"issue": "",
"pages": "444--445",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Teuvo Kohonen. 1974. An adaptive associative memory principle. IEEE Transactions on Computers, C-23:444-445.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Dense associative memory for pattern recognition",
"authors": [
{
"first": "Dmitry",
"middle": [],
"last": "Krotov",
"suffix": ""
},
{
"first": "John",
"middle": [
"J"
],
"last": "Hopfield",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Annual Conference on Neural Information Processing Systems: Advances in Neural Information Processing Systems",
"volume": "29",
"issue": "",
"pages": "1172--1180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dmitry Krotov and John J. Hopfield. 2016. Dense associative memory for pattern recognition. In Proceedings of the Annual Conference on Neural Information Processing Systems: Advances in Neural Information Processing Systems 29, pages 1172-1180, Barcelona.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Zoneout: Regularizing RNNs by randomly preserving hidden activations",
"authors": [
{
"first": "David",
"middle": [],
"last": "Krueger",
"suffix": ""
},
{
"first": "Tegan",
"middle": [],
"last": "Maharaj",
"suffix": ""
},
{
"first": "J\u00e1nos",
"middle": [],
"last": "Kram\u00e1r",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Pezeshki",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Ballas",
"suffix": ""
},
{
"first": "Nan",
"middle": [
"Rosemary"
],
"last": "Ke",
"suffix": ""
},
{
"first": "Anirudh",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Courville",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Pal",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 5th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Krueger, Tegan Maharaj, J\u00e1nos Kram\u00e1r, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Aaron Courville, and Chris Pal. 2017. Zoneout: Regularizing RNNs by randomly preserving hidden activations. In Proceedings of the 5th International Conference on Learning Representations, Toulon.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Quaternions and Rotation Sequences. A Primer with Applications to Orbits, Aerospace and Virtual Reality",
"authors": [
{
"first": "Jack",
"middle": [
"B"
],
"last": "Kuipers",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jack B. Kuipers. 2002. Quaternions and Rotation Sequences. A Primer with Applications to Orbits, Aerospace and Virtual Reality. Princeton University Press.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Visualizing the loss landscape of neural nets",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Zheng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Gavin",
"middle": [],
"last": "Taylor",
"suffix": ""
},
{
"first": "Christoph",
"middle": [],
"last": "Studer",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Goldstein",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Annual Conference on Neural Information Processing Systems: Advances in Neural Information Processing Systems 31",
"volume": "",
"issue": "",
"pages": "6391--6401",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hao Li, Zheng Xu, Gavin Taylor, Christoph Studer, and Tom Goldstein. 2018. Visualizing the loss landscape of neural nets. In Proceedings of the Annual Conference on Neural Information Processing Systems: Advances in Neural Information Processing Systems 31, pages 6391-6401, Montr\u00e9al.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Automatic evaluation of summaries using n-gram co-occurrence statistics",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "71--78",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin and Eduard Hovy. 2003. Automatic evaluation of summaries using n-gram co-occurrence statistics. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 71-78, Edmonton.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Effective approaches to attention-based neural machine translation",
"authors": [
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1412--1421",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412-1421, Lisbon.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Building a large annotated corpus of English: The Penn Treebank",
"authors": [
{
"first": "Mitchell",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"A"
],
"last": "Marcinkiewicz",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Santorini",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "",
"pages": "313--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell P. Marcus, Mary A. Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19:313-330.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "An analysis of neural language modeling at multiple scales",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Merity",
"suffix": ""
},
{
"first": "Nitish",
"middle": [],
"last": "Shirish Keskar",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1803.08240"
]
},
"num": null,
"urls": [],
"raw_text": "Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2018. An analysis of neural language modeling at multiple scales. arXiv preprint arXiv:1803.08240.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Fast-slow recurrent neural networks",
"authors": [
{
"first": "Asier",
"middle": [],
"last": "Mujika",
"suffix": ""
},
{
"first": "Florian",
"middle": [],
"last": "Meier",
"suffix": ""
},
{
"first": "Angelika",
"middle": [],
"last": "Steger",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Annual Conference on Neural Information Processing Systems: Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "5915--5924",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Asier Mujika, Florian Meier, and Angelika Steger. 2017. Fast-slow recurrent neural networks. In Proceedings of the Annual Conference on Neural Information Processing Systems: Advances in Neural Information Processing Systems 30, pages 5915-5924, Long Beach, CA.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "SummaRuNNer: A recurrent neural network based sequence model for extractive summarization of documents",
"authors": [
{
"first": "Ramesh",
"middle": [],
"last": "Nallapati",
"suffix": ""
},
{
"first": "Feifei",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "3075--3081",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. SummaRuNNer: A recurrent neural network based sequence model for extractive summarization of documents. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, pages 3075-3081, San Francisco, CA.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Abstractive text summarization using sequence-to-sequence RNNs and beyond",
"authors": [
{
"first": "Ramesh",
"middle": [],
"last": "Nallapati",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "C\u00edcero",
"middle": [
"Nogueira"
],
"last": "dos Santos",
"suffix": ""
},
{
"first": "\u00c7a\u011flar",
"middle": [],
"last": "G\u00fcl\u00e7ehre",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "280--290",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ramesh Nallapati, Bowen Zhou, C\u00edcero Nogueira dos Santos, \u00c7a\u011flar G\u00fcl\u00e7ehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-to-sequence RNNs and beyond. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 280-290, Berlin.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Ranking sentences for extractive summarization with reinforcement learning",
"authors": [
{
"first": "Shashi",
"middle": [],
"last": "Narayan",
"suffix": ""
},
{
"first": "Shay",
"middle": [
"B"
],
"last": "Cohen",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 16th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1747--1759",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Ranking sentences for extractive summarization with reinforcement learning. In Proceedings of the 16th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1747-1759, New Orleans, LA.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Automatic summarization",
"authors": [
{
"first": "Ani",
"middle": [],
"last": "Nenkova",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [],
"last": "Mckeown",
"suffix": ""
}
],
"year": 2011,
"venue": "Foundations and Trends in Information Retrieval",
"volume": "5",
"issue": "",
"pages": "103--233",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ani Nenkova and Kathleen McKeown. 2011. Automatic summarization. Foundations and Trends in Information Retrieval, 5:103-233.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Data-driven summarization of scientific articles",
"authors": [
{
"first": "Nikola",
"middle": [],
"last": "Nikolov",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Pfeiffer",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Hahnloser",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1804.08875"
]
},
"num": null,
"urls": [],
"raw_text": "Nikola Nikolov, Michael Pfeiffer, and Richard Hahnloser. 2018. Data-driven summarization of scientific articles. arXiv preprint arXiv:1804.08875.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Gated end-to-end memory networks",
"authors": [
{
"first": "Julien",
"middle": [],
"last": "Perez",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julien Perez and Fei Liu. 2017. Gated end-to-end memory networks. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, pages 1-10, Valencia.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Holographic Reduced Representation: Distributed Representation for Cognitive Structures",
"authors": [
{
"first": "Tony",
"middle": [
"A"
],
"last": "Plate",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tony A. Plate. 2003. Holographic Reduced Representation: Distributed Representation for Cognitive Structures, CSLI Publications.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Learning internal representations by error propagation. Parallel Distributed Processing: Explorations in the Microstructure of Cognition",
"authors": [
{
"first": "David",
"middle": [
"E"
],
"last": "Rumelhart",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
},
{
"first": "Ronald",
"middle": [
"J"
],
"last": "Williams",
"suffix": ""
}
],
"year": 1986,
"venue": "",
"volume": "1",
"issue": "",
"pages": "318--362",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. 1986. Learning internal representations by error propagation. Parallel Distributed Processing: Explorations in the Microstructure of Cognition, 1:318-362. MIT Press.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Modern Quantum Mechanics",
"authors": [
{
"first": "Jun",
"middle": [
"J"
],
"last": "Sakurai",
"suffix": ""
},
{
"first": "Jim",
"middle": [
"J"
],
"last": "Napolitano",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jun J. Sakurai and Jim J. Napolitano. 2010. Modern Quantum Mechanics, Pearson.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Get to the point: Summarization with pointer-generator networks",
"authors": [
{
"first": "Abigail",
"middle": [],
"last": "See",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"J"
],
"last": "Liu",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1073--1083",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1073-1083, Vancouver.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Query-reduction networks for question answering",
"authors": [
{
"first": "Minjoon",
"middle": [],
"last": "Seo",
"suffix": ""
},
{
"first": "Sewon",
"middle": [],
"last": "Min",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Farhadi",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 5th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minjoon Seo, Sewon Min, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Query-reduction networks for question answering. In Proceedings of the 5th International Conference on Learning Representations, Toulon.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Computer Vision",
"authors": [
{
"first": "Linda",
"middle": [
"G"
],
"last": "Shapiro",
"suffix": ""
},
{
"first": "George",
"middle": [
"C"
],
"last": "Stockman",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Linda G. Shapiro and George C. Stockman. 2001. Computer Vision. Prentice Hall.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Dropout: A simple way to prevent neural networks from overfitting",
"authors": [
{
"first": "Nitish",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Machine Learning Research",
"volume": "15",
"issue": "1",
"pages": "1929--1958",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929-1958.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "End-to-end memory networks",
"authors": [
{
"first": "Sainbayar",
"middle": [],
"last": "Sukhbaatar",
"suffix": ""
},
{
"first": "Arthur",
"middle": [],
"last": "Szlam",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Fergus",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Annual Conference on Neural Information Processing Systems: Advances in Neural Information Processing Systems 28",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. 2015. End-to-end memory networks. In Proceedings of the Annual Conference on Neural Information Processing Systems: Advances in Neural Information Processing Systems 28. Montr\u00e9al.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Annual Conference on Neural Information Processing Systems: Advances in Neural Information Processing Systems",
"volume": "27",
"issue": "",
"pages": "3104--3112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of the Annual Conference on Neural Information Processing Systems: Advances in Neural Information Processing Systems 27, pages 3104-3112, Montr\u00e9al.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "AI writing bots are about to revolutionise science journalism: We must shape how this is done",
"authors": [
{
"first": "Mi\u0107o",
"middle": [],
"last": "Tatalovi\u0107",
"suffix": ""
}
],
"year": 2018,
"venue": "Journal of Science Communication",
"volume": "",
"issue": "01",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mi\u0107o Tatalovi\u0107. 2018. AI writing bots are about to revolutionise science journalism: We must shape how this is done. Journal of Science Communication, 17(01).",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Summarizing scientific articles: Experiments with relevance and rhetorical status",
"authors": [
{
"first": "Simone",
"middle": [],
"last": "Teufel",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Moens",
"suffix": ""
}
],
"year": 2002,
"venue": "Computational Linguistics",
"volume": "28",
"issue": "4",
"pages": "409--445",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simone Teufel and Marc Moens. 2002. Summarizing scientific articles: Experiments with relevance and rhetorical status. Computational Linguistics, 28(4):409-445.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "When science journalism meets artificial intelligence: An interactive demonstration",
"authors": [
{
"first": "Raghuram",
"middle": [],
"last": "Vadapalli",
"suffix": ""
},
{
"first": "Bakhtiyar",
"middle": [],
"last": "Syed",
"suffix": ""
},
{
"first": "Nishant",
"middle": [],
"last": "Prabhu",
"suffix": ""
},
{
"first": "Balaji",
"middle": [
"Vasan"
],
"last": "Srinivasan",
"suffix": ""
},
{
"first": "Vasudeva",
"middle": [],
"last": "Varma",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "163--168",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raghuram Vadapalli, Bakhtiyar Syed, Nishant Prabhu, Balaji Vasan Srinivasan, and Vasudeva Varma. 2018. When science journalism meets artificial intelligence: An interactive demonstration. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 163-168, Brussels.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Pointer networks",
"authors": [
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Meire",
"middle": [],
"last": "Fortunato",
"suffix": ""
},
{
"first": "Navdeep",
"middle": [],
"last": "Jaitly",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Annual Conference on Neural Information Processing Systems: Advances in Neural Information Processing Systems 28",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Proceedings of the Annual Conference on Neural Information Processing Systems: Advances in Neural Information Processing Systems 28. Montr\u00e9al.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "On orthogonality and learning recurrent networks with long term dependencies",
"authors": [
{
"first": "Eugene",
"middle": [],
"last": "Vorontsov",
"suffix": ""
},
{
"first": "Chiheb",
"middle": [],
"last": "Trabelsi",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Kadoury",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Pal",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 34th International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "3570--3578",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eugene Vorontsov, Chiheb Trabelsi, Samuel Kadoury, and Chris Pal. 2017. On orthogonality and learning recurrent networks with long term dependencies. In Proceedings of the 34th International Conference on Machine Learning, pages 3570-3578, Sydney.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "Learning steerable filters for rotation equivariant CNNs",
"authors": [
{
"first": "Maurice",
"middle": [],
"last": "Weiler",
"suffix": ""
},
{
"first": "Fred",
"middle": [
"A"
],
"last": "Hamprecht",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Storath",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "849--858",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maurice Weiler, Fred A. Hamprecht, and Martin Storath. 2018. Learning steerable filters for rotation equivariant CNNs. In Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, pages 849-858, Salt Lake City, UT.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "Towards AI-complete question answering: A set of prerequisite toy tasks",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Sumit",
"middle": [],
"last": "Chopra",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merri\u00ebnboer",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tom\u00e1\u0161",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 4th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merri\u00ebnboer, Armand Joulin, and Tom\u00e1\u0161 Mikolov. 2016. Towards AI-complete question answering: A set of prerequisite toy tasks. In Proceedings of the 4th International Conference on Learning Representations, San Juan, Puerto Rico.",
"links": null
},
"BIBREF60": {
"ref_id": "b60",
"title": "Full-capacity unitary recurrent neural networks",
"authors": [
{
"first": "Scott",
"middle": [],
"last": "Wisdom",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Powers",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Hershey",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [
"Le"
],
"last": "Roux",
"suffix": ""
},
{
"first": "Les",
"middle": [],
"last": "Atlas",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Annual Conference on Neural Information Processing Systems: Advances in Neural Information Processing Systems",
"volume": "29",
"issue": "",
"pages": "4880--4888",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Scott Wisdom, Thomas Powers, John Hershey, Jonathan Le Roux, and Les Atlas. 2016. Full-capacity unitary recurrent neural networks. In Proceedings of the Annual Conference on Neural Information Processing Systems: Advances in Neural Information Processing Systems 29, pages 4880-4888, Barcelona.",
"links": null
},
"BIBREF61": {
"ref_id": "b61",
"title": "Harmonic networks: Deep translation and rotation equivariance",
"authors": [
{
"first": "Daniel",
"middle": [
"E"
],
"last": "Worrall",
"suffix": ""
},
{
"first": "Stephan",
"middle": [
"J"
],
"last": "Garbin",
"suffix": ""
},
{
"first": "Daniyar",
"middle": [],
"last": "Turmukhambetov",
"suffix": ""
},
{
"first": "Gabriel",
"middle": [
"J"
],
"last": "Brostow",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "5028--5037",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel E. Worrall, Stephan J. Garbin, Daniyar Turmukhambetov, and Gabriel J. Brostow. 2017. Harmonic networks: Deep translation and rotation equivariance. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, pages 5028-5037, Honolulu, HI.",
"links": null
},
"BIBREF62": {
"ref_id": "b62",
"title": "Dynamic memory networks for visual and textual question answering",
"authors": [
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Merity",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 33rd International",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Caiming Xiong, Stephen Merity, and Richard Socher. 2016. Dynamic memory networks for visual and textual question answering. In Proceedings of the 33rd International",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Transactions of the Association for Computational Linguistics, vol. 7, pp. 121-138, 2019. Action Editor: Phil Blunsom. Submission batch: 8/2018; Revision batch: 11/2018; Published 4/2019. c 2019 Association for Computational Linguistics. Distributed under a CC-BY 4.0 license."
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "RUM vs. LSTM (a) Synthetic sequence of emojis: A RUM-based RNN recalls the emoji at position 1 whereas LSTM does not. (b) Text summarization: A seq2seq model with RUM recalls relevant information whereas LSTM generates repetitions near the end."
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Model: (a) RUM's operation R, which projects and rotates h; (b) the information pipeline in RUM."
},
"FIGREF3": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "(a) for a =\u03b5 and b = \u03c4 (for simplicity, we drop the time indices). The computations are as follows. The angle between two vectors a and b is calculated as \u03b8 = arccos(a \u2022 b/( a b ))An orthonormal basis for the plane is (u, v):"
},
"FIGREF5": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Derivatives of popular activations."
},
"FIGREF7": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "(a) A tempe of the variables when the model is learned. The task is Associative Recall, T RUM, = 1, with Nh = 50 and without time normalization. (b) An interpr the diagonal and off-diagonal activations of RUM's Whh kernel on NLP tas Level Penn Treebank and the model is = 0 RUM, Nh = 2000, \u2318 = additional examples.5.2 THEORETICAL ANALYSISIt is natural to view the Rotational Unit of Memory and many other appr matrices to fall into the category of phase-encoding architectures: R = R information matrix. For instance, we can parameterize any orthogonal mat cient Unitary Neural Networks(EUNN, Jing et al. (2017b)) architecture: R U0 is a block diagonal matrix containing N/2 numbers of 2-by-2 rotation an one-by-(N/2) parameter vector. Therefore, the rotational memory equa 8"
},
"FIGREF8": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Associative recall study. (a) temperature map for the weight kernels' values for a trained model; (b,c) training evolution of the distribution of cos \u03b8 throughout the sequence of T + 3 = 53 time-steps (53 numbers in each histogram). For each time step t, 1 \u2264 t \u2264 T + 3, we average the values of cos \u03b8 across the minibatch dimension and we show the mean."
},
"FIGREF9": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Text summarization study on CNN/ Daily Mail. (a) Global norm of the gradients over time; (b) Norm of the last hidden state over time; (c) Encoder gradients of the cost wrt the bi-directional output (400 encoder steps); (d) Decoder gradients of the cost wrt the decoder output (100 decoder steps). Note that (c,d) are evaluated upon convergence, at a specific batch, and the norms for each time step are averaged across the batch and the hidden dimension altogether."
},
"FIGREF10": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Science Daily-style highlights for the research paper with DOI 10.1002/smll.201200013."
},
"FIGREF11": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Energy landscape visualization for our best RUM (a,b) and GORU (c,d) models on associative recall. The first batch from the training split is fixed. The weight vectors w 1 , w 2 , w * , w \u03b4 , w \u03bd are randomly chosen instances of the weights used for phase-coding. Subfigures (a) and (c) show a linear interpolation by varying \u03b1, while (b) and (d) visualize a twodimensional landscape by varying \u03b1 and \u03b2. All other weights are fixed, as they do not appear in the rotations."
},
"TABREF0": {
"num": null,
"text": "Associative recall results. T is the input length. Note that line 8 still learns the task completely for T = 50, but it needs more than 100k training steps. Moreover, varying the activations or removing the update gate does not change the result in the last line.",
"type_str": "table",
"html": null,
"content": "<table/>"
},
"TABREF1": {
"num": null,
"text": "",
"type_str": "table",
"html": null,
"content": "<table/>"
},
"TABREF5": {
"num": null,
"text": "Text summarization results. Shown are ROUGE F-{1,2,L} scores on the test split for the CNN / Daily Mail and the Science Daily datasets.",
"type_str": "table",
"html": null,
"content": "<table><tr><td>Some settings are different from ours: lines 8-9 show</td></tr><tr><td>results when training and testing on an anonymized</td></tr><tr><td>data set, and lines 12-14 use reinforcement learning.</td></tr><tr><td>The ROUGE scores have a 95% confidence interval</td></tr><tr><td>ranging within \u00b10.25 points absolute. For lines 2 and 7, the maximum decoder steps during testing is</td></tr><tr><td>100. In lines 15-18, L/dR stands for LEAD/decRUM.</td></tr><tr><td>Replacing ReLU with tanh or removing the update</td></tr><tr><td>gate in decRUM line 17 yields a drop in ROUGE</td></tr><tr><td>of 0.01/0.09/0.25 and 0.36/0.39/0.42 points absolute,</td></tr><tr><td>respectively.</td></tr></table>"
},
"TABREF7": {
"num": null,
"text": "RUM modeling ingredients: Tasks (A-E).",
"type_str": "table",
"html": null,
"content": "<table/>"
}
}
}
}