| { |
| "paper_id": "2022", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T12:16:36.302488Z" |
| }, |
| "title": "A Neural Model for Compositional Word Embeddings and Sentence Processing", |
| "authors": [ |
| { |
| "first": "Jean-Philippe", |
| "middle": [], |
| "last": "Bernardy", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "Centre for Linguistic Theory and Studies in Probability", |
| "institution": "University of Gothenburg", |
| "location": {} |
| }, |
| "email": "jean-philippe.bernardy@gu.se" |
| }, |
| { |
| "first": "Shalom", |
| "middle": [], |
| "last": "Lappin", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "Centre for Linguistic Theory and Studies in Probability", |
| "institution": "University of Gothenburg", |
| "location": {} |
| }, |
| "email": "shalom.lappin@gu.se" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "We propose a new neural model for word embeddings, which uses Unitary Matrices as the primary device for encoding lexical information. It uses simple matrix multiplication to derive matrices for large units, yielding a sentence processing model that is strictly compositional, does not lose information over time steps, and is transparent, in the sense that word embeddings can be analysed regardless of context. This model does not employ activation functions, and so the network is fully accessible to analysis by the methods of linear algebra at each point in its operation on an input sequence. We test it in two NLP agreement tasks and obtain rule like perfect accuracy, with greater stability than current state-of-the-art systems. Our proposed model goes some way towards offering a class of computationally powerful deep learning systems that can be fully understood and compared to human cognitive processes for natural language learning and representation.", |
| "pdf_parse": { |
| "paper_id": "2022", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "We propose a new neural model for word embeddings, which uses Unitary Matrices as the primary device for encoding lexical information. It uses simple matrix multiplication to derive matrices for large units, yielding a sentence processing model that is strictly compositional, does not lose information over time steps, and is transparent, in the sense that word embeddings can be analysed regardless of context. This model does not employ activation functions, and so the network is fully accessible to analysis by the methods of linear algebra at each point in its operation on an input sequence. We test it in two NLP agreement tasks and obtain rule like perfect accuracy, with greater stability than current state-of-the-art systems. Our proposed model goes some way towards offering a class of computationally powerful deep learning systems that can be fully understood and compared to human cognitive processes for natural language learning and representation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "The word embeddings that deep neural networks (DNNs) learn are encoded as vectors. The various dimensions of the vectors correspond to distributional properties of words, as measured in corpora. Combining word embeddings into phrasal and sentence vectors can be achieved through various means, often through task-specific models with many parameters of their own, optimised by gradient descent.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this paper we use unitary matrices in place of arbitrary vector embeddings. Arjovsky et al. (2016) propose Unitary-Evolution Recurrent Neural Networks (URNs), to eliminate exploding or vanishing gradients in gradient descent. By the definition of unitary-evolution, at each step, a unitary transformation is applied to the state of the RNN. This means that each input symbol is interpreted as a unitary transformation, or equivalently as a unitary matrix. No activation functions are applied between the time-steps. This design provides a lightweight DNN, with several attractive mathematical and computational properties. URNs are strictly compositional. The effect of embeddings can be analysed independently of context. Therefore the model is transparent, in the sense that it can be analysed by direct inspection, rather than through black box testing methods. So, for example, researchers are forced to resort to probe techniques (Hewitt and Manning, 2019) to ascertain the syntactic structure which transformers and other DNNs represent.", |
| "cite_spans": [ |
| { |
| "start": 79, |
| "end": 101, |
| "text": "Arjovsky et al. (2016)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 938, |
| "end": 964, |
| "text": "(Hewitt and Manning, 2019)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Because of the reversibility of unitary transformations, long distance dependency relations can, in principle, be reliably and efficiently recognised, without additional special-purpose machinery of the kind required in an LSTM. This has been demonstrated to hold for copying and adding tasks (Arjovsky et al., 2016; Jing et al., 2017; Vorontsov et al., 2017 ) (See also section 6.4).", |
| "cite_spans": [ |
| { |
| "start": 293, |
| "end": 316, |
| "text": "(Arjovsky et al., 2016;", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 317, |
| "end": 335, |
| "text": "Jing et al., 2017;", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 336, |
| "end": 358, |
| "text": "Vorontsov et al., 2017", |
| "ref_id": "BIBREF34" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Here we view the unitary matrices learned by a URN as word embeddings. Doing so gives a richer structure to embeddings, with computational and formal advantages that are absent from the traditional vector format that dominates current work in deep learning.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We demonstrate these advantages by applying the URN architecture to two tasks: (i) bracket matching in a generalised Dyck language, and (ii) the more challenging task of subject-verb number agreement in English. These experiments confirm the long-distance capabilities of URNs, even on a linguistically interesting and difficult task.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The richer structure of unitary embeddings permits us to measure the relative effects and distances of different words and phrases. We illustrate the application of such metrics for both experiments.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In section 2 we describe the design of the URN, and our implementation of it. Sections 4 and 5 present our experiments and their results, leverag-ing the theory presented in section 3. 1 We discuss related work in section 6, and we draw conclusions and sketch future work in section 7.", |
| "cite_spans": [ |
| { |
| "start": 185, |
| "end": 186, |
| "text": "1", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The computational perspicuity of URNs allows them to be compared to psychologically and neurologically attested models of human learning and representation. Most deep neural networks, particularly powerful transformers, use non-linear activation functions which render their operation opaque and difficult to understand. By contrast, the computations of an URN are explicitly given as simple matrix multiplications, and they are open to inspection at each point in the processing sequence.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In its full generality, a recurrent network is a function from an input state vector s 0 and a sequence of input vectors x i , such that the state at each timestep is a function of the state at the previous step and the input at that step:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models", |
| "sec_num": "2" |
| }, |
| { |
| "text": "s i+1 = f (x i , s i ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The function f is constant across steps, and it is called a \"cell\" of the network.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Since the simple recurrent networks of Elman (1990) , the dominant architectures of RNNs, including the influential LSTM (Hochreiter and Schmidhuber, 1997) , use non-linear activation functions (sigmoid , tanh, ReLU) at each timestep. Transformer models, like BERT, are even more opaque in their operations, due the their reliance on a large number of attention heads that apply non-linear functions at each level. By contrast our URNs invoke only linear cells. In fact, the cell that we use is a linear transformation of the unitary space, 2 so that it takes unit state vectors to unit state vectors, hence the term \"unitary-evolution\". Expressed as an equation, we have f (x, s) = Q(x)s, where Q(x) is unitary. Therefore, only state vectors s i of norm 1 play a role in URNs.", |
| "cite_spans": [ |
| { |
| "start": 39, |
| "end": 51, |
| "text": "Elman (1990)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 121, |
| "end": 155, |
| "text": "(Hochreiter and Schmidhuber, 1997)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In our implementation of the URN architecture we limit ourselves to real numbers, and so Q(x) is properly described as an orthogonal matrix. We follow this terminology in what follows.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Let n be the dimension of the state vectors s i , and N the length of the sequence of inputs. We will consider only the case of n even. In all our experiments, we take s 0 to be the vector [1, 0, . . . ] without loss of generality. For predictions, we extract a probability distribution from state vectors 1 The code and relevant linear algebra proofs for our model is available at https://github.com/GU-CLASP/ unitary-recurrent-network.", |
| "cite_spans": [ |
| { |
| "start": 306, |
| "end": 307, |
| "text": "1", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models", |
| "sec_num": "2" |
| }, |
| { |
| "text": "2 The subspace of vectors of unit norm by applying a dense layer with softmax activation to each s i . We need to ensure that Q(x) is (and remains) orthogonal when it is subjected to gradient descent. In general, subtracting a gradient to an orthogonal matrix does not preserve orthogonality of the matrix. So we cannot make Q(x) a simple lookup table from symbol to orthogonal matrix without additional restrictions. While one could project the matrix onto an orthogonal space (Wisdom et al., 2016; Kiani et al., 2022) , our solution is to use a lookup table mapping each word to a skew-hermitian matrix S(x). 3 We follow Hyland and R\u00e4tsch (2017) in doing this. We then let Q(x) = e S(x) , which ensures the orthogonality of Q(x). It is not difficult to ensure that S(x) is skew-symmetric. It suffices to store only the elements of S(x) above the diagonal, and let those below it be their anti-symmetric image, while the diagonal is set at zero.", |
| "cite_spans": [ |
| { |
| "start": 478, |
| "end": 499, |
| "text": "(Wisdom et al., 2016;", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 500, |
| "end": 519, |
| "text": "Kiani et al., 2022)", |
| "ref_id": null |
| }, |
| { |
| "start": 611, |
| "end": 612, |
| "text": "3", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Another important issue is that the number of parameters in S(x) grows with the square of n. This would entail that doubling a model's power requires quadrupling the number of its parameters. To remedy this problem we limit ourselves to matrices S(x) which have non-zero entries only on the first k rows (and consequently k columns). In this way we limit the total size of the embedding to (n \u2212 1) + (n \u2212 2) + \u2022 \u2022 \u2022 + (n \u2212 k + 1), due to the constraint of symmetry. Consequently, S(x) has at most rank 2k. Below, we refer to this setup as consisting of truncated embeddings.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models", |
| "sec_num": "2" |
| }, |
| { |
| "text": "As an example, the 3\u00d73 skew-symmetric matrix", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models", |
| "sec_num": "2" |
| }, |
| { |
| "text": "0 a b \u2212a 0 c \u2212b \u2212c 0", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models", |
| "sec_num": "2" |
| }, |
| { |
| "text": "is 1-truncated if c = 0. This truncation reduces its informational content to the single row (and column) (a b).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models", |
| "sec_num": "2" |
| }, |
| { |
| "text": "We use the acronym URN to refer to the general class of unitary-evolution networks, k-TURN to refer to our specific model architecture with ktruncation of embeddings ( fig. 1) , and Full-URN for our model architecture with no truncation.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 168, |
| "end": 175, |
| "text": "fig. 1)", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Models", |
| "sec_num": "2" |
| }, |
| { |
| "text": "We employ a standard training regime for our experiments. We apply a dropout function on both inputs of f , so that some entries of s i or Q(x i ) will be zeroed out according to a Bernoulli distribution", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models", |
| "sec_num": "2" |
| }, |
| { |
| "text": "\u00d7 \u00d7 s 1 y 0 A exp Q(x 0 ) emb S(x 0 ) x 0 \u00d7 \u00d7 s 2 y 1 A exp Q(x 1 ) emb S(x 1 ) x 1 \u00d7 \u00d7 s 3 y 2 A exp Q(x 2 ) emb S(x 2 ) x 2 \u00d7 \u00d7 s 4 y 3 A exp Q(x 3 ) emb S(x 3 )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models", |
| "sec_num": "2" |
| }, |
| { |
| "text": "x 3 of rate \u03c1. 4 The embeddings are optimised by means of the Adam gradient descent algorithm (Kingma and Ba, 2014), with no further adjustment. Our implementation uses the TensorFlow (Abadi et al., 2016) framework (version 2.2), including its implementation of matrix exponential.", |
| "cite_spans": [ |
| { |
| "start": 184, |
| "end": 204, |
| "text": "(Abadi et al., 2016)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The absence of activation functions in the URN make it more amenable to theoretical analysis than the general class of RNNs with activation functions, including LSTMs and GRUs. The key feature of this design is that the behaviour of the cell is entirely defined by the matrix Q(x), the orthogonal embedding of x. The cell only multiplies by word embeddings, and we can focus solely on those embeddings to understand the model.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Properties of Orthogonal Embeddings", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Since the work of Mikolov et al. (2013) , vector embeddings have proven to be an extremely successful modelling tool. However, their structure is opaque. The only way of analysing their relations is through geometric distance metrics like cosine similarity. The unit vectors u and v are deemed similar if u, v is close to 1. Here we work with orthogonal matrix embeddings, which exhibit much richer structure. We use mathematical analysis to get a better sense of this structure, and relate it to vector embeddings. 4 Even though we follow this regime to be standard, experiments indicate that dropout rates appear not critical when we restrict transformations to be unitary.", |
| "cite_spans": [ |
| { |
| "start": 18, |
| "end": 39, |
| "text": "Mikolov et al. (2013)", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 516, |
| "end": 517, |
| "text": "4", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Properties of Orthogonal Embeddings", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Composition of Embeddings A decisive benefit of unitary (and orthogonal) matrix embeddings is that they form a group. We can obtain the inverse of a word embedding simply by transposing it: Q(x) \u22121 = Q(x) T . We can also compose two embeddings to obtain an embedding for the composition. Thanks to the associativity of multiplication, we have", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Properties of Orthogonal Embeddings", |
| "sec_num": "3" |
| }, |
| { |
| "text": "f (x 1 , f (x 0 , s 0 )) = Q(x 1 )(Q(x 0 )s 0 ) = (Q(x 1 ) \u00d7 Q(x 0 ))s 0 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Properties of Orthogonal Embeddings", |
| "sec_num": "3" |
| }, |
| { |
| "text": "So, we can define the embedding of any sequence as", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Properties of Orthogonal Embeddings", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Q(x 0 . . . x i ) = Q(x i ) \u00d7 Q(x i\u22121 ) \u00d7 \u2022 \u2022 \u2022 \u00d7 Q(x 0 ). Using this notation, the final state of an URN is Q(x 0 . . . x N \u22121 )s 0 . Hence, the URN is composi- tional by design. 5", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Properties of Orthogonal Embeddings", |
| "sec_num": "3" |
| }, |
| { |
| "text": "It is important to recognise that compositionality is strictly a consequence of the structure of a URN. It follows directly from the use of unitary matrix multiplication, through which the successive states in the RNN's processing sequence are computed, without activation functions, It is not necessary to demonstrate this result experimentally, since it is a formal consequence of the associativity of orthogonal matrix multiplication, as shown above. Because URNs do not incorporate additional nonlinear activation functions, a simple matrix is always sufficient to express any combination of word and phrasal embeddings.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Properties of Orthogonal Embeddings", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Distance and Similarity For vector embeddings, one often uses cosine similarity as a metric of proximity. With unit vectors, this cosine similarity is equal to the inner product u,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Properties of Orthogonal Embeddings", |
| "sec_num": "3" |
| }, |
| { |
| "text": "v = i u i v i . In unitary space, it is equivalent to working with euclidean distance squared, because u \u2212 v 2 = 2(1 \u2212 u, v ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Properties of Orthogonal Embeddings", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Notions of vector similarity and distance can be naturally extended to matrices. The Frobenius inner product P, Q = \u03a3 ij P ij Q ij extends cosine similarity, and the Frobenius norm", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Properties of Orthogonal Embeddings", |
| "sec_num": "3" |
| }, |
| { |
| "text": "A 2 = ij A 2", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Properties of Orthogonal Embeddings", |
| "sec_num": "3" |
| }, |
| { |
| "text": "ij extends euclidean norm. Furthermore, for orthogonal matrices they relate in an analogous way to unit vectors:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Properties of Orthogonal Embeddings", |
| "sec_num": "3" |
| }, |
| { |
| "text": "P \u2212 Q 2 = 2(n \u2212 P, Q ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Properties of Orthogonal Embeddings", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Why is the Frobenius norm a natural extension of cosine similarity for vectors? It is not merely due to the similarity of the respective formulas.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Properties of Orthogonal Embeddings", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The connection is deeper. A crucial property of the Frobenius inner product (and associated norm) is that it measures the average behaviour of orthogonal matrices on state vectors. More precisely, the following holds: E s [ P s, Qs ] = 1 n P, Q , and", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Properties of Orthogonal Embeddings", |
| "sec_num": "3" |
| }, |
| { |
| "text": "E s [ P s \u2212 Qs 2 ] = 1 n P \u2212 Q 2 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Properties of Orthogonal Embeddings", |
| "sec_num": "3" |
| }, |
| { |
| "text": "In sum, as a fallback, one can analyse unitary embeddings using the methods developed for plain vector embeddings. Doing so is theoretically sound. Together with the fact that matrix embeddings can be composed, it means that one can analyse the distances between phrases.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Properties of Orthogonal Embeddings", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Average Effect A useful metric for unitary embeddings is the squared distance to the identity matrix, Q \u2212 I 2 . By the above result, it is the average squared distance between s and Qs -essentially, the average effect that Q has, relative to the task for which the URN is trained. Note that this sort of metric is unavailable when using opaque vector embeddings. In particular, the norm of a vector embedding is not directly interpretable as a measure of its effect. In the case of an LSTM, for example, vector embeddings first undergo linear transformations followed by activation functions, before effecting the state, in several separate stages.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Properties of Orthogonal Embeddings", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Signature of Embeddings While the average effect is a useful measure, it is rather crude. Averaging over random state vectors considers all features as equivalent. But we might be interested in the effect of Q along specific dimensions, measured separately.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Properties of Orthogonal Embeddings", |
| "sec_num": "3" |
| }, |
| { |
| "text": "For this purpose, it is useful to note that any orthogonal matrix Q can be decomposed as the effect of n/2 independent rotations, in n/2 orthogonal planes. The angles of these rotations define how strongly Q effects the state vectors lying in this plane. We refer to such a list of angles as the signature of Q, and we denote it as sig(Q). When displaying a signature, we omit any zero angle. This is useful because a k-truncated embedding has at most k non-zero angles in its signature. Nonzero angles will be represented graphically as a dial, with small angles pointing up , and large angles pointing down .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Properties of Orthogonal Embeddings", |
| "sec_num": "3" |
| }, |
| { |
| "text": "It may seem that the extreme simplicity of the TURN architecture renders it unsuitable for any non-trivial processing task. In fact, this is not at all the case. Our first experiment applies a TURN to a natural language agreement task proposed by Linzen et al. (2016) . This task is to predict the number of third person verbs in English text, with supervised training. In the phrase \"The keys to the cabinet are on the table\", the RNN is trained to predict the plural \"are\" rather than the singular \"is\".", |
| "cite_spans": [ |
| { |
| "start": 247, |
| "end": 267, |
| "text": "Linzen et al. (2016)", |
| "ref_id": "BIBREF26" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Natural Language Agreement Task", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The training data is composed of 1.7 million sentences with a selected subject-verb pair, extracted from Wikipedia. The vocabulary size is 50,000, and out-of-vocabulary tokens are replaced by their part-of-speech tag. Training is performed for ten epochs, with a learning rate of 0.01, and a dropout rate of \u03c1 = 0.05. We use 90% of the data for training and 10% for validation and testing. A development subset is not necessary since no effort was made to tune hyperparameters. Our first experiment proved sufficient to illustrate our main claims. In any case, a TURN has few hyperparameters to optimise. Linzen et al. (2016) point out that solving the agreement task requires knowledge of hierarchical syntactic structure. That is, if an RNN captures the long-distance dependencies involved in agreement relations, it cannot rely solely on the linear sequence of nouns (in particular their number inflections) preceding the predicted verb in a sentence. In particular, the accuracy must be sustained as the number attractors increases. An attractor is defined as a noun occurring between the subject and the verb which additionally exhibits the wrong number feature required to control the verb. In the above example sentence, \"cabinet\" is an attractor. Figure 2 shows the results for a 50-unit TURN with 3-truncated embeddings for the agreement task, for up to 12 attractors. We see that the TURN \"solves\" this task, with error rates well under one percent. Crucially, there is no evidence of accuracy dropping as the number of attractors increases. Even though the statistical uncertainty increases with the number of attractors, due to decreasing numbers of examples, the TURN makes no mistakes for the higher number of attractor cases.", |
| "cite_spans": [ |
| { |
| "start": 605, |
| "end": 625, |
| "text": "Linzen et al. (2016)", |
| "ref_id": "BIBREF26" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 1255, |
| "end": 1263, |
| "text": "Figure 2", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Natural Language Agreement Task", |
| "sec_num": "4" |
| }, |
| { |
| "text": "In this section we illustrate the notion of average effect developed in 3, for this task. We report the average effect for the embeddings of the most common words in the dataset (table 1) , and other selected words and phrases obtained by composition. We stress that this is not done by measuring the average effect on the data set; but rather using the formula Q \u2212 I 2 for each unitary embedding Q. Looking at the table of effects for these words and phrases (ordered from smallest to largest effect) confirms the analysis of 3: tokens which are relevant to the task (e.g. verbs, relative pronouns) generally have a larger effect than those which are not (e.g. the dot, \"not\").", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 178, |
| "end": 187, |
| "text": "(table 1)", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Average effect", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "We also computed the distance between pairs of the most frequent nouns, with both singular and plural inflections (table 2) . We observe, as our account predicts, that nouns with the same number inflection tend to be grouped (with a distance of 7.5 or less between them), while nouns with differing numbers are further apart (with a distance of 7.5 or more).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 114, |
| "end": 123, |
| "text": "(table 2)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Average effect", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "To evaluate the theoretical long-distance modelling capabilities of an RNN in a way that abstracts away from the noise in natural language, one can construct synthetic data. Following Bernardy (2018) we use a (generalised) Dyck language. This language is composed solely of matching parenthesis pairs. So the strings \"{([])}<>\" and \"{()[<>]}\" are part of the language, while \"[}\" is not. This experiment is an idealised version of the agreement task, where opening parentheses correspond to subjects, and closing parentheses to verbs. An attractor is an opening parenthesis occurring between the pair, but of a different kind. Matching of parentheses corresponds to agreement. Because we use five distinct kinds of parentheses, the majority class baseline is at 20%. This makes it easier to evaluate the performance of a model on the matching task than for the third person agreement task, where the majority class baseline for the training corpus is above 70%. We complicate the matching task with an additional difficulty. We vary the nesting depth between training and test phases. The depth of the string is the maximum nesting level reached within it. For instance \"[{}]\" has depth 2, while \"{([()]<>)}\" has depth 4. In this task, we use strings with a length of exactly 20 characters. We train on 102,400 randomly generated strings, with maximum depth 3, and test it on 5120 random strings of maximum depth 10. Training is performed with a learning rate of 0.01, and a dropout rate of \u03c1 = 0.05, for 100 epochs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dyck-language modelling task", |
| "sec_num": "5" |
| }, |
| { |
| "text": "The training phase treats the URN as a generative language model, applying a cross-entropy loss function at each position in the string. At test time, we evaluate the model's ability to predict the right kind of closing parenthesis at each point (this is the equivalent of predicting the number of a verb). We ignore predictions regarding opening parentheses, because they are always acceptable for the language.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dyck-language modelling task", |
| "sec_num": "5" |
| }, |
| { |
| "text": "We ran three versions of this experiment. One with truncated embeddings, one with full embeddings, and a third using a baseline RNN with full embeddings that are not constrained to be orthogonal. In all cases, the size of matrices is 50 by 50. We report accuracy on the task by number of attractors in fig. 3 Table 2 : Distances between embeddings of most frequent nouns and their plural variants. Words which can be both nouns and verbs were excluded. We note that even the baseline model is capable of generalising to longer distances. Up to 9 attractors, it achieves performance that is well above a majority class baseline (20%). However, it shows steadily decreasing accuracy as the number of attractors increases.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 302, |
| "end": 308, |
| "text": "fig. 3", |
| "ref_id": "FIGREF3" |
| }, |
| { |
| "start": 309, |
| "end": 316, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Dyck-language modelling task", |
| "sec_num": "5" |
| }, |
| { |
| "text": "By contrast, the URN models remain accurate as the number of attractors grows. Perhaps surprisingly, URN accuracy even improves as the number of attractors increases. We resolve this apparent puzzle below, through analysis of the embeddings; the explanation hinges on the fact that truncating embeddings affects performance only when the number of attractors is low.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dyck-language modelling task", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Comparing the arbitrary-embeddings model with the full URN highlights the importance of limiting the network to orthogonal matrices. The performance of the full URN is better both over the long term and in general, with a validation loss of 1.47213 compared to 1.52914 for the arbitrary case. This happens despite the fact that the orthogonal system is a special case of the arbitrary network, and so orthogonal embeddings are, in principle, available to the baseline RNN. But it is not able to converge on the preferred solution (even in terms of absolute loss). In sum, the restriction to orthogonal matrices acts as a regularising constraint which offers a significant net benefit in generalisation and tracking power.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dyck-language modelling task", |
| "sec_num": "5" |
| }, |
| { |
| "text": "As in the previous experiment, matrix embeddings can be analysed regardless of context, offering a direct view of how the model works. We consider the embeddings produced by training the 3-TURN model, starting with the embeddings of individual characters and their signatures (table 4) . The average effect, and even the signatures, of all embeddings are strikingly similar. This does not imply that they are equal: indeed, they rotate different planes. We see in table 3 that the planes which undergo rotation by similar angles are far from orthogonal to each other; one pair even exhibits a similarity of 1.73. This corresponds to the fact that the transformations of ( and [ manipulate a common subset of coordinates. On the other hand, planes that undergo rotation by different angles tend to stand in a closer-to-orthogonal relationship.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 281, |
| "end": 290, |
| "text": "(table 4)", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Analysis", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "To further clarify the formal properties of our model, let us look at the embeddings of matching pairs, computed as the product of the respective embeddings of the pair. Such compositions are close to the identity (table 4) . This observation explains the extraordinarily accurate long-distance performance of the URN on the matching task: because a matched pair has essentially no effect on the state, by the time all parentheses have been closed, the state returns to its original condition. Accordingly, the model experiences the highest level of confusion when it is inside a deeply nested structure, not when a deep structure is inserted between the governing opening parenthesis and the prediction conditioned on that parenthesis.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 209, |
| "end": 218, |
| "text": "(table 4)", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Composition of Matching Parentheses", |
| "sec_num": null |
| }, |
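The identity-composition behaviour described above can be illustrated numerically. This is our own construction, not the learned weights: it assumes each closing embedding is the inverse (transpose) of its opening embedding, so a fully matched span composes to the identity and restores the state.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

def random_orthogonal(n):
    a = rng.normal(size=(n, n))
    return expm(a - a.T)  # exponential of a skew-symmetric matrix is orthogonal

n = 50
E = {}
for o, c in [("(", ")"), ("[", "]"), ("<", ">")]:
    E[o] = random_orthogonal(n)
    E[c] = E[o].T  # the closing bracket undoes the opening rotation

s0 = rng.normal(size=n)
s = s0.copy()
for ch in "([<>])":     # a fully matched, nested span
    s = E[ch] @ s
assert np.allclose(s, s0)  # the state returns to its original condition
```

Because each orthogonal matrix is invertible, no information is overwritten mid-span; the nested rotations simply cancel once every bracket is closed.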
| { |
| "text": "6 Related Work", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Composition of Matching Parentheses", |
| "sec_num": null |
| }, |
| { |
| "text": "It has frequently been observed that DNNs are complex and opaque in the way in which they operate. It is often unclear how they arrive at their results, or why they identify the patterns that they extract from training data. This has given rise to a concerted effort to render deep learning systems explainable (Linzen et al., 2019). The problem has become more acute with the rapid development of very large pre-trained transformer models (Vaswani et al., 2017), like BERT (Devlin et al., 2018), GPT-2 (Solaiman et al., 2019), GPT-3 (Brown et al., 2020), and XLNet (Yang et al., 2019). URNs avoid this difficulty by being compositional by design. If they prove robust for a wide variety of NLP tasks, they will go some way towards solving the problem of explainability in deep learning.", |
| "cite_spans": [ |
| { |
| "start": 311, |
| "end": 333, |
| "text": "(Linzen et al., , 2019", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 443, |
| "end": 465, |
| "text": "(Vaswani et al., 2017)", |
| "ref_id": null |
| }, |
| { |
| "start": 478, |
| "end": 499, |
| "text": "(Devlin et al., 2018)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 507, |
| "end": 530, |
| "text": "(Solaiman et al., 2019)", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 538, |
| "end": 558, |
| "text": "(Brown et al., 2020)", |
| "ref_id": null |
| }, |
| { |
| "start": 571, |
| "end": 590, |
| "text": "(Yang et al., 2019)", |
| "ref_id": "BIBREF36" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Explainable NLP", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "Learning Agreement The question of whether generative language models can learn long-distance agreement was raised by Linzen et al. (2016) . If accuracy is insensitive to the number of attractors, then we know that the model can work over long distances. The results of Linzen et al. (2016) are inconclusive on this question: even though the model does better than the majority class baseline for up to four attractors, accuracy declines steadily as the number of attractors increases. This trend is confirmed by Bernardy and Lappin (2017) , who ran the same experiment on a larger dataset and thoroughly explored the space of hyperparameters. It is also confirmed by Gulordava et al. (2018) , who analysed languages other than English. Marvin and Linzen (2018) focused on other linguistic phenomena, reaching similar conclusions. Lakretz et al. (2021) recently showed that an LSTM may extract bounded nested tree structures, without learning a systematic recursive rule. These results do not carry over directly to BERT-style models, because those models are not generative, though Goldberg (2019) provides a tentative approach. For a more detailed review of these results, see the recent account of Lappin (2021) .", |
| "cite_spans": [ |
| { |
| "start": 120, |
| "end": 140, |
| "text": "Linzen et al. (2016)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 270, |
| "end": 290, |
| "text": "Linzen et al. (2016)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 512, |
| "end": 538, |
| "text": "Bernardy and Lappin (2017)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 667, |
| "end": 690, |
| "text": "Gulordava et al. (2018)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 736, |
| "end": 760, |
| "text": "Marvin and Linzen (2018)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 830, |
| "end": 851, |
| "text": "Lakretz et al. (2021)", |
| "ref_id": null |
| }, |
| { |
| "start": 1192, |
| "end": 1205, |
| "text": "Lappin (2021)", |
| "ref_id": "BIBREF23" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Explainable NLP", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "Our experiment shows that URNs can surpass state-of-the-art results for this kind of task. This is not surprising: URNs are designed so that they cannot forget information, so we expect them to perform well at tracking long-distance relations. The conservation of information follows from the fact that multiplying by an orthogonal matrix Q preserves inner products, and hence cosine similarities: \u27e8Qs\u2080, Qs\u2081\u27e9 = \u27e8s\u2080, s\u2081\u27e9. Therefore any embedding Q, be it of a single word or of a long phrase, maps a change in its input state to an equal change in its output state. Considering all possible states as a distribution, Q conserves the density of states. Hence, contrary to the claims of Sennhauser and Berwick (2018) , URNs demonstrate that a class of RNNs can achieve rule-like accuracy in syntactic learning.", |
| "cite_spans": [ |
| { |
| "start": 678, |
| "end": 707, |
| "text": "Sennhauser and Berwick (2018)", |
| "ref_id": "BIBREF30" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Explainable NLP", |
| "sec_num": "6.1" |
| }, |
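The inner-product identity above is easy to verify numerically. The following is an illustrative check, not the paper's code; the random orthogonal matrix stands in for any learned embedding Q.

```python
import numpy as np

rng = np.random.default_rng(1)

# Random 50x50 orthogonal matrix via QR decomposition.
Q, _ = np.linalg.qr(rng.normal(size=(50, 50)))
s0, s1 = rng.normal(size=50), rng.normal(size=50)

# Orthogonality: Q^T Q = I.
assert np.allclose(Q.T @ Q, np.eye(50))
# Inner products (and hence norms and cosine similarities) are preserved.
assert np.isclose((Q @ s0) @ (Q @ s1), s0 @ s1)
assert np.isclose(np.linalg.norm(Q @ s0), np.linalg.norm(s0))
```

Since the identity holds for any product of orthogonal matrices, it applies equally to the embedding of a whole phrase, which is why distances between states are never compressed as the sequence grows.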
| { |
| "text": "Dyck Languages Elman (1991) already observed that it is useful to experiment with artificial systems to filter out the noise of real-world natural language data. However, to ensure that the model actually learns recursive patterns instead of bounded-depth ones, it is necessary to test on more deeply nested structures than the ones that the model is trained on, as we did. Generalised Dyck languages are ideal for this purpose (Bernardy, 2018) . While LSTMs (and GRUs) exhibit a certain capacity to generalise to deeper nesting, their performance declines in proportion to the depth of the nesting, as is the case with their handling of natural language agreement data. Other experimental work has also illustrated this effect (Hewitt et al., 2020; Sennhauser and Berwick, 2018) . Similar conclusions hold for generative self-attention architectures (Yu et al., 2019) , while BERT-like, non-generative self-attention architectures simply fail at this task (Bernardy et al., 2021) .", |
| "cite_spans": [ |
| { |
| "start": 15, |
| "end": 27, |
| "text": "Elman (1991)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 429, |
| "end": 445, |
| "text": "(Bernardy, 2018)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 728, |
| "end": 749, |
| "text": "(Hewitt et al., 2020;", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 750, |
| "end": 779, |
| "text": "Sennhauser and Berwick, 2018)", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 859, |
| "end": 876, |
| "text": "(Yu et al., 2019)", |
| "ref_id": "BIBREF37" |
| }, |
| { |
| "start": 965, |
| "end": 988, |
| "text": "(Bernardy et al., 2021)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Explainable NLP", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "By contrast URNs achieve excellent performance on this task, without declining in relation to either depth of nesting or the number of attractors. Careful analysis of the learned embeddings explains this level of accuracy in a principled way, as the direct consequence of their formal processing design.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Explainable NLP", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "Unitary matrices are essential elements of quantum mechanics and quantum computing. There, too, they ensure that the relevant system does not lose information through time. Coecke et al. (2010) and Grefenstette et al. (2011) propose what they describe as a quantum-inspired model of linguistic representation. It computes vector values for sentences in a category-theoretic representation of the types of a pregroup grammar (Lambek, 2008) . The category-theoretic structure in which this grammar is formulated is isomorphic to the one for quantum logic. 6 A difficulty of this approach is that it requires the input to be already annotated as parsed data. Another problem is that the size of the tensors associated with higher types is very large, making them hard to learn. By contrast, URNs do not require a syntactic type system. In fact, our experiments indicate that, with the right processing network, it is possible to learn syntactic structure and semantic composition from unannotated input.", |
| "cite_spans": [ |
| { |
| "start": 174, |
| "end": 194, |
| "text": "Coecke et al. (2010)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 197, |
| "end": 223, |
| "text": "Grefenstette et al. (2011)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 423, |
| "end": 437, |
| "text": "(Lambek, 2008)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 555, |
| "end": 556, |
| "text": "6", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Quantum-Inspired Systems", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "6 See Lappin (2021) for additional discussion of this theory.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Quantum-Inspired Systems", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "Compositionality of phrase and sentence matrices is intrinsic to the formal specification of the network. Sutskever et al. (2011) describe what they call a \"tensor recurrent neural network\", in which the transition matrix is determined by each input symbol. This design appears similar to that of URNs. However, unlike URNs, they use non-linear activation functions, and so inherit the complications that these functions bring.", |
| "cite_spans": [ |
| { |
| "start": 61, |
| "end": 84, |
| "text": "Sutskever et al. (2011)", |
| "ref_id": "BIBREF32" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Quantum-Inspired Systems", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "Arjovsky et al. (2016) proposed Unitary-Evolution recurrent networks to solve the problem of exploding and vanishing gradients, caused by the presence of non-linear activation functions. Despite this, Arjovsky et al. (2016) still use a ReLU activation between time-steps, unlike URNs. Moreover, we are primarily concerned with the structure of the underlying unitary embeddings. The connection between the two lines of work is that, if an RNN suffers from exploding or vanishing gradients, it cannot track long-term dependencies. The embeddings of Arjovsky et al. (2016) are computationally cheaper than ours, because they can be multiplied in linear time. Like ours, they do not cover the whole space of unitary matrices. Jing et al. (2017) propose another representation which is computationally less expensive than ours, but which has asymptotically the same number of parameters. A third option is to let back-propagation update arbitrary n \u00d7 n matrices, and to project them onto the unitary space periodically (Wisdom et al., 2016; Kiani et al., 2022).", |
| "cite_spans": [ |
| { |
| "start": 527, |
| "end": 549, |
| "text": "Arjovsky et al. (2016)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 714, |
| "end": 732, |
| "text": "Jing et al. (2017)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 1014, |
| "end": 1035, |
| "text": "(Wisdom et al., 2016;", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 1036, |
| "end": 1054, |
| "text": "Kiani et al., 2022", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Unitary-Evolution Recurrent Networks", |
| "sec_num": "6.4" |
| }, |
| { |
| "text": "Because we use a fully general matrix-exponential implementation, our model is computationally more expensive than all of the other options mentioned above. We can, however, report that when experimenting with the unitary matrix encodings of Jing et al. (2017) and Arjovsky et al. (2016) , we obtained much worse results in our experiments. This may be because we do not include a ReLU activation, while they use one.", |
| "cite_spans": [ |
| { |
| "start": 234, |
| "end": 252, |
| "text": "Jing et al. (2017)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 257, |
| "end": 279, |
| "text": "Arjovsky et al. (2016)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Unitary-Evolution Recurrent Networks", |
| "sec_num": "6.4" |
| }, |
| { |
| "text": "To the best of our knowledge, no previous study of URNs has addressed agreement or other language-modelling tasks. Rather, such studies have been directed at data-copying tasks, which are of limited linguistic interest. This includes the work of Vorontsov et al. (2017) , even though it is ostensibly concerned with long-distance dependencies.", |
| "cite_spans": [ |
| { |
| "start": 237, |
| "end": 260, |
| "text": "Vorontsov et al. (2017)", |
| "ref_id": "BIBREF34" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Unitary-Evolution Recurrent Networks", |
| "sec_num": "6.4" |
| }, |
| { |
| "text": "In conclusion, we have shown that the URN is a useful architecture for syntactic tasks, on which it can reach or surpass state-of-the-art performance. We strongly suspect that it will also prove effective for NLP tasks requiring fine-grained semantic knowledge. Unlike other DNNs, a URN is transparent and mathematically grounded in straightforward operations of linear algebra. It is possible to trace and understand what is happening at each level of the network, and at each point in the sequence that makes up its processing flow.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions and Future work", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Additionally, URNs learn unitary embeddings, which offer two important advantages. First, they have a rich internal structure from which we can analyse the learned model. Second, they handle compositionality without stipulated constraints or additional mechanisms. Therefore we can obtain unitary embeddings for any phrase or sentence.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions and Future work", |
| "sec_num": "7" |
| }, |
| { |
| "text": "The refined distance, effect, and relatedness metrics that unitary embeddings facilitate open up the possibility of more interesting procedures for identifying natural syntactic and semantic word classes. These classes can be textured and dynamic, rather than static. They can focus on specific dimensions of meaning and structure, and they can be driven by specific NLP tasks. If additional types of input data, such as visual content, are encoded in a matrix, then these classes could also be grounded in extralinguistic contexts.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions and Future work", |
| "sec_num": "7" |
| }, |
| { |
| "text": "In order to render URNs efficient, it is necessary to reduce the number of parameters from which the matrix is derived. We found that a simple k-truncation of the underlying anti-symmetric matrices is a useful strategy for limiting the size of word embeddings. It also makes the learned embeddings more accessible to formal analysis, because they can be decomposed as rotations along k planes. For the tasks that we considered, truncation does not seriously degrade the performance of the TURN model. Kiani et al. (2022) recently applied this strategy to another set of tasks, suggesting that it is generally viable.", |
| "cite_spans": [ |
| { |
| "start": 497, |
| "end": 516, |
| "text": "Kiani et al. (2022)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions and Future work", |
| "sec_num": "7" |
| }, |
| { |
| "text": "In preliminary work we have applied URNs to the recognition of mildly context-sensitive languages containing cross serial dependencies of the sort found in Swiss German and in Dutch. The performance of the model is even more robust and stable than it is for the agreement tasks reported here. We will be extending this work to a variety of other linguistically and cognitively interesting NLP tasks.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions and Future work", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Given the radical computational transparency of URN architecture, these models are natural candidates for comparison with human processing systems, both at the neurological level, and on more abstract psychological planes. Identifying and measuring the content of their acquired knowledge for particular tasks can be done through direct observation of their processing patterns, and the application of straightforward distance metrics. In this respect they are of particular interest in the study of the cognitive foundations of linguistic learning and representation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions and Future work", |
| "sec_num": "7" |
| }, |
| { |
| "text": "A matrix S is skew-symmetric iff S^T = \u2212S. Here, we rely on the property that the exponential of any skew-symmetric matrix is orthogonal. The mathematical tools that we employ are standard (Gantmacher, 1959). The key results and their proofs are available at https://github.com/GU-CLASP/unitary-recurrent-network/blob/main/proofs.pdf.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
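The stated property can be checked numerically. The sketch below is illustrative, not the paper's code; it follows from (e^S)^T e^S = e^{S^T} e^S = e^{-S} e^S = I.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)

# Build a skew-symmetric matrix: (a - a^T)^T = a^T - a = -(a - a^T).
a = rng.normal(size=(8, 8))
S = a - a.T
Q = expm(S)  # matrix exponential

assert np.allclose(S.T, -S)                # S is skew-symmetric
assert np.allclose(Q.T @ Q, np.eye(8))     # e^S is orthogonal
```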
| { |
| "text": "One might expect that the composition of embeddings could be done at the level of the skew-symmetric embeddings: S(x0 x1) = S(x0) + S(x1). However, this does not work. The law e^{S0+S1} = e^{S0} e^{S1} holds only when S0 and S1 commute, which is, in general, not true in our setup. This noncommutativity makes it possible to obtain, by composition, embeddings of higher rank, and in this way we make use of all the dimensions of the orthogonal group.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
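The failure of the additive law for non-commuting matrices is easy to demonstrate numerically. This is an illustrative sketch with random skew-symmetric matrices, not the paper's code.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)

def skew(n):
    a = rng.normal(size=(n, n))
    return a - a.T  # skew-symmetric

S0, S1 = skew(4), skew(4)

# Generic skew-symmetric matrices do not commute...
assert not np.allclose(S0 @ S1, S1 @ S0)
# ...so e^{S0+S1} differs from e^{S0} e^{S1}.
assert not np.allclose(expm(S0 + S1), expm(S0) @ expm(S1))
# For commuting matrices (e.g. S1 = 2*S0) the additive law does hold.
assert np.allclose(expm(S0 + 2 * S0), expm(S0) @ expm(2 * S0))
```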
| ], |
| "back_matter": [ |
| { |
| "text": "The research reported in this paper was supported by grant 2014-39 from the Swedish Research Council, which funds the Centre for Linguistic Theory and Studies in Probability (CLASP) in the Department of Philosophy, Linguistics, and Theory of Science at the University of Gothenburg We thank three anonymous reviewers for their helpful comments on an earlier draft of this paper. We presented the main ideas of this paper to the CLASP Seminar, in December 2021, and to the Cognitive Science Seminar of the School of Electronic Engineering and Computer Science, Queen Mary University of London, in February 2022. We are grateful to the audiences of these two events for useful discussion and feedback.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgements", |
| "sec_num": "8" |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Tensorflow: A system for large-scale machine learning", |
| "authors": [ |
| { |
| "first": "Mart\u00edn", |
| "middle": [], |
| "last": "Abadi", |
| "suffix": "" |
| }, |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Barham", |
| "suffix": "" |
| }, |
| { |
| "first": "Jianmin", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhifeng", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Andy", |
| "middle": [], |
| "last": "Davis", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Dean", |
| "suffix": "" |
| }, |
| { |
| "first": "Matthieu", |
| "middle": [], |
| "last": "Devin", |
| "suffix": "" |
| }, |
| { |
| "first": "Sanjay", |
| "middle": [], |
| "last": "Ghemawat", |
| "suffix": "" |
| }, |
| { |
| "first": "Geoffrey", |
| "middle": [], |
| "last": "Irving", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Isard", |
| "suffix": "" |
| }, |
| { |
| "first": "Manjunath", |
| "middle": [], |
| "last": "Kudlur", |
| "suffix": "" |
| }, |
| { |
| "first": "Josh", |
| "middle": [], |
| "last": "Levenberg", |
| "suffix": "" |
| }, |
| { |
| "first": "Rajat", |
| "middle": [], |
| "last": "Monga", |
| "suffix": "" |
| }, |
| { |
| "first": "Sherry", |
| "middle": [], |
| "last": "Moore", |
| "suffix": "" |
| }, |
| { |
| "first": "Derek", |
| "middle": [ |
| "G" |
| ], |
| "last": "Murray", |
| "suffix": "" |
| }, |
| { |
| "first": "Benoit", |
| "middle": [], |
| "last": "Steiner", |
| "suffix": "" |
| }, |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Tucker", |
| "suffix": "" |
| }, |
| { |
| "first": "Vijay", |
| "middle": [], |
| "last": "Vasudevan", |
| "suffix": "" |
| }, |
| { |
| "first": "Pete", |
| "middle": [], |
| "last": "Warden", |
| "suffix": "" |
| }, |
| { |
| "first": "Martin", |
| "middle": [], |
| "last": "Wicke", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuan", |
| "middle": [], |
| "last": "Yu", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaoqiang", |
| "middle": [], |
| "last": "Zheng", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "OSDI", |
| "volume": "16", |
| "issue": "", |
| "pages": "265--283", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mart\u00edn Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, Manjunath Kudlur, Josh Levenberg, Rajat Monga, Sherry Moore, Derek G. Murray, Benoit Steiner, Paul Tucker, Vijay Vasudevan, Pete Warden, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. 2016. Ten- sorflow: A system for large-scale machine learning. In OSDI, volume 16, pages 265-283.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Unitary evolution recurrent neural networks", |
| "authors": [ |
| { |
| "first": "Martin", |
| "middle": [], |
| "last": "Arjovsky", |
| "suffix": "" |
| }, |
| { |
| "first": "Amar", |
| "middle": [], |
| "last": "Shah", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 33rd International Conference on International Conference on Machine Learning", |
| "volume": "48", |
| "issue": "", |
| "pages": "1120--1128", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Martin Arjovsky, Amar Shah, and Yoshua Bengio. 2016. Unitary evolution recurrent neural networks. In Pro- ceedings of the 33rd International Conference on International Conference on Machine Learning -Vol- ume 48, ICML'16, pages 1120-1128. JMLR.org.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Can rnns learn nested recursion? Linguistic Issues in Language Technology", |
| "authors": [ |
| { |
| "first": "Jean-Philippe", |
| "middle": [], |
| "last": "Bernardy", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jean-Philippe Bernardy. 2018. Can rnns learn nested re- cursion? Linguistic Issues in Language Technology, 16.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Can the transformer learn nested recursion with symbol masking?", |
| "authors": [ |
| { |
| "first": "Jean-Philippe", |
| "middle": [], |
| "last": "Bernardy", |
| "suffix": "" |
| }, |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Ek", |
| "suffix": "" |
| }, |
| { |
| "first": "Vladislav", |
| "middle": [], |
| "last": "Maraev", |
| "suffix": "" |
| } |
| ], |
| "year": 2021, |
| "venue": "Findings of the ACL 2021", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jean-Philippe Bernardy, Adam Ek, and Vladislav Maraev. 2021. Can the transformer learn nested re- cursion with symbol masking? In Findings of the ACL 2021.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Using deep neural networks to learn syntactic agreement. Linguistic Issues In Language Technology", |
| "authors": [ |
| { |
| "first": "Jean-", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": "Philippe", |
| "middle": [], |
| "last": "Bernardy", |
| "suffix": "" |
| }, |
| { |
| "first": "Shalom", |
| "middle": [], |
| "last": "Lappin", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "15", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jean-Philippe Bernardy and Shalom Lappin. 2017. Us- ing deep neural networks to learn syntactic agree- ment. Linguistic Issues In Language Technology, 15(2):15.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Mathematical foundations for a compositional distributional model of meaning", |
| "authors": [ |
| { |
| "first": "Bob", |
| "middle": [], |
| "last": "Coecke", |
| "suffix": "" |
| }, |
| { |
| "first": "Mehrnoosh", |
| "middle": [], |
| "last": "Sadrzadeh", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Linguistic Analysis", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bob Coecke, Mehrnoosh Sadrzadeh, and Stephen Clark. 2010. Mathematical foundations for a compositional distributional model of meaning. Lambek Festschrift, Linguistic Analysis, 36.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", |
| "authors": [ |
| { |
| "first": "Jacob", |
| "middle": [], |
| "last": "Devlin", |
| "suffix": "" |
| }, |
| { |
| "first": "Ming-Wei", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenton", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Kristina", |
| "middle": [], |
| "last": "Toutanova", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1810.04805" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Finding structure in time", |
| "authors": [ |
| { |
| "first": "Jeffrey", |
| "middle": [ |
| "L" |
| ], |
| "last": "Elman", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "Cognitive Science", |
| "volume": "14", |
| "issue": "2", |
| "pages": "179--211", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jeffrey L. Elman. 1990. Finding structure in time. Cog- nitive Science, 14(2):179-211.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Distributed representations, simple recurrent networks, and grammatical structure", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Jeffrey L Elman", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "Machine learning", |
| "volume": "7", |
| "issue": "2-3", |
| "pages": "195--225", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jeffrey L Elman. 1991. Distributed representations, simple recurrent networks, and grammatical structure. Machine learning, 7(2-3):195-225.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "The Theory of Matrices", |
| "authors": [ |
| { |
| "first": "Gantmacher", |
| "middle": [], |
| "last": "Felix Ruvimovich", |
| "suffix": "" |
| } |
| ], |
| "year": 1959, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Felix Ruvimovich Gantmacher. 1959. The Theory of Matrices. AMS Chelsea publishing.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Assessing bert's syntactic abilities. ArXiv", |
| "authors": [ |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Goldberg", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yoav Goldberg. 2019. Assessing bert's syntactic abili- ties. ArXiv, abs/1901.05287.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Concrete sentence spaces for compositional distributional models of meaning", |
| "authors": [ |
| { |
| "first": "Edward", |
| "middle": [], |
| "last": "Grefenstette", |
| "suffix": "" |
| }, |
| { |
| "first": "Mehrnoosh", |
| "middle": [], |
| "last": "Sadrzadeh", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| }, |
| { |
| "first": "Bob", |
| "middle": [], |
| "last": "Coecke", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Pulman", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the Ninth International Conference on Computational Semantics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Edward Grefenstette, Mehrnoosh Sadrzadeh, Stephen Clark, Bob Coecke, and Stephen Pulman. 2011. Con- crete sentence spaces for compositional distributional models of meaning. In Proceedings of the Ninth In- ternational Conference on Computational Semantics (IWCS 2011).", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Colorless green recurrent networks dream hierarchically", |
| "authors": [ |
| { |
| "first": "Kristina", |
| "middle": [], |
| "last": "Gulordava", |
| "suffix": "" |
| }, |
| { |
| "first": "Piotr", |
| "middle": [], |
| "last": "Bojanowski", |
| "suffix": "" |
| }, |
| { |
| "first": "Edouard", |
| "middle": [], |
| "last": "Grave", |
| "suffix": "" |
| }, |
| { |
| "first": "Tal", |
| "middle": [], |
| "last": "Linzen", |
| "suffix": "" |
| }, |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Baroni", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "1", |
| "issue": "", |
| "pages": "1195--1205", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, and Marco Baroni. 2018. Colorless green recurrent networks dream hierarchically. In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long Papers), pages 1195-1205, New Orleans, Louisiana. Association for Computational Linguis- tics.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "RNNs can generate bounded hierarchical languages with optimal memory", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Hewitt", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Hahn", |
| "suffix": "" |
| }, |
| { |
| "first": "Surya", |
| "middle": [], |
| "last": "Ganguli", |
| "suffix": "" |
| }, |
| { |
| "first": "Percy", |
| "middle": [], |
| "last": "Liang", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher D", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:2010.07515" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "John Hewitt, Michael Hahn, Surya Ganguli, Percy Liang, and Christopher D Manning. 2020. Rnns can generate bounded hierarchical languages with optimal memory. arXiv preprint arXiv:2010.07515.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "A structural probe for finding syntax in word representations", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Hewitt", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "1", |
| "issue": "", |
| "pages": "4129--4138", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "John Hewitt and Christopher D. Manning. 2019. A structural probe for finding syntax in word represen- tations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4129-4138, Minneapolis, Minnesota. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Long short-term memory", |
| "authors": [ |
| { |
| "first": "Sepp", |
| "middle": [], |
| "last": "Hochreiter", |
| "suffix": "" |
| }, |
| { |
| "first": "J\u00fcrgen", |
| "middle": [], |
| "last": "Schmidhuber", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Neural Computation", |
| "volume": "9", |
| "issue": "8", |
| "pages": "1735--1780", |
| "other_ids": { |
| "DOI": [ |
| "10.1162/neco.1997.9.8.1735" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735- 1780.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Learning unitary operators with help from u(n)", |
| "authors": [ |
| { |
| "first": "Stephanie", |
| "middle": [ |
| "L" |
| ], |
| "last": "Hyland", |
| "suffix": "" |
| }, |
| { |
| "first": "Gunnar", |
| "middle": [], |
| "last": "R\u00e4tsch", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Thirty-First AAAI Conference on Artificial Intelligence", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stephanie L Hyland and Gunnar R\u00e4tsch. 2017. Learning unitary operators with help from u (n). In Thirty-First AAAI Conference on Artificial Intelligence.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Tunable efficient unitary neural networks (EUNN) and their application to RNN", |
| "authors": [ |
| { |
| "first": "Li", |
| "middle": [], |
| "last": "Jing", |
| "suffix": "" |
| }, |
| { |
| "first": "Yichen", |
| "middle": [], |
| "last": "Shen", |
| "suffix": "" |
| }, |
| { |
| "first": "Tena", |
| "middle": [], |
| "last": "Dub\u010dek", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Peurifoy", |
| "suffix": "" |
| }, |
| { |
| "first": "Scott", |
| "middle": [], |
| "last": "Skirlo", |
| "suffix": "" |
| }, |
| { |
| "first": "Yann", |
| "middle": [], |
| "last": "LeCun", |
| "suffix": "" |
| }, |
| { |
| "first": "Max", |
| "middle": [], |
| "last": "Tegmark", |
| "suffix": "" |
| }, |
| { |
| "first": "Marin", |
| "middle": [], |
| "last": "Solja\u010di\u0107", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Li Jing, Yichen Shen, Tena Dub\u010dek, John Peurifoi, Scott Skirlo, Yann LeCun, Max Tegmark, and Marin Sol- ja\u010di\u0107. 2017. Tunable efficient unitary neural networks (EUNN) and their application to RNN. In arXiv.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "projUNN: efficient method for training deep networks with unitary matrices", |
| "authors": [ |
| { |
| "first": "Bobak", |
| "middle": [], |
| "last": "Kiani", |
| "suffix": "" |
| }, |
| { |
| "first": "Randall", |
| "middle": [], |
| "last": "Balestriero", |
| "suffix": "" |
| }, |
| { |
| "first": "Yann", |
| "middle": [], |
| "last": "Lecun", |
| "suffix": "" |
| }, |
| { |
| "first": "Seth", |
| "middle": [], |
| "last": "Lloyd", |
| "suffix": "" |
| } |
| ], |
| "year": 2022, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bobak Kiani, Randall Balestriero, Yann Lecun, and Seth Lloyd. 2022. projunn: efficient method for training deep networks with unitary matrices.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Adam: A method for stochastic optimization", |
| "authors": [ |
| { |
| "first": "Diederik", |
| "middle": [ |
| "P" |
| ], |
| "last": "Kingma", |
| "suffix": "" |
| }, |
| { |
| "first": "Jimmy", |
| "middle": [], |
| "last": "Ba", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Can RNNs learn recursive nested subject-verb agreements?", |
| "authors": [ |
| { |
| "first": "Yair", |
| "middle": [], |
| "last": "Lakretz", |
| "suffix": "" |
| }, |
| { |
| "first": "Th\u00e9o", |
| "middle": [], |
| "last": "Desbordes", |
| "suffix": "" |
| }, |
| { |
| "first": "Jean-R\u00e9mi", |
| "middle": [], |
| "last": "King", |
| "suffix": "" |
| }, |
| { |
| "first": "Beno\u00eet", |
| "middle": [], |
| "last": "Crabb\u00e9", |
| "suffix": "" |
| }, |
| { |
| "first": "Maxime", |
| "middle": [], |
| "last": "Oquab", |
| "suffix": "" |
| }, |
| { |
| "first": "Stanislas", |
| "middle": [], |
| "last": "Dehaene", |
| "suffix": "" |
| } |
| ], |
| "year": 2021, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:2101.02258" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yair Lakretz, Th\u00e9o Desbordes, Jean-R\u00e9mi King, Beno\u00eet Crabb\u00e9, Maxime Oquab, and Stanislas Dehaene. 2021. Can rnns learn recursive nested subject-verb agreements? arXiv preprint arXiv:2101.02258.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Pregroup grammars and Chomsky's earliest examples", |
| "authors": [ |
| { |
| "first": "Joachim", |
| "middle": [], |
| "last": "Lambek", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Journal of Logic, Language and Information", |
| "volume": "17", |
| "issue": "", |
| "pages": "141--160", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Joachim Lambek. 2008. Pregroup grammars and Chom- sky's earliest examples. Journal of Logic, Language and Information, 17:141-160.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Deep Learning and Linguistic Representation", |
| "authors": [ |
| { |
| "first": "Shalom", |
| "middle": [], |
| "last": "Lappin", |
| "suffix": "" |
| } |
| ], |
| "year": 2021, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shalom Lappin. 2021. Deep Learning and Linguistic Representation. CRC Press, Taylor & Francis, Boca Raton, London, New York.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", |
| "authors": [ |
| { |
| "first": "Tal", |
| "middle": [], |
| "last": "Linzen", |
| "suffix": "" |
| }, |
| { |
| "first": "Grzegorz", |
| "middle": [], |
| "last": "Chrupa\u0142a", |
| "suffix": "" |
| }, |
| { |
| "first": "Afra", |
| "middle": [], |
| "last": "Alishahi", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tal Linzen, Grzegorz Chrupa\u0142a, and Afra Alishahi, edi- tors. 2018. Proceedings of the 2018 EMNLP Work- shop BlackboxNLP: Analyzing and Interpreting Neu- ral Networks for NLP. Association for Computational Linguistics, Brussels, Belgium.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", |
| "authors": [ |
| { |
| "first": "Tal", |
| "middle": [], |
| "last": "Linzen", |
| "suffix": "" |
| }, |
| { |
| "first": "Grzegorz", |
| "middle": [], |
| "last": "Chrupa\u0142a", |
| "suffix": "" |
| }, |
| { |
| "first": "Yonatan", |
| "middle": [], |
| "last": "Belinkov", |
| "suffix": "" |
| }, |
| { |
| "first": "Dieuwke", |
| "middle": [], |
| "last": "Hupkes", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tal Linzen, Grzegorz Chrupa\u0142a, Yonatan Belinkov, and Dieuwke Hupkes, editors. 2019. Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP. Association for Computational Linguistics, Florence, Italy.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Assessing the ability of LSTMs to learn syntaxsensitive dependencies", |
| "authors": [ |
| { |
| "first": "Tal", |
| "middle": [], |
| "last": "Linzen", |
| "suffix": "" |
| }, |
| { |
| "first": "Emmanuel", |
| "middle": [], |
| "last": "Dupoux", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Goldberg", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Transactions of the Association of Computational Linguistics", |
| "volume": "4", |
| "issue": "", |
| "pages": "521--535", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of LSTMs to learn syntax- sensitive dependencies. Transactions of the Associa- tion of Computational Linguistics, 4:521-535.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Targeted syntactic evaluation of language models", |
| "authors": [ |
| { |
| "first": "Rebecca", |
| "middle": [], |
| "last": "Marvin", |
| "suffix": "" |
| }, |
| { |
| "first": "Tal", |
| "middle": [], |
| "last": "Linzen", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1192--1202", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rebecca Marvin and Tal Linzen. 2018. Targeted syn- tactic evaluation of language models. In Proceed- ings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1192-1202, Brussels, Belgium. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Distributed representations of words and phrases and their compositionality", |
| "authors": [ |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| }, |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Greg", |
| "middle": [], |
| "last": "Corrado", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Dean", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Cor- rado, and Jeffrey Dean. 2013. Distributed representa- tions of words and phrases and their compositionality.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Proceedings of the 26th International Conference on Neural Information Processing Systems", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "2", |
| "issue": "", |
| "pages": "3111--3119", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "In Proceedings of the 26th International Conference on Neural Information Processing Systems -Volume 2, NIPS'13, pages 3111-3119.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Evaluating the ability of LSTMs to learn context-free grammars", |
| "authors": [ |
| { |
| "first": "Luzi", |
| "middle": [], |
| "last": "Sennhauser", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [], |
| "last": "Berwick", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", |
| "volume": "", |
| "issue": "", |
| "pages": "115--124", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Luzi Sennhauser and Robert Berwick. 2018. Evaluat- ing the ability of LSTMs to learn context-free gram- mars. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 115-124, Brussels, Bel- gium. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Release strategies and the social impacts of language models", |
| "authors": [ |
| { |
| "first": "Irene", |
| "middle": [], |
| "last": "Solaiman", |
| "suffix": "" |
| }, |
| { |
| "first": "Miles", |
| "middle": [], |
| "last": "Brundage", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| }, |
| { |
| "first": "Amanda", |
| "middle": [], |
| "last": "Askell", |
| "suffix": "" |
| }, |
| { |
| "first": "Ariel", |
| "middle": [], |
| "last": "Herbert-Voss", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeff", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Radford", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "ArXiv", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Irene Solaiman, Miles Brundage, J. Clark, Amanda Askell, Ariel Herbert-Voss, Jeff Wu, A. Radford, and J. Wang. 2019. Release strategies and the social im- pacts of language models. ArXiv, abs/1908.09203.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Generating text with recurrent neural networks", |
| "authors": [ |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Martens", |
| "suffix": "" |
| }, |
| { |
| "first": "Geoffrey", |
| "middle": [ |
| "E" |
| ], |
| "last": "Hinton", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 28th International Conference on Machine Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "1017--1024", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ilya Sutskever, James Martens, and Geoffrey E. Hinton. 2011. Generating text with recurrent neural networks. In Proceedings of the 28th International Conference on Machine Learning, ICML 2011, Bellevue, Wash- ington, USA, June 28 -July 2, 2011, pages 1017- 1024. Omnipress.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "On orthogonality and learning recurrent networks with long term dependencies", |
| "authors": [ |
| { |
| "first": "Eugene", |
| "middle": [], |
| "last": "Vorontsov", |
| "suffix": "" |
| }, |
| { |
| "first": "Chiheb", |
| "middle": [], |
| "last": "Trabelsi", |
| "suffix": "" |
| }, |
| { |
| "first": "Samuel", |
| "middle": [], |
| "last": "Kadoury", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Pal", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eugene Vorontsov, Chiheb Trabelsi, Samuel Kadoury, and Chris Pal. 2017. On orthogonality and learning recurrent networks with long term dependencies. In arXiv.", |
| "links": null |
| }, |
| "BIBREF35": { |
| "ref_id": "b35", |
| "title": "Fullcapacity unitary recurrent neural networks", |
| "authors": [ |
| { |
| "first": "Scott", |
| "middle": [], |
| "last": "Wisdom", |
| "suffix": "" |
| }, |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Powers", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Hershey", |
| "suffix": "" |
| }, |
| { |
| "first": "Jonathan", |
| "middle": [ |
| "Le" |
| ], |
| "last": "Roux", |
| "suffix": "" |
| }, |
| { |
| "first": "Les", |
| "middle": [], |
| "last": "Atlas", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Advances in neural information processing systems", |
| "volume": "29", |
| "issue": "", |
| "pages": "4880--4888", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Scott Wisdom, Thomas Powers, John Hershey, Jonathan Le Roux, and Les Atlas. 2016. Full- capacity unitary recurrent neural networks. Advances in neural information processing systems, 29:4880- 4888.", |
| "links": null |
| }, |
| "BIBREF36": { |
| "ref_id": "b36", |
| "title": "XLNet: Generalized autoregressive pretraining for language understanding", |
| "authors": [ |
| { |
| "first": "Zhilin", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "Zihang", |
| "middle": [], |
| "last": "Dai", |
| "suffix": "" |
| }, |
| { |
| "first": "Yiming", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "Jaime", |
| "middle": [ |
| "G" |
| ], |
| "last": "Carbonell", |
| "suffix": "" |
| }, |
| { |
| "first": "Ruslan", |
| "middle": [], |
| "last": "Salakhutdinov", |
| "suffix": "" |
| }, |
| { |
| "first": "Quoc", |
| "middle": [ |
| "V" |
| ], |
| "last": "Le", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "ArXiv", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Car- bonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. ArXiv, abs/1906.08237.", |
| "links": null |
| }, |
| "BIBREF37": { |
| "ref_id": "b37", |
| "title": "Learning the dyck language with attention-based seq2seq models", |
| "authors": [ |
| { |
| "first": "Xiang", |
| "middle": [], |
| "last": "Yu", |
| "suffix": "" |
| }, |
| { |
| "first": "Ngoc", |
| "middle": [ |
| "Thang" |
| ], |
| "last": "Vu", |
| "suffix": "" |
| }, |
| { |
| "first": "Jonas", |
| "middle": [], |
| "last": "Kuhn", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", |
| "volume": "", |
| "issue": "", |
| "pages": "138--146", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xiang Yu, Ngoc Thang Vu, and Jonas Kuhn. 2019. Learning the dyck language with attention-based seq2seq models. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 138-146.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "type_str": "figure", |
| "num": null, |
| "text": "TURN architecture. Each input symbol x i indexes an embedding layer, yielding a skew-symmetric matrix S(x i ). Taking its exponential yields an orthogonal matrix Q(x i ). Multiplying the state s i by Q(x i ) yields the next state, s i+1 .", |
| "uris": null |
| }, |
| "FIGREF2": { |
| "type_str": "figure", |
| "num": null, |
| "text": "Accuracy per number of attractors for the verb number agreement task. Linzen et al. (2016) do not report performance of their LSTM past 4 attractors. Error bars represent binomial 95% confidence intervals.", |
| "uris": null |
| }, |
| "FIGREF3": { |
| "type_str": "figure", |
| "num": null, |
| "text": "Accuracy of closing parenthesis prediction by number of attractors.", |
| "uris": null |
| }, |
| "TABREF1": { |
| "content": "<table/>", |
| "html": null, |
| "type_str": "table", |
| "num": null, |
| "text": "" |
| }, |
| "TABREF3": { |
| "content": "<table/>", |
| "html": null, |
| "type_str": "table", |
| "num": null, |
| "text": "Similarity for each pair of rotation planes, for the embeddings of ( and [. Headers show the rotation effected on the compared planes. A value of 2 indicates that the planes are equal (up to rotation of the basis vectors), and a value of 0 indicates that they are orthogonal." |
| }, |
| "TABREF5": { |
| "content": "<table/>", |
| "html": null, |
| "type_str": "table", |
| "num": null, |
| "text": "Average effect and signatures of parenthesis embeddings and matching pairs." |
| } |
| } |
| } |
| } |