{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:27:23.596410Z"
},
"title": "Should Semantic Vector Composition be Explicit? Can it be Linear?",
"authors": [
{
"first": "Dominic",
"middle": [],
"last": "Widdows",
"suffix": "",
"affiliation": {},
"email": "dwiddows@liveperson.com"
},
{
"first": "Kristen",
"middle": [],
"last": "Howell",
"suffix": "",
"affiliation": {},
"email": "khowell@liveperson.com"
},
{
"first": "Trevor",
"middle": [],
"last": "Cohen",
"suffix": "",
"affiliation": {},
"email": "cohenta@uw.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Vector representations have become a central element in semantic language modelling, leading to mathematical overlaps with many fields including quantum theory. Compositionality is a core goal for such representations: given representations for 'wet' and 'fish', how should the concept 'wet fish' be represented? This position paper surveys this question from two points of view. The first considers the question of whether an explicit mathematical representation can be successful using only tools from within linear algebra, or whether other mathematical tools are needed. The second considers whether semantic vector composition should be explicitly described mathematically, or whether it can be a model-internal side-effect of training a neural network. A third and newer question is whether a compositional model can be implemented on a quantum computer. Given the fundamentally linear nature of quantum mechanics, we propose that these questions are related, and that this survey may help to highlight candidate operations for future quantum implementation.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Vector representations have become a central element in semantic language modelling, leading to mathematical overlaps with many fields including quantum theory. Compositionality is a core goal for such representations: given representations for 'wet' and 'fish', how should the concept 'wet fish' be represented? This position paper surveys this question from two points of view. The first considers the question of whether an explicit mathematical representation can be successful using only tools from within linear algebra, or whether other mathematical tools are needed. The second considers whether semantic vector composition should be explicitly described mathematically, or whether it can be a model-internal side-effect of training a neural network. A third and newer question is whether a compositional model can be implemented on a quantum computer. Given the fundamentally linear nature of quantum mechanics, we propose that these questions are related, and that this survey may help to highlight candidate operations for future quantum implementation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Semantic composition has been noted and studied since ancient times, including questions on which parts of language should be considered atomic, how these are combined to make new meanings, and how explicitly the process of combination can be modelled. 1 As vectors have come to play a central role in semantic representation, these questions have nat-",
"cite_spans": [
{
"start": 253,
"end": 254,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The word 'human' has meaning, but does not constitute a proposition, either positive or negative. It is only when other words are added that the whole will form an affirmation or denial.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "urally become asked of semantic vector models. Early examples include the weighted summation of term vectors into document vectors in information retrieval (Salton et al. 1975) and the modelling of variable-value bindings using the tensor product 2 in artificial intelligence (Smolensky 1990 ). The use of vectors in the context of natural language processing grew from such roots, landmark papers including the introduction of Latent Semantic Analyis (Deerwester et al. 1990) , where the vectors are created using singular value decomposition, and Word Embeddings (Mikolov et al. 2013) , where the vectors are created by training a neural net to predict masked tokens.",
"cite_spans": [
{
"start": 156,
"end": 176,
"text": "(Salton et al. 1975)",
"ref_id": "BIBREF64"
},
{
"start": 276,
"end": 291,
"text": "(Smolensky 1990",
"ref_id": "BIBREF67"
},
{
"start": 452,
"end": 476,
"text": "(Deerwester et al. 1990)",
"ref_id": "BIBREF17"
},
{
"start": 565,
"end": 586,
"text": "(Mikolov et al. 2013)",
"ref_id": "BIBREF51"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "During the 20th century, logical semantics was also developed, based very much upon the discrete mathematical logic tradition of Boole (1854) and Frege (1884) rather than the continuous vector spaces that developed from the works of Hamilton (1847) and Grassmann (1862) . Thus, by the beginning of this century, compositional semantics was developed mathematically, provided frameworks such as Montague semantics for connecting the truth value of a sentence with its syntactic form, but provided little insight on how the atomic parts themselves should be represented. Good examples in this tradition can be found in Gamut (1991) and Partee et al. (1993) . In summary, by the year 2000, there were distributional language models with vectors, symbolic models with composition, but little in the way of distributional vector models with composition.",
"cite_spans": [
{
"start": 129,
"end": 141,
"text": "Boole (1854)",
"ref_id": "BIBREF8"
},
{
"start": 146,
"end": 158,
"text": "Frege (1884)",
"ref_id": "BIBREF23"
},
{
"start": 233,
"end": 248,
"text": "Hamilton (1847)",
"ref_id": "BIBREF32"
},
{
"start": 253,
"end": 269,
"text": "Grassmann (1862)",
"ref_id": "BIBREF30"
},
{
"start": 617,
"end": 629,
"text": "Gamut (1991)",
"ref_id": "BIBREF24"
},
{
"start": 634,
"end": 654,
"text": "Partee et al. (1993)",
"ref_id": "BIBREF55"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 Familiarity with tensor products is assumed throughout this paper. Readers familiar with the linear algebra of vectors but not the multilinear algebra of tensors are encouraged to read the introduction to tensors in Widdows et al. (2021, \u00a75) .",
"cite_spans": [
{
"start": 218,
"end": 243,
"text": "Widdows et al. (2021, \u00a75)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The most standard operations used for composition and comparison in vector model information retrieval systems have, for many decades, been the vector sum and cosine similarity (see Salton et al. 1975) , and for an introduction, Widdows (2004, Ch 5) ). Cosine similarity is normally defined and calculated in terms of the natural scalar product induced by the coordinate system, i.e.,",
"cite_spans": [
{
"start": 182,
"end": 201,
"text": "Salton et al. 1975)",
"ref_id": "BIBREF64"
},
{
"start": 229,
"end": 249,
"text": "Widdows (2004, Ch 5)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Explicit Composition in Semantic Vector Models",
"sec_num": "2"
},
{
"text": "cos(a, b) = a \u2022 b (a \u2022 a)(b \u2022 b)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explicit Composition in Semantic Vector Models",
"sec_num": "2"
},
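As a concrete illustration (a minimal numpy sketch, not from the paper), cosine similarity computed from the natural scalar product is invariant under rescaling of either argument:

```python
import numpy as np

def cosine(a, b):
    """cos(a, b) = a.b / sqrt((a.a)(b.b)), using the natural scalar product."""
    return a @ b / np.sqrt((a @ a) * (b @ b))

a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 0.0, 1.0])

# Reweighting vectors does not change their similarity:
# cos(lambda*a, mu*b) = cos(a, b) for positive scale factors.
assert np.isclose(cosine(3.0 * a, 0.5 * b), cosine(a, b))
```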
{
"text": "While the scalar product is a linear operator because \u03bba \u2022 \u00b5b = \u03bb\u00b5(a \u2022 b), cosine similarity is deliberately designed so that cos(\u03bba, \u00b5b) = cos(a, b), so that normalizing or otherwise reweighting vectors does not affect their similarity, which depends only on the angle between them. More sophisticated vector composition in AI was introduced with cognitive models and connectionism. The work of Smolensky (1990) has already been mentioned, and the holographic reduced representations of Plate (2003) is another widely-cited influence (discussed again below as part of Vector Symbolic Architectures). While Smolensky's work is often considered to be AI rather than NLP, the application to language was a key consideration:",
"cite_spans": [
{
"start": 396,
"end": 412,
"text": "Smolensky (1990)",
"ref_id": "BIBREF67"
},
{
"start": 488,
"end": 500,
"text": "Plate (2003)",
"ref_id": "BIBREF58"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Explicit Composition in Semantic Vector Models",
"sec_num": "2"
},
{
"text": "Any connectionist model of natural language processing must cope with the questions of how linguistic structures are represented in connectionist models. (Smolensky 1990, \u00a71.2) The use of more varied mathematical operators to model natural language operations with vectors accelerated considerably during the first decade of this century. In information retrieval, van Rijsbergen (2004) explored conditional implication and Widdows (2003) developed the use of orthogonal projection for negation in vector models.",
"cite_spans": [
{
"start": 154,
"end": 176,
"text": "(Smolensky 1990, \u00a71.2)",
"ref_id": null
},
{
"start": 424,
"end": 438,
"text": "Widdows (2003)",
"ref_id": "BIBREF73"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Explicit Composition in Semantic Vector Models",
"sec_num": "2"
},
{
"text": "Motivated partly by formal similarities with quantum theory, Aerts & Czachor (2004) proposed modelling a sentence w 1 , . . . , w n with a tensor product w 1 \u2297. . .\u2297w n in the Fock space \u221e k=1 V \u2297 k . The authors noted that comparing the spaces V \u2297 k and V \u2297 j when k = j is a difficulty shared by other frameworks, and of course the prospect of summing to k = \u221e is a mathematical notation that motivates the search for a tractable implementation proposal.",
"cite_spans": [
{
"start": 61,
"end": 83,
"text": "Aerts & Czachor (2004)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Explicit Composition in Semantic Vector Models",
"sec_num": "2"
},
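For illustration (a sketch with made-up 3-dimensional word vectors), np.kron realizes the tensor product of vectors as a flat coordinate vector, making the dimension growth and the space-mismatch problem concrete:

```python
import numpy as np

# Hypothetical 3-d word vectors, purely illustrative.
wet = np.array([0.2, 0.7, 0.1])
fish = np.array([0.5, 0.1, 0.4])

wet_fish = np.kron(wet, fish)    # element of V (x) V: dim(V^(x)k) = n**k
assert wet_fish.shape == (9,)

# A 2-word phrase and a 3-word phrase land in different spaces,
# so they cannot be compared directly by cosine similarity.
three_words = np.kron(wet_fish, fish)    # element of V (x) V (x) V
assert three_words.shape == (27,)
assert wet_fish.shape != three_words.shape
```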
{
"text": "By the middle of the decade, Clark & Pulman (2007) and Widdows (2008) proposed and experimented with the use of tensor products for semantic composition, and the parallelogram rule for relation extraction. Such methods were used to obtain strong empirical results for (intransitive) verb-noun composition by Mitchell & Lapata (2008) and for adjective-noun composition by Baroni & Zamparelli (2010) . One culmination of this line of research is the survey by Baroni et al. (2014) , who also addressed the problem of comparing tensors V \u2297 k and V \u2297 j when k = j. For example, if (as in Baroni & Zamparelli 2010)), nouns are represented by vectors and adjectives are represented by matrices, then the space of matrices is isomorphic to V \u2297 V which is not naturally comparable to V , and the authors note (Baroni & Zamparelli 2010, \u00a73.4 ",
"cite_spans": [
{
"start": 29,
"end": 50,
"text": "Clark & Pulman (2007)",
"ref_id": "BIBREF10"
},
{
"start": 55,
"end": 69,
"text": "Widdows (2008)",
"ref_id": "BIBREF75"
},
{
"start": 308,
"end": 332,
"text": "Mitchell & Lapata (2008)",
"ref_id": "BIBREF53"
},
{
"start": 371,
"end": 397,
"text": "Baroni & Zamparelli (2010)",
"ref_id": "BIBREF4"
},
{
"start": 458,
"end": 478,
"text": "Baroni et al. (2014)",
"ref_id": "BIBREF3"
},
{
"start": 801,
"end": 832,
"text": "(Baroni & Zamparelli 2010, \u00a73.4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Explicit Composition in Semantic Vector Models",
"sec_num": "2"
},
{
"text": "As a result, Rome and Roman, Italy and Italian cannot be declared similar, which is counter-intuitive. Even more counterintuitively, Roman used as an adjective would not be comparable to Roman used as a noun. We think that the best way to solve such apparent paradoxes is to look, on a case-by-case basis, at the linguistic structures involved, and to exploit them to develop specific solutions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "):",
"sec_num": null
},
{
"text": "Another approach is to use a full syntactic parse of a sentence to construct vectors in a sentence space S from nouns and verbs as constituents in their respective spaces. This features prominently in the model of Coecke et al. (2010) , which has become affectionately known as DisCoCat, from 'Distributional Compositional Categorical'. The mathematics is at the same time sophisticated but intuitive: its formal structure relies on pregroup grammars and morphisms between compact closed categories, and intuitively, the information from semantic word vectors flows through a network of tensor products that parallels the syntactic bindings and produces a single vector in the S space.",
"cite_spans": [
{
"start": 214,
"end": 234,
"text": "Coecke et al. (2010)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "):",
"sec_num": null
},
{
"text": "Various papers have demonstrated empirical successes for the DisCoCat and related models. Grefenstette & Sadrzadeh (2011) were among the first, showing comparable and sometimes improved results with those of Mitchell & Lapata (2008) . By 2014, several tensor composition operations were compared by Milajevs et al. (2014) , and Sadrzadeh et al. (2018) showed that word, phrase, and sentence entailment can be modelled using vectors and density matrices. (The use of density matrices to model probability distributions for entailment was pioneered partly by van Rijsbergen (2004, p. 80) and will be discussed further in Section 4.) Further mathematical tools used in DisCoCat research include copying a vector v \u2208 V into a tensor in V \u2297 V where the coordinates of v become the diagonal entries in the matrix representation of the corresponding tensor, and uncopying which takes the diagonal elements of a matrix representing a tensor in V \u2297 V and uses these as the coordinates of a vector. With these additional operators, the tensor algebra becomes more explicitly a Frobenius algebra. 3 These are used in DisCoCat models by Sadrzadeh et al. (2014a,b) to represent relative and then possessive pronoun attachments (for example, representing the affect of the phrase \"that chased the mouse\" as part of the phrase \"The cat that chased the mouse\"). The method involves detailed tracking of syntactic types and their bindings, and certainly follows the suggestion from Baroni & Zamparelli (2010) to look at linguistic structures on a case-by-case basis.",
"cite_spans": [
{
"start": 90,
"end": 121,
"text": "Grefenstette & Sadrzadeh (2011)",
"ref_id": "BIBREF31"
},
{
"start": 208,
"end": 232,
"text": "Mitchell & Lapata (2008)",
"ref_id": "BIBREF53"
},
{
"start": 299,
"end": 321,
"text": "Milajevs et al. (2014)",
"ref_id": "BIBREF52"
},
{
"start": 324,
"end": 351,
"text": "and Sadrzadeh et al. (2018)",
"ref_id": "BIBREF62"
},
{
"start": 561,
"end": 585,
"text": "Rijsbergen (2004, p. 80)",
"ref_id": null
},
{
"start": 1125,
"end": 1151,
"text": "Sadrzadeh et al. (2014a,b)",
"ref_id": null
},
{
"start": 1465,
"end": 1491,
"text": "Baroni & Zamparelli (2010)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "):",
"sec_num": null
},
{
"text": "There are practical concerns with tensor product formalisms. The lack of any isomorphism between V \u2297 k and V \u2297 j when k = j and dim V > 1 has already been noted, along with the difficulty this poses for comparing elements of each for similarity. Also, there is an obvious computational scaling problem: if V has dimension n, then V \u2297 k has dimension n k , which leads to exponential memory consumption with classical memory registers. Taking the example of relative pronouns in the DisCoCat models of Sadrzadeh et al. (2014a)these are represented as rank-4 tensors in spaces such as N \u2297 S \u2297 N \u2297 N and variants thereof, and if the basic noun space N and sentence space S have dimension 300 (a relatively standard number used e.g., by FastText vectors) then the relative pronouns would have dimension 8.1 billion. If this was represented densely and the coordinates are 4byte floating point numbers, then just representing one pronoun would require over 30GB of memory, which is intractable even by today's cloud computing standards.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "):",
"sec_num": null
},
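The memory arithmetic above can be checked directly (a back-of-the-envelope sketch of the paper's own numbers):

```python
# dim(V^(x)k) = n**k: a rank-4 tensor over 300-dimensional spaces.
n, k = 300, 4
coords = n ** k
assert coords == 8_100_000_000    # 8.1 billion coordinates

# At 4 bytes per float32 coordinate, one such tensor needs ~32.4 GB,
# matching the "over 30GB" figure in the text.
gigabytes = coords * 4 / 10 ** 9
assert gigabytes > 30
```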
{
"text": "The development of Vector Symbolic Architectures (VSAs) (Gayler 2004) was partly motivated by these concerns. VSAs grew from the holographic reduced representations of Plate (2003) : no-table works in this intersection of cognitive science and artificial intelligence include those of Eliasmith (2013) and Kanerva (2009) . At its core, a VSA is a vector space with an addition operator and a scalar product for computing similarity, along with a multiplication or binding operator (sometimes written as * , or \u2297 like the tensor product) which takes a product of two vectors and returns a new vector that it typically not similar to either of its inputs, so that (a * b) \u2022 a is small, but which is 'approximately reversible' -so there is an approximate inverse operator where (a * b) b is close to a. 4 The term 'binding' was used partly for continuity with the role-filler binding of Smolensky (1990) . 5 The VSA community has tended to avoid the full tensor product, for the reasons given above. In order to be directly comparable, it is desirable that a * b should be a vector in the space V . Plate (2003) thoroughly explored the use of circular correlation and circular convolution for these operations, which involves summing the elements of the outer product matrix along diagonals. This works as a method to map V \u2297 V back to V , though the mapping is of course basis dependent. Partly to optimize the binding operation to O(n) time, Plate (2003, Ch 5) introduces circular vectors, whose coordinates are unit complex numbers. There is some stretching of terminology here, because the circle group U (1) of unit complex numbers is not, strictly speaking, a vector space. Circular vectors are added by adding their rectangular coordinates in a linear fashion, and then normalizing back to the unit circle by discarding the magnitude, which Plate notes is not an associative operation. 
Kanerva (2009) departs perhaps even further from the vector space mainstream, using binary-valued vectors throughout, with binding implemented as pairwise exclusive OR (XOR).",
"cite_spans": [
{
"start": 56,
"end": 69,
"text": "(Gayler 2004)",
"ref_id": "BIBREF26"
},
{
"start": 168,
"end": 180,
"text": "Plate (2003)",
"ref_id": "BIBREF58"
},
{
"start": 285,
"end": 301,
"text": "Eliasmith (2013)",
"ref_id": "BIBREF20"
},
{
"start": 306,
"end": 320,
"text": "Kanerva (2009)",
"ref_id": "BIBREF40"
},
{
"start": 884,
"end": 900,
"text": "Smolensky (1990)",
"ref_id": "BIBREF67"
},
{
"start": 903,
"end": 904,
"text": "5",
"ref_id": null
},
{
"start": 1096,
"end": 1108,
"text": "Plate (2003)",
"ref_id": "BIBREF58"
},
{
"start": 1890,
"end": 1904,
"text": "Kanerva (2009)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "):",
"sec_num": null
},
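A minimal numpy sketch of HRR-style circular-convolution binding in the spirit of Plate (2003) (the vector names and dimensions are illustrative, and the FFT route is one standard O(n log n) realization): the bound vector lives back in V, resembles neither input, and is approximately reversible.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2048

def circ_conv(a, b):
    """Circular convolution: the HRR binding operator, via the FFT."""
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

def involution(a):
    """Approximate inverse of a: a*[j] = a[-j mod n]."""
    return np.roll(a[::-1], 1)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

wet = rng.normal(0, 1 / np.sqrt(n), n)
fish = rng.normal(0, 1 / np.sqrt(n), n)

bound = circ_conv(wet, fish)            # a * b is again a vector in V
assert abs(cosine(bound, wet)) < 0.2    # not similar to its inputs...

decoded = circ_conv(bound, involution(wet))
assert cosine(decoded, fish) > 0.5      # ...but approximately reversible
```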
{
"text": "VSA binding operations have been used for composition of semantic vector representations, both during the training process to generate composite vector representations of term or character n-grams, or semantic units such as predicate-argument pairs or syntactic dependencies, that are then further assembled to construct representations of larger units (Jones & Mewhort 2007 , Kachergis et al. 2011 , Cohen & Widdows 2017 , Paullada et al. 2020 ; and to compose larger units from pretrained semantic vectors for downstream machine learning tasks (Fishbein & Eliasmith 2008 , Mower et al. 2016 ). However, a concern with several of the standard VSA binding operators for the representation of sequences in particular is that they are commutative in nature: x * y = y * x. To address this concern, permutations of vector coordinates have been applied across a range of VSA implementations to break the commutative property of the binding operator, for example by permuting the second vector in sequence such that (Kanerva 2009 , Plate 2003 ).",
"cite_spans": [
{
"start": 353,
"end": 374,
"text": "(Jones & Mewhort 2007",
"ref_id": "BIBREF37"
},
{
"start": 375,
"end": 398,
"text": ", Kachergis et al. 2011",
"ref_id": "BIBREF39"
},
{
"start": 399,
"end": 421,
"text": ", Cohen & Widdows 2017",
"ref_id": "BIBREF14"
},
{
"start": 422,
"end": 444,
"text": ", Paullada et al. 2020",
"ref_id": "BIBREF56"
},
{
"start": 546,
"end": 572,
"text": "(Fishbein & Eliasmith 2008",
"ref_id": "BIBREF22"
},
{
"start": 573,
"end": 592,
"text": ", Mower et al. 2016",
"ref_id": "BIBREF54"
},
{
"start": 1011,
"end": 1024,
"text": "(Kanerva 2009",
"ref_id": "BIBREF40"
},
{
"start": 1025,
"end": 1037,
"text": ", Plate 2003",
"ref_id": "BIBREF58"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "):",
"sec_num": null
},
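A small sketch of this idea (using elementwise multiplication as a stand-in commutative binding operator; the vector names and dimension are illustrative): permuting the second operand breaks the commutativity.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1024
perm = rng.permutation(n)    # one fixed random permutation of coordinates

def bind(x, y):
    """A commutative binding operator: bind(x, y) == bind(y, x)."""
    return x * y

def ordered_bind(x, y):
    """Permute the second operand first, so order now matters."""
    return x * y[perm]

x = rng.normal(size=n)
y = rng.normal(size=n)

assert np.allclose(bind(x, y), bind(y, x))
assert not np.allclose(ordered_bind(x, y), ordered_bind(y, x))
```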
{
"text": "\u2212 \u2212 \u2192 wet * ( \u2212\u2212\u2192 f ish) and \u2212\u2212\u2192 f ish * ( \u2212 \u2212 \u2192 wet) result in different vectors",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "):",
"sec_num": null
},
{
"text": "Thanks to their general nature and computational simplicity, permutations have been used for several other encoding and composition experiments. The use of permutations to encode positional information into word vector representations was introduced by Sahlgren et al. (2008) . In this work a permutation (coordinate shuffling) operator was used to rearrange vector components during the course of training, with a different random permutation assigned to each sliding window position such that a context vector would be encoded differently depending upon its position relative to a focus term of interest. A subsequent evaluation of this method showed advantages in performance over the BEAGLE model (Jones & Mewhort 2007) , which uses circular convolutions to compose representations of word n-grams, on a range of intrinsic evaluation tasks -however these advantages were primarily attributable to the permutation-based approach's ability to scale to a larger training corpus (Recchia et al. 2015) . Random permutations have also been used to encode semantic relations (Cohen et al. 2009) and syntactic dependencies (Basile et al. 2011) into distributional models.",
"cite_spans": [
{
"start": 253,
"end": 275,
"text": "Sahlgren et al. (2008)",
"ref_id": "BIBREF63"
},
{
"start": 701,
"end": 723,
"text": "(Jones & Mewhort 2007)",
"ref_id": "BIBREF37"
},
{
"start": 979,
"end": 1000,
"text": "(Recchia et al. 2015)",
"ref_id": "BIBREF59"
},
{
"start": 1072,
"end": 1091,
"text": "(Cohen et al. 2009)",
"ref_id": "BIBREF13"
},
{
"start": 1119,
"end": 1139,
"text": "(Basile et al. 2011)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "):",
"sec_num": null
},
{
"text": "In high-dimensional space, the application of two different random permutations to the same vector has a high probability of producing vectors that are close-to-orthogonal to one another (Sahlgren et al. 2008) . A more recent development has involved deliberately constructing 'graded' permutations by randomly permuting part of a parent permutation (Cohen & Widdows 2018) . When this process is repeated iteratively, it results in a set of permutations that when applied to the same vector will produce a result with similarity to the parent vector that decreases in an ordinal fashion. This permits the encoding of proximity rather than position, in such a way that words in proximal positions within a sliding window will be similarly but not identically encoded. The resulting proximity-based encodings have shown advantages over comparable encodings that are based on absolute position (at word and sentence level) or are position-agnostic (at word and character level) across a range of evaluations (Cohen & Widdows 2018 , Schubert et al. 2020 , Kelly et al. 2020 .",
"cite_spans": [
{
"start": 187,
"end": 209,
"text": "(Sahlgren et al. 2008)",
"ref_id": "BIBREF63"
},
{
"start": 350,
"end": 372,
"text": "(Cohen & Widdows 2018)",
"ref_id": "BIBREF15"
},
{
"start": 1005,
"end": 1026,
"text": "(Cohen & Widdows 2018",
"ref_id": "BIBREF15"
},
{
"start": 1027,
"end": 1049,
"text": ", Schubert et al. 2020",
"ref_id": "BIBREF66"
},
{
"start": 1050,
"end": 1069,
"text": ", Kelly et al. 2020",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "):",
"sec_num": null
},
{
"text": "Note that coordinate permutations are all Euclidean transformations: odd permutations are reflections, and even permutations are rotations. Thus all permutation operations are also linear.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "):",
"sec_num": null
},
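The claim that coordinate permutations are linear Euclidean maps can be verified directly (an illustrative sketch): a permutation matrix is orthogonal, an odd permutation has determinant \u22121 (a reflection), and an even permutation has determinant +1 (a rotation).

```python
import numpy as np

def perm_matrix(p):
    """Matrix P realizing the permutation p as a linear map: P @ v == v[p]."""
    n = len(p)
    P = np.zeros((n, n))
    P[np.arange(n), p] = 1.0
    return P

swap = perm_matrix([1, 0, 2])     # a transposition: odd permutation
cycle = perm_matrix([1, 2, 0])    # a 3-cycle: even permutation

for P in (swap, cycle):
    # Orthogonal, hence a Euclidean (distance-preserving) transformation.
    assert np.allclose(P @ P.T, np.eye(3))

assert np.isclose(np.linalg.det(swap), -1.0)    # odd: a reflection
assert np.isclose(np.linalg.det(cycle), 1.0)    # even: a rotation
```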
{
"text": "This survey of explicit composition in semantic vector models is not exhaustive, but gives some idea of the range of linear and nonlinear operations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "):",
"sec_num": null
},
{
"text": "During the past decade, many of the most successful and well-known advances in semantic vector representations have been developed using neural networks. 6 In general, such networks are trained with some objective function designed to maximize the probability of predicting a given word, sentence, or group of characters in a given context. Various related results such as those of Scarselli & Tsoi (1998) are known to demonstrate that, given enough computational resources and training data, neural networks can be used to approximate any example from large classes of functions. If these target functions are nonlinear, this cannot be done with a network of entirely linear operations, because the composition of two linear maps is another linear map -\"The hidden units should be nonlinear because multiple layers of linear units can only produce linear functions.\" (Wichert 2020, \u00a713.5). Thus, part of the appeal of neural networks is that they are not bound by linearity: though often at considerable computational cost.",
"cite_spans": [
{
"start": 154,
"end": 155,
"text": "6",
"ref_id": null
},
{
"start": 382,
"end": 405,
"text": "Scarselli & Tsoi (1998)",
"ref_id": "BIBREF65"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Compositional Semantics in Neural Networks",
"sec_num": "3"
},
{
"text": "The skip gram with negative sampling method was introduced by Mikolov et al. 2013, imple-mentations including the word2vec 7 package from Google and the FastText package from Facebook. 8 The objective function is analyzed more thoroughly by Goldberg & Levy (2014) , and takes the form:",
"cite_spans": [
{
"start": 185,
"end": 186,
"text": "8",
"ref_id": null
},
{
"start": 241,
"end": 263,
"text": "Goldberg & Levy (2014)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Compositional Semantics in Neural Networks",
"sec_num": "3"
},
{
"text": "(w,c)\u2208D log \u03c3( \u2212 \u2192 w \u2022 \u2212 \u2192 c ) + (w,\u00acc)\u2208D log \u03c3(\u2212 \u2212 \u2192 w \u2022 \u2212 \u2192 \u00acc)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compositional Semantics in Neural Networks",
"sec_num": "3"
},
{
"text": "Here w is a word, c is a context feature (such as a nearby word), D represents observed term/context pairs in the document collection, D represents randomly drawn counterexamples, and \u2212 \u2192 w and \u2212 \u2192 c are word and context vectors (input and output weights of the network, respectively). \u03c3 is the sigmoid function, 1 1+e \u2212x . The mathematical structure here is in the family of logistic and softmax functions -the interaction between the word and context vectors involves exponential / logarithmic concepts, not just linear operations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compositional Semantics in Neural Networks",
"sec_num": "3"
},
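A toy sketch of the objective above (the 2-dimensional vectors and pair sets are invented for illustration; real implementations sum over a corpus): the objective is larger when observed pairs are aligned and sampled counterexamples are anti-aligned.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def sgns_objective(positive_pairs, negative_pairs):
    """Skip-gram with negative sampling: sum of log sigma(w.c) over
    observed pairs D plus log sigma(-w.c) over counterexamples D'."""
    total = 0.0
    for w, c in positive_pairs:
        total += math.log(sigmoid(dot(w, c)))
    for w, c in negative_pairs:
        total += math.log(sigmoid(-dot(w, c)))
    return total

w = [1.0, 0.5]          # hypothetical word vector
good_c = [0.9, 0.4]     # observed context (aligned with w)
bad_c = [-0.8, -0.3]    # sampled counterexample (anti-aligned)

# Swapping the roles of the two contexts lowers the objective.
assert sgns_objective([(w, good_c)], [(w, bad_c)]) > \
       sgns_objective([(w, bad_c)], [(w, good_c)])
```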
{
"text": "There have been efforts to incorporate syntactic information explicitly in the training process of neural network models. In the specific case of adjectives, Maillard & Clark (2015) use the skip gram technique to create matrices for adjectives following the pattern of Baroni & Zamparelli (2010) discussed in Section 2. The most recent culmination of this work is its adaptation to cover a much more comprehensive collection of categorial types by Wijnholds et al. (2020) . Another early example comes from Socher et al. 2012, who train a Recursive Neural Network where each node in a syntactic parse tree becomes represented by a matrix that operates on a pair of inputs. Research on tree-structured LSTMs (see inter alia Tai et al. 2015 , Maillard et al. 2019 leverages syntactic parse trees in the input and composes its hidden state using an arbitrary number of child nodes, as represented in the syntax tree. Syntax-BERT (Bai et al. 2021) uses syntactic parses to generate masks that reflect different aspects of tree structure (parent, child, sibling). KERMIT (Zanzotto et al. 2020 ) uses compositional structure explicitly by embedding syntactic subtrees in the representation space. In both cases, the use of explicit compositional syntactic structure leads to a boost in performance on various semantic tasks.",
"cite_spans": [
{
"start": 158,
"end": 181,
"text": "Maillard & Clark (2015)",
"ref_id": "BIBREF46"
},
{
"start": 269,
"end": 295,
"text": "Baroni & Zamparelli (2010)",
"ref_id": "BIBREF4"
},
{
"start": 448,
"end": 471,
"text": "Wijnholds et al. (2020)",
"ref_id": "BIBREF79"
},
{
"start": 723,
"end": 738,
"text": "Tai et al. 2015",
"ref_id": "BIBREF69"
},
{
"start": 739,
"end": 761,
"text": ", Maillard et al. 2019",
"ref_id": "BIBREF47"
},
{
"start": 926,
"end": 943,
"text": "(Bai et al. 2021)",
"ref_id": "BIBREF1"
},
{
"start": 1066,
"end": 1087,
"text": "(Zanzotto et al. 2020",
"ref_id": "BIBREF80"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Compositional Semantics in Neural Networks",
"sec_num": "3"
},
{
"text": "In KERMIT, the embedding of trees and (recursively) their subtrees follows a well-developed line of research on representing discrete structures as vectors, in particular combining circular convolution and permutations to introduce shuffled circular convolution (Ferrone & Zanzotto 2014) . Even when combined in a recursive sum over constituents called a Distributed Tree Kernel operation, this is still a sum of linear inputs, so this form of composition is still linear throughout. In such methods, the result may be a collection of related linear operators representing explicit syntactic bindings, but the training method is typically not linear due to the activation functions.",
"cite_spans": [
{
"start": 262,
"end": 287,
"text": "(Ferrone & Zanzotto 2014)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Compositional Semantics in Neural Networks",
"sec_num": "3"
},
{
"text": "What these neural net methods and the models described in the previous section have in common is that they encode some explicit compositional structure: a weighted sum of word or character n-grams, a role / value binding, or a relationship in a grammatical parse tree. This raises the question: can neural language models go beyond the bag-of-words drawbacks and encode more orderdependent language structures without using traditional logical compositional machinery?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compositional Semantics in Neural Networks",
"sec_num": "3"
},
{
"text": "A recent and comprehensive survey of this topic is provided by Hupkes et al. (2020) . This work provides a valuable survey of the field to date, and conducts experiments with compositional behavior on artificial datasets designed to demonstrate various aspects of compositionality, such as productivity (can larger unseen sequences be produced?) and substitutivity (are outputs the same when synonymous tokens are switched?). This systematic approach to breaking compositionality into many tasks is a useful guide in itself.",
"cite_spans": [
{
"start": 63,
"end": 83,
"text": "Hupkes et al. (2020)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Compositional Semantics in Neural Networks",
"sec_num": "3"
},
{
"text": "Since then, attention-based networks have been developed and come to the forefront of the field (Vaswani et al. 2017) . The attention mechanism is designed to learn when pairs of inputs depend crucially on one another, a capability that has demonstrably improved machine translation by making sure that the translated output represents all of the given input even when their word-orders do not correspond exactly. The 'scaled dot-product attention' used by Vaswani et al. (2017) for computing the attention between a pair of constituents uses softmax normalization, another nonlinear operation.",
"cite_spans": [
{
"start": 96,
"end": 117,
"text": "(Vaswani et al. 2017)",
"ref_id": "BIBREF71"
},
{
"start": 457,
"end": 478,
"text": "Vaswani et al. (2017)",
"ref_id": "BIBREF71"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Compositional Semantics in Neural Networks",
"sec_num": "3"
},
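The scaled dot-product attention just described can be written down directly, and the softmax step is what makes it nonlinear. A minimal NumPy sketch of the mechanism (our own simplification, without the multiple heads or learned projections of the full Transformer):

```python
import numpy as np

def attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # Row-wise softmax: the nonlinear step that turns scores into weights.
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)
    return w @ V, w

rng = np.random.default_rng(1)
Q = rng.normal(size=(3, 4))   # 3 query positions, dimension 4
K = rng.normal(size=(5, 4))   # 5 key/value positions
V = rng.normal(size=(5, 4))

out, w = attention(Q, K, V)

# Nonlinearity check: scaling the queries does not scale the output,
# because the softmax weights change shape.
out2, _ = attention(2 * Q, K, V)
print(np.allclose(out2, 2 * out))
```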
{
"text": "The use of attention mechanisms has led to rapid advances in the field, including the contextualized BERT (Devlin et al. 2018) and ELMo (Peters et al. 2018) models. For example, the ELMo model reports good results on traditional NLP tasks including question answering, coreference resolution, semantic role labelling, and part-of-speech tagging, and the authors speculate that this success is due to the model's different neural-network layers implicitly representing several different kinds of linguistic structure. This idea is further investigated by Hewitt & Manning (2019) and Jawahar et al. (2019) , who probe BERT and ELMo models to find evidence that syntactic structure is implicitly encoded in their vector representations. The survey and experiments of Hupkes et al. (2020) evaluate three such neural networks on a range of tasks related to composition, concluding that each network has strengths and weaknesses, that the results are a stepping stone rather than an endpoint, and that developing consensus around how such tasks should be designed, tested and shared is a crucial task in itself.",
"cite_spans": [
{
"start": 106,
"end": 126,
"text": "(Devlin et al. 2018)",
"ref_id": "BIBREF18"
},
{
"start": 136,
"end": 156,
"text": "(Peters et al. 2018)",
"ref_id": "BIBREF57"
},
{
"start": 554,
"end": 577,
"text": "Hewitt & Manning (2019)",
"ref_id": "BIBREF33"
},
{
"start": 582,
"end": 603,
"text": "Jawahar et al. (2019)",
"ref_id": "BIBREF36"
},
{
"start": 764,
"end": 784,
"text": "Hupkes et al. (2020)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Compositional Semantics in Neural Networks",
"sec_num": "3"
},
{
"text": "At the time of writing, such systems contribute to answering a very open research question: do neural networks need extra linguistic information in their input to properly work with language, or can they actually recover such information as a byproduct of training on raw text input? For example, a DisCoCat model requires parsed sentences as input -so if another system performed as well without requiring grammatical sentences as input or a distinct parsing component in the implementation pipeline, that would be preferable in most production applications. (Running a parser is a requirement that today can often be satisfied, albeit with an implementational and computational cost. Requiring users to type only grammatical input is a requirement that cannot typically be met at all.) At the same time, does performance on the current NLP tasks used for evaluation directly indicate semantic composition at play? If the performance of a model without linguistic information in the input is up to par, would the internal operations of such an implicit model be largely inscrutable, or can we describe the way meaningful units are composed into larger meaningful structures explicitly?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compositional Semantics in Neural Networks",
"sec_num": "3"
},
{
"text": "Tensor networks are one of the possible mathematical answers to this question, and continue to build upon Smolensky's introduction of tensors to AI. For example, McCoy et al. (2020) present evidence that the sequence-composition effects of Recurrent Neural Networks (RNNs) can be approximated by Tensor Product Decomposition Networks, at least in cases where using this structure provides measurable benefits over bag-of-words models. It has also been shown that Tensor Product Networks can be used to construct an attention mechanism from which grammatical structure can be recovered by unbinding role-filler tensor compositions (Huang et al. 2019) .",
"cite_spans": [
{
"start": 161,
"end": 180,
"text": "McCoy et al. (2020)",
"ref_id": "BIBREF48"
},
{
"start": 629,
"end": 648,
"text": "(Huang et al. 2019)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Compositional Semantics in Neural Networks",
"sec_num": "3"
},
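The role-filler unbinding mentioned above rests on simple linear algebra: with orthonormal role vectors, a sum of outer products can be queried for the filler bound to any role. This is a minimal sketch of Smolensky-style tensor product binding with our own variable names, not the cited networks:

```python
import numpy as np

d_role, d_fill = 4, 6
roles = np.eye(d_role)                       # orthonormal role vectors
rng = np.random.default_rng(2)
fillers = rng.normal(size=(d_role, d_fill))  # one filler vector per role

# Bind each role to its filler with an outer product, and superpose the
# bindings into a single structure tensor.
T = sum(np.outer(roles[i], fillers[i]) for i in range(d_role))

# Unbinding: contracting the structure tensor with a role vector recovers
# that role's filler exactly, because the roles are orthonormal.
recovered = roles[1] @ T
print(np.allclose(recovered, fillers[1]))
```

With merely approximately orthogonal (e.g. random high-dimensional) roles, the same contraction recovers a noisy version of the filler, which is the regime the tensor product decomposition work operates in.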
{
"text": "While there are many more networks that could be examined in a survey like this, those described in this section illustrate that neural networks have been used to improve results on many NLP tasks, and the training of such networks often crucially depends on nonlinear operations on vectors. Furthermore, while tensor networks have been proposed as a family of techniques for understanding and exploiting compositional structures more explicitly, in some state-of-the-art models the relationship of such operations to more traditional semantic composition is often absent or at least not well understood.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compositional Semantics in Neural Networks",
"sec_num": "3"
},
{
"text": "Mathematical correspondences between vector models for semantics and quantum theory have been recognized for some years (van Rijsbergen 2004) , and are surveyed by Widdows et al. (2021) . The advent of practical quantum computing makes these correspondences especially interesting, and constructs from quantum theory have been used with increasing deliberateness in NLP. In quantum computing, tensor products no longer incur quadratic costs: instead, the tensor product A \u2297 B is the natural mathematical representation of the physical state that arises when systems in states A and B are allowed to interact. Heightened interest in quantum computing and quantum structures in general has already led to specific semantic contributions.",
"cite_spans": [
{
"start": 120,
"end": 141,
"text": "(van Rijsbergen 2004)",
"ref_id": "BIBREF70"
},
{
"start": 164,
"end": 185,
"text": "Widdows et al. (2021)",
"ref_id": "BIBREF77"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Operators from Quantum Models",
"sec_num": "4"
},
{
"text": "Mathematically, there is a historic relationship between linearity and quantum mechanics: the principle of superposition guarantees that for any state A, the vector cA corresponds to the same physical state for any complex number c (Dirac 1930, \u00a75) . 9 Hence the question of whether a compositional operator is linear or not is particularly relevant when we consider the practicality of implementation on a quantum computer. 10 Many developments have followed from the DisCoCat framework, whose mathematical structure is closely related to quantum mechanics through category theory (Coecke et al. 2017) . As of 2021, the tensor network components of two DisCoCat models have even been implemented successfully on a quantum computer (Lorenz et al. 2021) , and there are proposals for how to implement the syntactic parse on quantum hardware as well (Wiebe et al. 2019 , Bausch et al. 2021 ). Of particular semantic and mathematical interest, topics such as hyponymy (Bankova et al. 2019) and negation (Lewis 2020) have been investigated, using density matrices and positive operator-valued measures, which are mathematical generalizations of state vectors and projection operators that enable the theory to describe systems that are not in 'pure' states. Density matrices have also been used to model sentence entailment (Sadrzadeh et al. 2018) and recently lexical ambiguity (Meyer & Lewis 2020) .",
"cite_spans": [
{
"start": 232,
"end": 248,
"text": "(Dirac 1930, \u00a75)",
"ref_id": null
},
{
"start": 251,
"end": 252,
"text": "9",
"ref_id": null
},
{
"start": 425,
"end": 427,
"text": "10",
"ref_id": null
},
{
"start": 583,
"end": 603,
"text": "(Coecke et al. 2017)",
"ref_id": "BIBREF11"
},
{
"start": 733,
"end": 753,
"text": "(Lorenz et al. 2021)",
"ref_id": "BIBREF45"
},
{
"start": 849,
"end": 867,
"text": "(Wiebe et al. 2019",
"ref_id": "BIBREF78"
},
{
"start": 868,
"end": 888,
"text": ", Bausch et al. 2021",
"ref_id": "BIBREF6"
},
{
"start": 966,
"end": 987,
"text": "(Bankova et al. 2019)",
"ref_id": "BIBREF2"
},
{
"start": 1001,
"end": 1013,
"text": "(Lewis 2020)",
"ref_id": "BIBREF43"
},
{
"start": 1321,
"end": 1344,
"text": "(Sadrzadeh et al. 2018)",
"ref_id": "BIBREF62"
},
{
"start": 1376,
"end": 1396,
"text": "(Meyer & Lewis 2020)",
"ref_id": "BIBREF49"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Operators from Quantum Models",
"sec_num": "4"
},
{
"text": "A comprehensive presentation of the use of density matrices to model joint probability distributions is given by Bradley (2020) . This work deliberately takes a quantum probability framework and applies it to language modelling, by way of the lattice structures of Formal Concept Analysis (Ganter & Wille 1999) . This work uses the partial trace of density operators (which are tensors in V \u2297 V ) to project tensors in V \u2297 V to vectors in V . This is analogous to summing the rows or columns of a two-variable joint distribution to get a single-variable marginal distribution. This captures interference and overlap between the initial concepts, and in a framework such as DisCoCat, this might be used to model transitive verb-noun composition (as in Grefenstette & Sadrzadeh 2011, Sadrzadeh et al. 2018, and others) .",
"cite_spans": [
{
"start": 113,
"end": 127,
"text": "Bradley (2020)",
"ref_id": "BIBREF9"
},
{
"start": 289,
"end": 310,
"text": "(Ganter & Wille 1999)",
"ref_id": "BIBREF25"
},
{
"start": 751,
"end": 765,
"text": "Grefenstette &",
"ref_id": "BIBREF31"
},
{
"start": 766,
"end": 808,
"text": "Sadrzadeh 2011, Sadrzadeh et al. 2018, and",
"ref_id": null
},
{
"start": 809,
"end": 816,
"text": "others)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Operators from Quantum Models",
"sec_num": "4"
},
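The partial trace described above has a direct computational form: reshape the density matrix to expose the two subsystem indices and sum over one of them, just as one sums over a variable to marginalize a joint distribution. A small NumPy sketch (the function name is our own; a diagonal density matrix is used so the analogy with a joint probability table is visible):

```python
import numpy as np

def partial_trace(rho, dim_a, dim_b):
    # rho acts on the tensor product space A (x) B; reshape it to indices
    # (a, b, a', b') and trace out B by summing over b = b'.
    r = rho.reshape(dim_a, dim_b, dim_a, dim_b)
    return np.einsum('ibjb->ij', r)

# A diagonal density matrix encoding a joint distribution over a 2x3 space.
p = np.array([[0.1, 0.2, 0.1],
              [0.3, 0.2, 0.1]])
rho = np.diag(p.reshape(-1))

rho_a = partial_trace(rho, 2, 3)
# The diagonal of the reduced density matrix is the marginal distribution
# over A, i.e. the row sums of p.
print(np.diag(rho_a))
```

For non-diagonal density matrices the same operation also preserves the off-diagonal coherences within subsystem A, which is what distinguishes it from purely classical marginalization.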
{
"text": "Another mathematical development is the quantum Procrustes alignment method of Lloyd et al. (2020) , where Procrustes alignment refers to the challenge of mapping one vector space into another while preserving relationships as closely as possible. Procrustes techniques have been used to align multilingual FastText word vectors (Joulin et al. 2018) , and one day these methods may be combined to give faster and more noise-tolerant multilingual concept alignment.",
"cite_spans": [
{
"start": 79,
"end": 98,
"text": "Lloyd et al. (2020)",
"ref_id": "BIBREF44"
},
{
"start": 323,
"end": 343,
"text": "(Joulin et al. 2018)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Operators from Quantum Models",
"sec_num": "4"
},
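Classical orthogonal Procrustes alignment has a closed-form solution via the singular value decomposition, which is the standard tool for aligning embedding spaces. A self-contained sketch with synthetic data (our own illustration of the classical method, not the quantum algorithm of Lloyd et al.):

```python
import numpy as np

def orthogonal_procrustes(X, Y):
    # Find the orthogonal matrix W minimizing ||X W - Y||_F:
    # W = U V^T, where U S V^T is the SVD of X^T Y.
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 8))                 # 100 paired 'source' vectors
Q, _ = np.linalg.qr(rng.normal(size=(8, 8)))  # a hidden orthogonal map
Y = X @ Q                                     # 'target' vectors

W = orthogonal_procrustes(X, Y)
# With noiseless paired data, the hidden map is recovered exactly.
print(np.allclose(W, Q))
```

With noisy or only partially corresponding pairs, the same formula gives the best orthogonal approximation in the least-squares sense, which is why it tolerates imperfect bilingual dictionaries in embedding alignment.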
{
"text": "This again is not a complete survey, but we hope it demonstrates that the interplay between quantum theory, semantic vector composition, and practical implementation has much to offer, and that work in this area is accelerating.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Operators from Quantum Models",
"sec_num": "4"
},
{
"text": "This paper has surveyed vector composition techniques used for aspects of semantic composition in explicit linguistic models, neural networks, and quantum models, while acknowledging that these areas overlap. The operations considered are gathered and summarized in Table 1 . Some of the most successful neural network models to date have used operations that are nonlinear and implicit. Though models such as BERT and ELMo have performed exceptionally well on several benchmark tasks, they are famously difficult to explain and computationally expensive. Therefore, scientific researchers and commercial user-facing enterprises have good reason to be impressed, but still to seek alternatives that are clearer and cheaper. At the same time, progress in quantum computing raises the possibility that the practical cost of different mathematical operations may be considerably revised over the coming years. For example, if the expense of tensor products becomes linear rather than quadratic, tensor networks may find a position at the forefront of 'neural quantum computing'.",
"cite_spans": [],
"ref_spans": [
{
"start": 266,
"end": 273,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Summary, Conclusion, and Future Work",
"sec_num": "5"
},
{
"text": "In addition, there is emerging evidence that such models can be augmented by approaches that draw on structured semantic knowledge (Michalopoulos et al. 2020 , Colon-Hernandez et al. 2021 , suggesting the combination of implicit and explicit approaches to semantic composition as a fruitful area for future methodological research. We hope that this approach of surveying and comparing the semantic, mathematical and computational elements of various vector operations will serve as a guide to territory yet to be explored at the intersection of compositional operators and vector representations of language. ",
"cite_spans": [
{
"start": 131,
"end": 157,
"text": "(Michalopoulos et al. 2020",
"ref_id": "BIBREF50"
},
{
"start": 158,
"end": 187,
"text": ", Colon-Hernandez et al. 2021",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Summary, Conclusion, and Future Work",
"sec_num": "5"
},
{
"text": "Early examples are given by Aristotle, such as (De Interpretatione, Ch IV):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Named after Georg Frobenius (1849-1917), a group theorist who contributed particularly to group representation theory. See Kartsaklis (2015) for a thorough presentation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This also explains why it is tempting to reuse or abuse the tensor product notation and use the symbol \u2297 for binding and for the inverse or release operator, as in Widdows & Cohen (2015). 5 The requirement that the binding a * b be dissimilar to both a and b makes the Frobenius uncopying operator of Kartsaklis (2015) unsuitable for a VSA, because the coordinates of v * w are the products of the corresponding coordinates in v and w, which typically makes the scalar product with either factor quite large. This is, however, a rather shallow observation, and the relationship between VSAs and Frobenius algebras may be a fruitful topic to investigate more thoroughly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "An introduction to this huge topic is beyond the scope of this paper. Unfamiliar readers are encouraged to start with a general survey such as that of G\u00e9ron (2019), Chapter 16 being particularly relevant to the discussion here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://pypi.org/project/word2vec/ 8 https://fasttext.cc/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This itself could lead to a mathematical discussion -the magnitude of a state vector in quantum mechanics is ignored, just like cosine similarity ignores the scale factors of the scalar product, and its resilience to scale factors makes cosine similarity technically not a linear operator. 10 The dependence of quantum computing on linearity should not go unquestioned -for example, the use of quantum harmonic oscillators rather than qubits has been proposed as a way to incorporate nonlinearity into quantum hardware by Goto (2016).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors would like to thank the anonymous reviewers for helping to improve the coverage of this survey, and the conference organizers for allowing extra pages to accommodate these additions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": "6"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Quantum aspects of semantic analysis and symbolic artificial intelligence",
"authors": [
{
"first": "D",
"middle": [],
"last": "Aerts",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Czachor",
"suffix": ""
}
],
"year": 2004,
"venue": "J. Phys. A: Math. Gen",
"volume": "37",
"issue": "",
"pages": "123--132",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aerts, D. & Czachor, M. (2004), 'Quantum aspects of semantic analysis and symbolic artificial intelli- gence', J. Phys. A: Math. Gen. 37, L123-L132.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Syntax-BERT: Improving pre-trained transformers with syntax trees",
"authors": [
{
"first": "J",
"middle": [],
"last": "Bai",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Bai",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Tong",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2103.04350"
]
},
"num": null,
"urls": [],
"raw_text": "Bai, J., Wang, Y., Chen, Y., Yang, Y., Bai, J., Yu, J. & Tong, Y. (2021), 'Syntax-BERT: Improving pre-trained transformers with syntax trees', arXiv preprint arXiv:2103.04350 .",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Graded hyponymy for compositional distributional semantics",
"authors": [
{
"first": "D",
"middle": [],
"last": "Bankova",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Coecke",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Marsden",
"suffix": ""
}
],
"year": 2019,
"venue": "Journal of Language Modelling",
"volume": "6",
"issue": "2",
"pages": "225--260",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bankova, D., Coecke, B., Lewis, M. & Marsden, D. (2019), 'Graded hyponymy for compositional distri- butional semantics', Journal of Language Modelling 6(2), 225-260.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Frege in space: A program for compositional distributional semantics",
"authors": [
{
"first": "M",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Bernardi",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Zamparelli",
"suffix": ""
}
],
"year": 2014,
"venue": "Linguistic Issues in language technology",
"volume": "9",
"issue": "6",
"pages": "5--110",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Baroni, M., Bernardi, R., Zamparelli, R. et al. (2014), 'Frege in space: A program for compositional dis- tributional semantics', Linguistic Issues in language technology 9(6), 5-110.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Nouns are vectors, adjectives are matrices: Representing adjectivenoun constructions in semantic space",
"authors": [
{
"first": "M",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Zamparelli",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Baroni, M. & Zamparelli, R. (2010), Nouns are vec- tors, adjectives are matrices: Representing adjective- noun constructions in semantic space, in 'Proceed- ings of the 2010 Conference on Empirical Methods in Natural Language Processing (EMNLP)'.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Encoding syntactic dependencies by vector permutation",
"authors": [
{
"first": "P",
"middle": [],
"last": "Basile",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Caputo",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Semeraro",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the GEMS 2011 Workshop on GEometrical Models of Natural Language Semantics",
"volume": "",
"issue": "",
"pages": "43--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Basile, P., Caputo, A. & Semeraro, G. (2011), Encod- ing syntactic dependencies by vector permutation, in 'Proceedings of the GEMS 2011 Workshop on GE- ometrical Models of Natural Language Semantics', pp. 43-51.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A quantum search decoder for natural language processing",
"authors": [
{
"first": "J",
"middle": [],
"last": "Bausch",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Subramanian",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Piddock",
"suffix": ""
}
],
"year": 2021,
"venue": "Quantum Machine Intelligence",
"volume": "3",
"issue": "",
"pages": "1--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bausch, J., Subramanian, S. & Piddock, S. (2021), 'A quantum search decoder for natural language pro- cessing', Quantum Machine Intelligence 3(1), 1-24.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "P",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "135--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bojanowski, P., Grave, E., Joulin, A. & Mikolov, T. (2017), 'Enriching word vectors with subword infor- mation', Transactions of the Association for Compu- tational Linguistics 5, 135-146.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "An Investigation of the Laws of Thought",
"authors": [
{
"first": "G",
"middle": [],
"last": "Boole",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Boole, G. (1854), An Investigation of the Laws of Thought, Macmillan. Dover edition, 1958.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "At the interface of algebra and statistics",
"authors": [
{
"first": "T.-D",
"middle": [],
"last": "Bradley",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.05631"
]
},
"num": null,
"urls": [],
"raw_text": "Bradley, T.-D. (2020), 'At the interface of algebra and statistics', PhD dissertation, arXiv preprint arXiv:2004.05631 .",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Combining symbolic and distributional models of meaning",
"authors": [
{
"first": "S",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Pulman",
"suffix": ""
}
],
"year": 2007,
"venue": "'AAAI Spring Symposium: Quantum Interaction",
"volume": "",
"issue": "",
"pages": "52--55",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Clark, S. & Pulman, S. (2007), Combining symbolic and distributional models of meaning., in 'AAAI Spring Symposium: Quantum Interaction', pp. 52- 55.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Uniqueness of composition in quantum theory and linguistics",
"authors": [
{
"first": "B",
"middle": [],
"last": "Coecke",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Genovese",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Gogioso",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Marsden",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Piedeleu",
"suffix": ""
}
],
"year": 2017,
"venue": "14th International Conference on Quantum Physics and Logic (QPL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Coecke, B., Genovese, F., Gogioso, S., Marsden, D. & Piedeleu, R. (2017), 'Uniqueness of composition in quantum theory and linguistics', 14th International Conference on Quantum Physics and Logic (QPL) .",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Mathematical foundations for a compositional distributional model of meaning",
"authors": [
{
"first": "B",
"middle": [],
"last": "Coecke",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Sadrzadeh",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Coecke, B., Sadrzadeh, M. & Clark, S. (2010), 'Math- ematical foundations for a compositional distribu- tional model of meaning', CoRR abs/1003.4394.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Predication-based semantic indexing: Permutations as a means to encode predications in semantic space",
"authors": [
{
"first": "T",
"middle": [],
"last": "Cohen",
"suffix": ""
},
{
"first": "R",
"middle": [
"W"
],
"last": "Schvaneveldt",
"suffix": ""
},
{
"first": "T",
"middle": [
"C"
],
"last": "Rindflesch",
"suffix": ""
}
],
"year": 2009,
"venue": "American Medical Informatics Association",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cohen, T., Schvaneveldt, R. W. & Rindflesch, T. C. (2009), Predication-based semantic indexing: Per- mutations as a means to encode predications in se- mantic space, in 'AMIA Annual Symposium Pro- ceedings', Vol. 2009, American Medical Informatics Association, p. 114.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Embedding of semantic predications",
"authors": [
{
"first": "T",
"middle": [],
"last": "Cohen",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Widdows",
"suffix": ""
}
],
"year": 2017,
"venue": "Journal of biomedical informatics",
"volume": "68",
"issue": "",
"pages": "150--166",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cohen, T. & Widdows, D. (2017), 'Embedding of se- mantic predications', Journal of biomedical infor- matics 68, 150-166.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Bringing order to neural word embeddings with embeddings augmented by random permutations (earp), in 'Proceedings of the 22nd Conference on Computational Natural Language Learning",
"authors": [
{
"first": "T",
"middle": [],
"last": "Cohen",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Widdows",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "465--475",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cohen, T. & Widdows, D. (2018), Bringing order to neural word embeddings with embeddings aug- mented by random permutations (earp), in 'Proceed- ings of the 22nd Conference on Computational Nat- ural Language Learning', pp. 465-475.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Combining pre-trained language models and structured knowledge",
"authors": [
{
"first": "P",
"middle": [],
"last": "Colon-Hernandez",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Havasi",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Alonso",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Huggins",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Breazeal",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2101.12294"
]
},
"num": null,
"urls": [],
"raw_text": "Colon-Hernandez, P., Havasi, C., Alonso, J., Huggins, M. & Breazeal, C. (2021), 'Combining pre-trained language models and structured knowledge', arXiv preprint arXiv:2101.12294 .",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Indexing by latent semantic analysis",
"authors": [
{
"first": "S",
"middle": [],
"last": "Deerwester",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Dumais",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Furnas",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Landauer",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Harshman",
"suffix": ""
}
],
"year": 1990,
"venue": "Journal of the American Society for Information Science",
"volume": "41",
"issue": "6",
"pages": "391--407",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Deerwester, S., Dumais, S., Furnas, G., Landauer, T. & Harshman, R. (1990), 'Indexing by latent semantic analysis', Journal of the American Society for Infor- mation Science 41(6), 391-407.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "J",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "M.-W",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Devlin, J., Chang, M.-W., Lee, K. & Toutanova, K. (2018), 'BERT: Pre-training of deep bidirectional transformers for language understanding', arXiv preprint arXiv:1810.04805 .",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "The Principles of Quantum Mechanics",
"authors": [
{
"first": "P",
"middle": [],
"last": "Dirac",
"suffix": ""
}
],
"year": 1930,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dirac, P. (1930), The Principles of Quantum Mechan- ics, 4th edition, 1958, reprinted 1982 edn, Clarendon Press, Oxford.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "How to Build a Brian: A Neural Architecture for Biological Cognition",
"authors": [
{
"first": "C",
"middle": [],
"last": "Eliasmith",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eliasmith, C. (2013), How to Build a Brian: A Neural Architecture for Biological Cognition, Oxford Uni- versity Press.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Towards syntaxaware compositional distributional semantic models",
"authors": [
{
"first": "L",
"middle": [],
"last": "Ferrone",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Zanzotto",
"suffix": ""
}
],
"year": 2014,
"venue": "25th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ferrone, L. & Zanzotto, F. (2014), Towards syntax- aware compositional distributional semantic models, in 'COLING 2014, 25th International Conference on Computational Linguistics'.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Integrating structure and meaning: A new method for encoding structure for text classification, in 'European Conference on Information Retrieval",
"authors": [
{
"first": "J",
"middle": [
"M"
],
"last": "Fishbein",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Eliasmith",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "514--521",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fishbein, J. M. & Eliasmith, C. (2008), Integrating structure and meaning: A new method for encoding structure for text classification, in 'European Confer- ence on Information Retrieval', Springer, pp. 514- 521.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "The Foundations of Arithmetic",
"authors": [
{
"first": "G",
"middle": [],
"last": "Frege",
"suffix": ""
}
],
"year": 1884,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frege, G. (1884), The Foundations of Arithmetic (1884), 1974 (translated by J. L. Austin) edn, Black- well.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Logic, Language, and Meaning",
"authors": [
{
"first": "L",
"middle": [],
"last": "Gamut",
"suffix": ""
}
],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gamut, L. (1991), Logic, Language, and Meaning, Uni- versity of Chicago Press.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Formal Concept Analysis: Mathematical Foundations",
"authors": [
{
"first": "B",
"middle": [],
"last": "Ganter",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Wille",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ganter, B. & Wille, R. (1999), Formal Concept Analy- sis: Mathematical Foundations, Springer.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Vector symbolic architectures answer Jackendoff's challenges for cognitive neuroscience",
"authors": [
{
"first": "R",
"middle": [
"W"
],
"last": "Gayler",
"suffix": ""
}
],
"year": 2004,
"venue": "ICCS/ASCS International Conference on Cognitive Science",
"volume": "",
"issue": "",
"pages": "133--138",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gayler, R. W. (2004), Vector symbolic architectures answer Jackendoff's challenges for cognitive neuro- science, in 'In Peter Slezak (Ed.), ICCS/ASCS Inter- national Conference on Cognitive Science', Sydney, Australia. University of New South Wales., pp. 133- 138.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems",
"authors": [
{
"first": "A",
"middle": [],
"last": "G\u00e9ron",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G\u00e9ron, A. (2019), Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems, O'Reilly Media.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Word2Vec explained: deriving Mikolov et al.'s negative-sampling word-embedding method",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Levy",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1402.3722"
]
},
"num": null,
"urls": [],
"raw_text": "Goldberg, Y. & Levy, O. (2014), 'Word2Vec ex- plained: deriving Mikolov et al.'s negative- sampling word-embedding method', arXiv preprint arXiv:1402.3722 .",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Bifurcation-based adiabatic quantum computation with a nonlinear oscillator network",
"authors": [
{
"first": "H",
"middle": [],
"last": "Goto",
"suffix": ""
}
],
"year": 2016,
"venue": "Scientific reports",
"volume": "6",
"issue": "1",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Goto, H. (2016), 'Bifurcation-based adiabatic quantum computation with a nonlinear oscillator network', Scientific reports 6(1), 1-8.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Extension Theory",
"authors": [
{
"first": "H",
"middle": [],
"last": "Grassmann",
"suffix": ""
}
],
"year": 1862,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grassmann, H. (1862), Extension Theory, History of Mathematics Sources, American Mathematical So- ciety, London Mathematical Society. Translated by Lloyd C. Kannenberg (2000).",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Experimental support for a categorical compositional distributional model of meaning",
"authors": [
{
"first": "E",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Sadrzadeh",
"suffix": ""
}
],
"year": 2011,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grefenstette, E. & Sadrzadeh, M. (2011), 'Experimen- tal support for a categorical compositional distribu- tional model of meaning', EMNLP .",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "On quaternions",
"authors": [
{
"first": "S",
"middle": [
"W R"
],
"last": "Hamilton",
"suffix": ""
}
],
"year": 1847,
"venue": "Proc. Royal Irish Acad",
"volume": "3",
"issue": "",
"pages": "1--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hamilton, S. W. R. (1847), 'On quaternions', Proc. Royal Irish Acad. 3, 1-16.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "A structural probe for finding syntax in word representations",
"authors": [
{
"first": "J",
"middle": [],
"last": "Hewitt",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2019,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "4129--4138",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hewitt, J. & Manning, C. D. (2019), A structural probe for finding syntax in word representations, in 'ACL', pp. 4129-4138.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Attentive tensor product learning",
"authors": [
{
"first": "Q",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "He",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "33",
"issue": "",
"pages": "1344--1351",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huang, Q., Deng, L., Wu, D., Liu, C. & He, X. (2019), Attentive tensor product learning, in 'Proceedings of the AAAI Conference on Artificial Intelligence', Vol. 33, pp. 1344-1351.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Compositionality decomposed: how do neural networks generalise?",
"authors": [
{
"first": "D",
"middle": [],
"last": "Hupkes",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Dankers",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Mul",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Bruni",
"suffix": ""
}
],
"year": 2020,
"venue": "Journal of Artificial Intelligence Research",
"volume": "67",
"issue": "",
"pages": "757--795",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hupkes, D., Dankers, V., Mul, M. & Bruni, E. (2020), 'Compositionality decomposed: how do neural net- works generalise?', Journal of Artificial Intelligence Research 67, 757-795.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "What does BERT learn about the structure of language?",
"authors": [
{
"first": "G",
"middle": [],
"last": "Jawahar",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Sagot",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Seddah",
"suffix": ""
}
],
"year": 2019,
"venue": "ACL 2019 - 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jawahar, G., Sagot, B. & Seddah, D. (2019), What does bert learn about the structure of language?, in 'ACL 2019-57th Annual Meeting of the Association for Computational Linguistics'.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Representing word meaning and order information in a composite holographic lexicon",
"authors": [
{
"first": "M",
"middle": [
"N"
],
"last": "Jones",
"suffix": ""
},
{
"first": "D",
"middle": [
"J K"
],
"last": "Mewhort",
"suffix": ""
}
],
"year": 2007,
"venue": "Psychological Review",
"volume": "114",
"issue": "",
"pages": "1--37",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jones, M. N. & Mewhort, D. J. K. (2007), 'Represent- ing word meaning and order information in a com- posite holographic lexicon', Psychological Review 114, 1-37.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Loss in translation: Learning bilingual word mapping with a retrieval criterion",
"authors": [
{
"first": "A",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "J\u00e9gou",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Grave",
"suffix": ""
}
],
"year": 2018,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joulin, A., Bojanowski, P., Mikolov, T., J\u00e9gou, H. & Grave, E. (2018), Loss in translation: Learning bilingual word mapping with a retrieval criterion, in 'EMNLP'.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "OrBEAGLE: integrating orthography into a holographic model of the lexicon",
"authors": [
{
"first": "G",
"middle": [],
"last": "Kachergis",
"suffix": ""
},
{
"first": "G",
"middle": [
"E"
],
"last": "Cox",
"suffix": ""
},
{
"first": "M",
"middle": [
"N"
],
"last": "Jones",
"suffix": ""
}
],
"year": 2011,
"venue": "International Conference on Artificial Neural Networks",
"volume": "",
"issue": "",
"pages": "307--314",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kachergis, G., Cox, G. E. & Jones, M. N. (2011), Or- beagle: integrating orthography into a holographic model of the lexicon, in 'International conference on artificial neural networks', Springer, pp. 307-314.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Hyperdimensional computing: An introduction to computing in distributed representation with high-dimensional random vectors",
"authors": [
{
"first": "P",
"middle": [],
"last": "Kanerva",
"suffix": ""
}
],
"year": 2009,
"venue": "Cognitive Computation",
"volume": "1",
"issue": "",
"pages": "139--159",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kanerva, P. (2009), 'Hyperdimensional computing: An introduction to computing in distributed representa- tion with high-dimensional random vectors', Cogni- tive Computation 1(2), 139-159.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Compositional distributional semantics with compact closed categories and Frobenius algebras",
"authors": [
{
"first": "D",
"middle": [],
"last": "Kartsaklis",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1505.00138"
]
},
"num": null,
"urls": [],
"raw_text": "Kartsaklis, D. (2015), 'Compositional distributional se- mantics with compact closed categories and frobe- nius algebras', PhD thesis, Wolfson College Oxford. arXiv preprint arXiv:1505.00138 .",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Which sentence embeddings and which layers encode syntactic structure?",
"authors": [
{
"first": "M",
"middle": [
"A"
],
"last": "Kelly",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Calvillo",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Reitter",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kelly, M. A., Xu, Y., Calvillo, J. & Reitter, D. (2020), 'Which sentence embeddings and which layers en- code syntactic structure?'.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Towards logical negation for compositional distributional semantics",
"authors": [
{
"first": "M",
"middle": [],
"last": "Lewis",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.04929"
]
},
"num": null,
"urls": [],
"raw_text": "Lewis, M. (2020), 'Towards logical negation for com- positional distributional semantics', arXiv preprint arXiv:2005.04929 .",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Quantum polar decomposition algorithm",
"authors": [
{
"first": "S",
"middle": [],
"last": "Lloyd",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Bosch",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "De Palma",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Kiani",
"suffix": ""
},
{
"first": "Z.-W",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Marvian",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Rebentrost",
"suffix": ""
},
{
"first": "D",
"middle": [
"M"
],
"last": "Arvidsson-Shukur",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2006.00841"
]
},
"num": null,
"urls": [],
"raw_text": "Lloyd, S., Bosch, S., De Palma, G., Kiani, B., Liu, Z.-W., Marvian, M., Rebentrost, P. & Arvidsson- Shukur, D. M. (2020), 'Quantum polar decomposi- tion algorithm', arXiv preprint arXiv:2006.00841 .",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "QNLP in practice: Running compositional models of meaning on a quantum computer",
"authors": [
{
"first": "R",
"middle": [],
"last": "Lorenz",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Pearson",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Meichanetzidis",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Kartsaklis",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Coecke",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lorenz, R., Pearson, A., Meichanetzidis, K., Kartsaklis, D. & Coecke, B. (2021), 'QNLP in practice: Run- ning compositional models of meaning on a quan- tum computer'.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Learning adjective meanings with a tensor-based skip-gram model",
"authors": [
{
"first": "J",
"middle": [],
"last": "Maillard",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Nineteenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "327--331",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maillard, J. & Clark, S. (2015), Learning adjective meanings with a tensor-based skip-gram model, in 'Proceedings of the Nineteenth Conference on Com- putational Natural Language Learning', pp. 327- 331.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Jointly learning sentence embeddings and syntax with unsupervised Tree-LSTMs",
"authors": [
{
"first": "J",
"middle": [],
"last": "Maillard",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Yogatama",
"suffix": ""
}
],
"year": 2019,
"venue": "Natural Language Engineering",
"volume": "25",
"issue": "4",
"pages": "433--449",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maillard, J., Clark, S. & Yogatama, D. (2019), 'Jointly learning sentence embeddings and syntax with unsu- pervised tree-lstms', Natural Language Engineering 25(4), 433-449.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Tensor product decomposition networks: Uncovering representations of structure learned by neural networks",
"authors": [
{
"first": "R",
"middle": [
"T"
],
"last": "Mccoy",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Linzen",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Dunbar",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Smolensky",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Society for Computation in Linguistics",
"volume": "3",
"issue": "1",
"pages": "474--475",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "McCoy, R. T., Linzen, T., Dunbar, E. & Smolensky, P. (2020), 'Tensor product decomposition networks: Uncovering representations of structure learned by neural networks', Proceedings of the Society for Computation in Linguistics 3(1), 474-475.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Modelling lexical ambiguity with density matrices",
"authors": [
{
"first": "F",
"middle": [],
"last": "Meyer",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Lewis",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2010.05670"
]
},
"num": null,
"urls": [],
"raw_text": "Meyer, F. & Lewis, M. (2020), 'Modelling lexical ambiguity with density matrices', arXiv preprint arXiv:2010.05670 .",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "UmlsBERT: Clinical domain knowledge augmentation of contextual embeddings using the unified medical language system metathesaurus",
"authors": [
{
"first": "G",
"middle": [],
"last": "Michalopoulos",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Kaka",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Wong",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2010.10391"
]
},
"num": null,
"urls": [],
"raw_text": "Michalopoulos, G., Wang, Y., Kaka, H., Chen, H. & Wong, A. (2020), 'Umlsbert: Clinical domain knowledge augmentation of contextual embeddings using the unified medical language system metathe- saurus', arXiv preprint arXiv:2010.10391 .",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "T",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1301.3781"
]
},
"num": null,
"urls": [],
"raw_text": "Mikolov, T., Chen, K., Corrado, G. & Dean, J. (2013), 'Efficient estimation of word representations in vec- tor space', arXiv preprint arXiv:1301.3781 .",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Evaluating neural word representations in tensor-based compositional settings",
"authors": [
{
"first": "D",
"middle": [],
"last": "Milajevs",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Kartsaklis",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Sadrzadeh",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Purver",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "708--719",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Milajevs, D., Kartsaklis, D., Sadrzadeh, M. & Purver, M. (2014), Evaluating neural word representations in tensor-based compositional settings, in 'EMNLP', pp. 708-719.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Vector-based models of semantic composition",
"authors": [
{
"first": "J",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "236--244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell, J. & Lapata, M. (2008), Vector-based models of semantic composition., in 'ACL', pp. 236-244.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Classification-by-analogy: using vector representations of implicit relationships to identify plausibly causal drug/side-effect relationships",
"authors": [
{
"first": "J",
"middle": [],
"last": "Mower",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Subramanian",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Shang",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Cohen",
"suffix": ""
}
],
"year": 2016,
"venue": "AMIA Annual Symposium Proceedings",
"volume": "2016",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mower, J., Subramanian, D., Shang, N. & Cohen, T. (2016), Classification-by-analogy: using vector rep- resentations of implicit relationships to identify plau- sibly causal drug/side-effect relationships, in 'AMIA annual symposium proceedings', Vol. 2016, Ameri- can Medical Informatics Association, p. 1940.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Mathematical Methods in Linguistics",
"authors": [
{
"first": "B",
"middle": [
"H"
],
"last": "Partee",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ter Meulen",
"suffix": ""
},
{
"first": "R",
"middle": [
"E"
],
"last": "Wall",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Partee, B. H., ter Meulen, A. & Wall, R. E. (1993), Mathematical Methods in Linguistics, Kluwer.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Improving biomedical analogical retrieval with embedding of structural dependencies",
"authors": [
{
"first": "A",
"middle": [],
"last": "Paullada",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Percha",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Cohen",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 19th SIGBioMed Workshop on Biomedical Language Processing",
"volume": "",
"issue": "",
"pages": "38--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paullada, A., Percha, B. & Cohen, T. (2020), Improving biomedical analogical retrieval with embedding of structural dependencies, in 'Proceedings of the 19th SIGBioMed Workshop on Biomedical Language Processing', pp. 38-48.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "M",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1802.05365"
]
},
"num": null,
"urls": [],
"raw_text": "Peters, M. E., Neumann, M., Iyyer, M., Gardner, M., Clark, C., Lee, K. & Zettlemoyer, L. (2018), 'Deep contextualized word representations', arXiv preprint arXiv:1802.05365 .",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "Holographic Reduced Representations: Distributed Representation for Cognitive Structures",
"authors": [
{
"first": "T",
"middle": [
"A"
],
"last": "Plate",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Plate, T. A. (2003), Holographic Reduced Represen- tations: Distributed Representation for Cognitive Structures, CSLI Publications.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "Encoding sequential information in semantic space models: comparing holographic reduced representation and random permutation",
"authors": [
{
"first": "G",
"middle": [],
"last": "Recchia",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Sahlgren",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Kanerva",
"suffix": ""
},
{
"first": "M",
"middle": [
"N"
],
"last": "Jones",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Recchia, G., Sahlgren, M., Kanerva, P. & Jones, M. N. (2015), 'Encoding sequential information in seman- tic space models: comparing holographic reduced representation and random permutation.', Computa- tional intelligence and neuroscience .",
"links": null
},
"BIBREF60": {
"ref_id": "b60",
"title": "The Frobenius anatomy of word meanings II: possessive relative pronouns",
"authors": [
{
"first": "M",
"middle": [],
"last": "Sadrzadeh",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Coecke",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Logic and Computation",
"volume": "26",
"issue": "2",
"pages": "785--815",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sadrzadeh, M., Clark, S. & Coecke, B. (2014a), 'The frobenius anatomy of word meanings ii: possessive relative pronouns', Journal of Logic and Computa- tion 26(2), 785-815.",
"links": null
},
"BIBREF61": {
"ref_id": "b61",
"title": "The Frobenius anatomy of word meanings II: possessive relative pronouns",
"authors": [
{
"first": "M",
"middle": [],
"last": "Sadrzadeh",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Coecke",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Logic and Computation",
"volume": "26",
"issue": "2",
"pages": "785--815",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sadrzadeh, M., Clark, S. & Coecke, B. (2014b), 'The frobenius anatomy of word meanings ii: possessive relative pronouns', Journal of Logic and Computa- tion 26(2), 785-815.",
"links": null
},
"BIBREF62": {
"ref_id": "b62",
"title": "Sentence entailment in compositional distributional semantics",
"authors": [
{
"first": "M",
"middle": [],
"last": "Sadrzadeh",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Kartsaklis",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Balk\u0131r",
"suffix": ""
}
],
"year": 2018,
"venue": "Annals of Mathematics and Artificial Intelligence",
"volume": "82",
"issue": "4",
"pages": "189--218",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sadrzadeh, M., Kartsaklis, D. & Balk\u0131r, E. (2018), 'Sentence entailment in compositional distributional semantics', Annals of Mathematics and Artificial In- telligence 82(4), 189-218.",
"links": null
},
"BIBREF63": {
"ref_id": "b63",
"title": "Permutations as a means to encode order in word space",
"authors": [
{
"first": "M",
"middle": [],
"last": "Sahlgren",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Holst",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Kanerva",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 30th Annual Meeting of the Cognitive Science Society (CogSci'08)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sahlgren, M., Holst, A. & Kanerva, P. (2008), Permu- tations as a means to encode order in word space., in 'Proceedings of the 30th Annual Meeting of the Cognitive Science Society (CogSci'08), July 23-26, Washington D.C., USA.'.",
"links": null
},
"BIBREF64": {
"ref_id": "b64",
"title": "A vector space model for automatic indexing",
"authors": [
{
"first": "G",
"middle": [],
"last": "Salton",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Wong",
"suffix": ""
},
{
"first": "C.-S",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 1975,
"venue": "Communications of the ACM",
"volume": "18",
"issue": "11",
"pages": "613--620",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Salton, G., Wong, A. & Yang, C.-S. (1975), 'A vector space model for automatic indexing', Communica- tions of the ACM 18(11), 613-620.",
"links": null
},
"BIBREF65": {
"ref_id": "b65",
"title": "Universal approximation using feedforward neural networks: A survey of some existing methods, and some new results",
"authors": [
{
"first": "F",
"middle": [],
"last": "Scarselli",
"suffix": ""
},
{
"first": "A",
"middle": [
"C"
],
"last": "Tsoi",
"suffix": ""
}
],
"year": 1998,
"venue": "Neural networks",
"volume": "11",
"issue": "1",
"pages": "15--37",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Scarselli, F. & Tsoi, A. C. (1998), 'Universal approxi- mation using feedforward neural networks: A survey of some existing methods, and some new results', Neural networks 11(1), 15-37.",
"links": null
},
"BIBREF66": {
"ref_id": "b66",
"title": "Reading the written language environment: Learning orthographic structure from statistical regularities",
"authors": [
{
"first": "T",
"middle": [
"M"
],
"last": "Schubert",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Cohen",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Fischer-Baum",
"suffix": ""
}
],
"year": 2020,
"venue": "Journal of Memory and Language",
"volume": "114",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schubert, T. M., Cohen, T. & Fischer-Baum, S. (2020), 'Reading the written language environ- ment: Learning orthographic structure from statis- tical regularities', Journal of Memory and Language 114, 104148.",
"links": null
},
"BIBREF67": {
"ref_id": "b67",
"title": "Tensor product variable binding and the representation of symbolic structures in connectionist systems",
"authors": [
{
"first": "P",
"middle": [],
"last": "Smolensky",
"suffix": ""
}
],
"year": 1990,
"venue": "Artificial intelligence",
"volume": "46",
"issue": "1",
"pages": "159--216",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Smolensky, P. (1990), 'Tensor product variable bind- ing and the representation of symbolic structures in connectionist systems', Artificial intelligence 46(1), 159-216.",
"links": null
},
"BIBREF68": {
"ref_id": "b68",
"title": "Semantic compositionality through recursive matrix-vector spaces",
"authors": [
{
"first": "R",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Huval",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "A",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2012,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "1201--1211",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Socher, R., Huval, B., Manning, C. D. & Ng, A. Y. (2012), Semantic compositionality through recur- sive matrix-vector spaces, in 'EMNLP', pp. 1201- 1211.",
"links": null
},
"BIBREF69": {
"ref_id": "b69",
"title": "Improved semantic representations from treestructured long short-term memory networks",
"authors": [
{
"first": "K",
"middle": [
"S"
],
"last": "Tai",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1503.00075"
]
},
"num": null,
"urls": [],
"raw_text": "Tai, K. S., Socher, R. & Manning, C. D. (2015), 'Improved semantic representations from tree- structured long short-term memory networks', arXiv preprint arXiv:1503.00075 .",
"links": null
},
"BIBREF70": {
"ref_id": "b70",
"title": "The Geometry of Information Retrieval",
"authors": [
{
"first": "K",
"middle": [],
"last": "Van Rijsbergen",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "van Rijsbergen, K. (2004), The Geometry of Informa- tion Retrieval, Cambridge University Press.",
"links": null
},
"BIBREF71": {
"ref_id": "b71",
"title": "Attention is all you need",
"authors": [
{
"first": "A",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "A",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, \u0141. & Polosukhin, I. (2017), Attention is all you need, in 'Advances in neural information processing systems', pp. 5998-6008.",
"links": null
},
"BIBREF72": {
"ref_id": "b72",
"title": "Principles of quantum artificial intelligence: Quantum Problem Solving and Machine Learning",
"authors": [
{
"first": "A",
"middle": [],
"last": "Wichert",
"suffix": ""
}
],
"year": 2020,
"venue": "World Scientific",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wichert, A. (2020), Principles of quantum artificial intelligence: Quantum Problem Solving and Machine Learning (Second Edition), World Scientific.",
"links": null
},
"BIBREF73": {
"ref_id": "b73",
"title": "Orthogonal negation in vector spaces for modelling word-meanings and document retrieval",
"authors": [
{
"first": "D",
"middle": [],
"last": "Widdows",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Widdows, D. (2003), Orthogonal negation in vector spaces for modelling word-meanings and document retrieval, in 'ACL 2003', Sapporo, Japan.",
"links": null
},
"BIBREF74": {
"ref_id": "b74",
"title": "Geometry and Meaning",
"authors": [
{
"first": "D",
"middle": [],
"last": "Widdows",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Widdows, D. (2004), Geometry and Meaning, CSLI Publications.",
"links": null
},
"BIBREF75": {
"ref_id": "b75",
"title": "Semantic vector products: Some initial investigations",
"authors": [
{
"first": "D",
"middle": [],
"last": "Widdows",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Second International Symposium on Quantum Interaction",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Widdows, D. (2008), Semantic vector products: Some initial investigations, in 'Proceedings of the Second International Symposium on Quantum Interaction'.",
"links": null
},
"BIBREF76": {
"ref_id": "b76",
"title": "Reasoning with vectors: a continuous model for fast robust inference",
"authors": [
{
"first": "D",
"middle": [],
"last": "Widdows",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Cohen",
"suffix": ""
}
],
"year": 2015,
"venue": "Logic Journal of IGPL",
"volume": "23",
"issue": "2",
"pages": "141--173",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Widdows, D. & Cohen, T. (2015), 'Reasoning with vectors: a continuous model for fast robust inference', Logic Journal of IGPL 23(2), 141-173.",
"links": null
},
"BIBREF77": {
"ref_id": "b77",
"title": "Quantum mathematics in artificial intelligence",
"authors": [
{
"first": "D",
"middle": [],
"last": "Widdows",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Kitto",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Cohen",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2101.04255"
]
},
"num": null,
"urls": [],
"raw_text": "Widdows, D., Kitto, K. & Cohen, T. (2021), 'Quantum mathematics in artificial intelligence', arXiv preprint arXiv:2101.04255.",
"links": null
},
"BIBREF78": {
"ref_id": "b78",
"title": "Quantum language processing",
"authors": [
{
"first": "N",
"middle": [],
"last": "Wiebe",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Bocharov",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Smolensky",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Troyer",
"suffix": ""
},
{
"first": "K",
"middle": [
"M"
],
"last": "Svore",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1902.05162"
]
},
"num": null,
"urls": [],
"raw_text": "Wiebe, N., Bocharov, A., Smolensky, P., Troyer, M. & Svore, K. M. (2019), 'Quantum language processing', arXiv preprint arXiv:1902.05162.",
"links": null
},
"BIBREF79": {
"ref_id": "b79",
"title": "Representation learning for type-driven composition",
"authors": [
{
"first": "G",
"middle": [],
"last": "Wijnholds",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Sadrzadeh",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 24th Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "313--324",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wijnholds, G., Sadrzadeh, M. & Clark, S. (2020), Representation learning for type-driven composition, in 'Proceedings of the 24th Conference on Computational Natural Language Learning', pp. 313-324.",
"links": null
},
"BIBREF80": {
"ref_id": "b80",
"title": "KERMIT: Complementing transformer architectures with encoders of explicit syntactic interpretations",
"authors": [
{
"first": "F",
"middle": [
"M"
],
"last": "Zanzotto",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Santilli",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Ranaldi",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Onorati",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Tommasino",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Fallucchi",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "256--267",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zanzotto, F. M., Santilli, A., Ranaldi, L., Onorati, D., Tommasino, P. & Fallucchi, F. (2020), KERMIT: Complementing transformer architectures with encoders of explicit syntactic interpretations, in 'EMNLP', pp. 256-267.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"html": null,
"num": null,
"content": "<table><tr><td>Mathematical</td><td>Use Case</td><td>Inputs</td><td>Outputs</td><td>Explicitness</td><td>Linearity</td><td>Described By</td></tr><tr><td>Method</td><td/><td/><td/><td/><td/><td/></tr><tr><td>Vector sum</td><td>Document retrieval and</td><td>Word vectors</td><td>Document vectors</td><td>Explicit</td><td>Linear</td><td>Widespread includ-</td></tr><tr><td/><td>many others</td><td/><td/><td/><td/><td>ing Salton et al.</td></tr><tr><td/><td/><td/><td/><td/><td/><td>(1975)</td></tr><tr><td>Scalar product</td><td>Similarity scoring</td><td>Any vector</td><td>Any vector</td><td>Explicit</td><td>Linear</td><td>Various</td></tr><tr><td>Cosine similarity</td><td>Similarity scoring</td><td>Any vector</td><td>Any vector</td><td>Explicit</td><td>Nonlinear -</td><td>Various</td></tr><tr><td/><td/><td/><td/><td/><td>deliberately ig-</td><td/></tr><tr><td/><td/><td/><td/><td/><td>nores magnitude</td><td/></tr><tr><td>Tensor product</td><td>Role-filler binding</td><td>Variable name</td><td>Variable value</td><td>Explicit</td><td>Linear</td><td>Smolensky (1990)</td></tr><tr><td>Circular vector sum</td><td>Holographic reduced</td><td>Circular vectors</td><td>Circular vectors</td><td>Explicit</td><td>Nonlinear, and</td><td>Plate (2003)</td></tr><tr><td/><td>representations</td><td/><td/><td/><td>non-associative</td><td/></tr><tr><td>(Stalnaker) Condi-</td><td>Implication</td><td>Vector / subspace</td><td>Subspace rep-</td><td>Explicit</td><td>Linear (though</td><td>van Rijsbergen</td></tr><tr><td>tional</td><td/><td>representing propo-</td><td>resenting truth</td><td/><td>for subspaces it</td><td>(2004, Ch 5)</td></tr><tr><td/><td/><td>sitions</td><td>conditions</td><td/><td>doesn't matter)</td><td/></tr><tr><td>Orthogonal projec-</td><td>Negation</td><td>Vector or subspace</td><td>Vector</td><td>Explicit</td><td>Linear</td><td>Widdows (2003)</td></tr><tr><td>tion</td><td/><td/><td/><td/><td/><td/></tr><tr><td>Sum of subspaces</td><td>Disjunction</td><td>Vectors or sub-</td><td>Subspace</td><td>Explicit</td><td>Linear</td><td>Widdows (2003)</td></tr><tr><td/><td/><td>spaces</td><td/><td/><td/><td/></tr><tr><td>Parallelogram rule</td><td>Proportional analogy</td><td>Three vectors</td><td>Fourth vector</td><td>Explicit</td><td>Linear</td><td>Various incl. Wid-</td></tr><tr><td/><td/><td/><td/><td/><td/><td>dows (2008),</td></tr><tr><td/><td/><td/><td/><td/><td/><td>Mikolov et al.</td></tr><tr><td/><td/><td/><td/><td/><td/><td>(2013)</td></tr><tr><td>Tensor product</td><td>Word vector composi-</td><td>Word vector</td><td>Sentence-fragment</td><td>Explicit</td><td>Linear</td><td>Various since Aerts</td></tr><tr><td/><td>tion</td><td/><td>tensor</td><td/><td/><td>&amp; Czachor (2004),</td></tr><tr><td/><td/><td/><td/><td/><td/><td>Clark &amp; Pulman</td></tr><tr><td/><td/><td/><td/><td/><td/><td>(2007)</td></tr><tr><td>Tensor and monoidal</td><td>Parallel semantic,</td><td>(vector, syntactic</td><td>Sentence vectors</td><td>Explicit</td><td>Linear</td><td>Various since Co-</td></tr><tr><td>product</td><td>syntactic composition</td><td>type) pairs</td><td/><td/><td/><td>ecke et al. (2010)</td></tr><tr><td>Matrix multiplica-</td><td>Adjective / noun com-</td><td>Matrix and vector</td><td>Vector</td><td>Explicit</td><td>Linear</td><td>Baroni &amp; Zampar-</td></tr><tr><td>tion</td><td>position</td><td/><td/><td/><td/><td>elli (2010)</td></tr><tr><td>Circular convolution</td><td>Vector binding</td><td>VSA vector</td><td>VSA vector</td><td>Explicit</td><td>Sometimes</td><td>Plate (2003), op-</td></tr><tr><td/><td/><td/><td/><td/><td/><td>tions in Widdows</td></tr><tr><td/><td/><td/><td/><td/><td/><td>&amp; Cohen (2015)</td></tr><tr><td>Binary XOR</td><td>Binary vector binding</td><td>VSA vector</td><td>VSA vector</td><td>Explicit</td><td>Binary vectors</td><td>Kanerva (2009)</td></tr><tr><td/><td/><td/><td/><td/><td>warrant more</td><td/></tr><tr><td/><td/><td/><td/><td/><td>discussion!</td><td/></tr><tr><td>Permutation of</td><td>Non-additive composi-</td><td>Vector</td><td>Vector</td><td>Explicit,</td><td>Linear (because</td><td>Sahlgren et al.</td></tr><tr><td>coordinates</td><td>tion</td><td/><td/><td>though often</td><td>rotation or</td><td>(2008) and various</td></tr><tr><td/><td/><td/><td/><td>random</td><td>reflection)</td><td/></tr><tr><td>Skipgram objective</td><td>Vector interaction in</td><td>Word and context</td><td>Update to both</td><td>Explicit though</td><td>Nonlinear</td><td>Mikolov et al.</td></tr><tr><td/><td>training</td><td>vector</td><td/><td>internal</td><td/><td>(2013)</td></tr><tr><td>tanh, Sigmoid,</td><td>Activation functions in</td><td>Input weights</td><td>Output weights</td><td>Typically</td><td>Nonlinear</td><td>Many including</td></tr><tr><td>ReLU, Softmax, etc.</td><td>neural networks</td><td/><td/><td>implicit</td><td/><td>G\u00e9ron (2019, Ch</td></tr><tr><td/><td/><td/><td/><td/><td/><td>10)</td></tr><tr><td>Scaled dot-product</td><td>Learning pairwise</td><td>Vectors</td><td>Updated vectors</td><td>Typically</td><td>Nonlinear</td><td>Vaswani et al.</td></tr><tr><td>attention</td><td>dependence</td><td/><td/><td>internal</td><td/><td>(2017)</td></tr><tr><td>Distributed tree ker-</td><td>Embedding syntactic</td><td>Parse tree</td><td>Sentence vector</td><td>Explicit</td><td>Linear</td><td>(Ferrone &amp; Zan-</td></tr><tr><td>nel / shuffled circular</td><td>tree in vector space</td><td/><td/><td/><td/><td>zotto 2014)</td></tr><tr><td>convolution</td><td/><td/><td/><td/><td/><td/></tr><tr><td>Density matrices,</td><td>More general distri-</td><td>Several, e.g., super-</td><td>Projected vectors</td><td>Often explicit</td><td>Linear</td><td>van Rijsbergen</td></tr><tr><td>POVMs</td><td>butions over vector</td><td>positions of pairs</td><td>and / or probabili-</td><td/><td/><td>(2004), Sadrzadeh</td></tr><tr><td/><td>spaces, e.g., repre-</td><td>of vectors</td><td>ties</td><td/><td/><td>et al. (2018), Lewis</td></tr><tr><td/><td>senting categories,</td><td/><td/><td/><td/><td>(2020), Bradley</td></tr><tr><td/><td>implication</td><td/><td/><td/><td/><td>(2020)</td></tr><tr><td colspan=\"2\">Procrustes alignment Aligning vector models</td><td>Pairs of source,</td><td>Linear mapping</td><td>Explicit</td><td>Linear</td><td>Bojanowski et al.</td></tr><tr><td/><td>U and V</td><td>target vectors</td><td>from U to V</td><td/><td/><td>(2017), Lloyd et al.</td></tr></table>",
"text": "Survey Summary of Mathematical Methods Used for Semantic Vector Composition",
"type_str": "table"
}
}
}
}