{
"paper_id": "W13-0109",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T04:48:06.014912Z"
},
"title": "Towards a semantics for distributional representations",
"authors": [
{
"first": "Katrin",
"middle": [],
"last": "Erk",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Texas at Austin",
"location": {}
},
"email": "katrin.erk@mail.utexas.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Distributional representations have recently been proposed as a general-purpose representation of natural language meaning, to replace logical form. There is, however, one important difference between logical and distributional representations: Logical languages have a clear semantics, while distributional representations do not. In this paper, we propose a semantics for distributional representations that links points in vector space to mental concepts. We extend this framework to a joint semantics of logic and distributions by linking intensions of logical expressions to mental concepts.",
"pdf_parse": {
"paper_id": "W13-0109",
"_pdf_hash": "",
"abstract": [
{
"text": "Distributional representations have recently been proposed as a general-purpose representation of natural language meaning, to replace logical form. There is, however, one important difference between logical and distributional representations: Logical languages have a clear semantics, while distributional representations do not. In this paper, we propose a semantics for distributional representations that links points in vector space to mental concepts. We extend this framework to a joint semantics of logic and distributions by linking intensions of logical expressions to mental concepts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Distributional similarity can model a surprising range of phenomena (e.g., Lund et al. (1995) ; Landauer and Dumais (1997) ) and is useful in many NLP tasks (Turney and Pantel, 2010) . Recently, it has been suggested that a general-purpose framework for representing natural language semantics should be distributional, such that it could represent word similarity and phrase similarity (Coecke et al., 2010; Baroni and Zamparelli, 2010; Grefenstette and Sadrzadeh, 2011; Clarke, 2012) . Another suggestion has been to combine distributional representations and logical form, with the argument that the strengths of the two frameworks are in complementary areas (Garrette et al., 2011) .",
"cite_spans": [
{
"start": 75,
"end": 93,
"text": "Lund et al. (1995)",
"ref_id": "BIBREF24"
},
{
"start": 96,
"end": 122,
"text": "Landauer and Dumais (1997)",
"ref_id": "BIBREF21"
},
{
"start": 157,
"end": 182,
"text": "(Turney and Pantel, 2010)",
"ref_id": "BIBREF32"
},
{
"start": 387,
"end": 408,
"text": "(Coecke et al., 2010;",
"ref_id": "BIBREF11"
},
{
"start": 409,
"end": 437,
"text": "Baroni and Zamparelli, 2010;",
"ref_id": "BIBREF2"
},
{
"start": 438,
"end": 471,
"text": "Grefenstette and Sadrzadeh, 2011;",
"ref_id": "BIBREF19"
},
{
"start": 472,
"end": 485,
"text": "Clarke, 2012)",
"ref_id": "BIBREF10"
},
{
"start": 662,
"end": 685,
"text": "(Garrette et al., 2011)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "One important difference between logic and distributional representations is that logics have a semantics. For example, a model 1 in model-theoretic semantics provides a truth assignment to each sentence of a logical language. More generally, it associates expressions of a logic with set-theoretic structures, for example the constant cat could be interpreted as the set of all cats in a given world. But what is the interpretation of a distributional representation? What does a point in vector space, where the dimensions are typically uninterpretable symbols, stand for? 2 In this paper, we propose a semantics in which distributional representations stand for mental concepts, and are linked to intensions of logical expressions. This gives us a joint semantics for distributional and logical representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Distributional representations stand for mental concepts. One central function of models is that they evaluate sentences of a logic as being either true or false. Distributional representations have been evaluated on a variety of phenomena connected to human concept representation (e.g., Lund et al. (1995) ; Landauer and Dumais (1997) ; Burgess and Lund (1997) ). Here, evaluation means that predictions based on distributional similarity are compared to experimental results from human subjects. So we will interpret distributional representations over a conceptual structure.",
"cite_spans": [
{
"start": 289,
"end": 307,
"text": "Lund et al. (1995)",
"ref_id": "BIBREF24"
},
{
"start": 310,
"end": 336,
"text": "Landauer and Dumais (1997)",
"ref_id": "BIBREF21"
},
{
"start": 339,
"end": 362,
"text": "Burgess and Lund (1997)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Distributional representations stand for intensions. G\u00e4rdenfors (2004) suggests that the intensions of logical expressions should be mental concepts. By adopting this view, we can link distributional representations and logic through a common semantics: Both the intensions of logical expressions and the interpretations of distributional representations are mental concepts. However, there is a technical 1 In the context of logical languages, \"models\" are structures that provide interpretations. In the context of distributional approaches, \"distributional models\" are particular choices of parameters. To avoid confusion, this paper will reserve the term \"model\" for the model-of-a-logic sense.",
"cite_spans": [
{
"start": 53,
"end": 70,
"text": "G\u00e4rdenfors (2004)",
"ref_id": "BIBREF17"
},
{
"start": 406,
"end": 407,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 Clark et al. (2008) encode a model in a vector space in which natural language sentences are mapped to a single-dimensional space that encodes truth and falsehood. This is a vector space representation, but it is not distributional as it is not derived from observed contexts. In particular, it does not constitute a semantics for a distributional representation. \u2203x woodchuck(x) \u2227 see (John, x) sim(woodchuck, groundhog) > \u03b8 \u2203x groundhog(x) \u2227 see(John, x) Figure 1 : Sketch of an example interaction of distributional and logical representations problem: If intensions are mental concepts, they cannot be mappings from possible worlds to extensions, which is the prevalent way of defining intensions. We address this problem through hyper-intensional semantics. Hyper-intensional approaches in formal semantics Lappin, 2001, 2005; Muskens, 2007) were originally introduced to address problems in the granularity of intensions. Crucially, some hyper-intensional approaches have intensions that are abstract objects, with minimal requirements on the nature of these objects. So we can build on them to link some intensions to conceptual structure.",
"cite_spans": [
{
"start": 2,
"end": 21,
"text": "Clark et al. (2008)",
"ref_id": "BIBREF8"
},
{
"start": 388,
"end": 397,
"text": "(John, x)",
"ref_id": null
},
{
"start": 814,
"end": 833,
"text": "Lappin, 2001, 2005;",
"ref_id": null
},
{
"start": 834,
"end": 848,
"text": "Muskens, 2007)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 459,
"end": 467,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Why design a semantics for distributional representations? Our aim is not to explicitly construct conceptual models; that would be at least as hard as constructing an ontology. Rather, our aim is to support inferences. Distributional representations induce synonyms and paraphrases automatically based on distributional similarity (Lin, 1998; Lin and Pantel, 2001) . As Garrette et al. (2011) point out, and as illustrated in Figure 1 , these can be used as inference rules within logical form. But when is such inference projection valid? Our main aim for constructing a joint semantics is to provide a principled basis for answering this question.",
"cite_spans": [
{
"start": 331,
"end": 342,
"text": "(Lin, 1998;",
"ref_id": "BIBREF22"
},
{
"start": 343,
"end": 364,
"text": "Lin and Pantel, 2001)",
"ref_id": "BIBREF23"
},
{
"start": 370,
"end": 392,
"text": "Garrette et al. (2011)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 426,
"end": 434,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the current paper, we construct a first semantics along the lines sketched above. In order to be able to take this first step, we simplify distributional predictions greatly by discretizing them. We want to stress, however, that this is a temporary restriction; our eventual aim is to make use of the ability of distributional models to handle graded and uncertain information as well as ambiguity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Predicting sentence similarity with distributional representations. The distributional representation for a word is typically based on the textual contexts in which it has been observed (Turney and Pantel, 2010) . The distributional representation of a document is typically based on the words that it contains, or on latent classes derived from co-occurrences of those words (Landauer and Dumais, 1997; Blei et al., 2003) . Phrases and sentences occupy an unhappy middle ground between words and documents. They re-appear too rarely for a representation in terms of the textual contexts in which they have been observed, and they are too short for a document-like representation. There are multiple approaches to predicting similarity between sentences based on distributional information. The first computes a single vector space representation for a phrase or sentence in a compositional manner from the representations of the individual words (Coecke et al., 2010; Baroni and Zamparelli, 2010; Grefenstette and Sadrzadeh, 2011) . This approach currently still faces big hurdles, including the problem of encoding the meaning of function words and the problem of predicting similarity for sentences of different structure. The second approach compares two phrases or sentences by computing multiple pairwise similarity values between components (words or smaller phrases) of the two sentences and then combining those similarity values (Socher et al., 2011; Turney, 2012) . The third approach seeks to transform the representation of one sentence into another through term rewriting, where the rewriting rules are based on distributional similarity between words and smaller phrases (Bar-Haim et al., 2007) . The approach of Garrette et al. (2011) can be viewed as falling into the third group. It represents sentences not as syntactic graphs as Bar-Haim et al. (2007) but through logic, and injects weighted inference rules derived from distributional similarity. 
Our approach belongs into this third group. The aim of the semantics that we present in Section 3 is to show that the use of distributional rewriting rules does not change the semantics of a logical expression. A fourth approach is the taken by Clarke (2007 Clarke ( , 2012 , who formalizes the idea of \"meaning as context\" in an algebraic framework that replaces concrete corpora with a generative corpus model that can assign probabilities to arbitrary word sequences. This eliminates the sparseness problem of finite corpora, such that both words and arbitrary phrases can be given distributional representations. Clarke also combines vector spaces and logic-based semantics by proposing a space in which the dimensions",
"cite_spans": [
{
"start": 186,
"end": 211,
"text": "(Turney and Pantel, 2010)",
"ref_id": "BIBREF32"
},
{
"start": 376,
"end": 403,
"text": "(Landauer and Dumais, 1997;",
"ref_id": "BIBREF21"
},
{
"start": 404,
"end": 422,
"text": "Blei et al., 2003)",
"ref_id": "BIBREF5"
},
{
"start": 947,
"end": 968,
"text": "(Coecke et al., 2010;",
"ref_id": "BIBREF11"
},
{
"start": 969,
"end": 997,
"text": "Baroni and Zamparelli, 2010;",
"ref_id": "BIBREF2"
},
{
"start": 998,
"end": 1031,
"text": "Grefenstette and Sadrzadeh, 2011)",
"ref_id": "BIBREF19"
},
{
"start": 1439,
"end": 1460,
"text": "(Socher et al., 2011;",
"ref_id": "BIBREF30"
},
{
"start": 1461,
"end": 1474,
"text": "Turney, 2012)",
"ref_id": "BIBREF33"
},
{
"start": 1686,
"end": 1709,
"text": "(Bar-Haim et al., 2007)",
"ref_id": "BIBREF1"
},
{
"start": 1728,
"end": 1750,
"text": "Garrette et al. (2011)",
"ref_id": "BIBREF18"
},
{
"start": 1849,
"end": 1871,
"text": "Bar-Haim et al. (2007)",
"ref_id": "BIBREF1"
},
{
"start": 2213,
"end": 2225,
"text": "Clarke (2007",
"ref_id": "BIBREF9"
},
{
"start": 2226,
"end": 2241,
"text": "Clarke ( , 2012",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "(IHTT1) p (IHTT2) \u22a5 p (IHTT3) \u00acp \u2194 p \u2192 \u22a5 (IHTT4) r p \u2227 q iff r p and r q (IHTT5) p \u2228 q r iff p r or q r (IHTT6) p q \u2192 r iff p \u2227 q r (IHTT7) p \u2200x B \u03c6 B,\u03a0 iff p \u03c6 (IHTT8) \u03c6(a) \u2203x B \u03c6(x) (where \u03c6 \u2208 B, \u03a0 , and a is a constant in B) (IHTT9) \u03bbu\u03c6(v) \u223c = \u03c6[u/v] (where u is a variable in A, v \u2208 A, \u03c6 \u2208 A, B , and v is not bound when substituted for u in \u03c6) (IHTT10) \u2200s, t \u03a0 s \u223c = t \u2194 (s \u2194 t) (IHTT11) \u2200\u03c6, \u03c8 A,B \u2200u A (\u03c6(u) \u223c = \u03c8(u)) \u2192 \u03c6 \u223c = \u03c8 (IHTT12) \u2200u, v A \u2200\u03c6 A,B u = v \u2192 \u03c6(u) \u223c = \u03c6(v) (IHTT13) \u2200t \u03a0 t \u2228 \u00act)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Table 1: Axioms of the intensional higher-order type theory IHTT of Fox and Lappin (2001) \u2022 If \u03b1 A is a non-logical constant, then Table 2 : Interpretation of IHTT expressions correspond to logic formulas. A word or phrase x is linked to formulas for sequences uxv in which it occurs, and each formula F is generalized to other formulas G that entail F . But it is not clear yet how this representation could be used for inferences.",
"cite_spans": [
{
"start": 68,
"end": 89,
"text": "Fox and Lappin (2001)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 131,
"end": 138,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "||\u03b1|| M,g = F (I(\u03b1)) \u2022 If \u03b1 A is a variable, then ||\u03b1|| M,g = g(\u03b1) \u2022 ||\u03b1 A,B (\u03b2 A )|| M,g = ||\u03b1|| M,g ||\u03b2|| M,g \u2022 If \u03b1 is in A and u is a variable in B, then ||\u03bbu\u03b1|| M,g is a function h : D A \u2192 D B such that for any a \u2208 D A , h(a) = ||\u03b1|| M,g[u/a] \u2022 ||\u00ac\u03c6 \u03a0 || M,g = t iff ||\u03c6|| M,g = f \u2022 ||\u03c6 \u03a0 \u2227 \u03c8 \u03a0 || M,g = t iff ||\u03c6|| M,g = ||\u03c8|| M,g = t \u2022 ||\u03c6 \u03a0 \u2228 \u03c8 \u03a0 || M,g = t iff ||\u03c6|| M,g = t or ||\u03c8|| M,g = t \u2022 ||\u03c6 \u03a0 \u2192 \u03c8 \u03a0 || M,g = t iff ||\u03c6|| M,g = f or ||\u03c8|| M,g = t \u2022 ||\u03c6 \u03a0 \u2194 \u03c8 \u03a0 || M,g = t iff ||\u03c6|| M,g = ||\u03c8|| M,g \u2022 ||\u03b1 A \u223c = \u03b2 A || M,g = t iff ||\u03b1|| M,g = ||\u03b2|| M,g \u2022 ||\u03b1 A = \u03b2 A || M,g = t iff I(\u03b1) = I(\u03b2) \u2022 ||\u2200u A \u03c6 \u03a0 || M,g = t iff for all a \u2208 D A ||\u03c6|| M,g[u/a] = t \u2022 ||\u2203u A \u03c6 \u03a0 || M,g = t iff for some a \u2208 D A ||\u03c6|| M,g[u/a] = t \u2022 \u03c6 \u03a0 is true in M (false in M ) iff ||\u03c6|| M,g = t (f ) for all g. \u2022 \u03c6 \u03a0 is logically true (false) iff \u03c6 is true (false) in every M \u2022 \u03c6 \u03a0 |= \u03c8 \u03a0 iff for every M such that \u03c6 is true in M , \u03c8 is true in M",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Distributions, extensions, and intensions Like the current paper, Copestake and Herbelot (2012) consider the connection between distributional representations and the semantics of logical languages. However, they reach a very different conclusion. They propose using distributional representations as intensions of logical expressions. In addition, they link distributions to extensions by noting that each sentence that contributes to the distributional representation for the word \"woodchuck\" is about some member of the extension of woodchuck. They define the ideal distribution for a concept, for example \"woodchuck\", as the collection of all true statements about all members of the category, in this case all woodchucks in the world.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "In our view, distributions describe general, intensional knowledge, and do not provide reference to extensions, so we will link distributional representations to intensions and not extensions. Concerning the Copestake and Herbelot proposal of distributions as intensions, we consider distributions as representations in need of an interpretation or intension, rather than representations that constitute the intension. 3 Also it is a somewhat unclear how the intension would be defined in practice in the Copestake and Herbelot framework, as it is based on the hypothetical ideal distribution with its potentially infinite number of sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Hyper-intensional semantics The axiom of Extensionality states that if two expressions have the same extension, then they share all their properties. Together with the standard formulation of intensions as functions from possible worlds to extensions, this generates the problem that logically equivalent statements like \"John sleeps\" and \"John sleeps, and Mary runs or does not run\" become intersubstitutable in all contexts, even in contexts like \"Sue believes that. . . \" where they should not be exchangeable. Hyperintensional semantics addresses this problem. In particular, some approaches Lappin, 2001, 2005; Muskens, 2007) address the problem by (1) dropping the axiom of Extensionality, (2) mapping expressions of the logic first to intensions and then mapping the intensions to extensions, and (3) adopting a notion of intensions as abstract objects with minimal restrictions. This makes these approaches relevant for our purposes, as we can add the axioms that we need for a joint semantics of logical and distributional representations. Muskens (2007) has one constraint on intensions that makes the approach unusable for our purposes in its current form: It has intensions and extensions be objects from the same collections of domains -but we would not want to force extensions to be mental objects. Instead we build on the intensional higher-order type theory IHTT from Fox and Lappin (2001) . The set of types of IHTT contains the basic types e (for entity) and \u03a0 (proposition), and if A, B are types, then so is A, B . The logic contains all the usual connectives, plus \" \u223c =\" for extensional equality and \"=\" for intensional equality. Fox and Lappin adopt the axioms shown in Table 1 , which do not include the axiom of Extensionality. 4 A model for IHTT is a tuple M = D, S, L, I, F , where D is a family of non-empty sets such that D A is the set of possible extensions for expressions of type A. 
S is the set of possible intensions, and L \u2286 S is the set of possible intensions for non-logical constants of the logic. I is a function that maps arbitrary expressions of IHTT to the set S of intensions. If \u03b1 is a non-logical constant, then",
"cite_spans": [
{
"start": 596,
"end": 615,
"text": "Lappin, 2001, 2005;",
"ref_id": null
},
{
"start": 616,
"end": 630,
"text": "Muskens, 2007)",
"ref_id": "BIBREF26"
},
{
"start": 1049,
"end": 1063,
"text": "Muskens (2007)",
"ref_id": "BIBREF26"
},
{
"start": 1385,
"end": 1406,
"text": "Fox and Lappin (2001)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 1694,
"end": 1701,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "I(\u03b1) is in L, otherwise I(\u03b1) is in S \u2212 L.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "The function F is a mapping from L (intensions of non-logical constants) to members of D (extensions). A valuation g is a function from the variables of IHTT to members of D such that for all v A it holds that g(v) \u2208 D A . A model of IHTT has to satisfy the following constraints: Table 2 shows the definition of extensions ||.|| M,g of expressions of IHTT.",
"cite_spans": [],
"ref_spans": [
{
"start": 281,
"end": 288,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "5 (M1) If v is a variable, then I(v) = v. (M2) For a model M , if I(\u03b1) = I(\u03b2), then for all g, ||\u03b1|| M,g = ||\u03b2|| M,g .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "In this section we construct a first implementation of the semantics for distributional representations sketched in the introduction. In this semantics, distributional interpretations are interpreted over mental concepts and are linked to the intensions of some logical expressions. We use as a basis the hyperintensional logic IHTT of Fox and Lappin (2001) (Section 2), which does not require intensions to be mappings from possible worlds to extensions, such that we are free to link intensions to mental concepts. The central result of this section will be that the interpretation of sentences of the logic is invariant to rewriting steps such as the one in Figure 1 , which replace a non-logical constant by another based on distributional similarity. The semantics that we present in this paper constitutes a first step. It leaves some important questions open, such as paraphrasing beyond the word level, or graded concept membership.",
"cite_spans": [
{
"start": 336,
"end": 357,
"text": "Fox and Lappin (2001)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 661,
"end": 669,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "A joint semantics for distributional and logical representations",
"sec_num": "3"
},
{
"text": "Typically, the distributional representation for a target word t is computed from the occurrences, or usages, of t in a given corpus. Minimally, a usage is a sequence of words in which the target appears at least once. We will allow for two additional pieces of information in a usage, namely larger discourse context, and non-linguistic context. (Recently, there have been distributional approaches that make use of non-linguistic context, in particular image data (Feng and Lapata, 2010; Bruni et al., 2012) .)",
"cite_spans": [
{
"start": 466,
"end": 489,
"text": "(Feng and Lapata, 2010;",
"ref_id": "BIBREF14"
},
{
"start": 490,
"end": 509,
"text": "Bruni et al., 2012)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Distributional representations",
"sec_num": "3.1"
},
{
"text": "Let W be a set of words (the lexicon), and let Seq(W ) be the set of finite sequences over W . Then a usage over W is a tuple s, t, \u03b4, \u03c9 , where s \u2208 Seq(W ) is a sequence of words such that a word form of t \u2208 W occurs in s at least once, \u03b4 \u2208 \u2206 \u222a {N A} is a (possibly empty) discourse context, and \u03c9 \u2208 \u2126 \u222a {N A} is a (possibly empty) non-linguistic context. We write U(W, \u2206, \u2126) for the set of all usages over W (and \u2206 and \u2126). For any usage u = s, t, \u03b4, \u03c9 , we write target(u) = t. Given a set U \u2286 U(W, \u2206, \u2126) of usages, we write U t = {u \u2208 U | target(u) = t} for the usages of a target word t. Furthermore, we write W U = {t \u2208 W | U t = \u2205} for the set of words that have usages in U .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distributional representations",
"sec_num": "3.1"
},
{
"text": "In distributional approaches, the vector space representation for a target word t is computed from such a set U of usages, typically by mapping U to a single point in vector space (Lund et al., 1995; Landauer and Dumais, 1997) or a set of points (Sch\u00fctze, 1998; Reisinger and Mooney, 2010) . This makes it possible to use linear algebra in modeling semantics. However, for our current purposes, we do not need to specify any particular mapping to a vector space, and can simply work with the underlying set U of usages: A finite set U of usages over W constitutes a distributional representation for W U . The distributional representation for a word t \u2208 W is U t .",
"cite_spans": [
{
"start": 180,
"end": 199,
"text": "(Lund et al., 1995;",
"ref_id": "BIBREF24"
},
{
"start": 200,
"end": 226,
"text": "Landauer and Dumais, 1997)",
"ref_id": "BIBREF21"
},
{
"start": 246,
"end": 261,
"text": "(Sch\u00fctze, 1998;",
"ref_id": "BIBREF29"
},
{
"start": 262,
"end": 289,
"text": "Reisinger and Mooney, 2010)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Distributional representations",
"sec_num": "3.1"
},
{
"text": "We want to interpret distributional representations over conceptual structure. But what is conceptual structure? We know that concepts are linked by different semantic relations, including is-a, and partof (Fellbaum, 1998) , they can overlap, and they are associated with definitional features (Murphy, 2002) . Eventually, all of these properties may be useful to include in the semantics of distributional representations. But for this first step we work with a much simpler definition. We define a conceptual structure simply as a set of (atomic, unconnected) concepts.",
"cite_spans": [
{
"start": 206,
"end": 222,
"text": "(Fellbaum, 1998)",
"ref_id": "BIBREF13"
},
{
"start": 294,
"end": 308,
"text": "(Murphy, 2002)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A semantics for distributional representations",
"sec_num": "3.2"
},
{
"text": "An individual usage of a word t can refer to a single mental concept. For example, the usage of \"bank\" in (1) clearly refers to a \"financial institution\" concept, not the land at the side of a river. But an individual usage can also refer to multiple mental concepts when there is ambiguity as in (2), or when there is too little information to determine the intended meaning as in (3). 67 (1)",
"cite_spans": [
{
"start": 387,
"end": 389,
"text": "67",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A semantics for distributional representations",
"sec_num": "3.2"
},
{
"text": "The bank engaged in risky stock trades, bank, \u03b4, \u03c9",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A semantics for distributional representations",
"sec_num": "3.2"
},
{
"text": "(2) Why fix dinner when it isn't broken, fix, \u03b4, \u03c9 8",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A semantics for distributional representations",
"sec_num": "3.2"
},
{
"text": "(3) bank, bank, NA, NA From this link between individual usages and concepts, we can derive a link between distributional representations and concepts: The representation U t of a word t is connected to all concepts to which the usages in U t link. Formally, a conceptual model for U(W, \u2206, \u2126) is a tuple C = I u , C , where C is a set of concepts, and the function I u : U(W, \u2206, \u2126) \u2192 2 C is an interpretation function for usages that maps each usage to a set of concepts. 9 A conceptual model C together with a finite set U \u2286 U(W, \u2206, \u2126) of usages define a conceptual mapping for words. We write I C,U (w) = u\u2208Uw I u (u) for the set of concepts associated with w.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A semantics for distributional representations",
"sec_num": "3.2"
},
{
"text": "Distributional approaches centrally use some similarity measure, for example cosine similarity, on pairs of distributional representations, usually pairs of points in vector space. Since we represent a word t directly by its set U t of usages rather than a point in vector space derived from U t , we instead have a similarity measure sim(U 1 , U 2 ) on sets of usages. We assume a range of [0, 1] for this similarity measure. A conceptual model can be used to evaluate the appropriateness of similarity predictions: A prediction is appropriate if it is high for two usage sets that refer to the same concepts, or low for two usage sets that refer to different concepts. Formally, a similarity prediction sim(U 1 , U 2 ) is appropriate for a conceptual model C = I u , C and threshold \u03b8 iff",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A semantics for distributional representations",
"sec_num": "3.2"
},
{
"text": "\u2022 either sim(U 1 , U 2 ) \u2265 \u03b8 and u\u2208U 1 I u (u) = u\u2208U 2 I u (u),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A semantics for distributional representations",
"sec_num": "3.2"
},
{
"text": "9x woodchuck(x)^see(John, x) Figure 2 : Enriching the information about non-logical constants: Constants are associated with sets of concepts (circles) and, through them, with distributional representations",
"cite_spans": [],
"ref_spans": [
{
"start": 29,
"end": 37,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "A semantics for distributional representations",
"sec_num": "3.2"
},
{
"text": "\u2022 or sim(U 1 , U 2 ) < \u03b8 and u\u2208U 1 I u (u) = u\u2208U 2 I u (u).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A semantics for distributional representations",
"sec_num": "3.2"
},
{
"text": "This formulation of appropriateness is simplistic in that it discretizes similarity predictions into two classes: above or below threshold \u03b8. This is due to our current impoverished view of concepts as disjoint atoms. When we introduce a conceptual similarity measure within conceptual models, a more fine-grained evaluation of distributional similarity ratings becomes available. Such a conceptual similarity measure would be justified, as humans can judge similarity between concepts (Rubenstein and Goodenough, 1965 ), but we do not do it here in order to keep our models maximally simple.",
"cite_spans": [
{
"start": 486,
"end": 518,
"text": "(Rubenstein and Goodenough, 1965",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A semantics for distributional representations",
"sec_num": "3.2"
},
{
"text": "We now link the intensions of some logical expressions to mental concepts, using the logic IHTT as a basis. We will need to constrain the behavior of intensions more than Fox and Lappin do. In particular, we add the following two requirements to models M = D, S, L, I, F of IHTT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A joint semantics for logical form and distributional representations",
"sec_num": "3.3"
},
{
"text": "(M3) If the expression \u03b1 \u2208 A is the result of beta-reducing the expression \u03b2 \u2208 A, then I(\u03b1) = I(\u03b2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A joint semantics for logical form and distributional representations",
"sec_num": "3.3"
},
{
"text": "(M4) If I(u A ) = I(v A ), then for all \u03c6 \u2208 \u27e8A, B\u27e9, I(\u03c6(u)) = I(\u03c6(v)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A joint semantics for logical form and distributional representations",
"sec_num": "3.3"
},
{
"text": "(M4) allows for the exchange of an intensionally equal expression without changing the intension of the overall expression.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A joint semantics for logical form and distributional representations",
"sec_num": "3.3"
},
{
"text": "We now define models that join an intensional model of IHTT with a conceptual model for a distributional representation. In particular, we link constants of the logic to sets of concepts, and through them, to distributional representations, as sketched in Figure 2 . If the word \"woodchuck\" is associated with the concept set C woodchuck = I C,U (woodchuck), then the intension of the constant woodchuck will also be C woodchuck . We proceed in two steps: In the definition of joint models, we require the existence of a mapping from words to non-logical constants that share the same interpretation. In a second step, we require semantic constructions to respect this mapping, such that the logical expression associated with \"woodchuck\" will be \u03bbx woodchuck(x) rather than \u03bbx guppy(x) . Note that only words in W U have distributional representations associated with them; for words in W \u2212 W U , neither their translation to logical expressions nor the intensions of those expressions are constrained in any way.",
"cite_spans": [],
"ref_spans": [
{
"start": 256,
"end": 264,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "A joint semantics for logical form and distributional representations",
"sec_num": "3.3"
},
{
"text": "Let M = D, S, L, I, F be a model for IHTT, let C = I u , C be a conceptual model for U(W, \u2206, \u2126), and let U be a finite subset of U(W, \u2206, \u2126). Then M C = D, S, L, I, F , I u , C is an intensional conceptual model for IHTT and U(W, \u2206, \u2126) based on U if (M5) There exists a function h from W U to the non-logical constants of IHTT such that for all w \u2208 W U , I C,U (w) = I(h(w))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A joint semantics for logical form and distributional representations",
"sec_num": "3.3"
},
{
"text": "(M6) For all w 1 , w 2 \u2208 W U , if I C,U (w 1 ) = I C,U (w 2 ), then h(w 1 ) and h(w 2 ) have the same type.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A joint semantics for logical form and distributional representations",
"sec_num": "3.3"
},
{
"text": "We say that the model M C contains M and C. Constraint (M5) links each word to a non-logical constant such that the distributional interpretation of the word and the intension of the constant are the same. (M6) states that if two words have the same distributional interpretation, their associated constants have the same type. We next define semantic constructions sem in general, and semantic constructions that connect the translation sem(w) of a word w to its associated constant h(w). A semantic construction function for a set W of words and a logical language L is a partial function sem : Seq(W ) \u2192 L such that sem(w) is defined for all w \u2208 W . sem(.) maps sequences of words over W to expressions from L. A sequence s \u2208 Seq(W ) is called grammatical if sem(s) is defined. A semantic construction sem is an intended semantic construction for an intensional conceptual model M = D, S, L, I, F , I u , C based on U if the following constraint holds for the function h from (M5):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A joint semantics for logical form and distributional representations",
"sec_num": "3.3"
},
{
"text": "(M7) For each type A there exists some expression \u03c6 A such that for all w \u2208 W U , sem(w) is equivalent (modulo beta-reduction) to \u03c6 A (h(w)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A joint semantics for logical form and distributional representations",
"sec_num": "3.3"
},
{
"text": "(M7) states that the construction of translations sem(w) from non-logical constants h(w) must be uniform for all words of the same semantic type. For example, if for the word \"woodchuck\" we have h(woodchuck) = woodchuck, an expression of type e, \u03a0 , then the expression \u03c6 e,\u03a0 = \u03bbP \u03bbx(P (x)) will map woodchuck to \u03bbx(woodchuck(x)) = sem(woodchuck).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A joint semantics for logical form and distributional representations",
"sec_num": "3.3"
},
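The uniformity that (M5) and (M7) impose can be sketched in Python: each word maps via h to a typed constant, and one construction expression per type builds the lexical entry. Lambda terms are represented as plain strings here, and all names (h, phi, sem) are illustrative stand-ins for the formal definitions, not the paper's machinery.

```python
# Sketch of (M5)/(M7): sem(w) is built uniformly from h(w) by a
# per-type expression phi_A. Types and terms are simple Python values.
h = {"woodchuck": ("woodchuck", ("e", "PI")),  # constant name, type <e,PI>
     "guppy": ("guppy", ("e", "PI"))}

# One construction expression per type: phi_<e,PI> = lambda P lambda x. P(x)
phi = {("e", "PI"): lambda c: f"\\x({c}(x))"}

def sem(word):
    """Translate a word to its lexical entry, uniformly per type (M7)."""
    constant, typ = h[word]
    return phi[typ](constant)

print(sem("woodchuck"))  # \x(woodchuck(x))
```

Because the same phi is used for every word of type ⟨e, Π⟩, sem("guppy") comes out as \x(guppy(x)) by exactly the same construction.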
{
"text": "In Section 2 we have sketched a framework for the interaction of logic and distributional representations based on Bar-Haim et al. (2007) . Distributional representations can be used to predict semantic similarity between pairs of words and in particular to predict synonymy between words (Lin, 1998) . Distributionally induced synonym pairs can be used as rewriting rules that transform sentence representations. In our case, the representations to be transformed are expressions of the logic. Two sentences count as synonymous if it is possible to transform the representation of one sentence into the representation of the other, using both distributional rewriting rules and the axioms of the logic.",
"cite_spans": [
{
"start": 115,
"end": 137,
"text": "Bar-Haim et al. (2007)",
"ref_id": "BIBREF1"
},
{
"start": 289,
"end": 300,
"text": "(Lin, 1998)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Synonym replacement",
"sec_num": "3.4"
},
{
"text": "We start out by showing that the application of a rewriting rule that exchanges one non-logical constant of IHTT for another constant with the same intension leaves both the intension and the extension of the overall logical expression unchanged. Given a logical expression \u03c6, we write \u03c6[some b/a] for the set of expressions obtained from \u03c6 by replacing zero or more occurrences of a by b.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synonym replacement",
"sec_num": "3.4"
},
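The set φ[some b/a] can be made concrete with a small sketch: over expressions flattened to token tuples, it is the set of sequences obtained by replacing any subset of the occurrences of a by b. This is an illustration of the notation only, not of the IHTT term language.

```python
# Sketch of phi[some b/a]: replace zero or more occurrences of a by b.
from itertools import product

def some_rewrites(tokens, a, b):
    """All token sequences obtained by rewriting some subset of the
    occurrences of a to b."""
    positions = [i for i, tok in enumerate(tokens) if tok == a]
    results = set()
    for choice in product([a, b], repeat=len(positions)):
        out = list(tokens)
        for pos, tok in zip(positions, choice):
            out[pos] = tok
        results.add(tuple(out))
    return results

expr = ("woodchuck", "(", "x", ")", "and", "woodchuck", "(", "y", ")")
variants = some_rewrites(expr, "woodchuck", "groundhog")
print(len(variants))  # two occurrences -> 4 variants, including expr itself
```

Note that the original expression is always a member, since "zero occurrences" is a permitted choice.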
{
"text": "Proposition 1: Soundness of non-logical constant rewriting. Let M = D, S, L, I, F be an intensional model for IHTT, and let a, b be non-logical constants of IHTT of type A such that I(a) = I(b). Then for any expression \u03c6 of IHTT and any \u03c6 \u2208 \u03c6[some b/a], I(\u03c6) = I(\u03c6 ), and for any valuation g, ||\u03c6|| M,g = ||\u03c6 || M,g .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synonym replacement",
"sec_num": "3.4"
},
{
"text": "Proof. Let x A be a variable that does not occur in \u03c6. Then for each \u03c6 \u2208 \u03c6[some b/a] there exists an expression \u03c8 \u2208 \u03c6[some x/a] such that (\u03bbx\u03c8)(a) beta-reduces to \u03c6 and (\u03bbx\u03c8)(b) beta-reduces to \u03c6 . As I(a) = I(b), we have I((\u03bbx\u03c8)(a)) = I((\u03bbx\u03c8)(b)) by (M4). So by (M3), I(\u03c6) = I(\u03c6 ). From this it follows that for any valuation g, ||\u03c6|| M,g = ||\u03c6 || M,g by (M2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synonym replacement",
"sec_num": "3.4"
},
{
"text": "We call two words synonyms if they refer to the same set of concepts. Formally, let U be a finite subset of U(W, \u2206, \u2126) that is a distributional representation for W U , and C = I u , C a conceptual model for U(W, \u2206, \u2126). A word p \u2208 W U is a synonym for t \u2208 W U by C and U if I C,U (t) = I C,U (p).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synonym replacement",
"sec_num": "3.4"
},
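Under this definition, synonymy is simply equality of concept sets. A minimal sketch, with I C,U modeled as a dictionary from words to frozensets of invented concept labels:

```python
# Sketch of the synonymy definition: two words are synonyms by C and U
# iff their distributional interpretations are the same concept set.
I_CU = {
    "woodchuck": frozenset({"WOODCHUCK"}),
    "groundhog": frozenset({"WOODCHUCK"}),
    "guppy": frozenset({"GUPPY"}),
}

def is_synonym(p, t):
    """p is a synonym for t iff I_CU(t) = I_CU(p)."""
    return I_CU[p] == I_CU[t]

print(is_synonym("groundhog", "woodchuck"), is_synonym("guppy", "woodchuck"))
```

The relation is trivially symmetric and transitive, which is what licenses using synonym pairs as rewriting rules below.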
{
"text": "We would like to show that if t and p are synonyms, then exchanging t for p changes neither the intension nor the extension of the logical translation for the sentence. To do so, we first show that exchanging t for p corresponds to applying constant rewriting on the sentence representation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synonym replacement",
"sec_num": "3.4"
},
{
"text": "Note, however, that the logical translation of a sentence depends not only on the words, but also on the syntactic structure of the sentence. If a given syntactic analysis framework only allows for the bracketing \"(small (tree house))\" and at the same time only allows for the bracketing \"((little tree) house)\", then the two phrases will not receive the same semantics even if the model considers \"small\" and \"little\" to be synonyms. So we will show that if replacement by a synonym within a given syntactic structure again yields a valid syntactic structure, then the semantics of the sentence remains unchanged. For any sequence s \u2208 Seq(W ) of words over W , we write T (s) for the set of constituent structure analyses for s. For \u03c4 \u2208 T (s), we write \u03c4 [p/t] for the syntactic graph that is exactly like \u03c4 except that all leaves labeled t are replaced by leaves labeled p. We write sem(\u03c4 ) for the logical translation of s that is based on the syntactic structure of \u03c4 . We assume that there exists exactly one translation sem(\u03c4 ) for each syntactic structure \u03c4 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synonym replacement",
"sec_num": "3.4"
},
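The τ[p/t] operation is a plain leaf relabeling that leaves the bracketing untouched. A sketch over trees represented as nested lists with string leaves; this mirrors the notation, not any particular syntactic formalism:

```python
# Sketch of tau[p/t]: replace every leaf labeled t by p, keeping the
# constituent structure of the tree unchanged.
def substitute_leaves(tree, t, p):
    """Return a copy of tree with all leaves labeled t relabeled p."""
    if isinstance(tree, str):
        return p if tree == t else tree
    return [substitute_leaves(child, t, p) for child in tree]

tau = ["small", ["tree", "house"]]  # (small (tree house))
print(substitute_leaves(tau, "small", "little"))
```

Because only leaves change, the result has exactly the shape required by the lemma's side condition that τ[p/t] again be a valid analysis.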
{
"text": "Lemma 2. Let M C be be an intensional conceptual model for IHTT and U(W, \u2206, \u2126) based on U \u2286 U(W, \u2206, \u2126) that contains M = D, S, L, I, F and C = I u , C . Let t, p \u2208 W U be synonyms by C and U , and let s \u2208 Seq(W ) be a sequence with syntactic analysis \u03c4 \u2208 T (s) such that \u03c4 [p/t] \u2208 T (s[p/t]). Then for any intended semantic construction sem for M C and U , sem(\u03c4 [p/t]) is equivalent modulo beta-reduction to some member of sem(\u03c4 )[some h(p)/h(t)]. If s = t, then sem(\u03c4 ) = sem(t) and sem(\u03c4 [p/t]) = sem(p). By (M5) and because t and p are synonyms, we have I(h(t)) = I C,U (t) = I C,U (p) = I(h(p)). From this it follows by (M6) that the non-logical constants h(t) and h(p) have the same semantic type A. Then by (M7) there exists a logical expression \u03c6 A such that sem(\u03c4 ) = sem(t) is equivalent modulo beta-reduction to \u03c6 A (h(t)). At the same time, sem(\u03c4 [p/t]) = sem(p) is equivalent modulo beta-reduction to \u03c6 A (h(p)), which is equivalent modulo beta-reduction to a member of \u03c6 A (h(t)) [some h(p)/h(t)], which in turn is equivalent modulo beta-reduction so a member of sem(\u03c4 )[some h(p)/h(t)].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synonym replacement",
"sec_num": "3.4"
},
{
"text": "Now assume that s comprises more than one word. Let the root of \u03c4 have n children that are the roots of subtrees \u03c4 1 . . . \u03c4 n . There is some semantic construction rule associated with the root of \u03c4 that can be written as an expression \u03c6 of IHTT such that \u03c6(sem(\u03c4 1 )) . . . (sem(\u03c4 n )) beta-reduces to sem(\u03c4 ). By the inductive hypothesis, sem(\u03c4 i [p/t]) is equivalent modulo beta-reduction to some \u03c8 i \u2208 sem(\u03c4 i )[some h(p)/h(t)] for 1 \u2264 i \u2264 n. The expression \u03c6 remains unchanged between sem(\u03c4 ) and sem(\u03c4 [p/t]) because only leaves of the tree were changed and the overall constituent structure remained the same. So the expression sem(\u03c4 [p/t]) is equivalent modulo beta-reduction to \u03c6(\u03c8 1 ) . . . (\u03c8 n ) \u2208 \u03c6(sem(\u03c4 1 )) . . . (sem(\u03c4 n )) [some h(p)/h(t)], which in turn is equivalent modulo beta-reduction to sem(\u03c4 )[some h(p)/h(t)].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synonym replacement",
"sec_num": "3.4"
},
{
"text": "The reason why we have used \u03c6[some b/a] rather than replacement of all occurrences is that there is no guarantee that the corresponding non-logical constant h(t) for a word t is used only in the lexical entry of t. For example, the expression \u03c6 e,\u03a0 of (M7) could be \u03bbP \u03bbx woodchuck(x) \u2227 P (x) , making the lexical entry for \"guppy\" \u03bbx woodchuck(x) \u2227 guppy(x) . Or the semantic construction expression \u03c6 for NPs could contain the constant woodchuck. However, now we are in a position to show that this does not matter, and that a constant rewriting rule can be applied to all occurrences of h(t), whether in the lexical entry for t or elsewhere. At the same time, we show that replacement of a word by a synonym does not change the interpretation of the sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synonym replacement",
"sec_num": "3.4"
},
{
"text": "Proposition 3: Synonym replacement as constant replacement. Let M C be an intensional conceptual model for IHTT and U(W, \u2206, \u2126) based on U \u2286 U(W, \u2206, \u2126) that contains M = \u27e8D, S, L, I, F\u27e9 and C = \u27e8I u , C\u27e9. Let t, p \u2208 W U be synonyms by C and U , and let s \u2208 Seq(W ) be a sequence with syntactic analysis \u03c4 \u2208 T (s) such that \u03c4[p/t] \u2208 T (s[p/t]). Then for any valuation g, and any intended semantic construction sem for M C and U , I(sem(\u03c4)) = I(sem(\u03c4[p/t])) = I(sem(\u03c4)[h(p)/h(t)]), and ||sem(\u03c4)|| M,g = ||sem(\u03c4[p/t])|| M,g = ||sem(\u03c4)[h(p)/h(t)]|| M,g.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synonym replacement",
"sec_num": "3.4"
},
{
"text": "Proof. By Lemma 2, the semantic representation of the changed syntactic tree, sem(\u03c4 [p/t]), is equivalent modulo beta-reduction to some \u03c8 \u2208 sem ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synonym replacement",
"sec_num": "3.4"
},
{
"text": "We extend the list of axioms for IHTT from Table 1 by two additional axioms that correspond to the constraints (M3) and (M4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.5"
},
{
"text": "(IHTT14) (\u03bbu\u03c6)(v) = \u03c6[v/u] (where u is a variable in A, v \u2208 A, \u03c6 \u2208 \u27e8A, B\u27e9, and v is not bound when substituted for u in \u03c6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.5"
},
{
"text": "(IHTT15) \u2200u, v A \u2200\u03c6 A,B u = v \u2192 \u03c6(u) = \u03c6(v)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.5"
},
{
"text": "These axioms parallel (IHTT9) and (IHTT12) but state intensional rather than extensional equality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.5"
},
{
"text": "Synonymy predictions from the distributional representation can be transformed into rewriting rules: If the words t and p are synonyms by the distributional representation U , then we generate the rewriting rule h(t) \u2192 h(p). As Proposition 3 shows, this rewriting rule can be applied indiscriminately to a logical expression, and is not restricted to the lexical entry for t. But since the logic is equipped with inference capability and is not a passive representation like the syntactic graphs that Bar-Haim et al. (2007) used, we can alternatively just inject an expression h(t) = h(p), which states intensional equality, into the logical representation for the parsed sentence \u03c4 . The logical representation for \u03c4 [p/t] can then be inferred using (IHTT14) and (IHTT15).",
"cite_spans": [
{
"start": 501,
"end": 523,
"text": "Bar-Haim et al. (2007)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.5"
},
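Since Proposition 3 licenses replacing all occurrences of the constant, a synonymy prediction can be compiled directly into a rule h(t) → h(p) and applied everywhere in the flattened logical expression. A sketch with invented names, using token tuples as before:

```python
# Sketch of turning a distributional synonymy prediction into a constant
# rewriting rule h(t) -> h(p), applied to every occurrence (as licensed
# by Proposition 3). All names and data are illustrative.
def rewrite_rule(t, p, h):
    """Build the rule h(t) -> h(p) from a synonym pair (t, p)."""
    return (h[t], h[p])

def apply_rule(tokens, rule):
    """Replace every occurrence of the rule's left-hand constant."""
    lhs, rhs = rule
    return tuple(rhs if tok == lhs else tok for tok in tokens)

h = {"woodchuck": "woodchuck", "groundhog": "groundhog"}
rule = rewrite_rule("woodchuck", "groundhog", h)
expr = ("exists", "x", "woodchuck", "(", "x", ")")
print(apply_rule(expr, rule))
```

Alternatively, as the text notes, one can inject the intensional equality h(t) = h(p) into the representation and let (IHTT14) and (IHTT15) derive the rewritten form.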
{
"text": "In this paper we have proposed a semantics for distributional representations, namely that each point in vector space stands for a set of mental concepts. We have provided a coarse-grained evaluation for distributional representations in which their similarity predictions are evaluated against conceptual equality or inequality. We have extended this approach to a joint semantics of distributional and logical representations by linking the intensions of some logical expressions to mental concepts as well: If the distributional representation for a word w is interpreted as a set C of concepts, then the non-logical constant linked to the lexical entry for w will have as its intension the same set C. We have used hyperintensional semantics as a basis for this joint semantics. We have been able to show that distributional rewriting rules that exchange non-logical constants with the same intension do not change the intension or extension of the overall logical expression. These rewriting rules can be used to compute the logical representation of a sentence after exchanging a word for its synonym.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and outlook",
"sec_num": "4"
},
{
"text": "The current joint semantics is, however, only a first step, and leaves many important questions open. We consider the following three to be especially important. (1) Polysemy. Many synonym pairs can only be substituted for one another in particular sentence contexts. For example \"correct\" is a synonym for \"fix\" that can be substituted in the context of \"The programmer fixed the error\", but not in \"The cook fixed dinner.\" This means that the words \"fix\" and \"correct\" do not map to the same set of concepts, but they are exchangeable in particular contexts. So we would want to say that \"fix\" and \"correct\" are synonyms with respect to a usage u = s, fix, \u03b4, \u03c9 if I u (u) = I u ( s[correct/fix], correct, \u03b4, \u03c9 ). The main challenge for incorporating polysemy is to have intensions change based on the context of use.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and outlook",
"sec_num": "4"
},
{
"text": "(2) Distributional similarity of larger phrases. There is considerable work both on the distributional similarity of phrases and sentences (Coecke et al., 2010; Baroni and Zamparelli, 2010; Grefenstette and Sadrzadeh, 2011) and on the distributional similarity of phrases with open argument slots, such as \"X solves Y\" and \"X finds a solution to Y\" (Lin and Pantel, 2001; Szpektor and Dagan, 2008; Berant et al., 2011) . We would like to use these results to do distributionally driven replacement of multi-word paraphrases in a joint distributional and logical framework. But this requires a semantics for distributional representations of larger phrases. If we assume some sort of conceptual structures as semantics, the next question is whether all logical expressions should be associated with conceptual structures: Should the intension of a variable be something conceptual?",
"cite_spans": [
{
"start": 139,
"end": 160,
"text": "(Coecke et al., 2010;",
"ref_id": "BIBREF11"
},
{
"start": 161,
"end": 189,
"text": "Baroni and Zamparelli, 2010;",
"ref_id": "BIBREF2"
},
{
"start": 190,
"end": 223,
"text": "Grefenstette and Sadrzadeh, 2011)",
"ref_id": "BIBREF19"
},
{
"start": 349,
"end": 371,
"text": "(Lin and Pantel, 2001;",
"ref_id": "BIBREF23"
},
{
"start": 372,
"end": 397,
"text": "Szpektor and Dagan, 2008;",
"ref_id": "BIBREF31"
},
{
"start": 398,
"end": 418,
"text": "Berant et al., 2011)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and outlook",
"sec_num": "4"
},
{
"text": "(3) Gradience. In this paper we have assumed that the link from usage to concept is binary -either present or not -, and also that there are no relations between concepts. Both assumptions are simplifications: Concepts have \"fuzzy boundaries\" (Hampton, 2007) , and cognizers can distinguish degrees of similarity between concepts (Rubenstein and Goodenough, 1965) . By modeling this gradience, we could then talk about degrees of similarity between words and phrases, not just a binary choice of either synonymy or non-synonymy. But this will require dealing with probabilities or weights in the model and also in the logic.",
"cite_spans": [
{
"start": 243,
"end": 258,
"text": "(Hampton, 2007)",
"ref_id": "BIBREF20"
},
{
"start": 330,
"end": 363,
"text": "(Rubenstein and Goodenough, 1965)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and outlook",
"sec_num": "4"
},
{
"text": "Though it should be noted that there is a debate within psychology on whether mental conceptual knowledge is actually distributional in nature(Landauer and Dumais, 1997;Barsalou, 2008;Andrews et al., 2009).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We write \u03b1A to indicate that expression \u03b1 is of type A.5 Fox and Lappin mention that one could add the constraint that if \u03b1, \u03b1 differ only in the names of bound variables, then I(\u03b1) = I(\u03b1 ). We do not do that here, since we are only concerned with replacing non-logical constants in the current paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For the purpose of this paper we make the simplifying assumption that concepts have \"strict boundaries\": A usage either does or does not refer to a concept. We do not model cases where a usage is related to a concept, but is not a clear match.7 Another possible reason for one usage mapping to multiple mental concepts is concept overlap(Murphy, 2002).8 Advertisement for a supermarket in Austin, Texas..9 We write 2 S for the power set of a set S.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Acknowledgements. This research was supported in part by the NSF CAREER grant IIS 0845925 and by the DARPA DEFT program under AFRL grant FA8750-13-2-0026. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the view of DARPA, AFRL or the US government. Warmest thanks to John Beavers and Gemma Boleda, as well as the members of the Austin Computational Linguistics Tea and the anonymous reviewers, for very helpful discussions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Integrating experiential and distributional data to learn semantic representations",
"authors": [
{
"first": "M",
"middle": [],
"last": "Andrews",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Vigliocco",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Vinson",
"suffix": ""
}
],
"year": 2009,
"venue": "Psychological Review",
"volume": "116",
"issue": "3",
"pages": "463--498",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrews, M., G. Vigliocco, and D. Vinson (2009). Integrating experiential and distributional data to learn semantic representations. Psychological Review 116(3), 463-498.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Semantic inference at the lexical-syntactic level",
"authors": [
{
"first": "R",
"middle": [],
"last": "Bar-Haim",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Greental",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Shnarch",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bar-Haim, R., I. Dagan, I. Greental, and E. Shnarch (2007). Semantic inference at the lexical-syntactic level. In Proceedings of AAAI, Vancouver, Canada.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Nouns are vectors, adjectives are matrices: Representing adjectivenoun constructions in semantic space",
"authors": [
{
"first": "M",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Zamparelli",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Baroni, M. and R. Zamparelli (2010). Nouns are vectors, adjectives are matrices: Representing adjective- noun constructions in semantic space. In Proceedings of EMNLP, Cambridge, MA.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Grounded Cognition",
"authors": [
{
"first": "L",
"middle": [
"W"
],
"last": "Barsalou",
"suffix": ""
}
],
"year": 2008,
"venue": "Annual Review of Psychology",
"volume": "59",
"issue": "1",
"pages": "617--645",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barsalou, L. W. (2008). Grounded Cognition. Annual Review of Psychology 59(1), 617-645.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Global learning of typed entailment rules",
"authors": [
{
"first": "J",
"middle": [],
"last": "Berant",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Goldberger",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Berant, J., I. Dagan, and J. Goldberger (2011). Global learning of typed entailment rules. In Proceedings of ACL, Portland, OR.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Latent Dirichlet allocation",
"authors": [
{
"first": "D",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "M",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Machine Learning Research",
"volume": "3",
"issue": "",
"pages": "993--1022",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Blei, D. M., A. Ng, and M. I. Jordan (2003). Latent Dirichlet allocation. Journal of Machine Learning Research 3, 993-1022.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Distributional semantics in technicolor",
"authors": [
{
"first": "E",
"middle": [],
"last": "Bruni",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Boleda",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Tran",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bruni, E., G. Boleda, M. Baroni, and N. Tran (2012). Distributional semantics in technicolor. In Pro- ceedings of ACL, Jeju Island, Korea.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Modelling parsing constraints with high-dimensional context space",
"authors": [
{
"first": "C",
"middle": [],
"last": "Burgess",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Lund",
"suffix": ""
}
],
"year": 1997,
"venue": "Language and Cognitive Processes",
"volume": "12",
"issue": "",
"pages": "177--210",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Burgess, C. and K. Lund (1997). Modelling parsing constraints with high-dimensional context space. Language and Cognitive Processes 12, 177-210.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A compositional distributional model of meaning",
"authors": [
{
"first": "S",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Coecke",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Sadrzadeh",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of QI",
"volume": "",
"issue": "",
"pages": "133--140",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Clark, S., B. Coecke, and M. Sadrzadeh (2008). A compositional distributional model of meaning. In Proceedings of QI, Oxford, UK, pp. 133-140.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Context-theoretic Semantics for Natural Language: an Algebraic Framework",
"authors": [
{
"first": "D",
"middle": [],
"last": "Clarke",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Clarke, D. (2007). Context-theoretic Semantics for Natural Language: an Algebraic Framework. Ph. D. thesis, University of Sussex.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A context-theoretic framework for compositionality in distributional semantics",
"authors": [
{
"first": "D",
"middle": [],
"last": "Clarke",
"suffix": ""
}
],
"year": 2012,
"venue": "Computational Linguistics",
"volume": "38",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Clarke, D. (2012). A context-theoretic framework for compositionality in distributional semantics. Com- putational Linguistics 38(1).",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Mathematical foundations for a compositional distributed model of meaning",
"authors": [
{
"first": "B",
"middle": [],
"last": "Coecke",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Sadrzadeh",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2010,
"venue": "Lambek Festschrift",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Coecke, B., M. Sadrzadeh, and S. Clark (2010). Mathematical foundations for a compositional dis- tributed model of meaning. Lambek Festschrift, Linguistic Analysis 36.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Lexicalised compositionality",
"authors": [
{
"first": "A",
"middle": [],
"last": "Copestake",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Herbelot",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Copestake, A. and A. Herbelot (2012, July). Lexicalised compositionality. Unpublished draft.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "WordNet: An electronic lexical database",
"authors": [
{
"first": "C",
"middle": [],
"last": "Fellbaum",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fellbaum, C. (Ed.) (1998). WordNet: An electronic lexical database. Cambridge, MA: MIT Press.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Visual information in semantic representation",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of HLT-NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Feng, Y. and M. Lapata (2010). Visual information in semantic representation. In Proceedings of HLT- NAACL, Los Angeles, California.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A framework for the hyperintensional semantics of natural language with two implementations",
"authors": [
{
"first": "C",
"middle": [],
"last": "Fox",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Lappin",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of LACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fox, C. and S. Lappin (2001). A framework for the hyperintensional semantics of natural language with two implementations. In P. de Groote, G. Morrill, and C. Retore (Eds.), Proceedings of LACL, Le Croisic, France.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Foundations of Intensional Semantics",
"authors": [
{
"first": "C",
"middle": [],
"last": "Fox",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Lappin",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fox, C. and S. Lappin (2005). Foundations of Intensional Semantics. Wiley-Blackwell.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Conceptual spaces",
"authors": [
{
"first": "P",
"middle": [],
"last": "G\u00e4rdenfors",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G\u00e4rdenfors, P. (2004). Conceptual spaces. Cambridge, MA: MIT press.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Integrating logical representations with probabilistic information using markov logic",
"authors": [
{
"first": "D",
"middle": [],
"last": "Garrette",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Erk",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Mooney",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of IWCS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Garrette, D., K. Erk, and R. Mooney (2011). Integrating logical representations with probabilistic infor- mation using markov logic. In Proceedings of IWCS, Oxford, UK.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Experimental support for a categorical compositional distributional model of meaning",
"authors": [
{
"first": "E",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Sadrzadeh",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grefenstette, E. and M. Sadrzadeh (2011). Experimental support for a categorical compositional distri- butional model of meaning. In Proceedings of EMNLP, Edinburgh, Scotland, UK.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Typicality, graded membership, and vagueness",
"authors": [
{
"first": "J",
"middle": [
"A"
],
"last": "Hampton",
"suffix": ""
}
],
"year": 2007,
"venue": "Cognitive Science",
"volume": "31",
"issue": "",
"pages": "355--384",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hampton, J. A. (2007). Typicality, graded membership, and vagueness. Cognitive Science 31, 355-384.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A solution to Platos problem: the latent semantic analysis theory of acquisition, induction, and representation of knowledge",
"authors": [
{
"first": "T",
"middle": [],
"last": "Landauer",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Dumais",
"suffix": ""
}
],
"year": 1997,
"venue": "Psychological Review",
"volume": "104",
"issue": "2",
"pages": "211--240",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Landauer, T. and S. Dumais (1997). A solution to Platos problem: the latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychological Review 104(2), 211-240.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Automatic retrieval and clustering of similar words",
"authors": [
{
"first": "D",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of COLING-ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lin, D. (1998). Automatic retrieval and clustering of similar words. In Proceedings of COLING-ACL, Montreal, Canada.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Discovery of inference rules for question answering",
"authors": [
{
"first": "D",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Pantel",
"suffix": ""
}
],
"year": 2001,
"venue": "Natural Language Engineering",
"volume": "7",
"issue": "4",
"pages": "343--360",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lin, D. and P. Pantel (2001). Discovery of inference rules for question answering. Natural Language Engineering 7(4), 343-360.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Semantic and associative priming in high-dimensional semantic space",
"authors": [
{
"first": "K",
"middle": [],
"last": "Lund",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Burgess",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Atchley",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the Cognitive Science Society",
"volume": "",
"issue": "",
"pages": "660--665",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lund, K., C. Burgess, and R. Atchley (1995). Semantic and associative priming in high-dimensional semantic space. In Proceedings of the Cognitive Science Society, pp. 660-665.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "The Big Book of Concepts",
"authors": [
{
"first": "G",
"middle": [
"L"
],
"last": "Murphy",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Murphy, G. L. (2002). The Big Book of Concepts. MIT Press.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Intensional Models for the Theory of Types",
"authors": [
{
"first": "R",
"middle": [],
"last": "Muskens",
"suffix": ""
}
],
"year": 2007,
"venue": "The Journal of Symbolic Logic",
"volume": "72",
"issue": "1",
"pages": "98--118",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Muskens, R. (2007). Intensional Models for the Theory of Types. The Journal of Symbolic Logic 72(1), 98-118.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Multi-prototype vector-space models of word meaning",
"authors": [
{
"first": "J",
"middle": [],
"last": "Reisinger",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Mooney",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceeding of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reisinger, J. and R. Mooney (2010). Multi-prototype vector-space models of word meaning. In Pro- ceeding of NAACL.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Contextual correlates of synonymy",
"authors": [
{
"first": "H",
"middle": [],
"last": "Rubenstein",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Goodenough",
"suffix": ""
}
],
"year": 1965,
"venue": "Computational Linguistics",
"volume": "8",
"issue": "",
"pages": "627--633",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rubenstein, H. and J. Goodenough (1965). Contextual correlates of synonymy. Computational Linguis- tics 8, 627-633.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Automatic word sense discrimination",
"authors": [
{
"first": "H",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 1998,
"venue": "Computational Linguistics",
"volume": "24",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sch\u00fctze, H. (1998). Automatic word sense discrimination. Computational Linguistics 24(1).",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Dynamic pooling and unfolding recursive autoencoders for paraphrase detection",
"authors": [
{
"first": "R",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Pennin",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Socher, R., E. Huang, J. Pennin, A. Ng, and C. Manning (2011). Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. In J. Shawe-Taylor, R. Zemel, P. Bartlett, F. Pereira, and K. Weinberger (Eds.), Proceedings of NIPS.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Learning entailment rules for unary templates",
"authors": [
{
"first": "I",
"middle": [],
"last": "Szpektor",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Szpektor, I. and I. Dagan (2008). Learning entailment rules for unary templates. In Proceedings of COLING.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "From frequency to meaning: Vector space models of semantics",
"authors": [
{
"first": "P",
"middle": [],
"last": "Turney",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Pantel",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of Artificial Intelligence Research",
"volume": "37",
"issue": "",
"pages": "141--188",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Turney, P. and P. Pantel (2010). From frequency to meaning: Vector space models of semantics. Journal of Artificial Intelligence Research 37, 141-188.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Domain and function: A dual-space model of semantic relations and compositions",
"authors": [
{
"first": "P",
"middle": [
"D"
],
"last": "Turney",
"suffix": ""
}
],
"year": 2012,
"venue": "Journal of Artificial Intelligence Research",
"volume": "44",
"issue": "",
"pages": "533--585",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Turney, P. D. (2012). Domain and function: A dual-space model of semantic relations and compositions. Journal of Artificial Intelligence Research 44, 533-585.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "We proceed by induction over the structure of \u03c4 . If s consists of a single word, then \u03c4 = s, and either s = t or s = w for a word w = t. If s = w for some w = t, then sem(\u03c4 [p/t]) = sem(\u03c4 ) \u2208 sem(\u03c4 )[some h(p)/h(t)].",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"text": "(\u03c4 )[some h(p)/h(t)]. So by Proposition 1, I(\u03c8) = I(sem(\u03c4 )), and by (M3), I(sem(\u03c4 )[p/t]) = I(\u03c8). Thus, I(sem(\u03c4 )) = I(sem(\u03c4 [p/t])). By Proposition 1, the intension is the same for all members of sem(\u03c4 )[some h(p)/h(t)], so we have I(sem(\u03c4 )) = I(sem(\u03c4 )[h(p)/h(t)]. And by (M2), if sem(\u03c4 ), sem(\u03c4 [p/t]) and sem(\u03c4 )[h(p)/h(t)] have the same intension, they also have the same extension.",
"num": null,
"type_str": "figure",
"uris": null
}
}
}
}