{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:58:04.786371Z"
},
"title": "Semantic classification and learning using a linear tranformation model in a Probabilistic Type Theory with Records",
"authors": [
{
"first": "Staffan",
"middle": [],
"last": "Larsson",
"suffix": "",
"affiliation": {
"laboratory": "Centre for Linguistic Theory and Studies in Probability (CLASP)",
"institution": "University of Gothenburg",
"location": {
"postBox": "Box 200",
"postCode": "40530",
"region": "SE",
"country": "Sweden"
}
},
"email": ""
},
{
"first": "Jean-Philippe",
"middle": [],
"last": "Bernardy",
"suffix": "",
"affiliation": {
"laboratory": "Centre for Linguistic Theory and Studies in Probability (CLASP)",
"institution": "University of Gothenburg",
"location": {
"postBox": "Box 200",
"postCode": "40530",
"region": "SE",
"country": "Sweden"
}
},
"email": "jean-philippe.bernardy@gu.se"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Starting from an existing account of semantic classification and learning from interaction formulated in a Probabilistic Type Theory with Records, encompassing Bayesian inference and learning with a frequentist flavour, we observe some problems with this account and provide an alternative account of classification learning that addresses the observed problems. The proposed account is also broadly Bayesian in nature but instead uses a linear transformation model for classification and learning.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Starting from an existing account of semantic classification and learning from interaction formulated in a Probabilistic Type Theory with Records, encompassing Bayesian inference and learning with a frequentist flavour, we observe some problems with this account and provide an alternative account of classification learning that addresses the observed problems. The proposed account is also broadly Bayesian in nature but instead uses a linear transformation model for classification and learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "A probabilistic type theory was presented in Cooper et al. (2014) and , which extends Cooper's Type Theory with Records (TTR, Cooper (2012a) ; Cooper and Ginzburg (2015) ). This theory, Probabilistic Type Theory with Records (ProbTTR) assigns probability values, rather than Boolean truth-values, to type judgements.",
"cite_spans": [
{
"start": 45,
"end": 65,
"text": "Cooper et al. (2014)",
"ref_id": "BIBREF5"
},
{
"start": 126,
"end": 140,
"text": "Cooper (2012a)",
"ref_id": "BIBREF3"
},
{
"start": 143,
"end": 169,
"text": "Cooper and Ginzburg (2015)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "TTR has been used previously for natural language semantics (see, for example, Cooper (2005) and Cooper (2012a) ), and to analyse semantic coordination and learning (for example, ). It has also been applied to the analysis of interaction in dialogue (for example, Ginzburg (2012) and Breitholtz (2020)), in modelling robotic states and spatial cognition (for example, Dobnik et al. (2013) ), and to the problem of learning perceptual meaning from interaction (Larsson, 2015) . We believe that a probabilistic version of TTR could be useful in all these domains.",
"cite_spans": [
{
"start": 79,
"end": 92,
"text": "Cooper (2005)",
"ref_id": "BIBREF2"
},
{
"start": 97,
"end": 111,
"text": "Cooper (2012a)",
"ref_id": "BIBREF3"
},
{
"start": 264,
"end": 279,
"text": "Ginzburg (2012)",
"ref_id": "BIBREF11"
},
{
"start": 368,
"end": 388,
"text": "Dobnik et al. (2013)",
"ref_id": "BIBREF9"
},
{
"start": 459,
"end": 474,
"text": "(Larsson, 2015)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Two main considerations motivated recasting TTR in probabilistic terms. First, a probabilistic type theory offers a natural framework for capturing the gradience of semantic judgements. This allows it to serve as the basis for an account of vagueness in interpretation, as shown by Fern\u00e1ndez and Larsson (2014) . Second, and this is the focus of the present paper, such a theory lends itself to developing a model of semantic classification and learning that can be straightforwardly integrated into more general probabilistic explanations of learning and inference.",
"cite_spans": [
{
"start": 282,
"end": 310,
"text": "Fern\u00e1ndez and Larsson (2014)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper presents an account of probabilistic classification (inference) and learning in ProbTTR based on a linear transformation model. Recent work (Larsson, 2020; Larsson, 2021) has developed and used a Bayesian account of classification and a learning theory with a frequentist flavour. Below in Section 2, we first introduce TTR and ProbTTR, and explain briefly how a Naive Bayes classifier can be formulated in ProbTTR. We then review earlier work on semantic classification and learning using ProbTTR, and introduce a simple language game (the fruit recognition game) that has been used as an example there. In Section 3, we note some drawbacks of the frequentist account of classification and learning, motivating the exploration of alternative accounts. The main contribution of this paper is the account of semantic classification and learning using a linear transformation model presented in Section 4. We show how classification (Section 4.2) and learning (Section 4.3) is handled in this account, again taking the fruit recognition game as our example. In Section 4, we provide conclusions and point towards future work.",
"cite_spans": [
{
"start": 151,
"end": 166,
"text": "(Larsson, 2020;",
"ref_id": "BIBREF14"
},
{
"start": 167,
"end": 181,
"text": "Larsson, 2021)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This section reviews the background needed to follow the rest of the paper: TTR, Probabilistic TTR fundamentals, and Bayes nets and Naive Bayes classifiers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "We will be formulating our account in a Type Theory with Records (TTR). We can here only give a brief and partial introduction to TTR; see also Cooper (2005) and Cooper (2012b) . To begin with, s : T is a judgment that some s is of type T . One basic type in TTR is Ind, the type of an individual; another basic type is Real, the type of real numbers.",
"cite_spans": [
{
"start": 144,
"end": 157,
"text": "Cooper (2005)",
"ref_id": "BIBREF2"
},
{
"start": 162,
"end": 176,
"text": "Cooper (2012b)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "TTR: A brief introduction",
"sec_num": "2.1"
},
{
"text": "Next, we introduce records and record types. If a 1 : T 1 , a 2 : T 2 (a 1 ), . . . , a n : T n (a 1 , a 2 , . . . , a n\u22121 ), where T (a 1 , . . . , a n ) represents a type T which depends on the objects a 1 , . . . , a n , the record to the left in Figure 1 is of the record type to the right.",
"cite_spans": [],
"ref_spans": [
{
"start": 250,
"end": 258,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "TTR: A brief introduction",
"sec_num": "2.1"
},
{
"text": "In Figure 1 , 1 , . . . n are labels which can be used elsewhere to refer to the values associated with them. A sample record and record type is shown in Figure 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 11,
"text": "Figure 1",
"ref_id": "FIGREF1"
},
{
"start": 154,
"end": 162,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "TTR: A brief introduction",
"sec_num": "2.1"
},
{
"text": "Types constructed with predicates may be dependent. This is represented by the fact that arguments to the predicate may be represented by labels used on the left of the ':' elsewhere in the record type. In Figure 2 , the type of c man is dependent on ref (as is c run ).",
"cite_spans": [],
"ref_spans": [
{
"start": 206,
"end": 214,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "TTR: A brief introduction",
"sec_num": "2.1"
},
{
"text": "If r is a record and is a label in r, we can use a path r. to refer to the value of in r. Similarly, if T is a record type and is a label in T , T . refers to the type of in T . Records (and record types) can be nested, so that the value of a label is itself a record (or record type). As can be seen in Figure 2 , types can be constructed from predicates, e.g., \"run\" or \"man\". Such types are called ptypes and correspond roughly to propositions in first order logic.",
"cite_spans": [],
"ref_spans": [
{
"start": 304,
"end": 313,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "TTR: A brief introduction",
"sec_num": "2.1"
},
{
"text": "In ProbTTR (as in TTR generally), situations are understood in a sense similar to that of Barwise and Perry (1983) . It is also assumed that agents can individuate situations, and that they have a way of judging situations to be of situation types.",
"cite_spans": [
{
"start": 90,
"end": 114,
"text": "Barwise and Perry (1983)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic TTR fundamentals",
"sec_num": "2.2"
},
{
"text": "The core of ProbTTR is the notion of a probabilistic judgement, where a situation s is judged to be of a type T with some probability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic TTR fundamentals",
"sec_num": "2.2"
},
{
"text": "(1) p(s :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic TTR fundamentals",
"sec_num": "2.2"
},
{
"text": "T ) = r, where r \u2208 [0,1]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic TTR fundamentals",
"sec_num": "2.2"
},
{
"text": "Such a judgement expresses a subjective probability in that it encodes an agent's take on the likelihood that a situation is of that type.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic TTR fundamentals",
"sec_num": "2.2"
},
{
"text": "A probabilistic Austinian proposition is an object (a record) that corresponds to, or encodes, a probabilistic judgement. Probabilistic Austinian propositions are records of the type in (2). A probabilistic Austinian proposition \u03d5 of this type corresponds to the judgement that \u03d5.sit is of type \u03d5.sit-type with probability \u03d5.prob.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic TTR fundamentals",
"sec_num": "2.2"
},
{
"text": "(3) p J (\u03d5.sit:\u03d5.sit-type)= \u03d5.prob",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic TTR fundamentals",
"sec_num": "2.2"
},
{
"text": "We assume that agents track observed situations and their types, modelled as probabilistic Austinian propositions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic TTR fundamentals",
"sec_num": "2.2"
},
{
"text": "We use p(T 1 ||T 2 ) to represent the probability that an agent assigns to some situation s being of type T 1 , given that s is of type T 2 . Note that p(T 1 ||T 2 ), the conditional probability for some s of s : T 1 given that s : T 2 , is different from p(T 1 |T 2 ), the probability of there being something of type T 1 given that there is something of type T 2 . We refer to the former as the bound variable conditional probability, and the latter as the existential conditional probability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic TTR fundamentals",
"sec_num": "2.2"
},
{
"text": "A Bayesian Network is a Directed Acyclic Graph (DAG). The nodes of the DAG are random variables, each of whose values is the probability of one of the set of possible states that the variable denotes. Its directed edges express dependency relations among the variables. When the values of all the variables are specified, the graph describes a complete joint probability distribution (JPD) for its random variables. Bayesian Networks provide graphical models for probabilistic learning and inference (Pearl (1990) ; Halpern (2003) ).",
"cite_spans": [
{
"start": 500,
"end": 513,
"text": "(Pearl (1990)",
"ref_id": "BIBREF21"
},
{
"start": 516,
"end": 530,
"text": "Halpern (2003)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bayesian nets and the Naive Bayes classifier",
"sec_num": "2.3"
},
{
"text": "A standard Naive Bayes model is a special case of a Bayesian network. More precisely, it is a Bayesian network with a single class variable C that influences a set of evidence variables E 1 , . . . , E n (the evidence), which do not depend on each other. Figure 2 illustrates the relation between evidence types and class types in a Naive Bayes classifier.",
"cite_spans": [],
"ref_spans": [
{
"start": 255,
"end": 263,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Bayesian nets and the Naive Bayes classifier",
"sec_num": "2.3"
},
{
"text": "A Naive Bayes classifier computes the marginal probability of a class, given the evidence:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bayesian nets and the Naive Bayes classifier",
"sec_num": "2.3"
},
{
"text": "(4) p(c) = e 1 ,...,en p(c | e 1 , . . . , e n )p(e 1 ) . . . p(e n )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bayesian nets and the Naive Bayes classifier",
"sec_num": "2.3"
},
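The marginalisation in (4) can be sketched in plain Python for a two-evidence-variable case. The structure follows the equation; all probability values below are hypothetical numbers for illustration, not taken from the paper.

```python
# Sketch of (4): p(c) = sum over e1,...,en of p(c | e1,...,en) p(e1)...p(en).
from itertools import product

# Evidence value distributions for a single observed situation s (hypothetical).
p_shape = {"a-shape": 0.8, "p-shape": 0.2}
p_colour = {"green": 0.3, "red": 0.7}

# Hypothetical conditional class probabilities p(apple | shape, colour).
p_apple_given = {
    ("a-shape", "green"): 0.7,
    ("a-shape", "red"): 0.9,
    ("p-shape", "green"): 0.2,
    ("p-shape", "red"): 0.4,
}

def marginal(p_c_given_e, p_e1, p_e2):
    """Marginalise the class probability over all evidence value combinations."""
    return sum(
        p_c_given_e[(e1, e2)] * p_e1[e1] * p_e2[e2]
        for e1, e2 in product(p_e1, p_e2)
    )

p_apple = marginal(p_apple_given, p_shape, p_colour)
```

With these numbers the marginal works out to 0.168 + 0.504 + 0.012 + 0.056 = 0.74.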
{
"text": "where c 1 is the value of C, e i is the value of ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bayesian nets and the Naive Bayes classifier",
"sec_num": "2.3"
},
{
"text": "E i (1 \u2264 i \u2264 n) and \uf8ee \uf8ef \uf8ef \uf8ef \uf8ef \uf8f0 1 = a 1 2 = a 2 . . . n = a n . . . \uf8f9 \uf8fa \uf8fa \uf8fa \uf8fa \uf8fb : \uf8ee \uf8ef \uf8ef \uf8f0 1 : T 1 2 : T 2 (l 1 ) . . . n : T n ( 1 , l 2 , . . . , l n\u22121 ) \uf8f9 \uf8fa \uf8fa \uf8fb",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bayesian nets and the Naive Bayes classifier",
"sec_num": "2.3"
},
{
"text": "V ) = {A 1 , . . . , A n } such that the following conditions hold. (6) a. A j V for 1 \u2264 j \u2264 n b. A j \u22a5 A i for all i, j such that 1 \u2264 i = j \u2264 n c. for any s, p(s : V ) \u2208 {0, 1} and p(s : V ) = T \u2208R(V ) p J (s : T )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bayesian nets and the Naive Bayes classifier",
"sec_num": "2.3"
},
{
"text": "For a situation s, a probability distribution over the m value types A j \u2208 R(A), 1 \u2264 j \u2264 m belonging to a variable type A can be written (as above) as a set of probabilistic Austinian propositions, e.g.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representing probability distibutions",
"sec_num": "2.5"
},
{
"text": "(7) { \uf8ee \uf8f0 sit = s sit-type = A j prob = p(s : A j ) \uf8f9 \uf8fb | A j \u2208 R(A)}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representing probability distibutions",
"sec_num": "2.5"
},
{
"text": "However, we will also have use for a vector representation of probability distributions, which is also more compact. If we assume R(A) is an ordered set {A 1 , . . . A m }, we can define probability distribution d A (s):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representing probability distibutions",
"sec_num": "2.5"
},
{
"text": "(8) d A (s) = p 1 , . . . , p m where p j = p(s : A j ) for A j \u2208 R(A), 1 \u2264 i \u2264 m",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representing probability distibutions",
"sec_num": "2.5"
},
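Concretely, the move from the Austinian-proposition set in (7) to the vector in (8) amounts to fixing an ordering on R(A). A minimal sketch, with Python dictionaries standing in for records; all names and probability values are hypothetical:

```python
# Sketch of (8): build the vector <p(s : A_1), ..., p(s : A_m)> from a set of
# probabilistic Austinian propositions, given an ordering of R(A).
R_Fruit = ["Apple", "Pear"]  # ordered value types of the variable type Fruit

# Austinian propositions for one situation s1 (hypothetical probabilities).
propositions = [
    {"sit": "s1", "sit-type": "Apple", "prob": 0.8},
    {"sit": "s1", "sit-type": "Pear", "prob": 0.2},
]

def d(value_types, propositions, s):
    """Probability distribution vector d_A(s) over the ordered value types."""
    probs = {p["sit-type"]: p["prob"] for p in propositions if p["sit"] == s}
    return [probs.get(A, 0.0) for A in value_types]

d_Fruit_s1 = d(R_Fruit, propositions, "s1")
```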
{
"text": "We also write d A (s) j for p(s : A j ). This means we can reformulate (11) above:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representing probability distibutions",
"sec_num": "2.5"
},
{
"text": "(9) d C \u03ba (s) = p(s : C 1 ), . . . , p(s : C |R(C \u03ba )| )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representing probability distibutions",
"sec_num": "2.5"
},
{
"text": "Corresponding to the evidence, class variables, and their value types, we associate with a ProbTTR Naive Bayes classifier \u03ba:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A ProbTTR Naive Bayes classifier",
"sec_num": "2.6"
},
{
"text": "(10) a. a collection of n evidence variable types E \u03ba 1 , . . . , E \u03ba n b. n associated sets of evidence value types R(E \u03ba 1 ), . . . , R(E \u03ba n )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A ProbTTR Naive Bayes classifier",
"sec_num": "2.6"
},
{
"text": "c. a class variable type C \u03ba , e.g. Fruit, and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A ProbTTR Naive Bayes classifier",
"sec_num": "2.6"
},
{
"text": "d. an associated set of class value types R(C \u03ba )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A ProbTTR Naive Bayes classifier",
"sec_num": "2.6"
},
{
"text": "To classify a situation s using a classifier \u03ba, the evidence is acquired by observing and classifying s with respect to the evidence types.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A ProbTTR Naive Bayes classifier",
"sec_num": "2.6"
},
{
"text": "Larsson and Cooper (2021) define a ProbTTR Bayes classifier \u03ba as a function from a situation s (of the meet type of the evidence variable types E \u03ba 1 , . . . , E \u03ba n ) to a set of probabilistic Austinian propositions that define a probability distribution over the values of the class variable type C \u03ba , given probability distributions over the values of each evidence variable type E \u03ba 1 , . . . , E \u03ba n . Formally, a ProbTTR Na\u00efve Bayes classifier is a function",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A ProbTTR Naive Bayes classifier",
"sec_num": "2.6"
},
{
"text": "(11) \u03ba : E \u03ba 1 \u2227 . . . \u2227 E \u03ba n \u2192 Set( \uf8ee \uf8f0 sit : Sit sit-type : Type prob : [0,1] \uf8f9 \uf8fb ) such that if 1 s : E \u03ba 1 \u2227 . . . \u2227 E \u03ba n , then (12) \u03ba(s)={ \uf8ee \uf8f0 sit = s sit-type = C prob = p(s : C) \uf8f9 \uf8fb | C \u2208 R(C \u03ba )} or equivalently, (13) \u03ba(s) = { \uf8ee \uf8f0 sit = s sit-type = C prob = d C \u03ba (s) C \uf8f9 \uf8fb | C \u2208 R(C \u03ba )}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A ProbTTR Naive Bayes classifier",
"sec_num": "2.6"
},
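A minimal executable sketch of the classifier function in (12), with Python dictionaries standing in for TTR records. This is a deliberate simplification: the type-theoretic side is not modelled, and the situation name and distribution are hypothetical.

```python
def kappa(s, class_value_types, d_C):
    """Map a situation s to a set of probabilistic Austinian propositions,
    one per class value type C, with prob = p(s : C), as in (12).
    d_C maps each class value type to its probability for s."""
    return [
        {"sit": s, "sit-type": C, "prob": d_C[C]}
        for C in class_value_types
    ]

props = kappa("s1", ["Apple", "Pear"], {"Apple": 0.9, "Pear": 0.1})
```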
{
"text": "Larsson and Cooper (2021) illustrate semantic classification and learning using a Naive Bayes classifier in ProbTTR using the Apple Recognition Game.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The fruit recognition game",
"sec_num": "2.7"
},
{
"text": "In this game a teacher shows a learning agent fruits. The agent makes a guess, the teacher provides the correct answer, and the agent learns from these observations. We will use shorthands Apple and Pear for the types corresponding to an object being an apple or a pear, respectively 2 . Furthermore, we will assume that the objects in the Apple Recognition Game have one of two shapes (a-shape or p-shape, corresponding to types Ashape and Pshape= and one of two colours (green or red, corresponding to types Green and Red).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The fruit recognition game",
"sec_num": "2.7"
},
{
"text": "The class variable type is Fruit, with value types R(Fruit) = {Apple, Pear}. The evidence 1 Recall that for any s, p J (s : V ) \u2208 {0, 1} for any variable type V . Therefore, any type judgement regarding a variable type, such as that involved in the classifier function, can be regarded as categorical.",
"cite_spans": [
{
"start": 90,
"end": 91,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The fruit recognition game",
"sec_num": "2.7"
},
{
"text": "2 For details, see ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The fruit recognition game",
"sec_num": "2.7"
},
{
"text": "Shape Colour ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fruit",
"sec_num": null
},
{
"text": "= F prob = p FruitC (s : F ) \uf8f9 \uf8fb | F \u2208 R(Fruit)}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fruit",
"sec_num": null
},
{
"text": "In , an account of semantic classification and learning with a frequentist flavour (but also with some differences to regular frequentist learning acccounts) is presented, under the assumption that we can compute conditional proba-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A frequentist model of semantic classification and learning",
"sec_num": "2.8"
},
{
"text": "bilities p(C j ||E 1 . . . E n ) of a class value types C j given evidence value types E 1 . . . E n . In general, for C j \u2208 R(C \u03ba ), we have (15) p(s : C j ) = E 1 \u2208R(E \u03ba 1 ) ... En\u2208R(E \u03ba n ) p(C j ||E 1 . . . E n )p(s : E 1 ) . . . p(s : E n )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A frequentist model of semantic classification and learning",
"sec_num": "2.8"
},
{
"text": "The non-conditional probabilities p(s : E 1 ) . . . p(s : E n ) are derived from the agents' take on the particular situation s being classified, coming for example from perceptual sensors that are directed at s.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A frequentist model of semantic classification and learning",
"sec_num": "2.8"
},
{
"text": "For the model of semantic classification that uses conditional probabilities, a central question is of course how to estimate conditional probabilities, of the form p",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A frequentist model of semantic classification and learning",
"sec_num": "2.8"
},
{
"text": "(C||E 1 \u2227 . . . \u2227 E n ) (where C \u2208 R(C), E i \u2208 R(E i ), 1 \u2264 i \u2264 n).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A frequentist model of semantic classification and learning",
"sec_num": "2.8"
},
{
"text": "Using Bayes rule and marginalising over the class value types, we get for a Naive Bayes classifier:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A frequentist model of semantic classification and learning",
"sec_num": "2.8"
},
{
"text": "(16)p \u03ba (C||E 1 \u2227 . . . \u2227 E n ) = p(C)p(E 1 ||C) . . . p(E n ||C) C \u2208R(C \u03ba ) p(C )p(E 1 ||C ) . . . p(E n ||C )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A frequentist model of semantic classification and learning",
"sec_num": "2.8"
},
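Equation (16) is ordinary Naive Bayes inversion: prior times likelihoods, normalised over the class value types. A numeric sketch for the two-class fruit case; all priors and likelihood tables are hypothetical numbers:

```python
# Sketch of (16): p(C || E_1 & ... & E_n) proportional to the prior times the
# likelihoods, normalised over all class value types.
from math import prod

priors = {"Apple": 0.6, "Pear": 0.4}
# One likelihood table per evidence variable (shape first, then colour).
likelihoods = {
    "Apple": [{"Ashape": 0.8, "Pshape": 0.2}, {"Green": 0.4, "Red": 0.6}],
    "Pear":  [{"Ashape": 0.1, "Pshape": 0.9}, {"Green": 0.7, "Red": 0.3}],
}

def posterior(priors, likelihoods, evidence):
    """Bayes' rule with the naive independence assumption, as in (16)."""
    unnorm = {
        c: priors[c] * prod(table[e] for table, e in zip(likelihoods[c], evidence))
        for c in priors
    }
    z = sum(unnorm.values())
    return {c: v / z for c, v in unnorm.items()}

post = posterior(priors, likelihoods, ["Ashape", "Red"])
```

With these numbers, Apple gets 0.6 * 0.8 * 0.6 = 0.288 and Pear 0.4 * 0.1 * 0.3 = 0.012 before normalisation, so the Apple posterior is 0.96.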
{
"text": "To estimate the likelihoods p(E i ||C) and priors p(C ), use a version of counting previous instances of C and E i :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A frequentist model of semantic classification and learning",
"sec_num": "2.8"
},
{
"text": "p(E i |C) = |E i &C| |C|",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A frequentist model of semantic classification and learning",
"sec_num": "2.8"
},
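The counting estimate p(E_i || C) = |E_i & C| / |C| can be sketched over a crisp (non-probabilistic) judgement history. This is a simplification of the probability-weighted counts in (17) below, and the history entries are hypothetical:

```python
# Relative-frequency estimate of a likelihood from a crisp judgement history.
history = [
    {"shape": "Ashape", "fruit": "Apple"},
    {"shape": "Ashape", "fruit": "Apple"},
    {"shape": "Pshape", "fruit": "Apple"},
    {"shape": "Pshape", "fruit": "Pear"},
]

def likelihood(history, evidence_key, e, class_key, c):
    """Estimate p(e || c) = |e & c| / |c| by counting past judgements."""
    in_class = [h for h in history if h[class_key] == c]
    if not in_class:
        return 0.0  # problem P1: unseen classes get probability 0
    return sum(h[evidence_key] == e for h in in_class) / len(in_class)

p_ashape_given_apple = likelihood(history, "shape", "Ashape", "fruit", "Apple")
```

Note how the zero branch makes problem P1 (probability 0 for unseen types) visible directly in the estimator.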
{
"text": "The account in (ibid.) is based on the idea that an agent makes judgements based on a finite string of probabilistic Austinian propositions, the judgement history J. When an agent A encounters a new situation s and wants to know if it is of type T or not, A uses probabilistic reasoning to determine p J (s : T ) on the basis of A's previous judgements J. For all combinations of evidence value types E 1 , . . . , E n and class value types C, the account in (ibid.) computes the conditional probability of the evidence value types given the class value type as in (17):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A frequentist model of semantic classification and learning",
"sec_num": "2.8"
},
{
"text": "(17) p(E i ||C) = j\u2208J,j.sit=s p J (s : C)p J (s : E i ) j\u2208J,j.sit=s p J (s : C)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A frequentist model of semantic classification and learning",
"sec_num": "2.8"
},
{
"text": "Note that the recorded judgements concerning the class types C \u2208 R(C) are here assumed to be derived mainly from a tutor's explicit judgements, which are thus assumed to provide the ground truth.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A frequentist model of semantic classification and learning",
"sec_num": "2.8"
},
{
"text": "The account in (ibid.) also computes the prior of the class value type as in (18). p J (T ) represents the prior probability that an arbitrary situation is of type T given J.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A frequentist model of semantic classification and learning",
"sec_num": "2.8"
},
{
"text": "(18) p J (T ) = || T || J P(J) if P(J) > 0, otherwise 0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A frequentist model of semantic classification and learning",
"sec_num": "2.8"
},
{
"text": "where P(J) is the cardinality of situations in J, i.e. the total number of situations in J.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A frequentist model of semantic classification and learning",
"sec_num": "2.8"
},
{
"text": "(19) P(J) = |{s|\u2203j \u2208 J, j.sit = s}| 3 Drawbacks of the frequentist account While conceptually simple, the above account, as any frequentist model, has some drawbacks. Some are well known, such as (problem P1) assigning probability 0 to judgements concerning unseen types, and (P2) putting equal weight on old and recent observations, thereby risking that classifiers for types that have a large amount of related judgements in J may change only very slowly in light of new observations. Also, (P3) the account may be computationally unwieldy in real life settings since conditional probabilities are computed from scratch from J on every instance of classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A frequentist model of semantic classification and learning",
"sec_num": "2.8"
},
{
"text": "Other drawbacks are more specifically related to our goal of modelling semantic coordination in dialogue, where both definitions (or explications) and examples can affect meanings but in different ways (Myrendal, 2019; Larsson and Myrendal, 2017; Larsson, 2021) . With respect to the problem (P4) of combining evidence from examples and definitions (as described in Larsson (2021) ), the frequentist model does not provide a theoretically satisfying way of doing this. While a definition may be useful until examples have been observed, at some point the observed examples may override a definition. In the account proposed in (Larsson, 2021) , definitions affect the corresponding classifier only in the short run, and effects of proposed definitions are overwritten as soon as an observation of an instance of the defined concept has been made. A more flexible trade-off between definitions and examples (observations) would probably be desirable in this context.",
"cite_spans": [
{
"start": 202,
"end": 218,
"text": "(Myrendal, 2019;",
"ref_id": "BIBREF20"
},
{
"start": 219,
"end": 246,
"text": "Larsson and Myrendal, 2017;",
"ref_id": "BIBREF19"
},
{
"start": 247,
"end": 261,
"text": "Larsson, 2021)",
"ref_id": "BIBREF15"
},
{
"start": 366,
"end": 380,
"text": "Larsson (2021)",
"ref_id": "BIBREF15"
},
{
"start": 627,
"end": 642,
"text": "(Larsson, 2021)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A frequentist model of semantic classification and learning",
"sec_num": "2.8"
},
{
"text": "Finally, (P5) the frequentist model has little to say about the relation between the learning agents' own judgement and the judgement given by the teacher with respect to how much weight is put on these relative to each other when learning from interaction. Does the agent completely trust the tutor, or does it weigh in other factors when learning from tutor input?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A frequentist model of semantic classification and learning",
"sec_num": "2.8"
},
{
"text": "While there may be ways of addressing at least some of these problems within the frequentist account 3 , we will here explore an alternative account that seems to address all these problems without the need for ad hoc solutions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A frequentist model of semantic classification and learning",
"sec_num": "2.8"
},
{
"text": "In this section we present a model where the probabilities given in J are used to compute a linear transformation model \u0398 that generalises over J and which is used to compute the conditional probabilities used in classification. Such a model can be made more computationally efficient than the frequentist model, and is also compatible with the way probabilistic inference and learning is encoded in neural network models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic classification and learning using a linear transformation model",
"sec_num": "4"
},
{
"text": "Like before we assume that evidence variables are determined independently by the class variable (in the fruit recognition game game, Col(our) and Shape are determined independently by the Fruit variable). Following standard practice in deep learning models, a probability distribution over the values (class value types) of the class variable type C is mapped to probability distributions over the evidence value types corresponding to each evidence variable type E i (in the fruit recognition game, fruit type is mapped to a colour distribution and a shape distribution) using a linear transformation, represented by a matrix \u0398 C\u2192E i (in the apple game, \u0398 F ruit\u2192Col and \u0398 F ruit\u2192Shape ) followed by a softmax. Let us call \u0398 the combined parameters of such linear transformations. For a classifier \u03ba, a subset \u0398 \u03ba of the parameters can be used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling how Evidence is determined by Class",
"sec_num": "4.1"
},
{
"text": "For example, in the apple game, we assume that the probability distribution over the variable value types in R(Col) are estimated thus:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling how Evidence is determined by Class",
"sec_num": "4.1"
},
{
"text": "(20)d Col (s) = softmax(\u0398 \u03ba Fruit\u2192Col d Fruit (s))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling how Evidence is determined by Class",
"sec_num": "4.1"
},
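The mapping in (20) is a matrix-vector product followed by softmax, and can be sketched without external libraries. The parameter values below are hypothetical, not learned values from the paper:

```python
import math

def softmax(xs):
    """Standard softmax over a list of reals."""
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def matvec(theta, v):
    """Matrix (list of rows) times column vector."""
    return [sum(w * x for w, x in zip(row, v)) for row in theta]

# Hypothetical Theta_{Fruit->Col}: rows index colour value types (Green, Red),
# columns index fruit value types (Apple, Pear).
theta_fruit_to_col = [
    [2.0, -1.0],   # Green
    [-1.0, 2.0],   # Red
]

d_fruit = [0.9, 0.1]  # d_Fruit(s): the situation is probably an apple
d_col = softmax(matvec(theta_fruit_to_col, d_fruit))  # as in (20)
```

The output is a proper distribution over colour value types, here skewed towards Green because the Apple weight on the Green row dominates.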
{
"text": "In general,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling how Evidence is determined by Class",
"sec_num": "4.1"
},
{
"text": "(21)d E i (s) = softmax(\u0398 \u03ba C\u2192E i d C (s))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling how Evidence is determined by Class",
"sec_num": "4.1"
},
{
"text": "Even more generally, with an arbitrary bayesian network, we take into account all edges to the variable E (giving a finite set of unobserved variables By manipulation of N , the relative importance of definitions relative to observations can be regulated. To address P5, judgements s : C regarding class value types C that are added to J could be made to reflect a combination of teacher judgement and other factors, including the agents' own estimation. ranged by I below). We assume simultaneously a parameter matrix \u0398 \u03ba IE for each such edge:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling how Evidence is determined by Class",
"sec_num": "4.1"
},
{
"text": "(22) d\u0302_E(s) = softmax(\u03a3_{(I\u2192E)\u2208net} \u0398^\u03ba_{IE} d_I(s)). It follows that for E_j \u2208 R(E_i)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling how Evidence is determined by Class",
"sec_num": "4.1"
},
{
"text": ", an estimation of the probability of a situation having evidence value type E_j is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling how Evidence is determined by Class",
"sec_num": "4.1"
},
{
"text": "(23) p\u0302(s : E_j) = d\u0302_{E_i}(s)_j = softmax(\u0398^\u03ba_{C\u2192E_i} d_C(s))_j",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling how Evidence is determined by Class",
"sec_num": "4.1"
},
{
"text": "Expanding the definition of softmax, we get:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling how Evidence is determined by Class",
"sec_num": "4.1"
},
{
"text": "(24) p\u0302(s : E_j) = e^{(\u0398^\u03ba_{C\u2192E_i} d_C(s))_j} / \u03a3_{E_k \u2208 R(E_i)} e^{(\u0398^\u03ba_{C\u2192E_i} d_C(s))_k}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling how Evidence is determined by Class",
"sec_num": "4.1"
},
{
"text": "or equivalently (in terms of dot products and column vectors):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling how Evidence is determined by Class",
"sec_num": "4.1"
},
{
"text": "(25) p\u0302(s : E_j) = e^{\u0398^\u03ba_{C\u2192E_j} \u2022 d_C(s)} / \u03a3_{E_k \u2208 R(E_i)} e^{\u0398^\u03ba_{C\u2192E_k} \u2022 d_C(s)}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling how Evidence is determined by Class",
"sec_num": "4.1"
},
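The equivalence of the two forms (24) and (25) can be checked numerically; the following sketch (with made-up parameters) computes the same distribution both ways:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Hypothetical parameters: each row theta[j] corresponds to evidence value type E_j.
theta = [[0.5, -1.0],
         [1.5,  0.3]]
d_c = [0.6, 0.4]   # a distribution over class value types

# Form (24): softmax of the transformed vector, then read off each component.
via_matrix = softmax([dot(row, d_c) for row in theta])

# Form (25): the dot products written out explicitly per component.
denom = sum(math.exp(dot(theta[k], d_c)) for k in range(len(theta)))
via_dots = [math.exp(dot(theta[j], d_c)) / denom for j in range(len(theta))]

for a, b in zip(via_matrix, via_dots):
    assert abs(a - b) < 1e-9   # identical up to floating-point error
```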
{
"text": "Note that softmax is here overloaded to be used for vectors of probabilities as well as for individual probabilities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling how Evidence is determined by Class",
"sec_num": "4.1"
},
{
"text": "We also define, for any distribution d_A over (variable value types of) a variable type A:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling how Evidence is determined by Class",
"sec_num": "4.1"
},
{
"text": "(26) \u010f_B(d_A) = softmax(\u0398_{A\u2192B} d_A)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling how Evidence is determined by Class",
"sec_num": "4.1"
},
{
"text": "so that e.g.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling how Evidence is determined by Class",
"sec_num": "4.1"
},
{
"text": "(27) d\u0302_E(s) = \u010f_E(d_C(s)) = softmax(\u0398^\u03ba_{C\u2192E} d_C(s))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling how Evidence is determined by Class",
"sec_num": "4.1"
},
{
"text": "When we use a transformation model for classification, the idea is to evaluate the likelihood of a distribution d\u0302_C(s) which, according to the model \u0398, accounts for the observed evidence d_{E_i}(s) 4 . This means we need to represent meta-level probabilities of one probability distribution given another probability distribution. When classifying fruits in the Apple game, we want to estimate the probability of the class value types given the observed distribution over the evidence value types. The probability of a particular distribution d_C(s) is estimated using Bayesian marginalisation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification using a transformation model",
"sec_num": "4.2"
},
{
"text": "(28) p(d_C(s) | d_{E_i}(s)) \u221d p(d_{E_i}(s) | d_C(s)) \u00d7 prior(d_C(s))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification using a transformation model",
"sec_num": "4.2"
},
{
"text": "If we want to find the distribution d\u0302_C(s) that maximises the probability of the observed evidence in light of the model, then for a single evidence variable type E_i we want to find",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification using a transformation model",
"sec_num": "4.2"
},
{
"text": "(29) argmax_{z\u2208[0,1]^{|R(C)|}} p\u0302(d_{E_i}(s) | z) prior(z)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification using a transformation model",
"sec_num": "4.2"
},
{
"text": "and for evidence variable types E^\u03ba_1, . . . , E^\u03ba_n we want to find",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification using a transformation model",
"sec_num": "4.2"
},
{
"text": "(30) d\u0302^\u03ba_C(s) = argmax_{z\u2208[0,1]^{|R(C)|}} p\u0302(d_{E_1}(s) | z) . . . p\u0302(d_{E_n}(s) | z) prior(z)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification using a transformation model",
"sec_num": "4.2"
},
{
"text": "where z ranges over the space of distributions over C value types. If there are k = |R(C)| possible value types, this space is contained in [0, 1]^k. Finding z requires a numerical method, e.g. gradient descent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification using a transformation model",
"sec_num": "4.2"
},
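A minimal sketch of the numerical search in (30), assuming a single evidence variable, a flat prior, and the cross-entropy likelihood introduced later in this section; the parameterisation z = softmax(w) and the finite-difference gradient are implementation choices of this sketch, not prescribed by the paper:

```python
import math

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [x / s for x in e]

def cross_entropy(p, q):
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

def matvec(m, v):
    return [sum(a * b for a, b in zip(row, v)) for row in m]

# Hypothetical model: one evidence variable (colour), two class value types.
theta = [[-1.0, 2.0], [2.0, -1.0]]   # Theta_{C->E}
d_e_obs = [0.2, 0.8]                 # observed evidence distribution

def neg_log_objective(w):
    # z = softmax(w) keeps the candidate on the probability simplex;
    # p(d_E | z) = exp(-H(d_E, softmax(Theta z))), flat prior assumed.
    z = softmax(w)
    return cross_entropy(d_e_obs, softmax(matvec(theta, z)))

# Gradient descent with a finite-difference gradient (a sketch, not efficient).
w = [0.0, 0.0]
eps, lr = 1e-5, 0.5
for _ in range(500):
    grad = []
    for i in range(len(w)):
        wp = list(w)
        wp[i] += eps
        grad.append((neg_log_objective(wp) - neg_log_objective(w)) / eps)
    w = [wi - lr * gi for wi, gi in zip(w, grad)]

d_c_hat = softmax(w)
print(d_c_hat)   # puts most mass on the class value whose predicted evidence fits
```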
{
"text": "To classify a situation s with respect to each C_j \u2208 R(C),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification using a transformation model",
"sec_num": "4.2"
},
{
"text": "(31) p\u0302(s : C_j) = d\u0302_C(s)_j",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification using a transformation model",
"sec_num": "4.2"
},
{
"text": "In the fruit game, for each C_j \u2208 R(Fruit),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification using a transformation model",
"sec_num": "4.2"
},
{
"text": "(32) p\u0302(s : C_j) = d\u0302_C(s)_j = (argmax_{z\u2208[0,1]^2} p\u0302(d_{Col}(s) | z) p\u0302(d_{Shp}(s) | z) prior(z))_j",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification using a transformation model",
"sec_num": "4.2"
},
{
"text": "Conditional probabilities Instead of estimating the conditional probability of an evidence value type given a class value type, as in the frequentist model, we here estimate the conditional probability of a distribution over evidence value types given a distribution over the class value types belonging to the class variable type. The probability of an observed probability distribution d_{E_i}(s) over evidence value types E_j \u2208 R(E_i) for a situation s, given a distribution d_C(s) over the class value types for s, can be estimated as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification using a transformation model",
"sec_num": "4.2"
},
{
"text": "(33) p\u0302(d_{E_i}(s) | d_C(s)) = e^{\u2212H(d_{E_i}(s), \u010f_{E_i}(d_C(s)))}, where H(d_{E_i}(s), \u010f_{E_i}(d_C(s)))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification using a transformation model",
"sec_num": "4.2"
},
{
"text": "is the cross entropy between the observed distribution d_{E_i}(s) over the evidence and the distribution \u010f_{E_i}(d_C(s)) over the evidence variable type E_i as predicted by the model \u0398^\u03ba_{C\u2192E_i} based on a (hypothetical) distribution over the class variable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification using a transformation model",
"sec_num": "4.2"
},
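Equation (33) can be sketched directly; the toy distributions below are invented for illustration:

```python
import math

def cross_entropy(p, q):
    # H(p, q) = -sum_j p_j log q_j
    return -sum(pj * math.log(qj) for pj, qj in zip(p, q))

def likelihood(observed, predicted):
    # Equation (33): p(d_E | d_C) = exp(-H(observed, predicted)).
    return math.exp(-cross_entropy(observed, predicted))

obs = [0.2, 0.8]
good_pred = [0.25, 0.75]   # prediction close to the observation
bad_pred = [0.9, 0.1]      # prediction far from the observation
assert likelihood(obs, good_pred) > likelihood(obs, bad_pred)
```

The inverse exponential turns a small cross entropy (a good fit) into a likelihood close to 1, and a large cross entropy into a likelihood close to 0.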
{
"text": "Probability Density Functions In reality, d_C(s) is a continuous variable (since it is a probability distribution), so p(d_C(s)) = 0. That is, since there are uncountably many possible probability distributions, the probability of any single one of them is zero.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification using a transformation model",
"sec_num": "4.2"
},
{
"text": "However, the same kind of formula works for Probability Density Functions (PDFs), which give probability distributions over a continuous variable. Writing f for a PDF, we have:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification using a transformation model",
"sec_num": "4.2"
},
{
"text": "(34) f_{d_{E_i}(s)}(d_C(s)) \u221d p\u0302(d_{E_i}(s) | d_C(s)) \u00d7 f_{prior}(d_C(s))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification using a transformation model",
"sec_num": "4.2"
},
{
"text": "corresponding to (28), repeated here as (35):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification using a transformation model",
"sec_num": "4.2"
},
{
"text": "(35) p(d_C(s) | d_{E_i}(s)) \u221d p(d_{E_i}(s) | d_C(s)) \u00d7 prior(d_C(s))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification using a transformation model",
"sec_num": "4.2"
},
{
"text": "As before, when classifying we want the distribution d\u0302_C(s) that maximises the probability that the model \u0398 accounts for the evidence. For a single evidence variable, the evidence is d_{E_i}(s):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification using a transformation model",
"sec_num": "4.2"
},
{
"text": "(36) d\u0302^\u03ba_C(s) = argmax_z p\u0302(d_{E_i}(s) | z) f_{prior}(z)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification using a transformation model",
"sec_num": "4.2"
},
{
"text": "For n evidence variables:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification using a transformation model",
"sec_num": "4.2"
},
{
"text": "(37) d\u0302^\u03ba_C(s) = argmax_z p\u0302(d_{E_1}(s) | z) . . . p\u0302(d_{E_n}(s) | z) f_{prior}(z)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification using a transformation model",
"sec_num": "4.2"
},
{
"text": "corresponding to (30), repeated here as (38):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification using a transformation model",
"sec_num": "4.2"
},
{
"text": "(38) d\u0302^\u03ba_C(s) = argmax_{z\u2208[0,1]^{|R(C)|}} p\u0302(d_{E_1}(s) | z) . . . p\u0302(d_{E_n}(s) | z) prior(z)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification using a transformation model",
"sec_num": "4.2"
},
{
"text": "Priors for d_C There are many ways to give a prior for d_C. We know that (1) it must be a function of \u0398 and (2) it must be a probability distribution. One way to satisfy these requirements is to follow the same recipe as for evidence (but with no dependency). According to this recipe, we have the formula:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification using a transformation model",
"sec_num": "4.2"
},
{
"text": "d\u0302_C(s) = softmax(\u0398^\u03ba_C)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification using a transformation model",
"sec_num": "4.2"
},
{
"text": "This way, there is a functional dependency from \u0398 to d\u0302_C(s), and therefore any prior density function on \u0398 yields another density function on d\u0302_C(s), called hereafter f_{prior}(d\u0302_C(s)) 5 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification using a transformation model",
"sec_num": "4.2"
},
{
"text": "Note that here, \u0398^\u03ba_C is a vector, not a matrix. The priors of the elements of \u0398 can be independent uniform distributions over the reals.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification using a transformation model",
"sec_num": "4.2"
},
{
"text": "It remains to see how \u0398 gets updated by a learning event j. To do so, we use Bayesian reasoning again. We start by evaluating the probability, given a fixed value of \u0398, of a learning event in the form of a newly observed situation s associated with j. As in the frequentist account of learning, we assume that our agent A has stored in J probabilistic judgements providing probability distributions for s over the class and evidence variables (or, in the general case of a Bayes net, all evidence variables and unobserved variables).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "4.3"
},
{
"text": "We want to assign a high probability if the observations match the prediction and a low one otherwise. Following standard practice in information theory, we assign the inverse exponential of the cross entropy between each characteristic's observed distribution and the predicted distribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "4.3"
},
{
"text": "When learning from a tutor, as in the fruit recognition game, the learning agent computes the cross entropy between the predicted (estimated) distribution d\u0302_C(s) and the distribution d_C(s) based on the teacher's input, which is here treated as ground truth. By contrast, in the frequentist account, the predicted d\u0302_C played no role in learning (although it did affect the learning agent's guess).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "4.3"
},
{
"text": "Using J^\u03ba(s) as shorthand for the probabilistic judgements concerning a situation s (with respect to an evidence variable E_i and a class variable C used by a classifier \u03ba) encoded in J (concretely, the observed probability distributions for s over E_i and C), we can compute the conditional probability of these judgements given a classifier parameter matrix \u0398^\u03ba thus:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "4.3"
},
{
"text": "(39) p(J^\u03ba(s) | \u0398^\u03ba) = p(d_{E_i}(s), d_C(s) | \u0398^\u03ba) = e^{\u2212H(d_C(s), d\u0302_C(s))} \u00d7 \u03a0_i e^{\u2212H(d_{E_i}(s), d\u0302_{E_i}(s))}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "4.3"
},
{
"text": "Using the same kind of Bayesian reasoning as always, we can marginalise:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "4.3"
},
{
"text": "p(\u0398 | J(s)) \u221d p(J(s) | \u0398) p(\u0398)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "4.3"
},
{
"text": "A benefit of this model is that the estimates of the various probabilities depend only on \u0398. This means that the agent need not remember the whole history J, only the distribution of \u0398 (over all \u039e \u2208 Parameters). (Yet one can consider several learning events jointly when performing a Bayesian update.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "4.3"
},
{
"text": "In practice, an actual agent will only work with an approximation of this distribution. For example, a neural net may remember just a single \u0398, and instead of a Bayesian update it takes the gradient of p(J(s)|\u0398) with respect to \u0398 and updates \u0398 accordingly, ascending the likelihood (equivalently, descending the cross-entropy loss): \u0398 := \u0398 + \u03b1 dp(J(s)|\u0398)/d\u0398",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "4.3"
},
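A sketch of such a point-estimate update, descending the cross-entropy loss \u2212log p(J(s)|\u0398) with a finite-difference gradient (equivalent to ascending the likelihood); all numbers and the two-by-two setup are invented for illustration:

```python
import math

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [x / s for x in e]

def cross_entropy(p, q):
    return -sum(pj * math.log(qj) for pj, qj in zip(p, q))

# One learning event: the teacher's class distribution and the observed evidence.
d_c_teacher = [1.0, 0.0]   # "this is an apple" (treated as ground truth)
d_e_obs = [0.1, 0.9]       # observed colour distribution

def loss(theta):
    # -log p(J(s) | Theta) restricted to the evidence term: the cross entropy
    # between the observed evidence and the model's prediction from the class.
    pred = softmax([sum(r * c for r, c in zip(row, d_c_teacher)) for row in theta])
    return cross_entropy(d_e_obs, pred)

theta = [[0.0, 0.0], [0.0, 0.0]]   # flat initial parameters
alpha, eps = 0.5, 1e-5
for _ in range(200):
    for i in range(2):
        for j in range(2):
            bumped = [row[:] for row in theta]
            bumped[i][j] += eps
            g = (loss(bumped) - loss(theta)) / eps
            theta[i][j] -= alpha * g   # descend the loss, i.e. ascend p(J(s)|Theta)

# After training, the model predicts the observed evidence from the class.
pred = softmax([sum(r * c for r, c in zip(row, d_c_teacher)) for row in theta])
```

A real neural-network implementation would of course use backpropagation rather than finite differences; the point here is only the shape of the update.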
{
"text": "Insofar as the agent updates parameters directly, rather than updating the judgement history J and using it to compute classifier parameters, this account addresses problem P2 noted above. Furthermore, the fact that the proposed model has an explicit learning factor is key to addressing some of the other problems noted above. Since the learning factor explicitly regulates the impact of new examples relative to previous observations, it enables us to address problem P3. To address problem P4, definitions can be associated with a higher learning factor than examples, modelling the hypothesis that definitions have a much larger potential impact than examples on an agent's take on a meaning. We could also use the learning factor \u03b1 to model how much the teacher's judgement is prioritised over the agent's own judgement estimation (based on perception of the situation), thereby addressing problem P5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "4.3"
},
{
"text": "Previous work proposed a frequentist Bayesian account of semantic classification and learning formulated in terms of a Probabilistic Type Theory with Records. We observed some problems with this approach, including accounting for the effect of definitions as opposed to examples in learning meanings from interaction, and proposed an alternative account of learning that keeps the broadly Bayesian model of classification, but where classification is based on a linear transformation model. We argued that the account proposed here can address some of the problems of the frequentist account.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and future work",
"sec_num": "5"
},
{
"text": "In future work, we wish to implement both the frequentist model (including some amendments to address observed problems) and the linear transformation model, and evaluate and compare them practically with respect to the problems P1-P5 noted above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and future work",
"sec_num": "5"
},
{
"text": "One might argue that the interactive learning setting already addresses P1 to the extent that tutor input can override the agent's judgement concerning unseen types. To address P2, the frequentist model could be amended with exponential decay over J. To address P3, some method of caching conditional probabilities and priors computed from J, and updating them only when needed, might be devised. To address P4, one could let a definition lead to adding some relatively high number N of \"fake\" observations in line with the definition to J. By manipulating N, the relative importance of definitions relative to observations can be regulated. To address P5, judgements s : C regarding class value types C that are added to J could be made to reflect a combination of teacher judgement and other factors, including the agent's own estimation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "It is also possible to not decide on one distribution, but to keep a distribution over distributions over the class variable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Unfortunately, because softmax is not a bijective function, there is no simple formula connecting these PDFs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was supported by grant 2014-39 from the Swedish Research Council (VR) for the establishment of the Centre for Linguistic Theory and Studies in Probability (CLASP) at the University of Gothenburg.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Situations and Attitudes",
"authors": [
{
"first": "Jon",
"middle": [],
"last": "Barwise",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Perry",
"suffix": ""
}
],
"year": 1983,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jon Barwise and John Perry. 1983. Situations and At- titudes. Bradford Books. MIT Press, Cambridge, Mass.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Enthymemes and Topoi in Dialogue: The Use of Common Sense Reasoning in Conversation",
"authors": [
{
"first": "Ellen",
"middle": [],
"last": "Breitholtz",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1163/9789004436794"
]
},
"num": null,
"urls": [],
"raw_text": "Ellen Breitholtz. 2020. Enthymemes and Topoi in Dia- logue: The Use of Common Sense Reasoning in Con- versation. Brill, Leiden, The Netherlands.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Records and record types in semantic theory",
"authors": [
{
"first": "Robin",
"middle": [],
"last": "Cooper",
"suffix": ""
}
],
"year": 2005,
"venue": "Journal of Logic and Computation",
"volume": "15",
"issue": "2",
"pages": "99--112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robin Cooper. 2005. Records and record types in se- mantic theory. Journal of Logic and Computation, 15(2):99-112.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Type theory and semantics in flux",
"authors": [
{
"first": "Robin",
"middle": [],
"last": "Cooper",
"suffix": ""
}
],
"year": 2012,
"venue": "Handbook of the Philosophy of Science",
"volume": "14",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robin Cooper. 2012a. Type theory and semantics in flux. In Ruth Kempson, Nicholas Asher, and Tim Fernando, editors, Handbook of the Philosophy of Science, volume 14: Philosophy of Linguistics. El- sevier BV. General editors: Dov M. Gabbay, Paul Thagard and John Woods.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Type theory and semantics in flux",
"authors": [
{
"first": "Robin",
"middle": [],
"last": "Cooper",
"suffix": ""
}
],
"year": 2012,
"venue": "Handbook of the Philosophy of Science",
"volume": "14",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robin Cooper. 2012b. Type theory and semantics in flux. In Ruth Kempson, Nicholas Asher, and Tim Fernando, editors, Handbook of the Philosophy of Science, volume 14: Philosophy of Linguistics. El- sevier BV. General editors: Dov M. Gabbay, Paul Thagard and John Woods.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A probabilistic rich type theory for semantic interpretation",
"authors": [
{
"first": "Robin",
"middle": [],
"last": "Cooper",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Dobnik",
"suffix": ""
},
{
"first": "Shalom",
"middle": [],
"last": "Lappin",
"suffix": ""
},
{
"first": "Staffan",
"middle": [],
"last": "Larsson",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the EACL 2014 Workshop on Type Theory and Natural Language Semantics (TTNLS)",
"volume": "",
"issue": "",
"pages": "72--79",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robin Cooper, Simon Dobnik, Shalom Lappin, and Staffan Larsson. 2014. A probabilistic rich type theory for semantic interpretation. In Proceedings of the EACL 2014 Workshop on Type Theory and Natural Language Semantics (TTNLS), pages 72-79. Gothenburg, Association of Computational Linguis- tics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Probabilistic type theory and natural language semantics. Linguistic Issues in Language Technology",
"authors": [
{
"first": "Robin",
"middle": [],
"last": "Cooper",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Dobnik",
"suffix": ""
},
{
"first": "Shalom",
"middle": [],
"last": "Lappin",
"suffix": ""
},
{
"first": "Staffan",
"middle": [],
"last": "Larsson",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "10",
"issue": "",
"pages": "1--43",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robin Cooper, Simon Dobnik, Shalom Lappin, and Staffan Larsson. 2015. Probabilistic type theory and natural language semantics. Linguistic Issues in Language Technology 10, pages 1-43.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Type theory with records for natural language semantics",
"authors": [
{
"first": "Robin",
"middle": [],
"last": "Cooper",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Ginzburg",
"suffix": ""
}
],
"year": 2015,
"venue": "The Handbook of Contemporary Semantic Theory",
"volume": "",
"issue": "",
"pages": "375--407",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robin Cooper and Jonathan Ginzburg. 2015. Type the- ory with records for natural language semantics. In Shalom Lappin and Chris Fox, editors, The Hand- book of Contemporary Semantic Theory, Second Edition, pages 375-407. Wiley-Blackwell, Oxford and Malden.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Compositional and ontological semantics in learning from corrective feedback and explicit definition",
"authors": [
{
"first": "Robin",
"middle": [],
"last": "Cooper",
"suffix": ""
},
{
"first": "Staffan",
"middle": [],
"last": "Larsson",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of DiaHolmia: 2009 Workshop on the Semantics and Pragmatics of Dialogue",
"volume": "",
"issue": "",
"pages": "59--66",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robin Cooper and Staffan Larsson. 2009. Composi- tional and ontological semantics in learning from corrective feedback and explicit definition. In Pro- ceedings of DiaHolmia: 2009 Workshop on the Se- mantics and Pragmatics of Dialogue, pages 59-66. Department of Speech, Music and Hearing, KTH.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Modelling language, action, and perception in Type Theory with Records",
"authors": [
{
"first": "Simon",
"middle": [],
"last": "Dobnik",
"suffix": ""
},
{
"first": "Robin",
"middle": [],
"last": "Cooper",
"suffix": ""
},
{
"first": "Staffan",
"middle": [],
"last": "Larsson",
"suffix": ""
}
],
"year": 2012,
"venue": "Constraint Solving and Language Processing -7th International Workshop on Constraint Solving and Language Processing",
"volume": "2012",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simon Dobnik, Robin Cooper, and Staffan Larsson. 2013. Modelling language, action, and perception in Type Theory with Records. In Denys Duchier and Yannick Parmentier, editors, Constraint Solving and Language Processing -7th International Work- shop on Constraint Solving and Language Process- ing, CSLP 2012, Orleans, France, September 13- 14, 2012. Revised Selected Papers, number 8114 in Publications on Logic, Language and Information (FoLLI). Springer, Berlin, Heidelberg.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Vagueness and learning: A type-theoretic approach",
"authors": [
{
"first": "Raquel",
"middle": [],
"last": "Fern\u00e1ndez",
"suffix": ""
},
{
"first": "Staffan",
"middle": [],
"last": "Larsson",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 3rd Joint Conference on Lexical and Computational Semantics ( * SEM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raquel Fern\u00e1ndez and Staffan Larsson. 2014. Vague- ness and learning: A type-theoretic approach. In Proceedings of the 3rd Joint Conference on Lexical and Computational Semantics ( * SEM 2014).",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "The Interactive Stance: Meaning for Conversation",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Ginzburg",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Ginzburg. 2012. The Interactive Stance: Meaning for Conversation. Oxford University Press, Oxford.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Reasoning About Uncertainty",
"authors": [
{
"first": "J",
"middle": [],
"last": "Halpern",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Halpern. 2003. Reasoning About Uncertainty. MIT Press, Cambridge MA.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Formal semantics for perceptual classification",
"authors": [
{
"first": "",
"middle": [],
"last": "Staffan Larsson",
"suffix": ""
}
],
"year": 2015,
"venue": "Journal of Logic and Computation",
"volume": "25",
"issue": "2",
"pages": "335--369",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Staffan Larsson. 2015. Formal semantics for percep- tual classification. Journal of Logic and Computa- tion, 25(2):335-369. Published online 2013-12-18.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Discrete and probabilistic classifier-based semantics",
"authors": [
{
"first": "",
"middle": [],
"last": "Staffan Larsson",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Probability and Meaning Conference",
"volume": "",
"issue": "",
"pages": "62--68",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Staffan Larsson. 2020. Discrete and probabilistic classifier-based semantics. In Proceedings of the Probability and Meaning Conference (PaM 2020), pages 62-68, Gothenburg. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The role of definitions in coordinating on perceptual meanings",
"authors": [
{
"first": "",
"middle": [],
"last": "Staffan Larsson",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the Workshop on the Semantics and Pragmatics of Dialogue",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Staffan Larsson. 2021. The role of definitions in coor- dinating on perceptual meanings. In Proceedings of the Workshop on the Semantics and Pragmatics of Dialogue (SemDial 2021).",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Semantic learning in a probabilistic type theory with records",
"authors": [
{
"first": "Staffan",
"middle": [],
"last": "Larsson",
"suffix": ""
},
{
"first": "Jean-Philippe",
"middle": [],
"last": "Bernardy",
"suffix": ""
},
{
"first": "Robin",
"middle": [],
"last": "Cooper",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of Workshop on Computing Semantics with Types, Frames and Related Structures",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Staffan Larsson, Jean-Philippe Bernardy, and Robin Cooper. 2021. Semantic learning in a probabilistic type theory with records. In Proceedings of Work- shop on Computing Semantics with Types, Frames and Related Structures 2021.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Towards a formal view of corrective feedback",
"authors": [
{
"first": "Staffan",
"middle": [],
"last": "Larsson",
"suffix": ""
},
{
"first": "Robin",
"middle": [],
"last": "Cooper",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Workshop on Cognitive Aspects of Computational Language Acquisition",
"volume": "",
"issue": "",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Staffan Larsson and Robin Cooper. 2009. Towards a formal view of corrective feedback. In Proceedings of the Workshop on Cognitive Aspects of Computa- tional Language Acquisition, pages 1-9. EACL.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Bayesian classification and inference in a probabilistic type theory with records",
"authors": [
{
"first": "Staffan",
"middle": [],
"last": "Larsson",
"suffix": ""
},
{
"first": "Robin",
"middle": [],
"last": "Cooper",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of NALOMA 2021",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Staffan Larsson and Robin Cooper. 2021. Bayesian classification and inference in a probabilistic type theory with records. In Proceedings of NALOMA 2021.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Dialogue acts and updates for semantic coordination",
"authors": [
{
"first": "Staffan",
"middle": [],
"last": "Larsson",
"suffix": ""
},
{
"first": "Jenny",
"middle": [],
"last": "Myrendal",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Staffan Larsson and Jenny Myrendal. 2017. Dialogue acts and updates for semantic coordination. SEM- DIAL 2017 SaarDial, page 59.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Negotiating meanings online: Disagreements about word meaning in discussion forum communication",
"authors": [
{
"first": "Jenny",
"middle": [],
"last": "Myrendal",
"suffix": ""
}
],
"year": 2019,
"venue": "Discourse Studies",
"volume": "21",
"issue": "3",
"pages": "317--339",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jenny Myrendal. 2019. Negotiating meanings online: Disagreements about word meaning in discussion fo- rum communication. Discourse Studies, 21(3):317- 339.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Bayesian decision methods",
"authors": [
{
"first": "J",
"middle": [],
"last": "Pearl",
"suffix": ""
}
],
"year": 1990,
"venue": "Readings in Uncertain Reasoning",
"volume": "",
"issue": "",
"pages": "345--352",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Pearl. 1990. Bayesian decision methods. In G. Shafer and J. Pearl, editors, Readings in Uncer- tain Reasoning, pages 345-352. Morgan Kaufmann.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Schema of record and record type"
},
"FIGREF2": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Evidence and Class in a Naive Bayes classifier. (5) p(c | e_1, . . . , e_n) = p(c) p(e_1 | c) . . . p(e_n | c) / \u03a3_{c'} p(c') p(e_1 | c') . . . p(e_n | c'). 2.4 Random variables in TTR Larsson and Cooper (2021) introduce a type-theoretic counterpart of a random variable in Bayesian inference. To represent a single (discrete) random variable with a range of possible (mutually exclusive) values, ProbTTR uses a variable type V whose range is a set of value types R("
},
"FIGREF3": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Bayesian Network for the Apple Recognition Game. For a situation s, the classifier FruitC(s) returns a probability distribution over the value types in R(Fruit)."
},
"TABREF0": {
"html": null,
"content": "<table/>",
"text": "The variable types are (i) Col(our), with value types R(Col) = {Green, Red}, and (ii) Shape, with value types R(Shape) = {Ashape, Pshape}. Figure 4 shows the evidence and class types of the Apple Recognition Game in a simple Bayesian Network.",
"type_str": "table",
"num": null
}
}
}
}