{
"paper_id": "W09-0209",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T06:37:00.454618Z"
},
"title": "SVD Feature Selection for Probabilistic Taxonomy Learning",
"authors": [
{
"first": "Fallucchi",
"middle": [],
"last": "Francesca",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University \"Tor Vergata\" Rome",
"location": {
"settlement": "Disp",
"country": "Italy"
}
},
"email": ""
},
{
"first": "Fabio",
"middle": [
"Massimo"
],
"last": "Zanzotto",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we propose a novel way to include unsupervised feature selection methods in probabilistic taxonomy learning models. We leverage on the computation of logistic regression to exploit unsupervised feature selection of singular value decomposition (SVD). Experiments show that this way of using SVD for feature selection positively affects performances.",
"pdf_parse": {
"paper_id": "W09-0209",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we propose a novel way to include unsupervised feature selection methods in probabilistic taxonomy learning models. We leverage on the computation of logistic regression to exploit unsupervised feature selection of singular value decomposition (SVD). Experiments show that this way of using SVD for feature selection positively affects performances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Taxonomies are extremely important knowledge repositories in a variety of applications for natural language processing and knowledge representation. Yet, manually built taxonomies such as WordNet (Miller, 1995) often lack in coverage when used in specific knowledge domains. Automatically creating or extending taxonomies for specific domains is then a very interesting area of research (O'Sullivan et al., 1995; Magnini and Speranza, 2001; Snow et al., 2006) . Automatic methods for learning taxonomies from corpora often use distributional hypothesis (Harris, 1964) and exploit some induced lexical-syntactic patterns (Hearst, 1992; Pantel and Pennacchiotti, 2006) . In these models, within a very large set, candidate word pairs are selected as new word pairs in hyperonymy and added to an existing taxonomy. Candidate pairs are represented in some feature space. Often, these feature spaces are huge and, then, models may take into consideration noisy features.",
"cite_spans": [
{
"start": 196,
"end": 210,
"text": "(Miller, 1995)",
"ref_id": "BIBREF12"
},
{
"start": 387,
"end": 412,
"text": "(O'Sullivan et al., 1995;",
"ref_id": "BIBREF14"
},
{
"start": 413,
"end": 440,
"text": "Magnini and Speranza, 2001;",
"ref_id": "BIBREF10"
},
{
"start": 441,
"end": 459,
"text": "Snow et al., 2006)",
"ref_id": "BIBREF18"
},
{
"start": 553,
"end": 567,
"text": "(Harris, 1964)",
"ref_id": "BIBREF6"
},
{
"start": 620,
"end": 634,
"text": "(Hearst, 1992;",
"ref_id": "BIBREF7"
},
{
"start": 635,
"end": 666,
"text": "Pantel and Pennacchiotti, 2006)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In machine learning, feature selection has been often used to reduce the dimensions in huge feature spaces. This has many advantages, e.g., reducing the computational cost and improving performances by removing noisy features (Guyon and Elisseeff, 2003) .",
"cite_spans": [
{
"start": 226,
"end": 253,
"text": "(Guyon and Elisseeff, 2003)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose a novel way to include unsupervised feature selection methods in probabilistic taxonomy learning models. Given the probabilistic taxonomy learning model introduced by (Snow et al., 2006) , we leverage on the computation of logistic regression to exploit singular value decomposition (SVD) as unsupervised feature selection. SVD is used to compute the pseudo-inverse matrix needed in logistic regression.",
"cite_spans": [
{
"start": 193,
"end": 212,
"text": "(Snow et al., 2006)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To describe our idea, we firstly review how SVD can be used as unsupervised feature selection (Sec. 2). In Section 3 we then describe the probabilistic taxonomy learning model introduced by (Snow et al., 2006) . We will then shortly review the logistic regression used to compute the taxonomy learning model to describe where SVD can be naturally used. We will describe our experiments in Sec. 4. Finally, we will draw some conclusions and describe our future work (Sec. 5).",
"cite_spans": [
{
"start": 190,
"end": 209,
"text": "(Snow et al., 2006)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Singular value decomposition (SVD) is one of the possible factorization of a rectangular matrix that has been largely used in information retrieval for reducing the dimension of the document vector space (Deerwester et al., 1990) . The decomposition can be defined as follows. Given a generic rectangular n \u00d7 m matrix A, its singular value decomposition is:",
"cite_spans": [
{
"start": 204,
"end": 229,
"text": "(Deerwester et al., 1990)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised feature selection with Singular Value Decomposition",
"sec_num": "2"
},
{
"text": "A = U \u03a3V T",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised feature selection with Singular Value Decomposition",
"sec_num": "2"
},
{
"text": "where U is a matrix n \u00d7 r, V T is a r \u00d7 m and \u03a3 is a diagonal matrix r \u00d7 r. The two matrices U and V are unitary, i.e., U T U = I and V T V = I. The diagonal elements of the \u03a3 are the singular values such as \u03b4 1 \u2265 \u03b4 2 \u2265 ... \u2265 \u03b4 r > 0 where r is the rank of the matrix A. For the decomposition, SVD exploits the linear combination of rows and columns of A.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised feature selection with Singular Value Decomposition",
"sec_num": "2"
},
{
"text": "A first trivial way of using SVD as unsupervised feature reduction is the following. Given E as set of training examples represented in a feature space of n features, we can observe it as a matrix, i.e. a sequence of examples E = ( \u2212 \u2192 e 1 ... \u2212 \u2192 e m ). With SVD, the n \u00d7 m matrix E can be factorized as E = U \u03a3V T . This factorization implies we can focus the learning problem on a new space using the transformation provided by the matrix U . This new space is represented by the matrix:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised feature selection with Singular Value Decomposition",
"sec_num": "2"
},
{
"text": "E = U T E = \u03a3V T (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised feature selection with Singular Value Decomposition",
"sec_num": "2"
},
{
"text": "where each example is represented with r new features. Each new feature is obtained as a linear combination of the original features, i.e. each feature vector \u2212 \u2192 e l can be seen as a new feature vector \u2212 \u2192 e l = U T \u2212 \u2192 e l . When the target feature space is big whereas the cardinality of the training set is small, i.e., n >> m, the application of SVD results in a reduction of the original feature space as the rank r of the matrix E is r \u2264 min(n, m).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised feature selection with Singular Value Decomposition",
"sec_num": "2"
},
{
"text": "A more interesting way of using SVD as unsupervised feature selection model is to exploit its approximated computations, i.e. :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised feature selection with Singular Value Decomposition",
"sec_num": "2"
},
{
"text": "A \u2248 A k = U m\u00d7k \u03a3 k\u00d7k V T k\u00d7n",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised feature selection with Singular Value Decomposition",
"sec_num": "2"
},
{
"text": "where k is smaller than the rank r. The computation algorithm (Golub and Kahan, 1965) is allowed to stop at a given k different from the real rank r. The property of the singular values, i.e., \u03b4 1 \u2265 \u03b4 2 \u2265 ... \u2265 \u03b4 r > 0, guarantees that the first k are bigger than the discarded ones. There is a direct relation between the informativeness of the dimension and the value of the singular value. High singular values correspond to dimensions of the new space where examples have more variability whereas low singular values determine dimensions where examples have a smaller variability (see (Liu, 2007) ). These dimensions can not be used as discriminative features in learning algorithms. The possibility of computing the approximated version of the matrix gives a powerful method for feature selection and filtering as we can decide in advance how many features or, better, linear combination of original features we want to use.",
"cite_spans": [
{
"start": 62,
"end": 85,
"text": "(Golub and Kahan, 1965)",
"ref_id": "BIBREF4"
},
{
"start": 589,
"end": 600,
"text": "(Liu, 2007)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised feature selection with Singular Value Decomposition",
"sec_num": "2"
},
{
"text": "As feature selection model, SVD is unsupervised in the sense that the feature selection is done without taking into account the final classes of the training examples. This is not always the case, feature selection models such as those based on Information Gain largely use the final classes of training examples. SVD as feature selection is independent from the classification problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised feature selection with Singular Value Decomposition",
"sec_num": "2"
},
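The truncated decomposition described above can be sketched with NumPy. This is a hedged illustration on toy data: the matrix sizes, random values, and the availability of `numpy` are assumptions for the example, not part of the paper.

```python
import numpy as np

# Toy evidence matrix E: n features x m examples, with n >> m as in the
# paper; sizes and random values are illustrative assumptions.
rng = np.random.default_rng(0)
n, m = 50, 8
E = rng.normal(size=(n, m))

# Thin SVD: E = U @ diag(s) @ Vt, with r = rank(E) <= min(n, m)
U, s, Vt = np.linalg.svd(E, full_matrices=False)

# Keep only the k strongest singular directions (unsupervised selection)
k = 4
E_k = U[:, :k].T @ E          # each example now has k features

# Consistent with Eq. 1: U^T E equals Sigma V^T on the kept dimensions
assert E_k.shape == (k, m)
assert np.allclose(E_k, np.diag(s[:k]) @ Vt[:k, :])
```

The selection is unsupervised: no class label enters the computation, only the variability of the examples.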
{
"text": "Recently, Snow et al. (2006) introduced a probabilistic model for learning taxonomies form corpora. This probabilistic formulation exploits the two well known hypotheses: the distributional hypothesis (Harris, 1964) and the exploitation of the lexico-syntactic patterns as in (Robison, 1970; Hearst, 1992 ). Yet, in this formulation, we can positively and naturally introduce our use of SVD as feature selection model. In the rest of this section we will firstly introduce the probabilistic model (Sec. 3.1) and, then, we will describe how SVD is used as feature selector in the logistic regression that estimates the probabilities of the model. To describe this part we need to go in depth into the definition of the logistic regression (Sec. 3.2) and the way of estimating the regression coefficients (Sec. 3.3) . This will open the possibility of describing how we exploit SVD (Sec. 3.4)",
"cite_spans": [
{
"start": 10,
"end": 28,
"text": "Snow et al. (2006)",
"ref_id": "BIBREF18"
},
{
"start": 201,
"end": 215,
"text": "(Harris, 1964)",
"ref_id": "BIBREF6"
},
{
"start": 276,
"end": 291,
"text": "(Robison, 1970;",
"ref_id": "BIBREF17"
},
{
"start": 292,
"end": 304,
"text": "Hearst, 1992",
"ref_id": "BIBREF7"
},
{
"start": 803,
"end": 813,
"text": "(Sec. 3.3)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Taxonomy Learning and SVD feature selection",
"sec_num": "3"
},
{
"text": "In the probabilistic formulation (Snow et al., 2006) , the task of learning taxonomies from a corpus is seen as a probability maximization problem. The taxonomy is seen as a set T of assertions R over pairs R i,j . If R i,j is in T , i is a concept and j is one of its generalization (i.e., the direct or the indirect generalization). For example, R dog,animal \u2208 T describes that dog is an animal. The main innovation of this probabilistic method is the ability of taking into account in a single probability the information coming from the corpus and an existing taxonomy T .",
"cite_spans": [
{
"start": 33,
"end": 52,
"text": "(Snow et al., 2006)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic model",
"sec_num": "3.1"
},
{
"text": "The main probabilities are then: (1) the prior probability P (R i,j \u2208 T ) of an assertion R i,j to belong to the taxonomy T and (2) the posterior probability",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic model",
"sec_num": "3.1"
},
{
"text": "P (R i,j \u2208 T | \u2212 \u2192 e i,j ) of an assertion R i,j",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic model",
"sec_num": "3.1"
},
{
"text": "to belong to the taxonomy T given a set of evidences \u2212 \u2192 e i,j derived from the corpus. Evidences is a feature vector associated with a pair (i, j). For examples, a feature may describe how many times i and j are seen in patterns like \"i as j\" or \"i is a j\". These among many other features are indicators of an is-a relation between i and j (see (Hearst, 1992) ). Given a set of evidences E over all the relevant word pairs, in (Snow et al., 2006) , the probabilistic taxonomy learning task is defined as the problem of finding the taxonomy T that maximizes the probability of having the evidences E, i.e.:",
"cite_spans": [
{
"start": 347,
"end": 361,
"text": "(Hearst, 1992)",
"ref_id": "BIBREF7"
},
{
"start": 429,
"end": 448,
"text": "(Snow et al., 2006)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic model",
"sec_num": "3.1"
},
{
"text": "T = arg max T P (E|T )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic model",
"sec_num": "3.1"
},
{
"text": "In (Snow et al., 2006) , this maximization problem is solved with a local search. What is maximized at each step is the increase of the probability P (E|T ) of the taxonomy when the taxonomy changes from T to T = T \u222a N where N are the relations added at each step. This increase of probabilities is defined as multiplicative change \u2206(N ) as follows:",
"cite_spans": [
{
"start": 3,
"end": 22,
"text": "(Snow et al., 2006)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic model",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2206(N ) = P (E|T )/P (E|T )",
"eq_num": "(2)"
}
],
"section": "Probabilistic model",
"sec_num": "3.1"
},
{
"text": "The main innovation of the model in (Snow et al., 2006) is the possibility of adding at each step the best relation",
"cite_spans": [
{
"start": 36,
"end": 55,
"text": "(Snow et al., 2006)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic model",
"sec_num": "3.1"
},
{
"text": "N = {R i,j } as well as N = I(R i,j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic model",
"sec_num": "3.1"
},
{
"text": "that is R i,j with all the relations by the existing taxonomy. We will then experiment with our feature selection methodology in the two different models:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic model",
"sec_num": "3.1"
},
{
"text": "flat: at each iteration step, a single relation is added, i.e. R i,j = arg max R i,j \u2206(R i,j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic model",
"sec_num": "3.1"
},
{
"text": "inductive: at each iteration step, a set of relations is added, i.e. I( R i,j ) where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic model",
"sec_num": "3.1"
},
{
"text": "R i,j = arg max R i,j \u2206(I(R i,j )).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic model",
"sec_num": "3.1"
},
{
"text": "The last important fact is that it is possible to demonstrate that",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic model",
"sec_num": "3.1"
},
{
"text": "\u2206(E i,j ) = k \u2022 P (R i,j \u2208 T | \u2212 \u2192 e i,j ) 1 \u2212 P (R i,j \u2208 T | \u2212 \u2192 e i,j ) = = k \u2022 odds(R i,j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic model",
"sec_num": "3.1"
},
{
"text": "where k is a constant (see (Snow et al., 2006 )) that will be neglected in the maximization process. This last equation gives the possibility of using the logistic regression as it is. In the next sections we will see how SVD and the related feature selection can be used to compute the odds.",
"cite_spans": [
{
"start": 27,
"end": 45,
"text": "(Snow et al., 2006",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic model",
"sec_num": "3.1"
},
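The greedy local search of the flat model can be sketched as follows. This is a minimal illustration under stated assumptions: the `odds` scoring function and the candidate scores are hypothetical stand-ins for the learned logistic regression, not the paper's actual implementation.

```python
# Greedy local search (flat variant): at each step add the candidate pair
# with the highest multiplicative change Delta, which is proportional to
# odds(R_ij), stopping when no candidate would increase the probability.
def grow_taxonomy(taxonomy, candidates, odds, steps=10, threshold=1.0):
    taxonomy = set(taxonomy)
    for _ in range(steps):
        remaining = [p for p in candidates if p not in taxonomy]
        if not remaining:
            break
        best = max(remaining, key=odds)
        if odds(best) <= threshold:   # Delta <= 1: P(E|T) would not grow
            break
        taxonomy.add(best)
    return taxonomy

# Hypothetical odds standing in for the learned scores
scores = {("dog", "animal"): 9.0, ("dog", "vegetable"): 0.2,
          ("carrot", "vegetable"): 4.0}
t = grow_taxonomy(set(), list(scores), scores.get, steps=5)
assert t == {("dog", "animal"), ("carrot", "vegetable")}
```

The inductive variant differs only in scoring each candidate together with the relations it implies, i.e., maximizing Delta(I(R_ij)) instead of Delta(R_ij).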
{
"text": "Logistic Regression (Cox, 1958 ) is a particular type of statistical model for relating responses Y to linear combinations of predictor variables X. It is a specific kind of Generalized Linear Model (see (Nelder and Wedderburn, 1972) ) where its function is the logit function and the independent variable Y is a binary or dicothomic variable which has a Bernoulli distribution. The dependent variable Y takes value 0 or 1. The probability that Y has value 1 is function of the regressors",
"cite_spans": [
{
"start": 20,
"end": 30,
"text": "(Cox, 1958",
"ref_id": "BIBREF1"
},
{
"start": 204,
"end": 233,
"text": "(Nelder and Wedderburn, 1972)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Logistic Regression",
"sec_num": "3.2"
},
{
"text": "x = (1, x 1 , ..., x k ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logistic Regression",
"sec_num": "3.2"
},
{
"text": "The probabilistic taxonomy learner model introduced in the previous section falls in the category of probabilistic models where the logistic regression can be applied as R i,j \u2208 T is the binary dependent variable and \u2212 \u2192 e i,j is the vector of its regressors. In the rest of the section we will see how the odds, i.e., the multiplicative change, can be computed. We start from formally describing the Logistic Regression Model. Given the two stochastic variables Y and X, we can define as p the probability of Y to be 1 given that X=x, i.e.:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logistic Regression",
"sec_num": "3.2"
},
{
"text": "p = P (Y = 1|X = x)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logistic Regression",
"sec_num": "3.2"
},
{
"text": "The distribution of the variable Y is a Bernulli distribution, i.e.:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logistic Regression",
"sec_num": "3.2"
},
{
"text": "Y \u223c Bernoulli(p)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logistic Regression",
"sec_num": "3.2"
},
{
"text": "Given the definition of the logit(p) as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logistic Regression",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "logit(p) = ln p 1 \u2212 p",
"eq_num": "(3)"
}
],
"section": "Logistic Regression",
"sec_num": "3.2"
},
{
"text": "and given the fact that Y is a Bernoulli distribution, the logistic regression foresees that the logit is a linear combination of the values of the regressors, i.e.,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logistic Regression",
"sec_num": "3.2"
},
{
"text": "logit(p) = \u03b2 0 + \u03b2 1 x 1 + ... + \u03b2 k x k (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logistic Regression",
"sec_num": "3.2"
},
{
"text": "where \u03b2 0 , \u03b2 1 , ..., \u03b2 k are called regression coefficients of the variables x 1 , ..., x k respectively. Given the regression coefficients, it is possible to compute the probability of a given event where we observe the regressors x to be Y = 1 or in our case to belong to the taxonomy. This probability can be computed as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logistic Regression",
"sec_num": "3.2"
},
{
"text": "p(x) = exp(\u03b2 0 + \u03b2 1 x 1 + ... + \u03b2 k x k ) 1 + exp(\u03b2 0 + \u03b2 1 x 1 + ... + \u03b2 k x k )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logistic Regression",
"sec_num": "3.2"
},
{
"text": "It is obviously trivial to determine the odds(R i,j ) related to the multiplicative change of the probabilistic taxonomy model. The odds is the ratio between the positive and the negative event. It is defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logistic Regression",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "odds(R i,j ) = P (R i,j \u2208T | \u2212 \u2192 e i,j ) 1\u2212P (R i,j \u2208T | \u2212 \u2192 e i,j )",
"eq_num": "(5)"
}
],
"section": "Logistic Regression",
"sec_num": "3.2"
},
{
"text": "Then, it is strictly related with the logit, i.e.:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logistic Regression",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "odds(R i,j ) = exp(\u03b2 0 + \u2212 \u2192 e T i,j \u03b2)",
"eq_num": "(6)"
}
],
"section": "Logistic Regression",
"sec_num": "3.2"
},
{
"text": "The relationship between the possible values of the probability, odds and logit is show in the Table 1 . ",
"cite_spans": [],
"ref_spans": [
{
"start": 95,
"end": 103,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Logistic Regression",
"sec_num": "3.2"
},
{
"text": "Odds Logit 0 \u2264 p < 0.5 [0, 1) (\u2212\u221e, 0] 0.5 < p \u2264 1 [1, \u221e) [0, \u221e)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probability",
"sec_num": null
},
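The relations among probability, odds, and logit (Eqs. 4-6 and Table 1) can be checked numerically. The coefficient and feature values below are hypothetical, chosen only to illustrate the identities; only the Python standard library is used.

```python
import math

# Hedged sketch: regression coefficients are assumed already estimated
# (hypothetical values, not taken from the paper).
beta0, beta = -1.0, [2.0, 0.5]
x = [1.0, 2.0]                    # observed evidence features e_ij

z = beta0 + sum(b * xi for b, xi in zip(beta, x))   # logit(p), Eq. 4
p = math.exp(z) / (1.0 + math.exp(z))               # probability p(x)
odds = p / (1.0 - p)                                # Eq. 5

assert abs(odds - math.exp(z)) < 1e-9               # Eq. 6: odds = exp(logit)
assert (p > 0.5) == (odds > 1.0) == (z > 0.0)       # the pattern of Table 1
```

Because exp is monotonic, ranking candidate pairs by their odds is equivalent to ranking them by their logit, which is what the local search exploits.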
{
"text": "The remaining problem is how to estimate the regression coefficients. This estimation is done using the maximal likelihood estimation to prepare a set of linear equations using the above logit definition and, then, solving a linear problem. This will give us the possibility of introducing the necessity of determining a pseudo-inverse matrix where we will use the singular value decomposition and its natural possibility of performing feature selection. Once we have the regression coefficients, we have the possibility of assigning estimating a probability P (R i,j \u2208 T | \u2212 \u2192 e i,j ) given any configuration of the values of the regressors \u2212 \u2192 e i,j , i.e., the observed values of the features. For sake of simplicity we will hereafter refer to \u2212 \u2192 e i,j as \u2212 \u2192 e l .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimating Regression Coefficients",
"sec_num": "3.3"
},
{
"text": "Let assume we have a multiset O of observations extracted from Y \u00d7 E where Y \u2208 {0, 1} and we know that some of them are positive observations (i.e., Y = 1) and some of them are negative observations (i.e., Y = 0).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimating Regression Coefficients",
"sec_num": "3.3"
},
{
"text": "For each pairs the relative configuration \u2212 \u2192 e l \u2208 E that appeared at least once in O, we can determine using the maximal likelihood estimation P (Y = 1| \u2212 \u2192 e l ). Then, from the equation of the logit (Eq. 4), we have a linear equation system, i.e.:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimating Regression Coefficients",
"sec_num": "3.3"
},
{
"text": "\u2212 \u2212\u2212\u2212\u2212 \u2192 logit(p) = Q\u03b2 (7)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimating Regression Coefficients",
"sec_num": "3.3"
},
{
"text": "where Q is a matrix that includes a constant column of 1, necessary for the \u03b2 0 of the linear combination of the values of the regression. Moreover it includes the transpose of the evidence matrix, i.e. E = ( \u2212 \u2192 e 1 ... \u2212 \u2192 e m ). Therefore the matrix will be: The set of equations in Eq. 7 can be solved using multiple linear regression.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimating Regression Coefficients",
"sec_num": "3.3"
},
{
"text": "Q = \uf8eb \uf8ec \uf8ec \uf8ec \uf8ec \uf8ed 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimating Regression Coefficients",
"sec_num": "3.3"
},
{
"text": "In their general form, the equations of multiple linear regression may be written as (Caron et al., 1988) :",
"cite_spans": [
{
"start": 85,
"end": 105,
"text": "(Caron et al., 1988)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Estimating Regression Coefficients",
"sec_num": "3.3"
},
{
"text": "y = X\u03b2 + \u03b5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimating Regression Coefficients",
"sec_num": "3.3"
},
{
"text": "where:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimating Regression Coefficients",
"sec_num": "3.3"
},
{
"text": "\u2022 y is a column vector n \u00d7 1 that includes the observed values of the dependent variables Y 1 , ..., Y k ;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimating Regression Coefficients",
"sec_num": "3.3"
},
{
"text": "\u2022 X is a matrix n \u00d7 m of the values of the regressors that we have observed;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimating Regression Coefficients",
"sec_num": "3.3"
},
{
"text": "\u2022 \u03b2 is a column vector m \u00d7 1 of the regression coefficients;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimating Regression Coefficients",
"sec_num": "3.3"
},
{
"text": "\u2022 \u03b5 is a column vector including the stochastic components that have not been observed and that will not be considered later.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimating Regression Coefficients",
"sec_num": "3.3"
},
{
"text": "In the case X is a rectangular and singular matrix, the system y = X\u03b2 has not a solution. Yet, it is possible to use the principle of the Least Square Estimation. This principle determines the solution \u03b2 that minimize the residual norm, i.e.:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimating Regression Coefficients",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b2 = arg min X\u03b2 \u2212 y 2",
"eq_num": "(8)"
}
],
"section": "Estimating Regression Coefficients",
"sec_num": "3.3"
},
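The least-squares principle of Eq. 8 can be demonstrated numerically. This is a sketch on toy, noiseless data (an assumption made so that the true coefficients are recovered exactly); `numpy` is assumed available.

```python
import numpy as np

# Least Squares Estimation (Eq. 8) solved with the Moore-Penrose
# pseudoinverse: beta = X+ y.
rng = np.random.default_rng(2)
X = rng.normal(size=(10, 3))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true                 # noiseless observations

beta = np.linalg.pinv(X) @ y      # pseudoinverse solution
assert np.allclose(beta, beta_true)

# Same solution as a dedicated least-squares solver:
assert np.allclose(beta, np.linalg.lstsq(X, y, rcond=None)[0])
```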
{
"text": "This problem can be solved by the Moore-Penrose pseudoinverse X + (Penrose, 1955) . Then, the final equation to determine the \u03b2 is",
"cite_spans": [
{
"start": 66,
"end": 81,
"text": "(Penrose, 1955)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Estimating Regression Coefficients",
"sec_num": "3.3"
},
{
"text": "\u03b2 = X + y",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimating Regression Coefficients",
"sec_num": "3.3"
},
{
"text": "It is important to remark that if the inverse matrix exist X + = X \u22121 and that X + X and XX + are symmetric. For our case, the following equation is valid:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimating Regression Coefficients",
"sec_num": "3.3"
},
{
"text": "\u03b2 = Q + \u2212 \u2212\u2212\u2212\u2212 \u2192 logit(p)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimating Regression Coefficients",
"sec_num": "3.3"
},
{
"text": "We finally reached the point where it is possible to explain our idea that is naturally using singular value decomposition (SVD) as feature selection in a probabilistic taxonomy learner. In the previous sections we described how the probabilities of the taxonomy learner can be estimated using logistic regressions and we concluded that a way to determine the regression coefficients \u03b2 is computing the Moore-Penrose pseudoinverse Q + . It is possible to compute the Moore-Penrose pseudoinverse using the SVD in the following way (Penrose, 1955). Given an SVD decomposition of the matrix Q = U \u03a3V T the pseudo-inverse matrix that minimizes the Eq. 9 is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing Pseudoinverse Matrix with SVD Analysis",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Q + = V \u03a3 + U T",
"eq_num": "(9)"
}
],
"section": "Computing Pseudoinverse Matrix with SVD Analysis",
"sec_num": "3.4"
},
{
"text": "The diagonal matrix \u03a3 + is a matrix r \u00d7 r obtained first transposing \u03a3 and then calculating the reciprocals of the singular value of \u03a3. So the diagonal elements of the \u03a3 + are 1 \u03b4 1 , 1 \u03b4 2 , ..., , 1 \u03b4r . We have now our opportunity of using SVD as natural feature selector as we can compute different approximations of the pseudo-inverse matrix. As we saw in Sec. 2, the algorithm for computing the singular value decomposition can be stopped a different dimensions. We called k the number of dimensions. As we can obtain different SVD as approximations of the original matrix (Eq. 2), we can define different approximations of :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing Pseudoinverse Matrix with SVD Analysis",
"sec_num": "3.4"
},
{
"text": "Q + \u2248 Q + k = V n\u00d7k \u03a3 + k\u00d7k U T k\u00d7m",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing Pseudoinverse Matrix with SVD Analysis",
"sec_num": "3.4"
},
{
"text": "In our experiments we will use different values of k to explore the benefits of SVD as feature selector.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing Pseudoinverse Matrix with SVD Analysis",
"sec_num": "3.4"
},
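The truncated pseudoinverse can be sketched as follows. The sizes, random data, and the vector standing in for logit(p) are illustrative assumptions; the construction of Q (a column of ones plus the transposed evidences) follows Sec. 3.3.

```python
import numpy as np

# Q holds one row per observation: a leading column of ones for beta_0,
# then the evidence features (illustrative sizes, few observations).
rng = np.random.default_rng(1)
m, n = 6, 20
Q = rng.normal(size=(m, n + 1))
Q[:, 0] = 1.0                     # constant column for beta_0
logit_p = rng.normal(size=m)      # stands in for the logit(p) vector

# Moore-Penrose pseudoinverse via SVD: Q+ = V Sigma+ U^T (Eq. 9),
# truncated to the first k singular directions for feature selection.
U, s, Vt = np.linalg.svd(Q, full_matrices=False)

def pinv_k(k):
    return Vt[:k].T @ np.diag(1.0 / s[:k]) @ U[:, :k].T

# With k equal to the rank of Q this matches numpy's own pseudoinverse.
r = np.linalg.matrix_rank(Q)
assert np.allclose(pinv_k(r), np.linalg.pinv(Q))

beta = pinv_k(4) @ logit_p        # coefficients from a k = 4 approximation
assert beta.shape == (n + 1,)
```

Choosing k < r discards the low-variability directions, which is exactly the unsupervised feature selection explored in the experiments.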
{
"text": "In this section, we want to empirically explore whether our use of SVD feature selection positively affects performances of the probabilistic taxonomy learner. The best way of determining how a taxonomy learner is performing is to see if it can replicate an existing \"taxonomy\". We will experiment with the attempt of replicating a portion of WordNet (Miller, 1995) . In the experiments, we will address two issues: 1) determining to what extent SVD feature selection affect performances of the taxonomy learner; 2) determining if SVD as unsupervised feature selection is better for the task than some simpler model for taxonomy learning. We will explore the effects on both the flat and the inductive probabilistic taxonomy learner.",
"cite_spans": [
{
"start": 351,
"end": 365,
"text": "(Miller, 1995)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Evaluation",
"sec_num": "4"
},
{
"text": "The rest of the section is organized as follows. In Sec. 4.1 we will describe the experimental setup in terms of: how we selected the portion of WordNet, the description of the corpus used to extract evidences, a description of the feature space we used, and, finally, the description of a baseline models for taxonomy learning we have used. In Sec. 4.2 we will present the results of the experiments in term of performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Evaluation",
"sec_num": "4"
},
{
"text": "To completely define the experiments we need to describe some issues: how we defined the taxonomy to replicate, which corpus we have used to extract evidences for pairs of words, which feature space we used, and, finally, the baseline model we compared our feature selection model against.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Set-up",
"sec_num": "4.1"
},
{
"text": "As target taxonomy we selected a portion of WordNet 1 (Miller, 1995) . Namely, we started from the 44 concrete nouns listed in (McRae et al., 2005) and divided in 3 classes: animal, artifact, and vegetable. For sake of comprehension, this set is described in Tab. 2. For each word w, we selected the synset s w that is compliant with the class it belongs to. We then obtained a set S of synsets (see Tab. 2). We then expanded the set to S adding the siblings (i.e., the coordinate terms) for each synset in S. The set S contains 265 coordinate terms plus the 44 original concrete nouns. For each element in S we collected its hyperonym, obtaining the set H. We then removed from the set H the 4 topmosts: entity, unit, object, and whole. The set H contains 77 hyperonyms. For the purpose of the experiments we both derived from the previous sets a taxonomy T and produced a set of negative examples T . The two sets have been obtained as follows. The taxonomy T is the portion of WordNet implied by O = H \u222a S , i.e., T contains all the (s, h) \u2208 O \u00d7 O that are in WordNet. On the contrary, T contains all the (s, h) \u2208 O \u00d7 O that are not in WordNet. We then have 5108 positive pairs in T and 52892 negative pairs in T .",
"cite_spans": [
{
"start": 127,
"end": 147,
"text": "(McRae et al., 2005)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 54,
"end": 68,
"text": "(Miller, 1995)",
"ref_id": null
},
{
"start": 400,
"end": 404,
"text": "Tab.",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Set-up",
"sec_num": "4.1"
},
{
"text": "We then split the set T \u222a T in two parts, training and testing. As we want to see if it is possible to attach the set S to the right hyperonym, the split has been done as follows. We randomly divided the set S in two parts S tr and S ts , respectively, of 70% and 30% of the original S . We then selected as training T tr all the pairs in T containing a synset in S tr and as testing set T ts those pairs of T containing a synset of S ts . For the probabilistic model, T tr is the initial taxonomy whereas T ts \u222a T is the unknown set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Set-up",
"sec_num": "4.1"
},
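The split procedure above can be sketched as follows. This is a hedged illustration: the function name `split_taxonomy` and the toy synsets are ours, not from the paper's code.

```python
import random

def split_taxonomy(synsets, pairs, train_frac=0.7, seed=0):
    # Randomly divide the synsets 70/30; each (synset, hyperonym)
    # pair then follows the synset it contains.
    rng = random.Random(seed)
    shuffled = list(synsets)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    train_syns = set(shuffled[:cut])
    train = [(s, h) for s, h in pairs if s in train_syns]
    test = [(s, h) for s, h in pairs if s not in train_syns]
    return train, test

# Toy example in the spirit of the concrete-noun classes:
synsets = ["cat", "dog", "car", "boat", "banana", "cherry"]
pairs = [("cat", "animal"), ("dog", "animal"), ("car", "artifact"),
         ("boat", "artifact"), ("banana", "fruit"), ("cherry", "fruit")]
train, test = split_taxonomy(synsets, pairs)
assert len(train) + len(test) == len(pairs)
```

Splitting by synset rather than by pair ensures that no synset appears in both the initial taxonomy and the unknown set.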
{
"text": "As corpus we used the English Web as Corpus (ukWaC) (Ferraresi et al., 2008) . This is a web extracted corpus of about 2700000 web pages containing more than 2 billion words. The corpus contains documents of different topics such as web, computers, education, public sphere, etc.. It has been largely demonstrated that the web documents",
"cite_spans": [
{
"start": 52,
"end": 76,
"text": "(Ferraresi et al., 2008)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Set-up",
"sec_num": "4.1"
},
{
"text": "Concrete nouns Clas Sense 1 banana Vegetable 1 23 boat Artifact 0 2 bottle Artifact 0 24 bowl Artifact 0 3 car Artifact 0 25 cat Animal 0 4 cherry Vegetable 2 26 (Lapata and Keller, 2004) .",
"cite_spans": [
{
"start": 186,
"end": 211,
"text": "(Lapata and Keller, 2004)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 20,
"end": 185,
"text": "Sense 1 banana Vegetable 1 23 boat Artifact 0 2 bottle Artifact 0 24 bowl Artifact 0 3 car Artifact 0 25 cat Animal 0 4 cherry Vegetable 2 26",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Sense",
"sec_num": null
},
{
"text": "As the focus of the paper is the analysis of the effect of the SVD feature selection, we used as feature spaces both n-grams and bag-of-words. Out of the T \u222a T , we selected only those pairs that appeared at a distance of at most 3 tokens. Using these 3 tokens, we generated three spaces:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sense",
"sec_num": null
},
{
"text": "(1) 1-gram that contains monograms, (2) 2-gram that contains monograms and bigrams, and (3) the 3-gram space that contains monograms, bigrams, and trigrams. For the purpose of this experiment, we used a reduced stop list as classical stop words as punctuation, parenthesis, the verb to be are very relevant in the context of features for learning a taxonomy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sense",
"sec_num": null
},
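The construction of these spaces can be sketched as follows. The helper name `ngram_features` is illustrative; the paper does not specify its implementation.

```python
def ngram_features(between_tokens, n_max):
    """Collect all n-grams, up to length n_max, from the (at most 3)
    tokens occurring between the two words of a candidate pair."""
    feats = []
    for n in range(1, n_max + 1):
        for i in range(len(between_tokens) - n + 1):
            feats.append(" ".join(between_tokens[i:i + n]))
    return feats

# e.g. the context "animal , such as cat" yields the in-between tokens:
tokens = [",", "such", "as"]
assert ngram_features(tokens, 1) == [",", "such", "as"]
assert "such as" in ngram_features(tokens, 2)
assert ", such as" in ngram_features(tokens, 3)
```

Note that punctuation tokens are kept, in line with the reduced stop list described above.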
{
"text": "Finally, we want to describe our baseline model for taxonomy learning. This model only contains Heart's patterns (Hearst, 1992) as features. The feature value is the point-wise mutual information. These features are in some sense the best features for the task as these have been manually selected after a process of corpus analysis. These baseline features are included in our 3-gram model. We can then compare our best models with this baseline features in order to see if our SVD feature selection model outperforms manual feature selection.",
"cite_spans": [
{
"start": 113,
"end": 127,
"text": "(Hearst, 1992)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sense",
"sec_num": null
},
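A point-wise mutual information feature value of this kind can be sketched as follows. The counts and the helper `pmi` are toy assumptions, not figures from the paper:

```python
import math

def pmi(pair_count, x_count, y_count, total):
    """PMI(x, y) = log( P(x, y) / (P(x) * P(y)) ).

    Positive when the pattern co-occurs with the pair more often
    than chance, negative when it co-occurs less often.
    """
    p_xy = pair_count / total
    p_x = x_count / total
    p_y = y_count / total
    return math.log(p_xy / (p_x * p_y))

# toy counts: a Hearst pattern such as "h such as s" observed 8 times
# out of 1000 windows, with the words seen 20 and 40 times overall
assert pmi(8, 20, 40, 1000) > 0  # co-occur more often than chance
```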
{
"text": "In the first set of experiments we want to focus on the issue whether or not performances of the probabilistic taxonomy learner is positively affected by the proposed feature selection model based on the singular value decomposition. We then determined the performance with respect to different values of k. This latter represents the number of surviving dimensions where the pseudo-inverse is computed. Then, it represents the number of features the model adopts. We performed this first set of experiments in the 1-gram feature space. Punctuation has been considered. Figure 1 plots the accuracy of the probabilistic learner with respect to the size of the feature set, i.e. the number k of single values considered for computing the pseudoinverse matrix. To determine if the effect of the feature selection is preserved during the iteration of the local search algorithm, we report curves at different sizes of the set of added pairs. Curves are reported for both the flat model and the inductive model. The flat algorithm adds one pair at each iteration. Then, we reported curves for each 20 added pairs. Each curve shows that accuracy does not increase after a dimension of k=700. This size of the space is necessary only for the first 20 added pairs. Accuracy keeps increasing to k=700 and then decreases. When we add more pairs, the optimal size of the space is around k=200. For the inductive model we report the accuracies for around 40, 80, 130 added pairs. Here, at each iteration, more than one pair is added. The optimal dimension of the feature space seems to be around 500 as after that value performances decrease or stay stable. SVD feature selection has then a positive effect for both the flat and the inductive probabilistic taxonomy learners. This has beneficial effects both on the performances and on the computation time.",
"cite_spans": [],
"ref_spans": [
{
"start": 570,
"end": 578,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "In the second set of experiments we want to determine whether or not SVD feature selection for the probabilistic taxonomy learner behaves better than a reduced set of known features. We then fixed the dimension k to 400 and we compared the baseline model with different probabilistic models with different feature sets: 1-gram, 2-gram, and 3-gram. We can consider that the trigram model before the cut on its dimensions contains feature subsuming the baseline model. Figure 2 shows results. Curves report accuracy after n added pairs. All the probabilistic models outperform the baseline model. As what happened for the first series of experiments (see Fig. 1 ) more informative spaces such as 3-gram behaves better when the number of added pairs is small. Performances of the three reduced pairs become similar after 100 added pairs. These experiments show that SVD feature selection has a positive effect on performances as resulting models are always better with respect to the baseline.",
"cite_spans": [],
"ref_spans": [
{
"start": 467,
"end": 475,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 653,
"end": 659,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "We presented a model to naturally introduce SVD feature selection in a probabilistic taxonomy learner. The method is effective as allows the designing of better probabilistic taxonomy learners. We still need to explore at least two issues. First, we need to determine whether or not the positive effect of SVD feature selection is preserved in more complex feature spaces such as syntactic feature spaces as those used in (Snow et al., 2006) . Second, we need to compare the SVD feature selection with other unsupervised feature selection models to determine whether or not this is the best method to use in the case of probabilistic taxonomy learning.",
"cite_spans": [
{
"start": 422,
"end": 441,
"text": "(Snow et al., 2006)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "5"
},
{
"text": "We used the version 3.0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Variance estimation of linear regression coefficients in complex sampling situation",
"authors": [
{
"first": "D",
"middle": [],
"last": "Caron",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Hospital",
"suffix": ""
},
{
"first": "P",
"middle": [
"N"
],
"last": "Corey",
"suffix": ""
}
],
"year": 1988,
"venue": "Sampling Error: Methodology, Software and Application",
"volume": "",
"issue": "",
"pages": "688--694",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Caron, W. Hospital, and P. N. Corey. 1988. Variance estimation of linear regression coefficients in complex sampling situation. Sampling Error: Methodology, Software and Application, pages 688- 694.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The regression analysis of binary sequences",
"authors": [
{
"first": "D",
"middle": [
"R"
],
"last": "Cox",
"suffix": ""
}
],
"year": 1958,
"venue": "Journal of the Royal Statistical Society. Series B (Methodological)",
"volume": "20",
"issue": "2",
"pages": "215--242",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. R. Cox. 1958. The regression analysis of binary sequences. Journal of the Royal Statistical Society. Series B (Methodological), 20(2):215-242.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Indexing by latent semantic analysis",
"authors": [
{
"first": "Scott",
"middle": [],
"last": "Deerwester",
"suffix": ""
},
{
"first": "Susan",
"middle": [
"T"
],
"last": "Dumais",
"suffix": ""
},
{
"first": "George",
"middle": [
"W"
],
"last": "Furnas",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"K L"
],
"last": "",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Harshman",
"suffix": ""
}
],
"year": 1990,
"venue": "Journal of the American Society for Information Science",
"volume": "41",
"issue": "",
"pages": "391--407",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Scott Deerwester, Susan T. Dumais, George W. Furnas, Thomas K. L, and Richard Harshman. 1990. In- dexing by latent semantic analysis. Journal of the American Society for Information Science, 41:391- 407.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Introducing and evaluating ukwac, a very large web-derived corpus of english",
"authors": [
{
"first": "A",
"middle": [],
"last": "Ferraresi",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Zanchetta",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Bernardini",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceed-ings of the WAC4 Workshop at LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Ferraresi, E. Zanchetta, M. Baroni, and S. Bernar- dini. 2008. Introducing and evaluating ukwac, a very large web-derived corpus of english. In In Proceed-ings of the WAC4 Workshop at LREC 2008, Marrakesh, Morocco.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Calculating the singular values and pseudo-inverse of a matrix",
"authors": [
{
"first": "G",
"middle": [],
"last": "Golub",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Kahan",
"suffix": ""
}
],
"year": 1965,
"venue": "Journal of the Society for Industrial and Applied Mathematics, Series B: Numerical Analysis",
"volume": "2",
"issue": "2",
"pages": "205--224",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Golub and W. Kahan. 1965. Calculating the singu- lar values and pseudo-inverse of a matrix. Journal of the Society for Industrial and Applied Mathematics, Series B: Numerical Analysis, 2(2):205-224.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "An introduction to variable and feature selection",
"authors": [
{
"first": "Isabelle",
"middle": [],
"last": "Guyon",
"suffix": ""
},
{
"first": "Andr\u00e9",
"middle": [],
"last": "Elisseeff",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Machine Learning Research",
"volume": "3",
"issue": "",
"pages": "1157--1182",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Isabelle Guyon and Andr\u00e9 Elisseeff. 2003. An intro- duction to variable and feature selection. Journal of Machine Learning Research, 3:1157-1182, March.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Distributional structure",
"authors": [
{
"first": "Zellig",
"middle": [],
"last": "Harris",
"suffix": ""
}
],
"year": 1964,
"venue": "The Philosophy of Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zellig Harris. 1964. Distributional structure. In Jer- rold J. Katz and Jerry A. Fodor, editors, The Philos- ophy of Linguistics, New York. Oxford University Press.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Automatic acquisition of hyponyms from large text corpora",
"authors": [
{
"first": "A",
"middle": [],
"last": "Marti",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hearst",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of the 15th International Conference on Computational Linguistics (CoLing-92)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marti A. Hearst. 1992. Automatic acquisition of hy- ponyms from large text corpora. In Proceedings of the 15th International Conference on Computational Linguistics (CoLing-92), Nantes, France.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The web as a baseline: Evaluating the performance of unsupervised web-based models for a range of nlp tasks",
"authors": [
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Keller",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mirella Lapata and Frank Keller. 2004. The web as a baseline: Evaluating the performance of unsuper- vised web-based models for a range of nlp tasks. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, Boston, MA.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Web Data Mining: Exploring Hyperlinks, Contents, and Usage Data. Data-Centric Systems and Applications",
"authors": [
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bing Liu. 2007. Web Data Mining: Exploring Hy- perlinks, Contents, and Usage Data. Data-Centric Systems and Applications. Springer.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Integrating generic and specialized wordnets",
"authors": [
{
"first": "Bernardo",
"middle": [],
"last": "Magnini",
"suffix": ""
},
{
"first": "Manuela",
"middle": [],
"last": "Speranza",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the Euroconference RANLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bernardo Magnini and Manuela Speranza. 2001. In- tegrating generic and specialized wordnets. In In Proceedings of the Euroconference RANLP 2001, Tzigov Chark, Bulgaria.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Semantic feature production norms for a large set of living and nonliving things",
"authors": [
{
"first": "K",
"middle": [],
"last": "Mcrae",
"suffix": ""
},
{
"first": "G",
"middle": [
"S"
],
"last": "Cree",
"suffix": ""
},
{
"first": "M",
"middle": [
"S"
],
"last": "Seidenberg",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Mcnorgan",
"suffix": ""
}
],
"year": 2005,
"venue": "Behavioral Research Methods, Instruments, and Computers",
"volume": "",
"issue": "",
"pages": "547--559",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. McRae, G.S. Cree, M.S. Seidenberg, and C. McNor- gan. 2005. Semantic feature production norms for a large set of living and nonliving things. pages 547- 559, Behavioral Research Methods, Instruments, and Computers.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "WordNet: A lexical database for English",
"authors": [
{
"first": "A",
"middle": [],
"last": "George",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Miller",
"suffix": ""
}
],
"year": 1995,
"venue": "Communications of the ACM",
"volume": "38",
"issue": "11",
"pages": "39--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George A. Miller. 1995. WordNet: A lexical database for English. Communications of the ACM, 38(11):39-41, November.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Generalized linear models",
"authors": [
{
"first": "J",
"middle": [
"A"
],
"last": "Nelder",
"suffix": ""
},
{
"first": "R",
"middle": [
"W M"
],
"last": "Wedderburn",
"suffix": ""
}
],
"year": 1972,
"venue": "Journal of the Royal Statistical Society. Series A (General)",
"volume": "135",
"issue": "3",
"pages": "370--384",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. A. Nelder and R. W. M. Wedderburn. 1972. Gener- alized linear models. Journal of the Royal Statistical Society. Series A (General), 135(3):370-384.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Augmenting the princeton wordnet with a domain specific ontology",
"authors": [
{
"first": "O'",
"middle": [],
"last": "Donie",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Sullivan",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"F E"
],
"last": "Mcelligott",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sutcliffe",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the Workshop on Basic Issues in Knowledge Sharing at the 14th International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Donie O'Sullivan, A. McElligott, and Richard F. E. Sutcliffe. 1995. Augmenting the princeton wordnet with a domain specific ontology. In Proceedings of the Workshop on Basic Issues in Knowledge Sharing at the 14th International Joint Conference on Artifi- cial Intelligence. Montreal, Canada.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Espresso: Leveraging generic patterns for automatically harvesting semantic relations",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Pantel",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Pennacchiotti",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "113--120",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patrick Pantel and Marco Pennacchiotti. 2006. Espresso: Leveraging generic patterns for automati- cally harvesting semantic relations. In Proceedings of the 21st International Conference on Computa- tional Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 113-120, Sydney, Australia, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A generalized inverse for matrices",
"authors": [
{
"first": "R",
"middle": [],
"last": "Penrose",
"suffix": ""
}
],
"year": 1955,
"venue": "Proc. Cambridge Philosophical Society",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Penrose. 1955. A generalized inverse for matrices. In Proc. Cambridge Philosophical Society.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Computer-detectable semantic structures. Information Storage and Retrieval",
"authors": [
{
"first": "R",
"middle": [],
"last": "Harold",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Robison",
"suffix": ""
}
],
"year": 1970,
"venue": "",
"volume": "6",
"issue": "",
"pages": "273--288",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Harold R. Robison. 1970. Computer-detectable se- mantic structures. Information Storage and Re- trieval, 6(3):273-288.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Semantic taxonomy induction from heterogenous evidence",
"authors": [
{
"first": "Rion",
"middle": [],
"last": "Snow",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "A",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2006,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "801--808",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rion Snow, Daniel Jurafsky, and A. Y. Ng. 2006. Se- mantic taxonomy induction from heterogenous evi- dence. In In ACL, pages 801-808.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "Figure 1: Accuracy over different cuts of the feature space",
"num": null,
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"text": "Comparison of different feature spaces with k=400",
"num": null,
"uris": null
},
"TABREF0": {
"html": null,
"num": null,
"type_str": "table",
"text": "Relationship between probability, odds and logit",
"content": "<table/>"
},
"TABREF3": {
"html": null,
"num": null,
"type_str": "table",
"text": "Concrete nouns, Classes and senses selected in WordNet are good models for natural language",
"content": "<table/>"
}
}
}
}