{
"paper_id": "D17-1028",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:16:19.027346Z"
},
"title": "Exploiting Morphological Regularities in Distributional Word Representations",
"authors": [
{
"first": "Syed",
"middle": [],
"last": "Sarfaraz Akhtar",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Arihant",
"middle": [],
"last": "Gupta",
"suffix": "",
"affiliation": {},
"email": "arihant.gupta@research.iiit.ac.in"
},
{
"first": "Avijit",
"middle": [],
"last": "Vajpayee",
"suffix": "",
"affiliation": {},
"email": "avijit@inshorts.com"
},
{
"first": "Arjit",
"middle": [],
"last": "Srivastava",
"suffix": "",
"affiliation": {},
"email": "arjit.srivastava@research.iiit.ac.in"
},
{
"first": "Madan",
"middle": [],
"last": "Gopal Jhanwar",
"suffix": "",
"affiliation": {},
"email": "madangopal.jhanwar@research.iiit.ac.in"
},
{
"first": "Manish",
"middle": [],
"last": "Shrivastava",
"suffix": "",
"affiliation": {},
"email": "manish.shrivastava@iiit.ac.in"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present a simple, fast and unsupervised approach for exploiting morphological regularities present in high dimensional vector spaces. We propose a novel method for generating embeddings of words from their morphological variants using morphological transformation operators. We evaluate this approach on MSR word analogy test set (Mikolov et al., 2013d) with an accuracy of 85% which is 12% higher than the previous best known system.",
"pdf_parse": {
"paper_id": "D17-1028",
"_pdf_hash": "",
"abstract": [
{
"text": "We present a simple, fast and unsupervised approach for exploiting morphological regularities present in high dimensional vector spaces. We propose a novel method for generating embeddings of words from their morphological variants using morphological transformation operators. We evaluate this approach on MSR word analogy test set (Mikolov et al., 2013d) with an accuracy of 85% which is 12% higher than the previous best known system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Vector representation of words are presently being used to solve a variety of problems like document classification (Sebastiani, 2002) , question answering (Tellex et al., 2003) and chunking (Turian et al., 2010) .",
"cite_spans": [
{
"start": 116,
"end": 134,
"text": "(Sebastiani, 2002)",
"ref_id": "BIBREF11"
},
{
"start": 156,
"end": 177,
"text": "(Tellex et al., 2003)",
"ref_id": "BIBREF14"
},
{
"start": 191,
"end": 212,
"text": "(Turian et al., 2010)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Word representations capture both syntactic and semantic properties (Mikolov et al., 2013d) of natural language. Soricut and Och (2015) exploited these regularities to generate prefix/suffix based morphological transformation rules in an unsupervised manner. These morphological transformations were represented as vectors in the same embedding space as the vocabulary. They used a graph based approach and represented transformations as \"type:from:to\" triples and a direction vector: for example \"suffix:ion:e:\u2191 creation \" implies a suffix change just like in the case \"creation\" to \"create\". Using Soricut's transformation rules, the major problem is identifying the correct direction vector to use for a given case, i.e. if we have to generate an embedding for \"runs\", which rule to apply on \"run\". Experimental results showed that \"walk -* These authors contributed equally to this work.",
"cite_spans": [
{
"start": 68,
"end": 91,
"text": "(Mikolov et al., 2013d)",
"ref_id": "BIBREF10"
},
{
"start": 113,
"end": 135,
"text": "Soricut and Och (2015)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "walks\" gives better results than rules like \"invent -invents\" or \"object -objects\" in generating word embedding for \"runs\". In this paper, we try to explore if we can harness this morphological regularity in a much better way, than applying a single direction using vector arithmetic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Hence, we tried to come up with a global transformation operator, which aligns itself with the source word, to give best possible word embedding for target word. We will have a single transformation operator for each rule, irrespective of the form of root word (like verb or a noun). Our transformation operator is in the form of a matrix, which when applied on a word embedding (cross product of vector representation of word with transformation matrix) gives us a word embedding for target word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The intuition is not to solve for \"invent is to invents as run is to ?\" or \"object is to objects as run is to ?\", but instead we are solving for \"walk is to walks, object is to objects, invent is to invents, .... as run is to ?\". A transformation operator aims to be a unified transition function for different forms of the same transition. Learning a representation of this operator would allow us to capture the semantic changes associated with the transition. As word embeddings for rare and out-of-vocabulary words are poorly trained or not trained at all, learning this operator will be beneficial to reducing the sparsity in corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The idea of projection learning has been applied to a multitude of tasks such as in the learning of cross lingual mappings for translation of English to Spanish (Mikolov et al., 2013b) and for unsupervised mapping between vector spaces (Akhtar et al., 2017a) . Our approach has its basis on the same lines but with a different formulation and end goal to learn morphological rules rather than semantic associations and translational constraints.",
"cite_spans": [
{
"start": 161,
"end": 184,
"text": "(Mikolov et al., 2013b)",
"ref_id": "BIBREF8"
},
{
"start": 236,
"end": 258,
"text": "(Akhtar et al., 2017a)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In summary, we present a new method to har-ness morphological regularities present in high dimensional word embeddings and learn its representation in the form of a matrix. Using this method, we present state of the art results on MSR word analogy dataset. This paper is structured as follows. We first discuss the corpus used for training the transformation operators in section 2. In section 3, we discuss how these transformation operators are trained. Later in sections 4, we analyze and discuss the results of our experiments. We finish this paper with future scope of our work in section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We are using word embeddings trained on Google News corpus (Mikolov et al., 2013c) for our experiments. For the model trained in this paper, we have used the Skip-gram (Mikolov et al., 2013a) algorithm. The dimensionality has been fixed at 300 with a minimum count of 5 along with negative sampling. As training set and for estimating the frequencies of words, we use the Wikipedia data (Shaoul, 2010) . The corpus contains about 1 billion tokens.",
"cite_spans": [
{
"start": 59,
"end": 82,
"text": "(Mikolov et al., 2013c)",
"ref_id": "BIBREF9"
},
{
"start": 168,
"end": 191,
"text": "(Mikolov et al., 2013a)",
"ref_id": "BIBREF7"
},
{
"start": 387,
"end": 401,
"text": "(Shaoul, 2010)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "2"
},
{
"text": "The MSR dataset (Mikolov et al., 2013d) contains 8000 analogy questions. This data set has been used by us for testing our model. The relations portrayed by these questions are morphosyntactic, and can be categorized according to parts of speech -adjectives, nouns and verbs. Adjective relations include comparative and superlative (good is to best as smart is to smartest). Noun relations include singular and plural, possessive and non-possessive (dog is to dog's as cat is to cat's). Verb relations are tense modifications (work is to worked as accept is to accepted).",
"cite_spans": [
{
"start": 16,
"end": 39,
"text": "(Mikolov et al., 2013d)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "2"
},
{
"text": "For all the experiments, we have calculated the fraction of answers correctly answered by the system on MSR word analogy dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "2"
},
{
"text": "Our approach has two steps -1. Extraction of candidate rules and word pairs",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "3"
},
{
"text": "Note that all the thresholds mentioned in following sub-sections were determined by empirical fine tuning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training of a transformation matrix per rule",
"sec_num": "2."
},
{
"text": "For unsupervised extraction of candidate rules and corresponding word pairs for that rule, we follow the approach used by Akhtar et al. (2017b). For example, in case of the rule <null,s>, we find word pairs such as <boy,boys>, <object,objects> and <invent,invents>. We restrict the scope of our work to dealing with only prefix and suffix based morphology. To extract candidate suffixes / prefixes, we maintain two TRIE data structures (one where inverted words are inserted for suffixes and another where words are inserted in original order for prefixes). By thresholding on the basis of branching factor of a node (bf = 10), we obtain candidate suffixes / prefixes and stems associated with them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformation Extraction",
"sec_num": "3.1"
},
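{
"text": "The trie-based extraction above can be sketched as follows. This is an illustrative sketch, not the authors' code: the function name, the toy vocabulary and the reduced branching-factor threshold (bf = 2 instead of the paper's 10) are assumptions.

```python
# Insert reversed words into a trie; a node whose branching factor reaches
# the threshold marks a candidate suffix, and the continuations below it
# are the associated stems. Candidates are pruned in a later step.

def extract_candidate_suffixes(vocab, bf=2):
    """Return {suffix: set_of_stems} using a trie over reversed words."""
    trie = {}
    for word in vocab:
        node = trie
        for ch in reversed(word):
            node = node.setdefault(ch, {})
        node["$"] = True  # end-of-word marker

    candidates = {}

    def collect(node, rev_rest, out):
        # Gather the stems spelled out below a candidate-suffix node.
        for k, child in node.items():
            if k == "$":
                out.add("".join(reversed(rev_rest)))
            else:
                collect(child, rev_rest + [k], out)

    def walk(node, rev_path):
        children = [k for k in node if k != "$"]
        if rev_path and len(children) >= bf:
            suffix = "".join(reversed(rev_path))
            stems = set()
            collect(node, [], stems)
            candidates[suffix] = stems
        for ch in children:
            walk(node[ch], rev_path + [ch])

    walk(trie, [])
    return candidates

vocab = ["walked", "talked", "jumped", "walking", "talking"]
candidates = extract_candidate_suffixes(vocab, bf=2)
# "ed" is found as a candidate suffix with stems {"walk", "talk", "jump"}
```

Building the same trie over words in their original order (instead of reversed) yields candidate prefixes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformation Extraction",
"sec_num": "3.1"
},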
{
"text": "Defining two types of transitions -1. Null transitions -involve a prefix/suffix going to null for e.g. the transition <suffix:null:ed> would involve pairs <talk, talked> , <walk, walked> etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformation Extraction",
"sec_num": "3.1"
},
{
"text": "2. Cross transitions -involve both addition and deletion of characters for e.g. the transition <suffix:ed:ing> would involve pairs <talked, talking>, <walked, walking> etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformation Extraction",
"sec_num": "3.1"
},
{
"text": "For extracting null transitions, we take the intersection of stems associated with candidate suffixes/prefixes with the vocabulary of our training corpus. For extracting cross transitions, we take the intersection between stems of different suffixes/prefixes. For e.g. the stem \"talk\" would be associated with both suffixes \"ed\" and \"ing\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformation Extraction",
"sec_num": "3.1"
},
{
"text": "We prune the candidate rules and associated pairs thus extracted based on both cosine similarity and frequency. For e.g. <hat,hated> is a co-incidental example of the transition <null,ed>. We lower bound the cosine similarity at theta sim = 0.2 for word vectors of the pair. Since our transformation matrix is derived from all the word pairs following a particular transition rule, we carefully use only those word pairs which are of high frequency (as they have better trained embeddings). We lower bound the frequency of both words of pair at theta f req = 1000.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformation Extraction",
"sec_num": "3.1"
},
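{
"text": "The pruning step above can be sketched as follows; a minimal illustration assuming numpy, with toy vectors and frequencies that are assumptions (the thresholds \u03b8_sim = 0.2 and \u03b8_freq = 1000 follow the text).

```python
import numpy as np

# Keep a candidate pair only if both words are frequent and their
# embeddings have cosine similarity above the threshold.

def prune_pairs(pairs, vectors, freq, theta_sim=0.2, theta_freq=1000):
    kept = []
    for w1, w2 in pairs:
        if freq.get(w1, 0) < theta_freq or freq.get(w2, 0) < theta_freq:
            continue  # low-frequency words have poorly trained embeddings
        u, v = vectors[w1], vectors[w2]
        cos = float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
        if cos >= theta_sim:  # prune coincidental pairs like <hat, hated>
            kept.append((w1, w2))
    return kept

vectors = {
    "talk": np.array([1.0, 0.1]), "talked": np.array([1.0, 0.2]),
    "hat": np.array([1.0, 0.0]), "hated": np.array([0.0, 1.0]),
}
freq = {"talk": 5000, "talked": 3000, "hat": 4000, "hated": 2000}
pairs = [("talk", "talked"), ("hat", "hated")]
# <hat, hated> is dropped (cosine 0.0 < 0.2); <talk, talked> survives
assert prune_pairs(pairs, vectors, freq) == [("talk", "talked")]
```
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformation Extraction",
"sec_num": "3.1"
},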
{
"text": "We could have relied on an external morph analyzer such as Morfessor (Creutz and Lagus, 2007) to extract candidate rules and word pairs, but we wished to keep the approach completely unsupervised. ",
"cite_spans": [
{
"start": 69,
"end": 93,
"text": "(Creutz and Lagus, 2007)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transformation Extraction",
"sec_num": "3.1"
},
{
"text": "Previous works that handle morphology using vector space representations involved complex neural network architectures such as recursive neural networks (Luong et al., 2013) and log-bilinear models (Botha and Blunsom, 2014) . Both the referred works treat morph-analysis as a pre-processing step using Morfessor (Creutz and Lagus, 2007) . In contrast, we propose a simple yet effective linear approach to learn the representations of transformations without depending on external segmentation tools.",
"cite_spans": [
{
"start": 153,
"end": 173,
"text": "(Luong et al., 2013)",
"ref_id": "BIBREF6"
},
{
"start": 198,
"end": 223,
"text": "(Botha and Blunsom, 2014)",
"ref_id": "BIBREF2"
},
{
"start": 312,
"end": 336,
"text": "(Creutz and Lagus, 2007)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Transformation Matrices",
"sec_num": "3.2"
},
{
"text": "Suppose we get \"N\" highly frequent word pairs following the same regularity(transition rule). For our experiments, the lower bound of \"N\" is set at 50. Dimensions of word embedding of a word in our model is \"D\". Using first word of our \"N\" chosen word pairs, we create a matrix \"A\" of dimensions N*D, where each row is vector representation of a word. Similarly, we create another matrix B, of similar dimensions as A, using second word of our chosen word pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Transformation Matrices",
"sec_num": "3.2"
},
{
"text": "We now propose that a matrix \"X\" (our transformation matrix) exists such that,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Transformation Matrices",
"sec_num": "3.2"
},
{
"text": "A * X = B or, X = A \u22121 * B",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Transformation Matrices",
"sec_num": "3.2"
},
{
"text": "(1) (all instances of A that we encountered were nonsingular). Our matrix \"X\" will be of dimensions \"D*D\" and when applied to a word embedding (matrix of dimensions 1*D, it gives a matrix of dimensions 1*D as output), it results in the word embedding of the transformed form of the word. Due to inverse property of a matrix, it accurately remembers the word pairs used for computing. The matrix also appears to align itself with the word embedding of other words (not used for its training) to transform them according to the rule that the matrix follows. Some interesting results are shown in table 1. While testing, we extract the lexical transition using the first two words of the analogy question. For example, for pairs like <reach, reached>, < walk, walked>, we are able to extract that they follow <null, ed> rule. But, for <go, went>, we are not able to find any transformation operator after lexical analysis, and for such cases, we fall back on CosSum/CosMul (Levy et al., 2014) approaches as our backup. Mikolov et al. showed that relations between words are reflected to a large extent in the offsets between their vector embeddings (queen -king = woman -man), and thus the vector of the hidden word b * will be similar to the vector b \u2212 a + a * , suggesting that the analogy question can be solved by optimizing:",
"cite_spans": [
{
"start": 970,
"end": 989,
"text": "(Levy et al., 2014)",
"ref_id": "BIBREF5"
},
{
"start": 1016,
"end": 1030,
"text": "Mikolov et al.",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Transformation Matrices",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "arg max b * \u2208V (sim(b * , b \u2212 a + a * ))",
"eq_num": "(2)"
}
],
"section": "Learning Transformation Matrices",
"sec_num": "3.2"
},
{
"text": "where V is the vocabulary and sim is a similarity measure. Specifically, they used the cosine similarity measure, defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Transformation Matrices",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "cos(u, v) = u . v ||u|| . ||v||",
"eq_num": "(3)"
}
],
"section": "Learning Transformation Matrices",
"sec_num": "3.2"
},
{
"text": "resulting in:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Transformation Matrices",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "arg max b * \u2208V (cos(b * , b \u2212 a + a * ))",
"eq_num": "(4)"
}
],
"section": "Learning Transformation Matrices",
"sec_num": "3.2"
},
{
"text": "Equation 4 has been referred to as CosAdd model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Transformation Matrices",
"sec_num": "3.2"
},
{
"text": "While experimenting, Omer Levy (Levy et al., 2014) found that for an analogy question \"London is to England as Baghdad is to -?\", using CosAdd model, they got Mosul -a large Iraqi city, instead of Iraq which is a country, as an answer. They were seeking for Iraq because of its similarity to England (both are countries), similarity to Baghdad (similar geography/culture) and dissimilarity to London (different geography/culture). While Iraq was much more similar to England than Mosul was (because both Iraq and England are countries), the sums were dominated by the geographic and cultural aspect of the analogy.",
"cite_spans": [
{
"start": 31,
"end": 50,
"text": "(Levy et al., 2014)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Transformation Matrices",
"sec_num": "3.2"
},
{
"text": "Hence to achieve better balancing among different aspects of similarity, they proposed a new model, where they moved from additive to multiplicative approach:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Transformation Matrices",
"sec_num": "3.2"
},
{
"text": "arg max_{b* \u2208 V} cos(b*, b) \u00b7 cos(b*, a*) / (cos(b*, a) + \u03b5), with \u03b5 = 0.001 to prevent division by zero",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Transformation Matrices",
"sec_num": "3.2"
},
{
"text": "This was equivalent to taking the logarithm of each term before summation, thus amplifying the differences between small quantities and reducing the differences between larger ones. This model has been referred to as CosMul model. Even though our transformation operator can handle any sort of transformation, but if we are not able to detect the rule from lexical analysis, we are not able to determine which transformation operator to use, and hence, we fall back on Cos-Sum/CosMul. Like for the above mentioned examples, we will use transformation operator (if existing) for transformations like <reach, reached>, since we can find the rule, but for <go, went>, we can not, since we can not extract the corresponding rule itself -even if the matrix can handle such transitions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Transformation Matrices",
"sec_num": "3.2"
},
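{
"text": "The two backup scorers described above can be sketched as follows. This is an illustrative sketch under assumptions: 2-dimensional toy vectors, a restricted candidate set, and (following Levy et al.) cosine similarities shifted to [0, 1] in the multiplicative objective so that all factors stay positive.

```python
import numpy as np

def _unit(v):
    return v / np.linalg.norm(v)

def cos_add(a, a_star, b, candidates):
    """arg max over candidates of cos(b*, b - a + a*)  (equation 4)."""
    target = _unit(b - a + a_star)
    return max(candidates, key=lambda w: float(_unit(candidates[w]) @ target))

def cos_mul(a, a_star, b, candidates, eps=0.001):
    """arg max of cos(b*, b) * cos(b*, a*) / (cos(b*, a) + eps)."""
    def pcos(u, v):  # cosine shifted to [0, 1] so all factors are positive
        return (1.0 + float(_unit(u) @ _unit(v))) / 2.0
    return max(candidates,
               key=lambda w: pcos(candidates[w], b) * pcos(candidates[w], a_star)
               / (pcos(candidates[w], a) + eps))

# Toy analogy: "walk is to walked as talk is to ?"
a, a_star, b = np.array([1.0, 0.0]), np.array([1.0, 1.0]), np.array([2.0, 0.0])
candidates = {"talked": np.array([2.0, 2.0]),
              "talks": np.array([3.0, 0.0]),
              "talking": np.array([0.0, 2.0])}
assert cos_add(a, a_star, b, candidates) == "talked"
assert cos_mul(a, a_star, b, candidates) == "talked"
```

On this toy analogy both scorers agree; the multiplicative form matters when one similarity term dominates the additive sum, as in the Baghdad example above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Transformation Matrices",
"sec_num": "3.2"
},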
{
"text": "If a transformation matrix exists for a transition rule, we apply the corresponding transformation matrix on the word embedding of the third word and search the whole vocabulary for the word with an embedding most similar to the transformed embedding (ignoring the third word itself). If the similarity of the resultant word's embedding with our transformed embedding is less than 0.68 (determined empirically) or the transformation matrix itself does not exist, we fall back on the Cos-Sum/CosMul techniques. Levy et. al. (2015) proposed the systems Cos-Sum and CosMul in which they showed that tuning the hyperparameters has a significant impact on the performance. Figure 1 gives an overview of how we train the transformation matrix and 2 shows how target word embeddings are generated using transformation operators and our backup models. Table 4 : Example results of transformation operators for irregular transformations.",
"cite_spans": [
{
"start": 510,
"end": 529,
"text": "Levy et. al. (2015)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 668,
"end": 676,
"text": "Figure 1",
"ref_id": null
},
{
"start": 844,
"end": 851,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Learning Transformation Matrices",
"sec_num": "3.2"
},
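{
"text": "The training and application of a single transformation operator can be sketched as follows. Note one assumption: the text writes X = A^{-1} * B, but A is N*D and generally not square, so this sketch uses the least-squares solution, which coincides with A^{-1} * B when A is square and nonsingular. The toy dimensions and random stand-in embeddings are also assumptions.

```python
import numpy as np

def learn_operator(A, B):
    """Solve A X = B for the D*D operator X in the least-squares sense."""
    X, *_ = np.linalg.lstsq(A, B, rcond=None)
    return X

def transform(X, vec):
    """Apply the operator to a 1*D embedding, giving a 1*D embedding."""
    return vec @ X

rng = np.random.default_rng(0)
N, D = 50, 8                      # N word pairs, D-dimensional embeddings
A = rng.normal(size=(N, D))       # rows: embeddings of "walk", "talk", ...
X_true = rng.normal(size=(D, D))  # pretend the rule is exactly linear
B = A @ X_true                    # rows: embeddings of "walks", "talks", ...

X = learn_operator(A, B)
# The learned operator reproduces the training pairs (cf. "accurately
# reproduces the word pairs used to compute it" in the text):
assert np.allclose(transform(X, A[0]), B[0])
```
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Transformation Matrices",
"sec_num": "3.2"
},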
{
"text": "In table 2, GN denotes the scores of Google-News word embeddings on the test set. SGNS-L and Glove-L (Levy et al., 2015) denote the results of Skip-gram with negative sampling and Glove word embeddings respectively, both trained on large datasets. SG denotes the scores of our word2vec trained model (on 1B tokens). \"w/ M\" implies that we have used matrix arithmetic (along with CosSum/CosMul as backup) for word analogy answering questions. Our model uses Table 5 : Example results of transformation operators for complete change of word form.",
"cite_spans": [
{
"start": 101,
"end": 120,
"text": "(Levy et al., 2015)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 457,
"end": 464,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Learning Transformation Matrices",
"sec_num": "3.2"
},
{
"text": "\"CosSum\" and \"CosMul\" as backup transformation method in case a transformation operator (matrix) does not exist. We see that the results of GN+Matrix are better than the previously used models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Transformation Matrices",
"sec_num": "3.2"
},
{
"text": "However, one thing we noticed was that the model trained on Google-News did not contain words with apostrophe sign(s) and 1000 out of 8000 words in MSR word analogy test set contained apostrophe sign(s). Also, we noticed that in SG, the matrix approach was able to answer word analogy queries where words contained apostrophe sign(s), with an accuracy of 93.7% since it is a very common transformation -which resulted in well trained transformation matrix. So, we used SG as a backup for words which were not found in GN. The results of this hybrid model are denoted by GN-SG Hybrid. We see that this model performs considerably better than the existing state of the art system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Transformation Matrices",
"sec_num": "3.2"
},
{
"text": "As we can see in table 3, our approach works really well for analogy questions where target word experiences regular transformation, i.e. the transformation type is simple addition/subtraction of suffix/prefix.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Transformation Matrices",
"sec_num": "3.2"
},
{
"text": "In table 4 and table 5 we observe that transformations are irregular transformations i.e there is slight change in word form while addition/subtraction of suffix/prefix or there is complete change in word form in the target word of our analogy question. This is an interesting observation, because even though our rule extraction (as explained above) is syntactic in nature, our method still learns and can apply transformation rules on words which undergo such irregular/complete transformations.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 22,
"text": "table 4 and table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Learning Transformation Matrices",
"sec_num": "3.2"
},
{
"text": "In operator \"<null,s>\", we see that our transformation matrix works pretty well irrespective of the form the word. For example, it works for \"school-schools\" and \"reduce-reduces\" which are noun and verb word pairs respectively. Our approach works by statistically creating global transformation operators and is agnostic in applying them (i.e. applied on a verb or a noun). Our transformation rules learn from both noun transitions and verb transitions and hence, even though we agree that linguistically there is a difference between noun and verb transitions, our approach performed better than previously existing systems",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Transformation Matrices",
"sec_num": "3.2"
},
{
"text": "We also observed that in some cases, cosine similarity score is 1. This is mostly because \"stricter-strictest\" was used for training transformation matrix of \"<r,st>\" operator.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Transformation Matrices",
"sec_num": "3.2"
},
{
"text": "Although our cosine scores for irregular/complete transformations are not that high with respect to scores for regular transformations, our system still performs at par or better than previous known systems. It is still able to predict words with high accuracy using its limited training corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Transformation Matrices",
"sec_num": "3.2"
},
{
"text": "These observations can also help us ana-lyze how certain complex transformations (irregular/complete) still behave similar to their regular counterpart computationally, as is apparent from our transformation matrix -which has learnt itself from rules that were extracted via all possible prefix and suffix substitutions from w1 to w2, and thus irregular/complete transformations would not be present in training our transformation matrix (where w1 and w2 belong to our vocabulary Vthe size of our corpus). The main application of our approach lies in its ability to generate representations for unseen/unreliable words on the go. If we encounter a word such as \"preparedness\" for which we do not have a representation or our representation is not reliable, we can identify any reliable form of the word, say \"prepared\" and apply <null,ness> operator on it, resulting in a representation for \"preparedness\". In a similar case, we can generate embeddings for words such as \"unpreparedness\" from \"prepared\" by sequentially applying <null,ness> and a prefix operator trained in a similar manner -<null,un>. Overall, this results in a much larger vocabulary than of the model initially being used. Sequential application for such learned operators would also be beneficial for morphologically rich languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Transformation Matrices",
"sec_num": "3.2"
},
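{
"text": "The sequential application described above can be sketched as follows; the operator matrices here are random stand-ins for matrices learned as in section 3.2, and all names and dimensions are illustrative assumptions.

```python
import numpy as np

D = 8
rng = np.random.default_rng(1)
vectors = {"prepared": rng.normal(size=D)}   # a reliable base embedding
operators = {
    "<null,ness>": rng.normal(size=(D, D)),  # suffix operator (stand-in)
    "<null,un>": rng.normal(size=(D, D)),    # prefix operator (stand-in)
}

def generate(base, rules, vectors, operators):
    """Chain the operators for `rules`, in order, onto the base embedding."""
    vec = vectors[base]
    for rule in rules:
        vec = vec @ operators[rule]
    return vec

# "prepared" -> "preparedness" -> "unpreparedness"
emb = generate("prepared", ["<null,ness>", "<null,un>"], vectors, operators)
assert emb.shape == (D,)
```
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Transformation Matrices",
"sec_num": "3.2"
},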
{
"text": "We conclude that our matrix is able to harness morphological regularities present in word pairs used for training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Transformation Matrices",
"sec_num": "3.2"
},
{
"text": "One of the major drawbacks to our system is that the rule extraction process is designed towards prefix/suffix based morphology only. Improvements will be required in that step to handle complex morphological phenomena such as affixation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future Work",
"sec_num": "5"
},
{
"text": "To solve the word analogy task, we currently employ a simple lexical analysis to determine which transformation operator to apply. We thus require a backup model for pairs that do not conform to any known operator. A more complicated scheme involving comparisons between multiple outputs (after applying different rules) could help remove the dependency on a backup model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future Work",
"sec_num": "5"
},
{
"text": "Although our work is a general way to generate morphologically informed embeddings in an unsupervised manner, we have designed the prediction approach to deal with the word analogy task. Recent trends (Tsvetkov et al., 2015) and (Tsvetkov et al., 2016) have suggested that eval-uation methodologies such as word analogy and word similarity tasks may not be holistic. Thus, embeddings generated by our approach should be evaluated by plugging into end-level tasks such as machine translation, POS tagging etc. This would also help in analysing which tasks benefit from having morphologically informed word embeddings and which would suffice with simple orthographic features such as presence of certain suffixes.",
"cite_spans": [
{
"start": 201,
"end": 224,
"text": "(Tsvetkov et al., 2015)",
"ref_id": "BIBREF16"
},
{
"start": 229,
"end": 252,
"text": "(Tsvetkov et al., 2016)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Future Work",
"sec_num": "5"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "An unsupervised approach for mapping between vector spaces",
"authors": [
{
"first": "Arihant",
"middle": [],
"last": "Syed Sarfaraz Akhtar",
"suffix": ""
},
{
"first": "Avijit",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Arjit",
"middle": [],
"last": "Vajpayee",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Manish",
"middle": [],
"last": "Madan Gopal Jhanwar",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Shrivastava",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Syed Sarfaraz Akhtar, Arihant Gupta, Avijit Vajpayee, Arjit Srivastava, Madan Gopal Jhanwar, and Manish Shrivastava. 2017a. An unsupervised approach for mapping between vector spaces. Research in Com- puter Science, forthcoming.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Unsupervised morphological expansion of small datasets for improving word embeddings",
"authors": [
{
"first": "Arihant",
"middle": [],
"last": "Syed Sarfaraz Akhtar",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gupta",
"suffix": ""
}
],
"year": 2017,
"venue": "International Journal of Computational Linguistics and Applications",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Syed Sarfaraz Akhtar, Arihant Gupta, Avijit Vajpayee, Arjit Srivastava, and Manish Shrivastava. 2017b. Unsupervised morphological expansion of small datasets for improving word embeddings. Interna- tional Journal of Computational Linguistics and Ap- plications, forthcoming.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Compositional morphology for word representations and language modelling",
"authors": [
{
"first": "Jan",
"middle": [
"A"
],
"last": "Botha",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 31st International Conference on International Conference on Machine Learning",
"volume": "32",
"issue": "",
"pages": "1899--1907",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jan A. Botha and Phil Blunsom. 2014. Composi- tional morphology for word representations and lan- guage modelling. In Proceedings of the 31st Inter- national Conference on International Conference on Machine Learning -Volume 32, ICML'14, pages II- 1899-II-1907.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Unsupervised models for morpheme segmentation and morphology learning",
"authors": [
{
"first": "Mathias",
"middle": [],
"last": "Creutz",
"suffix": ""
},
{
"first": "Krista",
"middle": [],
"last": "Lagus",
"suffix": ""
}
],
"year": 2007,
"venue": "ACM Trans. Speech Lang. Process",
"volume": "4",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mathias Creutz and Krista Lagus. 2007. Unsupervised models for morpheme segmentation and morphol- ogy learning. ACM Trans. Speech Lang. Process., 4(1):3:1-3:34.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Improving distributional similarity with lessons learned from word embeddings",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2015,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "3",
"issue": "",
"pages": "211--225",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Improving distributional similarity with lessons learned from word embeddings. Transactions of the Association for Computational Linguistics, 3:211-225.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Linguistic regularities in sparse and explicit word representations",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Israel",
"middle": [],
"last": "Ramat-Gan",
"suffix": ""
}
],
"year": 2014,
"venue": "CoNLL",
"volume": "",
"issue": "",
"pages": "171--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omer Levy, Yoav Goldberg, and Israel Ramat-Gan. 2014. Linguistic regularities in sparse and explicit word representations. In CoNLL, pages 171-180.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Better word representations with recursive neural networks for morphology",
"authors": [
{
"first": "Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2013,
"venue": "CoNLL",
"volume": "",
"issue": "",
"pages": "104--113",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thang Luong, Richard Socher, and Christopher D Manning. 2013. Better word representations with recursive neural networks for morphology. In CoNLL, pages 104-113.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1301.3781"
]
},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Exploiting similarities among languages for machine translation",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1309.4168"
]
},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Quoc V Le, and Ilya Sutskever. 2013b. Exploiting similarities among languages for machine translation. arXiv preprint arXiv:1309.4168.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013c. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111-3119.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Linguistic regularities in continuous space word representations",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Zweig",
"suffix": ""
}
],
"year": 2013,
"venue": "HLT-NAACL",
"volume": "13",
"issue": "",
"pages": "746--751",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013d. Linguistic regularities in continuous space word representations. In HLT-NAACL, volume 13, pages 746-751.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Machine learning in automated text categorization",
"authors": [
{
"first": "Fabrizio",
"middle": [],
"last": "Sebastiani",
"suffix": ""
}
],
"year": 2002,
"venue": "ACM computing surveys (CSUR)",
"volume": "34",
"issue": "1",
"pages": "1--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabrizio Sebastiani. 2002. Machine learning in automated text categorization. ACM computing surveys (CSUR), 34(1):1-47.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The westbury lab wikipedia corpus",
"authors": [
{
"first": "Cyrus",
"middle": [],
"last": "Shaoul",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cyrus Shaoul. 2010. The westbury lab wikipedia corpus. Edmonton, AB: University of Alberta.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Unsupervised morphology induction using word embeddings",
"authors": [
{
"first": "Radu",
"middle": [],
"last": "Soricut",
"suffix": ""
},
{
"first": "Franz Josef",
"middle": [],
"last": "Och",
"suffix": ""
}
],
"year": 2015,
"venue": "HLT-NAACL",
"volume": "",
"issue": "",
"pages": "1627--1637",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Radu Soricut and Franz Josef Och. 2015. Unsupervised morphology induction using word embeddings. In HLT-NAACL, pages 1627-1637.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Quantitative evaluation of passage retrieval algorithms for question answering",
"authors": [
{
"first": "Stefanie",
"middle": [],
"last": "Tellex",
"suffix": ""
},
{
"first": "Boris",
"middle": [],
"last": "Katz",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Fernandes",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Marton",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 26th annual international ACM SIGIR conference on Research and development in informaion retrieval",
"volume": "",
"issue": "",
"pages": "41--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefanie Tellex, Boris Katz, Jimmy Lin, Aaron Fernandes, and Gregory Marton. 2003. Quantitative evaluation of passage retrieval algorithms for question answering. In Proceedings of the 26th annual international ACM SIGIR conference on Research and development in informaion retrieval, pages 41-47. ACM.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Correlation-based intrinsic evaluation of word vector representations",
"authors": [
{
"first": "Yulia",
"middle": [],
"last": "Tsvetkov",
"suffix": ""
},
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1606.06710"
]
},
"num": null,
"urls": [],
"raw_text": "Yulia Tsvetkov, Manaal Faruqui, and Chris Dyer. 2016. Correlation-based intrinsic evaluation of word vector representations. arXiv preprint arXiv:1606.06710.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Evaluation of word vector representations by subspace alignment",
"authors": [
{
"first": "Yulia",
"middle": [],
"last": "Tsvetkov",
"suffix": ""
},
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Wang",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yulia Tsvetkov, Manaal Faruqui, Wang Ling, Guillaume Lample, and Chris Dyer. 2015. Evaluation of word vector representations by subspace alignment.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Word representations: a simple and general method for semi-supervised learning",
"authors": [
{
"first": "Joseph",
"middle": [],
"last": "Turian",
"suffix": ""
},
{
"first": "Lev",
"middle": [],
"last": "Ratinov",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th annual meeting of the association for computational linguistics",
"volume": "",
"issue": "",
"pages": "384--394",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: a simple and general method for semi-supervised learning. In Proceedings of the 48th annual meeting of the association for computational linguistics, pages 384-394. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Figure 1: Training WorkFlow",
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"num": null,
"text": "Figure 2: Prediction WorkFlow",
"type_str": "figure",
"uris": null
},
"TABREF1": {
"num": null,
"text": "",
"type_str": "table",
"html": null,
"content": "<table/>"
},
"TABREF3": {
"num": null,
"text": "Scores on MSR word analogy test set.",
"type_str": "table",
"html": null,
"content": "<table><tr><td>Word1</td><td>Word2</td><td colspan=\"2\">Word3 Operator</td><td colspan=\"2\">Word4 Cosine</td></tr><tr><td>decides</td><td colspan=\"2\">decided studies</td><td>&lt;s , d&gt;</td><td>studied</td><td>0.89</td></tr><tr><td>reach</td><td>reaches</td><td>go</td><td>&lt;null , es&gt;</td><td>goes</td><td>1.0</td></tr><tr><td colspan=\"3\">member members school</td><td colspan=\"2\">&lt;null , s&gt; schools</td><td>0.88</td></tr><tr><td>ask</td><td>asks</td><td colspan=\"3\">reduce &lt;null , s&gt; reduces</td><td>0.91</td></tr><tr><td colspan=\"2\">resident residents</td><td>rate</td><td>&lt;null , s&gt;</td><td>rates</td><td>0.86</td></tr><tr><td>get</td><td>gets</td><td>show</td><td>&lt;null , s&gt;</td><td>shows</td><td>0.83</td></tr><tr><td>higher</td><td>highest</td><td>stricter</td><td>&lt;r , st&gt;</td><td>strictest</td><td>1.0</td></tr><tr><td>wild</td><td>wilder</td><td>harsh</td><td colspan=\"2\">&lt;null , er&gt; harsher</td><td>0.91</td></tr></table>"
},
"TABREF4": {
"num": null,
"text": "Example results of transformation operators for regular transformations.",
"type_str": "table",
"html": null,
"content": "<table><tr><td>Word1</td><td>Word2</td><td>Word3</td><td>Operator</td><td colspan=\"2\">Word4 Cosine</td></tr><tr><td>joined</td><td>joins</td><td>became</td><td>&lt;ed , s&gt;</td><td>becomes</td><td>0.68</td></tr><tr><td>turned</td><td>turns</td><td>said</td><td>&lt;ed , s&gt;</td><td>says</td><td>0.74</td></tr><tr><td>learn</td><td>learned</td><td>build</td><td>&lt;null , ed&gt;</td><td>built</td><td>0.80</td></tr><tr><td colspan=\"2\">support supported</td><td>see</td><td>&lt;null , ed&gt;</td><td>saw</td><td>0.72</td></tr></table>"
}
}
}
}