{
"paper_id": "D18-1033",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:50:35.380157Z"
},
"title": "Cross-lingual Lexical Sememe Prediction",
"authors": [
{
"first": "Fanchao",
"middle": [],
"last": "Qi",
"suffix": "",
"affiliation": {
"laboratory": "Lab on Intelligent Technology and Systems",
"institution": "Tsinghua University",
"location": {}
},
"email": ""
},
{
"first": "Yankai",
"middle": [],
"last": "Lin",
"suffix": "",
"affiliation": {
"laboratory": "Lab on Intelligent Technology and Systems",
"institution": "Tsinghua University",
"location": {}
},
"email": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": "",
"affiliation": {
"laboratory": "Lab on Intelligent Technology and Systems",
"institution": "Tsinghua University",
"location": {}
},
"email": ""
},
{
"first": "Hao",
"middle": [],
"last": "Zhu",
"suffix": "",
"affiliation": {
"laboratory": "Lab on Intelligent Technology and Systems",
"institution": "Tsinghua University",
"location": {}
},
"email": "zhuhao15@mails.tsinghua.edu.cn"
},
{
"first": "Ruobing",
"middle": [],
"last": "Xie",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {
"laboratory": "Lab on Intelligent Technology and Systems",
"institution": "Tsinghua University",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Sememes are defined as the minimum semantic units of human languages. As important knowledge sources, sememe-based linguistic knowledge bases have been widely used in many NLP tasks. However, most languages still do not have sememe-based linguistic knowledge bases. Thus we present a task of cross-lingual lexical sememe prediction, aiming to automatically predict sememes for words in other languages. We propose a novel framework to model correlations between sememes and multilingual words in low-dimensional semantic space for sememe prediction. Experimental results on real-world datasets show that our proposed model achieves consistent and significant improvements as compared to baseline methods in cross-lingual sememe prediction. The codes and data of this paper are available at https: //github.com/thunlp/CL-SP.",
"pdf_parse": {
"paper_id": "D18-1033",
"_pdf_hash": "",
"abstract": [
{
"text": "Sememes are defined as the minimum semantic units of human languages. As important knowledge sources, sememe-based linguistic knowledge bases have been widely used in many NLP tasks. However, most languages still do not have sememe-based linguistic knowledge bases. Thus we present a task of cross-lingual lexical sememe prediction, aiming to automatically predict sememes for words in other languages. We propose a novel framework to model correlations between sememes and multilingual words in low-dimensional semantic space for sememe prediction. Experimental results on real-world datasets show that our proposed model achieves consistent and significant improvements as compared to baseline methods in cross-lingual sememe prediction. The codes and data of this paper are available at https: //github.com/thunlp/CL-SP.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Words are regarded as the smallest meaningful unit of speech or writing that can stand by themselves in human languages, but not the smallest indivisible semantic unit of meaning. That is, the meaning of a word can be represented as a set of semantic components. For example, \"Man = human + male + adult\" and \"Boy = human + male + child\". In linguistics, the minimum semantic unit of meaning is named sememe (Bloomfield, 1926) . Some people believe that semantic meanings of concepts such as words can be composed of a limited closed set of sememes. And sememes can help us comprehend human languages better.",
"cite_spans": [
{
"start": 408,
"end": 426,
"text": "(Bloomfield, 1926)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Unfortunately, the lexical sememes of words are not explicit in most human languages. Hence, people construct sememe-based linguistic knowledge bases (KBs) via manually annotating every words with a pre-defined closed set of sememes. HowNet (Dong and Dong, 2003) is one of the most wellknown sememe-based linguistic KBs. Different from WordNet (Miller, 1995) which focuses on the relations between senses, it annotates each word with one or more relevant sememes. As illustrated in Fig. 1 , the word apple has two senses including apple (fruit) and apple (brand) in HowNet. The sense apple (fruit) has one sememe fruit, and the sense apple (brand) has five sememes including computer, PatternValue, able, bring and Speci-ficBrand. There exist about 2, 000 sememes and over 100 thousand labeled Chinese and English words in HowNet. HowNet has been widely used in various NLP applications such as word similarity computation (Liu and Li, 2002) , word sense disambiguation (Zhang et al., 2005) , question classification (Sun et al., 2007) and sentiment classification (Dang and Zhang, 2010) . However, most languages do not have such sememe-based linguistic KBs, which prevents us understanding and utilizing human languages to a greater extent. Therefore, it is important to build sememe-based linguistic KBs for various languages. Manual construction for sememebased linguistic KBs requires efforts of many linguistic experts, which is time-consuming and labor-intensive. For example, the construction of HowNet has cost lots of Chinese linguistic experts more than 10 years.",
"cite_spans": [
{
"start": 241,
"end": 262,
"text": "(Dong and Dong, 2003)",
"ref_id": "BIBREF9"
},
{
"start": 344,
"end": 358,
"text": "(Miller, 1995)",
"ref_id": "BIBREF31"
},
{
"start": 923,
"end": 941,
"text": "(Liu and Li, 2002)",
"ref_id": "BIBREF25"
},
{
"start": 970,
"end": 990,
"text": "(Zhang et al., 2005)",
"ref_id": "BIBREF45"
},
{
"start": 1017,
"end": 1035,
"text": "(Sun et al., 2007)",
"ref_id": "BIBREF37"
},
{
"start": 1065,
"end": 1087,
"text": "(Dang and Zhang, 2010)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 482,
"end": 488,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To address the issue of the high labor cost of manual annotation, we propose a new task, crosslingual lexical sememe prediction (CLSP) which aims to automatically predict lexical sememes for words in other languages. CLSP aims to assist in the annotation of linguistic experts. There are two critical challenges for CLSP: (1) There is not a consistent one-to-one match between words in different languages. For example, English word \"beautiful\" can refer to Chinese words of either \"\u7f8e\u4e3d\" or \"\u6f02\u4eae\". Hence, we cannot simply translate HowNet into another language. And how to recognize the semantic meaning of a word in other languages becomes a critical problem. 2Since there is a gap between the semantic meanings of words and sememes, we need to build semantic representations for words and sememes to capture the semantic relatedness between them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To tackle these challenges, in this paper, we propose a novel model for CLSP, which aims to transfer sememe-based linguistic KBs from source language to target language. Our model contains three modules including (1) monolingual word embedding learning which is intended for learning semantic representations of words for source and target languages respectively; (2) cross-lingual word embedding alignment which aims to bridge the gap between the semantic representations of words in two languages; (3) sememe-based word embedding learning whose objective is to incorporate sememe information into word representations. For simplicity, we do not consider the hierarchy information in HowNet in this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In experiments, we take Chinese as source language and English as target language to show the effectiveness of our model. Experimental results show that our proposed model could effectively predict lexical sememes for words with different frequencies in other languages. Our model also has consistent improvements on two auxiliary experiments including bilingual lexicon induction and monolingual word similarity computation by jointly learning the representations of sememes, words in source and target languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Since HowNet was published (Dong and Dong, 2003) , it has attracted wide attention of re-searchers. Most of related works focus on applying HowNet to specific NLP tasks (Liu and Li, 2002; Zhang et al., 2005; Sun et al., 2007; Dang and Zhang, 2010; Fu et al., 2013; Niu et al., 2017; Zeng et al., 2018; Gu et al., 2018) . To the best of our knowledge, only and Jin et al. (2018) conduct studies of augmenting HowNet by recommending sememes for new words. However, both of the two works are aimed to recommend sememes for monolingual words and not applicable to cross-lingual circumstance. Accordingly, our work is the first effort to automatically perform cross-lingual sememe prediction to enrich sememe-based linguistic KBs.",
"cite_spans": [
{
"start": 27,
"end": 48,
"text": "(Dong and Dong, 2003)",
"ref_id": "BIBREF9"
},
{
"start": 169,
"end": 187,
"text": "(Liu and Li, 2002;",
"ref_id": "BIBREF25"
},
{
"start": 188,
"end": 207,
"text": "Zhang et al., 2005;",
"ref_id": "BIBREF45"
},
{
"start": 208,
"end": 225,
"text": "Sun et al., 2007;",
"ref_id": "BIBREF37"
},
{
"start": 226,
"end": 247,
"text": "Dang and Zhang, 2010;",
"ref_id": "BIBREF7"
},
{
"start": 248,
"end": 264,
"text": "Fu et al., 2013;",
"ref_id": "BIBREF14"
},
{
"start": 265,
"end": 282,
"text": "Niu et al., 2017;",
"ref_id": "BIBREF33"
},
{
"start": 283,
"end": 301,
"text": "Zeng et al., 2018;",
"ref_id": "BIBREF43"
},
{
"start": 302,
"end": 318,
"text": "Gu et al., 2018)",
"ref_id": "BIBREF16"
},
{
"start": 360,
"end": 377,
"text": "Jin et al. (2018)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our novel model adopts the method of word representation learning (WRL). Recent years have witnessed great advances in WRL. Models like Skip-gram, CBOW (Mikolov et al., 2013a) and GloVe (Pennington et al., 2014) are immensely popular and achieve remarkable performance in many NLP tasks. However, most WRL methods learn distributional information of words from large corpora while the valuable information contained in semantic lexicons are disregarded. Therefore, some works try to inject semantic information of KBs into WRL (Faruqui et al., 2015; Mrk\u0161ic et al., 2016; Bollegala et al., 2016) . Nevertheless, these works are all applied to word-based KBs such as WordNet, few works pay attention to how to incorporate the knowledge from sememe-based linguistic KBs.",
"cite_spans": [
{
"start": 152,
"end": 175,
"text": "(Mikolov et al., 2013a)",
"ref_id": "BIBREF29"
},
{
"start": 186,
"end": 211,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF34"
},
{
"start": 527,
"end": 549,
"text": "(Faruqui et al., 2015;",
"ref_id": "BIBREF11"
},
{
"start": 550,
"end": 570,
"text": "Mrk\u0161ic et al., 2016;",
"ref_id": "BIBREF32"
},
{
"start": 571,
"end": 594,
"text": "Bollegala et al., 2016)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "There also have been plenty of studies working on cross-lingual WRL (Upadhyay et al., 2016; Ruder, 2017) . Most of them require parallel corpora (Zou et al., 2013; AP et al., 2014; Hermann and Blunsom, 2014; Ko\u010disk\u1ef3 et al., 2014; Gouws et al., 2015; Luong et al., 2015; Coulmance et al., 2015) . Some of them adopt unsupervised or weakly supervised methods (Mikolov et al., 2013b; Vuli\u0107 and Moens, 2015; Conneau et al., 2017; Artetxe et al., 2017) . There are also some works using a seed lexicon as the cross-lingual signal (Dinu et al., 2014; Faruqui and Dyer, 2014; Lazaridou et al., 2015; Shi et al., 2015; Lu et al., 2015; Gouws et al., 2015; Wick et al., 2016; Ammar et al., 2016; Duong et al., 2016; Vuli\u0107 and Korhonen, 2016) .",
"cite_spans": [
{
"start": 68,
"end": 91,
"text": "(Upadhyay et al., 2016;",
"ref_id": "BIBREF38"
},
{
"start": 92,
"end": 104,
"text": "Ruder, 2017)",
"ref_id": "BIBREF35"
},
{
"start": 145,
"end": 163,
"text": "(Zou et al., 2013;",
"ref_id": "BIBREF46"
},
{
"start": 164,
"end": 180,
"text": "AP et al., 2014;",
"ref_id": "BIBREF1"
},
{
"start": 181,
"end": 207,
"text": "Hermann and Blunsom, 2014;",
"ref_id": "BIBREF21"
},
{
"start": 208,
"end": 229,
"text": "Ko\u010disk\u1ef3 et al., 2014;",
"ref_id": "BIBREF21"
},
{
"start": 230,
"end": 249,
"text": "Gouws et al., 2015;",
"ref_id": "BIBREF15"
},
{
"start": 250,
"end": 269,
"text": "Luong et al., 2015;",
"ref_id": "BIBREF27"
},
{
"start": 270,
"end": 293,
"text": "Coulmance et al., 2015)",
"ref_id": "BIBREF6"
},
{
"start": 357,
"end": 380,
"text": "(Mikolov et al., 2013b;",
"ref_id": "BIBREF30"
},
{
"start": 381,
"end": 403,
"text": "Vuli\u0107 and Moens, 2015;",
"ref_id": "BIBREF40"
},
{
"start": 404,
"end": 425,
"text": "Conneau et al., 2017;",
"ref_id": "BIBREF5"
},
{
"start": 426,
"end": 447,
"text": "Artetxe et al., 2017)",
"ref_id": "BIBREF2"
},
{
"start": 525,
"end": 544,
"text": "(Dinu et al., 2014;",
"ref_id": "BIBREF8"
},
{
"start": 545,
"end": 568,
"text": "Faruqui and Dyer, 2014;",
"ref_id": "BIBREF12"
},
{
"start": 569,
"end": 592,
"text": "Lazaridou et al., 2015;",
"ref_id": "BIBREF22"
},
{
"start": 593,
"end": 610,
"text": "Shi et al., 2015;",
"ref_id": "BIBREF36"
},
{
"start": 611,
"end": 627,
"text": "Lu et al., 2015;",
"ref_id": "BIBREF26"
},
{
"start": 628,
"end": 647,
"text": "Gouws et al., 2015;",
"ref_id": "BIBREF15"
},
{
"start": 648,
"end": 666,
"text": "Wick et al., 2016;",
"ref_id": "BIBREF41"
},
{
"start": 667,
"end": 686,
"text": "Ammar et al., 2016;",
"ref_id": "BIBREF0"
},
{
"start": 687,
"end": 706,
"text": "Duong et al., 2016;",
"ref_id": "BIBREF10"
},
{
"start": 707,
"end": 732,
"text": "Vuli\u0107 and Korhonen, 2016)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In terms of our cross-lingual sememe prediction task, parallel data-based bilingual WRL methods are unsuitable because most language pairs have no large parallel corpora. Besides, unsupervised methods are not appropriate either as they are generally hard to learn high-quality bilingual word embeddings. Therefore, we choose the seed lexicon method in our model, and further introduce matching mechanism that is inspired by Zhang et al. (2017) to enhance its performance.",
"cite_spans": [
{
"start": 424,
"end": 443,
"text": "Zhang et al. (2017)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this section, we introduce our novel model for CLSP. Here we define the language with sememe annotations as source language and the language without sememe annotations as target language. The main idea of our model is to learn word embeddings of source and target languages jointly in a unified semantic space, and then predict sememes for words in target language according to the words with similar semantic meanings in source language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "Our method consists of three parts: monolingual word representation learning, cross-lingual word embedding alignment and sememe-based word representation learning. Hence, we define the objective function of our method corresponding to the three parts:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L = L mono + L cross + L sememe .",
"eq_num": "(1)"
}
],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "Here, the monolingual term L mono is designed for learning monolingual word embeddings from nonparallel corpora for source and target languages respectively. The cross-lingual term L cross aims to align cross-lingual word embeddings in a unified semantic space. And L sememe can draw sememe information into word representation learning and conduce to better word embeddings for sememe prediction. In the following subsections, we introduce the three parts in detail.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "Monolingual word representation is responsible for explaining regularities in monolingual corpora of source and target languages. Since the two corpora are non-parallel, L mono comprises two monolingual sub-models that are independent of each other:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual Word Representation",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L mono = L S mono + L T mono ,",
"eq_num": "(2)"
}
],
"section": "Monolingual Word Representation",
"sec_num": "3.1"
},
{
"text": "where the superscripts S and T denote source and target languages respectively. As a common practice, we choose the well established Skip-gram model to obtain monolingual word embeddings. Skip-gram model is aimed at maximizing the predictive probability of context words conditioned on the centered word. Formally, taking the source side for example, given a training word sequence",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual Word Representation",
"sec_num": "3.1"
},
{
"text": "{w S 1 , \u2022 \u2022 \u2022 , w S n }, Skip-gram model intends to minimize: L S mono = \u2212 n\u2212K \u2211 c=K+1 \u2211 \u2212K\u2264k\u2264K,k\u0338 =0 log P (w S c+k |w S c ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual Word Representation",
"sec_num": "3.1"
},
{
"text": "(3) where K is the size of the sliding window. P (w S c+k |w S c ) stands for the predictive probability of one of the context words conditioned on the centered word w S c , formalized by the following softmax function:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual Word Representation",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (w S c+k |w S c ) = exp(w S c+k \u2022 w S c ) \u2211 w S s \u2208V S exp(w S s \u2022 w S c ) ,",
"eq_num": "(4)"
}
],
"section": "Monolingual Word Representation",
"sec_num": "3.1"
},
{
"text": "in which V s indicates the word vocabulary of source language. L T mono can be formulated similarly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual Word Representation",
"sec_num": "3.1"
},
{
"text": "Cross-lingual word embedding alignment aims to build a unified semantic space for the words in source and target languages. Inspired by Zhang et al. (2017) , we align the cross-lingual word embeddings with signals of a seed lexicon and selfmatching. Formally, L cross is composed of two terms including alignment by seed lexicon L seed and alignment by matching L match :",
"cite_spans": [
{
"start": 136,
"end": 155,
"text": "Zhang et al. (2017)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-lingual Word Embedding Alignment",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L cross = \u03bb s L seed + \u03bb m L match ,",
"eq_num": "(5)"
}
],
"section": "Cross-lingual Word Embedding Alignment",
"sec_num": "3.2"
},
{
"text": "where \u03bb s and \u03bb m are hyperparameters for controlling relative weightings of the two terms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-lingual Word Embedding Alignment",
"sec_num": "3.2"
},
{
"text": "The seed lexicon term L seed encourages word embeddings of translation pairs in a seed lexicon D to be close, which can be achieved via a L 2 regularizer:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment by Seed Lexicon",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L seed = \u2211 \u27e8w S s ,w T t \u27e9\u2208D \u2225w S s \u2212 w T t \u2225 2 ,",
"eq_num": "(6)"
}
],
"section": "Alignment by Seed Lexicon",
"sec_num": null
},
{
"text": "in which w S s and w T t indicate the words in source and target languages in the seed lexicon respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment by Seed Lexicon",
"sec_num": null
},
{
"text": "As for the matching process, it is founded on an assumption that each target word should be matched to a single source word or a special empty word, and vice versa. The goal of the matching process is to find the matched source (target) word for each target (source) word and maximize the matching probabilities for all the matched word pairs. The loss of this part can be formulated as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment by Matching Mechanism",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L match = L T 2S match + L S2T match ,",
"eq_num": "(7)"
}
],
"section": "Alignment by Matching Mechanism",
"sec_num": null
},
{
"text": "where L T 2S match is the term for target-to-source matching and L S2T match is the term for source-totarget matching.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment by Matching Mechanism",
"sec_num": null
},
{
"text": "Next, we give a detailed explanation of target-to-source matching, and the source-totarget matching is defined in the same way. We first introduce a latent variable",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment by Matching Mechanism",
"sec_num": null
},
{
"text": "m t \u2208 {0, 1, \u2022 \u2022 \u2022 , |V S |} (t = 1, 2, \u2022 \u2022 \u2022 , |V T |) for each target word w T t ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment by Matching Mechanism",
"sec_num": null
},
{
"text": "where |V S | and |V T | indicate the vocabulary size of source and target languages respectively. Here, m t specifies the index of the source word that w T t matches with, and m t = 0 signifies the empty word is matched. Then we have",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment by Matching Mechanism",
"sec_num": null
},
{
"text": "m = {m 1 , m 2 , \u2022 \u2022 \u2022 , m |V T | }",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment by Matching Mechanism",
"sec_num": null
},
{
"text": ", and can formalize the target-to-source matching term:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment by Matching Mechanism",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L T 2S match = \u2212 log P (C T |C S ) = \u2212 log \u2211 m P (C T , m|C S ),",
"eq_num": "(8)"
}
],
"section": "Alignment by Matching Mechanism",
"sec_num": null
},
{
"text": "where C T and C S denote the target and source corpus respectively. Here, we simply assume that the matching processes of target words are independent of each other. Therefore, we have:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment by Matching Mechanism",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (C T , m|C S ) = \u220f w T \u2208C T P (w T , m|C S ) = |V T | \u220f t=1 P (w T t |w S mt ) c(w T t ) ,",
"eq_num": "(9)"
}
],
"section": "Alignment by Matching Mechanism",
"sec_num": null
},
{
"text": "where w S mt is the source word that w T t matches with, and c(w T t ) is the number of times w T t occurs in the target corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment by Matching Mechanism",
"sec_num": null
},
{
"text": "Sememe-based word representation is intended for improving word embeddings for sememe prediction by introducing the information of sememebased linguistic KBs of source language. In this section, we present two methods of sememe-based word representation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sememe-based Word Representation",
"sec_num": "3.3"
},
{
"text": "A simple and intuitive method is to let words with similar sememe annotations tend to have similar word embeddings, which we name word relationbased approach. To begin with, we construct a synonym list from sememe-based linguistic KBs of source language, where we regard words sharing a certain number of sememes as synonyms. Next, we force synonyms to have closer word embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Relation-based Approach",
"sec_num": null
},
{
"text": "Formally, we let w S i be original word embedding of w S i and\u0175 S i be its adjusted word embedding. And let Syn(w S i ) denote the synonym set of word w S i . Then the loss function is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Relation-based Approach",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L sememe = \u2211 w S i \u2208V S [ \u03b1 i \u2225w S i \u2212\u0175 S i \u2225 2 + \u2211 w S j \u2208Syn(w S i ) \u03b2 ij \u2225\u0175 S i \u2212\u0175 S j \u2225 2 ] ,",
"eq_num": "(10)"
}
],
"section": "Word Relation-based Approach",
"sec_num": null
},
{
"text": "where \u03b1 and \u03b2 control the relative strengths of the two terms. It should be noted that the idea of forcing similar words to have close word embeddings is similar to the state-of-theart retrofitting approach (Faruqui et al., 2015) . However, retrofitting approach cannot be applied here because sememe-based linguistic KBs such as HowNet cannot directly provide its needed synonym list.",
"cite_spans": [
{
"start": 207,
"end": 229,
"text": "(Faruqui et al., 2015)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Relation-based Approach",
"sec_num": null
},
{
"text": "Simple and effective as the word relation-based approach is, it cannot make full use of the information of sememe-based linguistic KBs because it disregards the complicated relations between sememes and words as well as relations between different sememes. To address this limitation, we propose sememe embedding-based approach, which learns both sememe and word embeddings jointly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sememe Embedding-based Approach",
"sec_num": null
},
{
"text": "In this approach, we represent sememes with distributed vectors as well and place them into the same semantic space as words. Similar to SPSE , which learns sememe embeddings by decomposing word-sememe matrix and sememe-sememe matrix, our method utilizes sememe embeddings as regularizers to learn better word embeddings. Different from SPSE, we do not use pre-trained word embeddings. Instead, we learn word embeddings and sememe embeddings simultaneously.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sememe Embedding-based Approach",
"sec_num": null
},
{
"text": "More specifically, from HowNet we can extract a source-side word-sememe matrix M S with M S sj = 1 indicating word w S s is annotated with sememe x j , otherwise M S sj = 0. Hence by factorizing M S , we can define the loss function as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sememe Embedding-based Approach",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L sememe = \u2211 w S s \u2208V S ,x j \u2208X (w S s \u2022x j +b s +b \u2032 j \u2212M S sj ) 2 ,",
"eq_num": "(11)"
}
],
"section": "Sememe Embedding-based Approach",
"sec_num": null
},
{
"text": "where b s and b \u2032 j are the biases of w S s and x j , and X denotes sememe set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sememe Embedding-based Approach",
"sec_num": null
},
{
"text": "In this approach, we obtain word and sememe embeddings in a unified semantic space. The sememe embeddings bear all the information about the relationships between words and sememes, and they inject the information into word embeddings. Therefore, the word embeddings are expected to be more suitable for sememe prediction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sememe Embedding-based Approach",
"sec_num": null
},
{
"text": "When training monolingual word embeddings, we use negative sampling following Mikolov et al. (2013a) . In the optimization of sememe part, we adopt the iterative updating method following Faruqui et al. (2015) for word relation-based approach and stochastic gradient descent (SGD) for sememe embedding-based approach. As for the optimization of the seed lexicon term of crosslingual part, we also apply SGD.",
"cite_spans": [
{
"start": 78,
"end": 100,
"text": "Mikolov et al. (2013a)",
"ref_id": "BIBREF29"
},
{
"start": 188,
"end": 209,
"text": "Faruqui et al. (2015)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": null
},
{
"text": "Nevertheless, due to the existence of the latent variable, optimization of the matching process in cross-lingual part poses a challenge. We settle on Viterbi EM algorithm to address the problem. Next, we still take the target-to-source side as an example and give a detailed description of the training process using Viterbi EM algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": null
},
{
"text": "Viterbi EM algorithm alternates between a Viterbi E step and a subsequent M step. The Viterbi E step aims to find the most probable matched word pairs given the current parameters. Considering the independence, we can seek the match for each word individually:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "m t = arg max s\u2208{0,1,\u2022\u2022\u2022 ,|V S |} P (w T t |w S s ).",
"eq_num": "(12)"
}
],
"section": "Training",
"sec_num": null
},
{
"text": "As for the parametrization of the matching probability, there are various choices. For computational simplicity, we select cosine similarity:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (w T t |w S s ) = { \u03f5 if s = 0, cos(w T t , w S s ) otherwise,",
"eq_num": "(13)"
}
],
"section": "Training",
"sec_num": null
},
{
"text": "where \u03f5 is a hyperparameter indicating the probability of matching the empty word. Therefore, the Viterbi E step computes matching by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "m t = arg max s\u2208{1,\u2022\u2022\u2022 ,|V S |} cos(w T t , w S s ),",
"eq_num": "(14)"
}
],
"section": "Training",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "m t = {m t if cos(w T t , w S mt ) > \u03f5, 0 otherwise.",
"eq_num": "(15)"
}
],
"section": "Training",
"sec_num": null
},
{
"text": "From this, we can see that \u03f5 serves as a threshold to keep out unreliable matched pairs. The Viterbi M step performs maximization as if the latent variable has been observed in the Viterbi E step. Thus, we can treat the matched pairs as correct translations, and use a L 2 regularizer as well. Consequently, the M step computes:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "(\u0175 S ,\u0175 T ) = arg max w S ,w T M(w S , w T ),",
"eq_num": "(16)"
}
],
"section": "Training",
"sec_num": null
},
{
"text": "where M(w S , w T ) is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "M(w S , w T ) = \u2212 |V T | \u2211 t=1 I[m t \u0338 = 0] c(w T t ) |C T | \u2225w T t \u2212w S mt \u2225 2 .",
"eq_num": "(17)"
}
],
"section": "Training",
"sec_num": null
},
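The Viterbi EM matching loop in Eqs. (12)-(17) can be sketched as follows; this is a minimal illustration with toy NumPy embeddings and our own function names, not the released implementation (the M step is shown as a single gradient step on the matching objective):

```python
import numpy as np

def viterbi_e_step(T, S, eps):
    """Viterbi E step (Eqs. 14-15): match each target word to its most
    similar source word by cosine similarity; matches whose similarity
    does not exceed eps are mapped to the empty word (index -1 here,
    standing in for s = 0)."""
    Tn = T / np.linalg.norm(T, axis=1, keepdims=True)
    Sn = S / np.linalg.norm(S, axis=1, keepdims=True)
    sims = Tn @ Sn.T                      # |V_T| x |V_S| cosine matrix
    m = sims.argmax(axis=1)               # Eq. 14: best source per target
    best = sims[np.arange(len(T)), m]
    m[best <= eps] = -1                   # Eq. 15: eps acts as a threshold
    return m

def viterbi_m_step(T, S, m, counts, lr=0.1):
    """One gradient step on the M-step objective (Eqs. 16-17): pull each
    matched target/source embedding pair together, weighted by the
    target word's relative corpus frequency."""
    total = counts.sum()
    for t, s in enumerate(m):
        if s == -1:
            continue  # unmatched targets do not contribute to the loss
        grad = 2.0 * counts[t] / total * (T[t] - S[s])
        T[t] = T[t] - lr * grad
        S[s] = S[s] + lr * grad
    return T, S
```

Alternating these two steps implements the hard-EM training described above: the E step fixes a matching, the M step moves matched embeddings closer.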
{
"text": "Prediction Since we assume that words with similar sememe annotations are similar and similar words should have similar sememes, which resembles collaborative filtering in personalized recommendation, we can recommend sememes for target words according to their most similar source words. Formally, we define the score function P (x j |w T t ) of sememes x j given a target word w T t as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (x j |w T t ) = \u2211 w S s \u2208V S cos(w S s , w T t )\u2022M S sj \u2022c rs ,",
"eq_num": "(18)"
}
],
"section": "Training",
"sec_num": null
},
{
"text": "where r s is the descending rank of word similarity cos(w S s , w T t ) for the source word w S s , and c \u2208 (0, 1) is a hyperparameter. Thus, c rs is a declined confidence factor which can eliminate the noise from irrelevant source words and concentrate on the most similar source words when predicting sememes for target words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": null
},
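The scoring rule of Eq. (18) can be sketched as follows; `predict_sememes`, its arguments, and the rank convention (the most similar source word gets rank 1) are our own illustrative choices:

```python
import numpy as np

def predict_sememes(w_t, S, M, c=0.8, top=100):
    """Score sememes for a target embedding w_t (Eq. 18).
    S: |V_S| x d source embeddings; M: |V_S| x |X| binary sememe matrix.
    Only the `top` most similar source words contribute, each weighted
    by its cosine similarity times the declining confidence factor
    c ** rank."""
    sims = S @ w_t / (np.linalg.norm(S, axis=1) * np.linalg.norm(w_t))
    order = np.argsort(-sims)[:top]       # source words by descending similarity
    scores = np.zeros(M.shape[1])
    for rank, s in enumerate(order, start=1):
        scores += sims[s] * M[s] * c ** rank
    return scores
```

Sememes are then recommended in decreasing order of these scores.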
{
"text": "In this section, we first introduce the dataset used in the experiments and then describe the experimental settings of both baseline method and our model. Next, we present the experimental results of different methods on the task of cross-lingual lexical sememe prediction. And then we conduct detailed analysis and exhaustive case studies. Following this, we investigate the effect of word frequency on cross-lingual sememe prediction results. Finally, we perform further quantitative analysis through two sub-tasks including bilingual lexicon induction and word similarity computation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "We use sememe annotations in HowNet for sememe prediction. HowNet annotates sememes for 118, 346 Chinese words and 104, 025 English words. The number of sememes in total is 1, 983. Since some sememes only appear few times in HowNet, which are expected to be unimportant, we filter out those low-frequency sememes. Specifically, the frequency threshold is 5, and the final number of distinct sememes used in our experiments is 1, 400.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1"
},
{
"text": "In our experiments, Chinese is source language and English is target language. To learn Chinese and English monolingual word embeddings, we extract about 2.0G text from Sogou-T 1 and Wikipedia 2 respectively. And we use THULAC 3 (Li and Sun, 2009) for Chinese word segmentation.",
"cite_spans": [
{
"start": 229,
"end": 247,
"text": "(Li and Sun, 2009)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1"
},
{
"text": "As for seed lexicon, we build it in a similar way to Zhang et al. (2017) . First, we employ Google Translation API 4 to translate the source side (Chinese) vocabulary. Then the translations in the target language (English) are queried again in the reverse direction to translate back to the source language (Chinese). And we only keep the translation pairs whose back translated words match with the original source words.",
"cite_spans": [
{
"start": 53,
"end": 72,
"text": "Zhang et al. (2017)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1"
},
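The back-translation filtering above can be sketched as follows, with plain dictionaries standing in for the forward and reverse calls to the translation service (the stub names `translate_sv` and `translate_vs` are hypothetical, not a real API):

```python
def build_seed_lexicon(src_vocab, translate_sv, translate_vs):
    """Keep (source, target) pairs whose target-side translation maps
    back to the original source word (back-translation consistency).
    translate_sv / translate_vs are dict stubs for the forward and
    reverse translation calls."""
    lexicon = []
    for src in src_vocab:
        tgt = translate_sv.get(src)
        # Keep the pair only if translating back recovers the source word.
        if tgt is not None and translate_vs.get(tgt) == src:
            lexicon.append((src, tgt))
    return lexicon
```

This consistency check discards unstable translations and yields a cleaner seed lexicon.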
{
"text": "In the task of bilingual lexicon induction, we opt for Chinese-English Translation Lexicon Version 3.0 5 to be the gold standard. In the task of word similarity computation, we choose WordSim-240 and WordSim-297 (Jin and Wu, 2012) datasets for Chinese, and WordSim-353 (Finkelstein et al., 2002) and SimLex-999 (Hill et al., 2015) datasets for English to evaluate the performance of our 1 Sogou-T is a corpus of web pages provided by a Chinese commercial search engine. https://www.sogou.com/ labs/resource/t.php 2 https://dumps.wikimedia.org/ 3 http://thulac.thunlp.org/ 4 https://cloud.google.com/translate/ 5 https://catalog.ldc.upenn.edu/ LDC2002L27 model. These datasets contain word pairs as well as human-assigned similarity scores. The word vectors are evaluated by ranking the word pairs according to their cosine similarities, and measuring Spearman's rank correlation coefficient with the human ratings.",
"cite_spans": [
{
"start": 212,
"end": 230,
"text": "(Jin and Wu, 2012)",
"ref_id": "BIBREF20"
},
{
"start": 269,
"end": 295,
"text": "(Finkelstein et al., 2002)",
"ref_id": "BIBREF13"
},
{
"start": 311,
"end": 330,
"text": "(Hill et al., 2015)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1"
},
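The Spearman-based evaluation protocol can be sketched as follows; this is a self-contained rank-correlation implementation with average ranks for ties, not the evaluation script used in the paper:

```python
def ranks(xs):
    """Average ranks (1-based); tied values share the mean of their ranks."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(a, b):
    """Spearman's rho: Pearson correlation of the two rank vectors."""
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    va = sum((x - ma) ** 2 for x in ra) ** 0.5
    vb = sum((y - mb) ** 2 for y in rb) ** 0.5
    return cov / (va * vb)
```

Here `a` would hold the model's cosine similarities over the word pairs and `b` the human ratings; rank correlation rewards any monotone agreement between the two.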
{
"text": "We empirically set the dimension of word and sememe embeddings to 200. And the embeddings are all randomly initialized. In monolingual word embedding learning, we follow the optimal parameter settings in Mikolov et al. (2013a) . We set the window size K to 5, down-sampling rate for highfrequency words to 10 \u22125 , learning rate to 0.025 and the number of negative samples to 5. In crosslingual word embedding alignment, the seed lexicon term weight \u03bb s is 0.01, and the matching term weight \u03bb m is 1, 000. In sememe-based word representation, the number of shared sememes for synonyms in the word relation-based approach is 2. In the training of matching process, we set \u03f5 to 0.5 empirically. When predicting sememes for words in target language, we only consider 100 most similar source words for each target word and the attenuation parameter c is 0.8. The testing set for cross-lingual lexical sememe prediction contains 2, 000 randomly selected English words from the vocabulary.",
"cite_spans": [
{
"start": 204,
"end": 226,
"text": "Mikolov et al. (2013a)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4.2"
},
{
"text": "We evaluate our model by recommending sememes for English words. In HowNet, many words have multiple sememes, so that sememe prediction can be regarded as a multi-label classification task. We use mean average precision (MAP) and F 1 score to evaluate the sememe prediction results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-lingual Lexical Sememe Prediction",
"sec_num": "4.3"
},
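The MAP metric for this multi-label setting can be sketched as follows (our own minimal implementation; the official evaluation script may differ in details such as tie handling):

```python
def average_precision(ranked, gold):
    """AP for one word: precision at each rank where a gold sememe
    appears, averaged over the gold sememe set."""
    hits, total = 0, 0.0
    for i, x in enumerate(ranked, start=1):
        if x in gold:
            hits += 1
            total += hits / i  # precision at this hit's rank
    return total / len(gold) if gold else 0.0

def mean_average_precision(predictions, golds):
    """MAP: mean of per-word APs over the whole test set."""
    return sum(average_precision(r, g)
               for r, g in zip(predictions, golds)) / len(golds)
```

Each `ranked` list is the model's sememe ranking for one test word, and `gold` is that word's annotated sememe set.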
{
"text": "We compare our model that incorporates sememe information with word relation-based approach (named CLSP-WR) and our model which jointly trains word and sememe embeddings (named CLSP-SE) with a baseline method BiLex (Zhang et al., 2017) , a bilingual WRL model without incorporation of sememe information. For BiLex, we use its trained bilingual word embeddings to predict sememes for the words in target language with our sememe prediction approach. seed lexicon sizes in {1000, 2000, 4000, 6000 6 }. From the table, we can clearly see that:",
"cite_spans": [
{
"start": 215,
"end": 235,
"text": "(Zhang et al., 2017)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-lingual Lexical Sememe Prediction",
"sec_num": "4.3"
},
{
"text": "(1) Our two models perform much better compared with BiLex in all the seed lexicon size settings. It indicates that incorporating sememe information into word embeddings can effectively improve the performance of predicting sememes for target words. The reason is that both of our models make words with similar sememe annotations have similar embeddings, and as a result, we can recommend better sememes for target words according to its related source words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-lingual Lexical Sememe Prediction",
"sec_num": "4.3"
},
{
"text": "(2) CLSP-SE model achieves better results than CLSP-WR model. The reason is that by representing sememes in a latent semantic space, CLSP-SE model can further capture the relatedness between sememes as well as the relatedness between words and sememes, which is helpful for modeling the representations of those words with similar sememes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-lingual Lexical Sememe Prediction",
"sec_num": "4.3"
},
{
"text": "In case study, we conduct qualitative analysis to explain the effectiveness of our models with detailed cases. We show two examples of crosslingual word sememe prediction, in which we predict sememes for handcuffs and canoeist. Fig. 2 shows the embeddings of five closest Chinese and English words to handcuffs and canoeist, and the vector of each word is projected down to two dimensions using t-SNE (Maaten and Hinton, 2008) . 6 The largest seed lexicon size is 6000 because that is the maximum number of translation word pairs that we can obtain from the bilingual corpora. Table 2 lists top-5 sememes we predict for the two words and the sememes annotated for each word in HowNet are in boldface. In the table, we also exhibit the annotated sememes of the five closest Chinese words.",
"cite_spans": [
{
"start": 401,
"end": 426,
"text": "(Maaten and Hinton, 2008)",
"ref_id": "BIBREF28"
},
{
"start": 429,
"end": 430,
"text": "6",
"ref_id": null
}
],
"ref_spans": [
{
"start": 228,
"end": 234,
"text": "Fig. 2",
"ref_id": "FIGREF1"
},
{
"start": 577,
"end": 584,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Case Study",
"sec_num": "4.4"
},
{
"text": "In the first example, our model finds the best translated word for handcuffs in Chinese \u2f3f \u94d0 \"handcuffs\", whose sememe annotations are exactly the same as those of handcuffs. In addition, the second closest Chinese word \u9563 \u94d0 \"shackles\" is a synonym for \u2f3f\u94d0 \"handcuffs\" and also has the same sememe annotations. Therefore, our model predicts all the correct sememes successfully. From the prediction results of this example, we notice that our model can accurately predict general sememes like \u7528\u5177 \"tool\" and \u2f08 \"human\", which are supposed to be difficult to predict.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Case Study",
"sec_num": "4.4"
},
{
"text": "In the second example, accurate Chinese translated counterpart for canoeist does not exist, but our model still hits all the three annotated sememes in the top-5 predicted sememes. By observing the most similar Chinese words, we can find that although these words do not have the same meaning as canoeist, they are related to canoeist in different aspects. For example, \u77ed\u8dd1 \"sprint\" and canoeist are both in the sports domain so that they share the sememes \u953b\u70bc \"exercise\" and \u4f53\u80b2 \"sport\". \u540d\u5c06 \"sports star\" has the meaning of sports star and it can provide the sememe \u2f08 \"human\" in sememe prediction. Furthermore, it is noteworthy that our model predicts \u8239 \"ship\" due to the nearest Chinese words \u72ec \u2f4a \u2f88 \"canoe\" and \u76ae \u8247 \"kayak\", whereas \u8239 \"ship\" is not annotated for canoeist in HowNet. It is obvious that \u8239 \"ship\" is an appropriate sememe for canoeist. Since HowNet is manually annotated by experts, misannotated words always exist inevitably, which in some cases underestimates our models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Case Study",
"sec_num": "4.4"
},
{
"text": "English Word handcuffs \u7528\u5177 \"tool\", \"police\", \"detain\", \u2f08 \"human\", \"guilty\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Type Words Sememes",
"sec_num": null
},
{
"text": "5 Nearest Chinese Words \u2f3f\u94d0 \"handcuffs\" \"guilty\", \"police\", \u2f08 \"human\", \"detain\", \u7528\u5177 \"tool\" \u9563\u94d0 \"shackles\" \"guilty\", \"police\", \u2f08 \"human\", \"detain\", \u7528\u5177 \"tool\" \u7ed1 \"tie\" \u5305\u624e \"wrap\" \u87ba\u4e1d\u2f11 \"screwdriver\" \u7528\u5177 \"tool\", \u653e\u677e \"loosen\", \u52d2\u7d27 \"tighten\" \u7ef3 \"rope\" \u7ebf \"linear\", \u6750\u6599 \"material\", \u62f4\u8fde \"fasten\" English Word canoeist \u953b\u70bc \"exercise\", \u2f08 \"human\", \u4f53\u80b2 \"sport\", \u4e8b\u60c5 \"fact\", \u8239 \"ship\" 5 Nearest Chinese Words \u77ed\u8dd1 \"sprint\" \u4e8b\u60c5 \"fact\" \u953b\u70bc \"exercise\" \u4f53\u80b2 \"sport\" \u72ec\u2f4a\u2f88 \"canoe\" \u8239 \"ship\" \u76ae\u8247 \"kayak\" \u8239 \"ship\" \u540d\u5c06 \"sports star\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Type Words Sememes",
"sec_num": null
},
{
"text": "\u8457\u540d \"famous\", \u2f08 \"human\", \u5b98 \"official\", \u519b \"military\" \u76ae\u5212\u8247 \"kayak\" \u4e8b\u60c5 \"fact\", \u953b\u70bc \"exercise\", \u4f53\u80b2 \"sport\" Table 2 : Two examples of cross-lingual lexical sememe prediction.",
"cite_spans": [],
"ref_spans": [
{
"start": 100,
"end": 107,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Type Words Sememes",
"sec_num": null
},
{
"text": "To explore how frequencies of target words affect cross-lingual sememe prediction results, we split the testing set into four subsets according to word frequency and then calculate the sememe prediction MAP and F 1 score for each subset. The results are shown in Table 3 . From the table we can see that:",
"cite_spans": [],
"ref_spans": [
{
"start": 263,
"end": 270,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Effect of Word Frequency",
"sec_num": "4.5"
},
{
"text": "(1) The more frequently a target word appears in the corpus, the better its predicted sememes are. It is because high-frequency words normally have better word embeddings, which are crucial to sememe prediction. (2) Our models evidently perform better than BiLex in different word frequencies, especially in low frequency. It indicates that by considering external information of HowNet, our models are more robust and can competently handle sparse scenarios.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of Word Frequency",
"sec_num": "4.5"
},
{
"text": "In this section, we conduct two typical auxiliary experiments to further analyze the superiority of our models quantitatively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Further Quantitative Analysis",
"sec_num": "4.6"
},
{
"text": "Our models learn bilingual word embeddings in one unified semantic space. Here we use translation top-1 and top-5 average precision (P@1 and P@5) to evaluate bilingual lexicon induction performance of our models and BiLex. The seed lexicon size also varies in {1000, 2000, 4000, 6000}. The results are shown in Table 4 . From this table, we observe that our models, especially CLSP-SE model, enhance the performance of word translation compared to BiLex no matter how large the seed lexicon is. It indicates that our models can bind bilingual word embeddings better.",
"cite_spans": [],
"ref_spans": [
{
"start": 311,
"end": 318,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Bilingual Lexicon Induction",
"sec_num": null
},
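P@1 and P@5 can be computed as sketched below; the function and argument names are our own:

```python
def precision_at_k(candidates, gold, k):
    """P@k for bilingual lexicon induction: the fraction of test words
    whose gold translation appears among the top-k retrieved candidate
    translations."""
    hits = sum(1 for cands, g in zip(candidates, gold) if g in cands[:k])
    return hits / len(gold)
```

Here `candidates[i]` is the ranked list of translations retrieved for the i-th test word and `gold[i]` its reference translation.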
{
"text": "We also evaluate the task of monolingual word similarity computation on WordSim-240 (WS-240) and WordSim-297 (WS-297) datasets for Chinese, and WordSim-353 (WS-353) and SimLex-999 (SL-999) datasets for English. Table 5 : Performance on monolingual word similarity computation with seed lexicon size 6000. Table 5 shows the results of monolingual word similarity computation on four datasets. From the table, we find that: (1) Our models perform better than BiLex on both Chinese word similarity datasets. It signifies incorporating sememe information helps learn better monolingual embeddings; (2) CLSP-WR model does not enhance English word similarity results but CLSP-SE model does. It is because CLSP-WR model only post-processes Chinese word embeddings and keeps English word embeddings unchanged while CLSP-SE model undertakes bilingual alignment and sememe information incorporation together, which makes English word embeddings improve with Chinese word embeddings.",
"cite_spans": [],
"ref_spans": [
{
"start": 211,
"end": 218,
"text": "Table 5",
"ref_id": null
},
{
"start": 305,
"end": 312,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Word Similarity Computation",
"sec_num": null
},
{
"text": "In this paper, we introduce a new task of crosslingual sememe prediction. This task is very important because the construction of sememe-based linguistic knowledge bases in various languages is beneficial to better understanding these languages. We propose a simple and effective model for this task, including monolingual word representation learning, cross-lingual word representation alignment and sememe-based word representation learning. Experimental results on real-world datasets show that our model achieves consistent and significant improvements compared to baseline method in cross-lingual sememe prediction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
},
{
"text": "In the future, we will explore the following research directions: (1) In this paper, for simplification, we ignore the rich hierarchy information in HowNet and also ignore the fact that a word may have multiple senses. We will extend our models to consider the structure information of sememe and multiple senses of words; (2) In fact, our framework for cross-lingual lexical sememe prediction can be transferred to other cross-lingual tasks. We will explore the effectiveness of our model in these tasks such as cross-lingual information retrieval.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
}
],
"back_matter": [
{
"text": "This research is funded by the National 973 project (No. 2014CB340501). It is also partially supported by the NExT++ project, the National Research Foundation, Prime Minister's Office, Singapore under its IRC@Singapore Funding Initiative. Hao Zhu is supported by Tsinghua University Initiative Scientific Research Program. We also thank the anonymous reviewers for their valuable comments and suggestions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Massively multilingual word embeddings",
"authors": [
{
"first": "Waleed",
"middle": [],
"last": "Ammar",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Mulcaire",
"suffix": ""
},
{
"first": "Yulia",
"middle": [],
"last": "Tsvetkov",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Noah A",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1602.01925"
]
},
"num": null,
"urls": [],
"raw_text": "Waleed Ammar, George Mulcaire, Yulia Tsvetkov, Guillaume Lample, Chris Dyer, and Noah A Smith. 2016. Massively multilingual word embeddings. arXiv preprint arXiv:1602.01925.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "An autoencoder approach to learning bilingual word representations",
"authors": [
{
"first": "A",
"middle": [
"P"
],
"last": "Sarath Chandar",
"suffix": ""
},
{
"first": "Stanislas",
"middle": [],
"last": "Lauly",
"suffix": ""
},
{
"first": "Hugo",
"middle": [],
"last": "Larochelle",
"suffix": ""
},
{
"first": "Mitesh",
"middle": [],
"last": "Khapra",
"suffix": ""
},
{
"first": "Balaraman",
"middle": [],
"last": "Ravindran",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Vikas",
"suffix": ""
},
{
"first": "Amrita",
"middle": [],
"last": "Raykar",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Saha",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sarath Chandar AP, Stanislas Lauly, Hugo Larochelle, Mitesh Khapra, Balaraman Ravindran, Vikas C Raykar, and Amrita Saha. 2014. An autoencoder ap- proach to learning bilingual word representations. In Proceedings of NIPS.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Learning bilingual word embeddings with (almost) no bilingual data",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017. Learning bilingual word embeddings with (almost) no bilingual data. In Proceedings of ACL.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A set of postulates for the science of language",
"authors": [
{
"first": "Leonard",
"middle": [],
"last": "Bloomfield",
"suffix": ""
}
],
"year": 1926,
"venue": "Language",
"volume": "2",
"issue": "3",
"pages": "153--164",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leonard Bloomfield. 1926. A set of postulates for the science of language. Language, 2(3):153-164.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Joint word representation learning using a corpus and a semantic lexicon",
"authors": [
{
"first": "Danushka",
"middle": [],
"last": "Bollegala",
"suffix": ""
},
{
"first": "Mohammed",
"middle": [],
"last": "Alsuhaibani",
"suffix": ""
},
{
"first": "Takanori",
"middle": [],
"last": "Maehara",
"suffix": ""
},
{
"first": "Ken-Ichi",
"middle": [],
"last": "Kawarabayashi",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danushka Bollegala, Mohammed Alsuhaibani, Takanori Maehara, and Ken-ichi Kawarabayashi. 2016. Joint word representation learning using a corpus and a semantic lexicon. In Proceedings of AAAI.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Word translation without parallel data",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
},
{
"first": "Ludovic",
"middle": [],
"last": "Denoyer",
"suffix": ""
},
{
"first": "Herv\u00e9",
"middle": [],
"last": "J\u00e9gou",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1710.04087"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, and Herv\u00e9 J\u00e9gou. 2017. Word translation without parallel data. arXiv preprint arXiv:1710.04087.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Transgram, fast cross-lingual word-embeddings",
"authors": [
{
"first": "Jocelyn",
"middle": [],
"last": "Coulmance",
"suffix": ""
},
{
"first": "Jean-Marc",
"middle": [],
"last": "Marty",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wenzek",
"suffix": ""
},
{
"first": "Amine",
"middle": [],
"last": "Benhalloum",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jocelyn Coulmance, Jean-Marc Marty, Guillaume Wenzek, and Amine Benhalloum. 2015. Trans- gram, fast cross-lingual word-embeddings. In Pro- ceedings of EMNLP.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Method of discriminant for chinese sentence sentiment orientation based on hownet",
"authors": [
{
"first": "Lei",
"middle": [],
"last": "Dang",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2010,
"venue": "Application Research of Computers",
"volume": "4",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lei Dang and Lei Zhang. 2010. Method of discriminant for chinese sentence sentiment orientation based on hownet. Application Research of Computers, 4:43.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Improving zero-shot learning by mitigating the hubness problem",
"authors": [
{
"first": "Georgiana",
"middle": [],
"last": "Dinu",
"suffix": ""
},
{
"first": "Angeliki",
"middle": [],
"last": "Lazaridou",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6568"
]
},
"num": null,
"urls": [],
"raw_text": "Georgiana Dinu, Angeliki Lazaridou, and Marco Ba- roni. 2014. Improving zero-shot learning by mitigating the hubness problem. arXiv preprint arXiv:1412.6568.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Hownet-a hybrid language and knowledge resource",
"authors": [
{
"first": "Zhendong",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Qiang",
"middle": [],
"last": "Dong",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of NLP-KE",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhendong Dong and Qiang Dong. 2003. Hownet-a hy- brid language and knowledge resource. In Proceed- ings of NLP-KE.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Learning crosslingual word embeddings without bilingual corpora",
"authors": [
{
"first": "Long",
"middle": [],
"last": "Duong",
"suffix": ""
},
{
"first": "Hiroshi",
"middle": [],
"last": "Kanayama",
"suffix": ""
},
{
"first": "Tengfei",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Long Duong, Hiroshi Kanayama, Tengfei Ma, Steven Bird, and Trevor Cohn. 2016. Learning crosslingual word embeddings without bilingual corpora. In Pro- ceedings of EMNLP.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Retrofitting word vectors to semantic lexicons",
"authors": [
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Jesse",
"middle": [],
"last": "Dodge",
"suffix": ""
},
{
"first": "Sujay",
"middle": [],
"last": "Kumar Jauhar",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Noah A",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manaal Faruqui, Jesse Dodge, Sujay Kumar Jauhar, Chris Dyer, Eduard Hovy, and Noah A Smith. 2015. Retrofitting word vectors to semantic lexicons. In Proceedings of NAACL-HLT.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Improving vector space word representations using multilingual correlation",
"authors": [
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the EACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manaal Faruqui and Chris Dyer. 2014. Improving vec- tor space word representations using multilingual correlation. In Proceedings of the EACL.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Placing search in context: The concept revisited",
"authors": [
{
"first": "Lev",
"middle": [],
"last": "Finkelstein",
"suffix": ""
},
{
"first": "Evgeniy",
"middle": [],
"last": "Gabrilovich",
"suffix": ""
},
{
"first": "Yossi",
"middle": [],
"last": "Matias",
"suffix": ""
},
{
"first": "Ehud",
"middle": [],
"last": "Rivlin",
"suffix": ""
},
{
"first": "Zach",
"middle": [],
"last": "Solan",
"suffix": ""
},
{
"first": "Gadi",
"middle": [],
"last": "Wolfman",
"suffix": ""
},
{
"first": "Eytan",
"middle": [],
"last": "Ruppin",
"suffix": ""
}
],
"year": 2002,
"venue": "ACM Transactions on Information Systems",
"volume": "20",
"issue": "1",
"pages": "116--131",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Ey- tan Ruppin. 2002. Placing search in context: The concept revisited. ACM Transactions on Informa- tion Systems, 20(1):116-131.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Multi-aspect sentiment analysis for chinese online social reviews based on topic modeling and hownet lexicon. Knowledge-Based Systems",
"authors": [
{
"first": "Xianghua",
"middle": [],
"last": "Fu",
"suffix": ""
},
{
"first": "Guo",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yanyan",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Zhiqiang",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "37",
"issue": "",
"pages": "186--195",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xianghua Fu, Guo Liu, Yanyan Guo, and Zhiqiang Wang. 2013. Multi-aspect sentiment analysis for chinese online social reviews based on topic model- ing and hownet lexicon. Knowledge-Based Systems, 37:186-195.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Bilbowa: fast bilingual distributed representations without word alignments",
"authors": [
{
"first": "Stephan",
"middle": [],
"last": "Gouws",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephan Gouws, Yoshua Bengio, and Greg Corrado. 2015. Bilbowa: fast bilingual distributed represen- tations without word alignments. In Proceedings of ICML.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Language modeling with sparse product of sememe experts",
"authors": [
{
"first": "Yihong",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Ruobing",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Fen",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Leyu",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yihong Gu, Jun Yan, Hao Zhu, Zhiyuan Liu, Ruobing Xie, Maosong Sun, Fen Lin, and Leyu Lin. 2018. Language modeling with sparse product of sememe experts. In Proceedings of EMNLP.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Multilingual distributed representations without word alignment",
"authors": [
{
"first": "Karl",
"middle": [
"Moritz"
],
"last": "Hermann",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karl Moritz Hermann and Phil Blunsom. 2014. Mul- tilingual distributed representations without word alignment. In Proceedings of ICLR.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "SimLex-999: Evaluating semantic models with (genuine) similarity estimation",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2015,
"venue": "Computational Linguistics",
"volume": "41",
"issue": "4",
"pages": "665--695",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felix Hill, Roi Reichart, and Anna Korhonen. 2015. Simlex-999: Evaluating semantic models with (gen- uine) similarity estimation. Computational Linguis- tics, 41(4):665-695.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Incorporating Chinese characters of words for lexical sememe prediction",
"authors": [
{
"first": "Huiming",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Ruobing",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Fen",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Leyu",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huiming Jin, Hao Zhu, Zhiyuan Liu, Ruobing Xie, Maosong Sun, Fen Lin, and Leyu Lin. 2018. In- corporating chinese characters of words for lexical sememe prediction. In Proceedings of ACL.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "SemEval-2012 Task 4: Evaluating Chinese word similarity",
"authors": [
{
"first": "Peng",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Yunfang",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of *SEM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peng Jin and Yunfang Wu. 2012. SemEval-2012 Task 4: Evaluating chinese word similarity. In Proced- dings of *SEM.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Learning bilingual word representations by marginalizing alignments",
"authors": [
{
"first": "Tom\u00e1\u0161",
"middle": [],
"last": "Ko\u010disk\u00fd",
"suffix": ""
},
{
"first": "Karl",
"middle": [
"Moritz"
],
"last": "Hermann",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom\u00e1\u0161 Ko\u010disk\u1ef3, Karl Moritz Hermann, and Phil Blun- som. 2014. Learning bilingual word representa- tions by marginalizing alignments. In Proceedings of ACL.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Hubness and pollution: Delving into cross-space mapping for zero-shot learning",
"authors": [
{
"first": "Angeliki",
"middle": [],
"last": "Lazaridou",
"suffix": ""
},
{
"first": "Georgiana",
"middle": [],
"last": "Dinu",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ACL-IJCNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Angeliki Lazaridou, Georgiana Dinu, and Marco Ba- roni. 2015. Hubness and pollution: Delving into cross-space mapping for zero-shot learning. In Pro- ceedings of ACL-IJCNLP.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Punctuation as implicit annotations for Chinese word segmentation",
"authors": [
{
"first": "Zhongguo",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2009,
"venue": "Computational Linguistics",
"volume": "35",
"issue": "4",
"pages": "505--512",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhongguo Li and Maosong Sun. 2009. Punctuation as implicit annotations for chinese word segmentation. Computational Linguistics, 35(4):505-512.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Learning semantic word embeddings based on ordinal knowledge constraints",
"authors": [
{
"first": "Quan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Si",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Zhen-Hua",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Hu",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ACL-IJCNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quan Liu, Hui Jiang, Si Wei, Zhen-Hua Ling, and Yu Hu. 2015. Learning semantic word embeddings based on ordinal knowledge constraints. In Proceed- ings of ACL-IJCNLP.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Word similarity computing based on HowNet",
"authors": [
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Sujian",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2002,
"venue": "International Journal of Computational Linguistics & Chinese Language Processing",
"volume": "7",
"issue": "2",
"pages": "59--76",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qun Liu and Sujian Li. 2002. Word similarity comput- ing based on hownet. International Journal of Com- putational Linguistics & Chinese Language Process- ing, 7(2):59-76.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Deep multilingual correlation for improved word embeddings",
"authors": [
{
"first": "Ang",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Weiran",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "Livescu",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ang Lu, Weiran Wang, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2015. Deep multilingual corre- lation for improved word embeddings. In Proceed- ings of NAANL-HLT.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Bilingual word representations with monolingual quality in mind",
"authors": [
{
"first": "Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Bilingual word representations with monolin- gual quality in mind. In Proceedings of the 1st Work- shop on Vector Space Modeling for Natural Lan- guage Processing.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Visualizing data using t-SNE",
"authors": [
{
"first": "Laurens",
"middle": [],
"last": "van der Maaten",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
}
],
"year": 2008,
"venue": "Journal of Machine Learning Research",
"volume": "9",
"issue": "",
"pages": "2579--2605",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laurens van der Maaten and Geoffrey E Hinton. 2008. Visualizing data using t-sne. Journal of Machine Learning Research, 9:2579-2605.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word represen- tations in vector space. In Proceedings of ICLR.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Exploiting similarities among languages for machine translation",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1309.4168"
]
},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Quoc V Le, and Ilya Sutskever. 2013b. Exploiting similarities among languages for machine translation. arXiv preprint arXiv:1309.4168.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "WordNet: A lexical database for English",
"authors": [
{
"first": "George",
"middle": [
"A"
],
"last": "Miller",
"suffix": ""
}
],
"year": 1995,
"venue": "Communications of the ACM",
"volume": "38",
"issue": "11",
"pages": "39--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George A Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM, 38(11):39- 41.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Counter-fitting word vectors to linguistic constraints",
"authors": [
{
"first": "Nikola",
"middle": [],
"last": "Mrk\u0161i\u0107",
"suffix": ""
},
{
"first": "Diarmuid",
"middle": [],
"last": "\u00d3 S\u00e9aghdha",
"suffix": ""
},
{
"first": "Blaise",
"middle": [],
"last": "Thomson",
"suffix": ""
},
{
"first": "Milica",
"middle": [],
"last": "Ga\u0161i\u0107",
"suffix": ""
},
{
"first": "Lina",
"middle": [],
"last": "Rojas-Barahona",
"suffix": ""
},
{
"first": "Pei-Hao",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Vandyke",
"suffix": ""
},
{
"first": "Tsung-Hsien",
"middle": [],
"last": "Wen",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikola Mrk\u0161ic, Diarmuid OS\u00e9aghdha, Blaise Thom- son, Milica Ga\u0161ic, Lina Rojas-Barahona, Pei-Hao Su, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2016. Counter-fitting word vectors to lin- guistic constraints. In Proceedings of NAACL-HLT.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Improved word representation learning with sememes",
"authors": [
{
"first": "Yilin",
"middle": [],
"last": "Niu",
"suffix": ""
},
{
"first": "Ruobing",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yilin Niu, Ruobing Xie, Zhiyuan Liu, and Maosong Sun. 2017. Improved word representation learning with sememes. In Proceedings of ACL.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "GloVe: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of EMNLP.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "A survey of cross-lingual embedding models",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1706.04902"
]
},
"num": null,
"urls": [],
"raw_text": "Sebastian Ruder. 2017. A survey of cross-lingual em- bedding models. arXiv preprint arXiv:1706.04902.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Learning cross-lingual word embeddings via matrix co-factorization",
"authors": [
{
"first": "Tianze",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ACL-IJCNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tianze Shi, Zhiyuan Liu, Yang Liu, and Maosong Sun. 2015. Learning cross-lingual word embeddings via matrix co-factorization. In Proceedings of ACL- IJCNLP.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "HowNet based Chinese question automatic classification",
"authors": [
{
"first": "Jingguang",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Dongfeng",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Dexin",
"middle": [],
"last": "Lv",
"suffix": ""
},
{
"first": "Yanju",
"middle": [],
"last": "Dong",
"suffix": ""
}
],
"year": 2007,
"venue": "Journal of Chinese Information Processing",
"volume": "21",
"issue": "1",
"pages": "90--95",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jingguang Sun, Dongfeng Cai, Dexin Lv, and Yanju Dong. 2007. Hownet based chinese question auto- matic classification. Journal of Chinese Information Processing, 21(1):90-95.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Cross-lingual models of word embeddings: An empirical comparison",
"authors": [
{
"first": "Shyam",
"middle": [],
"last": "Upadhyay",
"suffix": ""
},
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shyam Upadhyay, Manaal Faruqui, Chris Dyer, and Dan Roth. 2016. Cross-lingual models of word em- beddings: An empirical comparison. In Proceedings of ACL.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "On the role of seed lexicons in learning bilingual word embeddings",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan Vuli\u0107 and Anna Korhonen. 2016. On the role of seed lexicons in learning bilingual word embed- dings. In Proceedings of ACL.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Bilingual word embeddings from non-parallel documentaligned data applied to bilingual lexicon induction",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Marie-Francine",
"middle": [],
"last": "Moens",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ACL-IJCNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan Vuli\u0107 and Marie-Francine Moens. 2015. Bilin- gual word embeddings from non-parallel document- aligned data applied to bilingual lexicon induction. In Proceedings of ACL-IJCNLP.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Minimally-constrained multilingual embeddings via artificial code-switching",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Wick",
"suffix": ""
},
{
"first": "Pallika",
"middle": [],
"last": "Kanani",
"suffix": ""
},
{
"first": "Adam",
"middle": [
"Craig"
],
"last": "Pocock",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Wick, Pallika Kanani, and Adam Craig Pocock. 2016. Minimally-constrained multilingual embeddings via artificial code-switching. In Pro- ceedings of AAAI.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Lexical sememe prediction via word embeddings and matrix factorization",
"authors": [
{
"first": "Ruobing",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Xingchi",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ruobing Xie, Xingchi Yuan, Zhiyuan Liu, and Maosong Sun. 2017. Lexical sememe prediction via word embeddings and matrix factorization. In Pro- ceedings of AAAI.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Chinese LIWC lexicon expansion via hierarchical classification of word embeddings with sememe attention",
"authors": [
{
"first": "Xiangkai",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Cheng",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Cunchao",
"middle": [],
"last": "Tu",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiangkai Zeng, Cheng Yang, Cunchao Tu, Zhiyuan Liu, and Maosong Sun. 2018. Chinese liwc lexicon expansion via hierarchical classification of word em- beddings with sememe attention. In Proceedings of AAAI.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Bilingual lexicon induction from non-parallel data with minimal supervision",
"authors": [
{
"first": "Meng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Haoruo",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Huan-Bo",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Meng Zhang, Haoruo Peng, Yang Liu, Huan-Bo Luan, and Maosong Sun. 2017. Bilingual lexicon induc- tion from non-parallel data with minimal supervi- sion. In Proceedings of AAAI.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Chinese word sense disambiguation using HowNet",
"authors": [
{
"first": "Yuntao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Ling",
"middle": [],
"last": "Gong",
"suffix": ""
},
{
"first": "Yongcheng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of International Conference on Natural Computation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuntao Zhang, Ling Gong, and Yongcheng Wang. 2005. Chinese word sense disambiguation using hownet. In Proceedings of International Conference on Natural Computation.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Bilingual word embeddings for phrase-based machine translation",
"authors": [
{
"first": "Will",
"middle": [
"Y"
],
"last": "Zou",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Will Y Zou, Richard Socher, Daniel Cer, and Christo- pher D Manning. 2013. Bilingual word embeddings for phrase-based machine translation. In Proceed- ings of EMNLP.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "An example of HowNet.",
"type_str": "figure",
"num": null
},
"FIGREF1": {
"uris": null,
"text": "Two examples of nearest English and Chinese words.",
"type_str": "figure",
"num": null
},
"TABREF0": {
"text": "",
"num": null,
"type_str": "table",
"html": null,
"content": "<table><tr><td>exhibits the evaluation results of cross-</td></tr><tr><td>lingual lexical sememe prediction with different</td></tr></table>"
},
"TABREF1": {
"text": "",
"num": null,
"type_str": "table",
"html": null,
"content": "<table><tr><td>: Evaluation results of cross-lingual lexi-</td></tr><tr><td>cal sememe prediction with different seed lexicon</td></tr><tr><td>sizes.</td></tr></table>"
},
"TABREF3": {
"text": "Evaluation results of cross-lingual lexical sememe prediction with different word frequencies. The number of words in each frequency range is 497, 458, 522 and 523 respectively.",
"num": null,
"type_str": "table",
"html": null,
"content": "<table/>"
},
"TABREF5": {
"text": "Bilingual lexicon induction performance with different seed lexicon sizes.",
"num": null,
"type_str": "table",
"html": null,
"content": "<table/>"
}
}
}
}