{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:08:05.119673Z"
},
"title": "Automatic Creation of Correspondence Table of Meaning Tags from Two Dictionaries in One Language Using Bilingual Word Embedding",
"authors": [
{
"first": "Teruo",
"middle": [],
"last": "Hirabayashi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Ibaraki University",
"location": {
"addrLine": "4-12-1 Nakanarusawa",
"settlement": "Hitachi",
"region": "Ibaraki",
"country": "JAPAN"
}
},
"email": ""
},
{
"first": "Kanako",
"middle": [],
"last": "Komiya",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Ibaraki University",
"location": {
"addrLine": "4-12-1 Nakanarusawa",
"settlement": "Hitachi",
"region": "Ibaraki",
"country": "JAPAN"
}
},
"email": "kanako.komiya.nlp@vc.ibaraki.ac.jp"
},
{
"first": "Masayuki",
"middle": [],
"last": "Asahara",
"suffix": "",
"affiliation": {},
"email": "masayu-a@ninjal.ac.jp"
},
{
"first": "Hiroyuki",
"middle": [],
"last": "Shinnou",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Ibaraki University",
"location": {
"addrLine": "4-12-1 Nakanarusawa",
"settlement": "Hitachi",
"region": "Ibaraki",
"country": "JAPAN"
}
},
"email": "hiroyuki.shinnou.0828@vc.ibaraki.ac.jp"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we show how to use bilingual word embeddings (BWE) to automatically create a corresponding table of meaning tags from two dictionaries in one language and examine the effectiveness of the method. To do this, we had a problem: the meaning tags do not always correspond one-to-one because the granularities of the word senses and the concepts are different from each other. Therefore, we regarded the concept tag that corresponds to a word sense the most as the correct concept tag corresponding the word sense. We used two BWE methods, a linear transformation matrix and VecMap. We evaluated the most frequent sense (MFS) method and the corpus concatenation method for comparison. The accuracies of the proposed methods were higher than the accuracy of the random baseline but lower than those of the MFS and corpus concatenation methods. However, because our method utilized the embedding vectors of the word senses, the relations of the sense tags corresponding to concept tags could be examined by mapping the sense embeddings to the vector space of the concept tags. Also, our methods could be performed when we have only concept or word sense embeddings whereas the MFS method requires a parallel corpus and the corpus concatenation method needs two tagged corpora.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we show how to use bilingual word embeddings (BWE) to automatically create a corresponding table of meaning tags from two dictionaries in one language and examine the effectiveness of the method. To do this, we had a problem: the meaning tags do not always correspond one-to-one because the granularities of the word senses and the concepts are different from each other. Therefore, we regarded the concept tag that corresponds to a word sense the most as the correct concept tag corresponding the word sense. We used two BWE methods, a linear transformation matrix and VecMap. We evaluated the most frequent sense (MFS) method and the corpus concatenation method for comparison. The accuracies of the proposed methods were higher than the accuracy of the random baseline but lower than those of the MFS and corpus concatenation methods. However, because our method utilized the embedding vectors of the word senses, the relations of the sense tags corresponding to concept tags could be examined by mapping the sense embeddings to the vector space of the concept tags. Also, our methods could be performed when we have only concept or word sense embeddings whereas the MFS method requires a parallel corpus and the corpus concatenation method needs two tagged corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Recently, corpora that have tags from more than one tag set are increasing. For example, The Balanced Corpus of Contemporary Written Japanese (BCCWJ) (Maekawa et al., 2014) is tagged with concept tags from Word List by Semantic Principles (WLSP) (National Institute for Japanese Language and Linguistics, 1964) after tagged with sense tags from Iwanami Kokugo Jiten (Nishio et al., 1994) . Because these tags are tagged referring to different dictionaries, the word senses of a word are different from each other. However, both tagging schemes are common in a way, that is, a unique meaning is given to every word in the corpus. Wu (Wu et al., 2019 ) created a corresponding table of word senses from Iwanami Kokugo Jiten and concept numbers form WLSP manually. If we could this process automatically, tagging of corpora would be much easier. Therefore, in this paper, we describe how to utilize bilingual word embeddings (BWE) to automatically create a corresponding table of meaning tags from two dictionaries in one language, Japanese, and examine the effectiveness of the method.",
"cite_spans": [
{
"start": 150,
"end": 172,
"text": "(Maekawa et al., 2014)",
"ref_id": "BIBREF5"
},
{
"start": 279,
"end": 310,
"text": "Language and Linguistics, 1964)",
"ref_id": null
},
{
"start": 366,
"end": 387,
"text": "(Nishio et al., 1994)",
"ref_id": null
},
{
"start": 632,
"end": 648,
"text": "(Wu et al., 2019",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "BWE is classified into four groups according to how to make cross-lingual word embeddings 1 . First approach is monolingual mapping. These approaches initially train monolingual word embeddings and learn a transformation matrix that maps representations in one language to those of the other language. Mikolov et al. (Mikolov et al., 1 http://ruder.io/cross-lingual-embeddings/ 2013b) have shown that vector spaces can encode meaningful relations between words and that the geometric relations that hold between words are similar across languages. Because they do not assume the use of specific language, their method can be used to extend and refine dictionaries for any language pairs. Second approach is pseudo-cross-lingual. These approaches create a pseudocross-lingual corpus by mixing contexts of different languages. Xiao and Guo (Xiao and Guo, 2014) proposed the first pseudo-cross-lingual method that utilized translation pairs. They first translated all words that appeared in the source language corpus into the target language using Wiktionary. Then they filtered out the noises of these pairs and trained the model with this corpus in which these pairs are replaced with placeholders to ensure that translations of the same word have the same vector representation. Third approach is cross-lingual training. These approaches train their embeddings on a parallel corpus and optimize a cross-lingual constraint between embeddings of different languages that encourages embeddings of similar words to be close to each other in a shared vector space. Hermann and Blunsom (Hermann and Blunsom, 2014) trained two models to output sentence embeddings for input sentences in two different languages. They retrained these models with sentence embeddings using a least-squares method. Final approach is joint optimization. They not only consider a cross-lingual constraint, but also jointly optimize mono-lingual and cross-lingual objectives. Klementiev et al. 
(Klementiev et al., 2012) was the first research using joint optimization. Zou (Zou et al., 2013) used a matrix factorization approach to learn cross-lingual word representations for English and Chinese and utilized the representa-tions for machine translation task. In this paper, we train BWE model by monolingual mapping and create a correspondence table of meaning tags using the model. To our knowledge, this research is the first research that uses BWE to find correspondences of meaning tags in one language.",
"cite_spans": [
{
"start": 302,
"end": 333,
"text": "Mikolov et al. (Mikolov et al.,",
"ref_id": null
},
{
"start": 334,
"end": 335,
"text": "1",
"ref_id": null
},
{
"start": 825,
"end": 858,
"text": "Xiao and Guo (Xiao and Guo, 2014)",
"ref_id": "BIBREF14"
},
{
"start": 1581,
"end": 1608,
"text": "(Hermann and Blunsom, 2014)",
"ref_id": "BIBREF3"
},
{
"start": 1947,
"end": 1990,
"text": "Klementiev et al. (Klementiev et al., 2012)",
"ref_id": "BIBREF4"
},
{
"start": 2044,
"end": 2062,
"text": "(Zou et al., 2013)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "Usually, BWE is used for cross-lingual applications, e.g., machine translation. The word embeddings trained from a parallel corpus, a comparable corpus, or two monolingual corpora are necessary for BWE. On the other hand, the number of corpora that were tagged by more than one tag sets is increasing. One corpus could have tags of part of speeches, word senses, named entities, and so on. We can regard a corpus that was tagged with two tag sets as a parallel corpus. For example, a corpus that was tagged with the meaning tags of two dictionaries in one language would be regarded as a parallel corpus of the meaning tag sets of two dictionaries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3."
},
{
"text": "In this research, we show how to utilize BWE to automatically find the correspondences of meaning tags in one language and investigate the effectiveness of the method. We generated two sets of word embeddings from a corpus with two meaning tags from different dictionaries. After that, we find correspondences of the meanings from two dictionaries using BWE. We used BCCWJ with concept tags from WLSP and sense tags from Iwanami Kokugo Jiten for the experiments. Both the word sense of Iwanami Kokugo Jiten and the concept number of WLSP represent a meaning of words and both of them are classified using a tree structure. The meaning tags do not always correspond one-to-one because the granularities of the word senses and the concepts are different from each other. However, the final purpose of this research is to automatically create a correspondence table between the word senses and the concept tags. We regarded the concept tag that corresponds to a word sense the most as the correct concept tag corresponding the word sense.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3."
},
{
"text": "Iwanami Kokugo Jiten is a Japanese monolingual dictionary. In Iwanami Kokugo Jiten, each word sense has a sense tag such as \"17877-0-0-1-0\", composed of \"headline ID\"-\"compound word ID\"-\"large classification ID\"-\"medium classification ID\"-\"small classification ID.\" When word sense has no corresponding ID, it would be 0. For example, the word senses and their corresponding sense tags of a word \" (child or children) are listed in Table 1 2 . Figure 1 shows the tree structure of Iwanami Kokugo Jiten. In this research, we used Annotated Corpus of Iwanami Japanese Dictionary Fifth Edition 2004, which is BCCWJ tagged with Iwamnami Kokugo Jiten, provided Gengo Shigen Kyokai, or Language Resource Academy 6 .",
"cite_spans": [],
"ref_spans": [
{
"start": 432,
"end": 440,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 445,
"end": 453,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Sense Tags from Iwanami Kokugo Jiten",
"sec_num": "3.1."
},
{
"text": "WLSP is a Japanese thesaurus in which a word is classified and ordered according to its meaning. One record is composed of the following elements, record ID number, lemma number, type of record, class, division, section, article, concept number, paragraph number, small paragraph number, word number, lemma with explanatory note, lemma without explanatory note, reading and reverse reading. Concept number consists of a category, a medium item and a classification item. We used concept numbers as the concept tags. For example,\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concept Tags from WLSP",
"sec_num": "3.2."
},
{
"text": "(child or children)\" is polyseme and two concepts are registered in WLSP, which are \"1.2050\" and \"1.2130\" (Table 2) . This paper utilizes a corpus that is in its infancy, namely BCCWJ annotated with concept tags or concept numbers of WLSP. The goal of our research is to find the correspondences of the meaning tags from two dictionaries. In the example of \" (child or children),\" we think that the word senses \"17877-0-0-1-0\" and \"17877-0-0-2-0\" in Iwanami Kokugo Jiten respectively correspond to concepts \"1.2050\" and \"1.2130\" in WLSP, however, please note that the meaning tags do not always correspond one-to-one. We utilized only two sets of meaning tag from BCCWJ and did not use the reference source: the dictionaries. Figure 2 shows the tree structure of WLSP.",
"cite_spans": [],
"ref_spans": [
{
"start": 106,
"end": 115,
"text": "(Table 2)",
"ref_id": "TABREF1"
},
{
"start": 726,
"end": 734,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Concept Tags from WLSP",
"sec_num": "3.2."
},
{
"text": "We used monolingual mapping. Monolingual mapping consists of two steps. First, monolingual word embeddings are trained for each language. In our research, one language corresponds to one meaning tag set in Japanese. After that, they are mapped to a common vector space so that word embeddings of the words whose meanings are similar to each other in two languages can be brought closer. Because the geometrical relations that hold between words are similar across languages, it is possible to transform a vector space of a language to that of another language using a linear projection. In this research, we adapted two methods of BWE, linear transformation matrix and VecMap. A ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual Word Embeddings",
"sec_num": "3.3."
},
{
"text": "We utilized BCCWJ tagged with word senses of Iwanami Kokugo Jiten and BCCWJ tagged with concepts of WLSP. Table 3 shows the number of word tokens, unique words, unique word senses, and unique concepts. The settings of word2vec are shown in Table 4 . We used C-Bow algorithm and we set the number of dimensions as 200, the window size as 5, the number of iterations as 5, the batch size as 1,000, and the min-count as 1, respectively. We set the min-count as 1 because the corpus size was small. 1. Generate a word-sense-tag and concept-tag corpora respectively, and learn word-sense or concept embeddings for each corpus from them using word2vec 7 (Mikolov et al., 2013a; Mikolov et al., 2013c; Mikolov et al., 2013d ) (cf. Figure 3) .",
"cite_spans": [
{
"start": 648,
"end": 671,
"text": "(Mikolov et al., 2013a;",
"ref_id": "BIBREF6"
},
{
"start": 672,
"end": 694,
"text": "Mikolov et al., 2013c;",
"ref_id": "BIBREF8"
},
{
"start": 695,
"end": 716,
"text": "Mikolov et al., 2013d",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 106,
"end": 113,
"text": "Table 3",
"ref_id": "TABREF2"
},
{
"start": 240,
"end": 247,
"text": "Table 4",
"ref_id": "TABREF3"
},
{
"start": 724,
"end": 733,
"text": "Figure 3)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Setting",
"sec_num": "4.1."
},
{
"text": "2. Learn a linear projection matrix W from the vector space of the word-senses to that of the concepts using pairs of the embeddings for monosemous common nouns, which are generated in the last step.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setting",
"sec_num": "4.1."
},
{
"text": "3. Apply the matrix W to the word-sense embeddings and obtain the projected concept embeddings for them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setting",
"sec_num": "4.1."
},
{
"text": "We defined a monosemous word as a word that meets two conditions, which are, (1) it has only one sense in Iwanami Kokugo Jiten and (2) it does not have any concept number in WLSP. We chose them because the concepts in WLSP are like synsets in English WordNet; many words share a concept. Therefore, if a word has a concept number, we cannot treat the word as monosemous word because we generated word embeddings for each concept number. We used 104 monosemous common nouns as seed words of our experiments. We randomly extracted ten words for evaluation data and used other 94 words for the training data to obtain the number of epochs that minimize the loss. We iterated this operation for 20 times and used the average number of epochs for the number of epochs of the final experiment. Table 5 shows learning parameters of the linear transformation matrix. ",
"cite_spans": [],
"ref_spans": [
{
"start": 788,
"end": 795,
"text": "Table 5",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Experimental Setting",
"sec_num": "4.1."
},
{
"text": "VecMap 8 is used for the second method of BWE. When we used a linear transformation matrix, we projected the vector space of word senses of Iwanami Kokugo Jiten into that of concepts of WLSP. However, VecMap projects both the vector spaces of word senses and concepts into a new vector space. The three options, supervised, semisupervised, and identical, were compared. Supervised and semi-supervised VecMap utilize the specified words but Identical VecMap uses identical words in two languages as the seeds of the projection. Therefore, the seed words of supervised and semi-supervised VecMap are the same as the linear transformation matrix but that of identical VecMap is different from it. The seed words of identical VecMap is monosemous words whereas those of supervised or semisupervised VecMap is monosemous common nouns. The number of monosemous words, the seed words of identical VecMap, is 2,015. We used default settings for the tool of VecMap for each option. Table 6 lists the default settings of the parameters of each specific option and the general default settings of them.",
"cite_spans": [],
"ref_spans": [
{
"start": 973,
"end": 980,
"text": "Table 6",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": ". VecMap",
"sec_num": null
},
{
"text": "We evaluated the correspondences of the meaning tags as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.1.3."
},
{
"text": "1. Calculate the cosine similarities between the projected concept embeddings and the embeddings of the concepts from the target word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.1.3."
},
{
"text": "2. Choose the concepts that have the highest similarities to the projected concept embeddings as the corresponding concepts for the word senses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.1.3."
},
{
"text": "3. Calculate the accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.1.3."
},
{
"text": "We targeted at polysemous nouns that appeared equal to or more than 50 times in the corpus. They were nine words, which were, \" (relationship)\", \" (technology)\", \" (field)\", \" (child)\", \" (time) \", \" (market)\", \" (phone)\", \" (place)\", and \" (before)\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.1.3."
},
{
"text": "and their word senses were 25 in total. We regarded an estimated concept tag to be correct when it is the same to the tag aligned with its corresponding sense tag most frequently in the tagged corpus. We evaluated the most frequent sense (MFS) accuracy for comparison. For MFS, the most frequent concept from WLSP for each word type in a corpus was regarded as the corresponding concept number for all the word senses for the word from Iwanami Kokugo Jiten. Also, we tested another comparative method, which is \"concatenation corpus method; a concept sequence corpus and a word sense sequence corpus are concatenated, and the concept embeddings and the word sense embeddings were generated together at the same time. Table 7 shows the accuracies of the corresponding meanings. Thirteen out of 25 word senses were aligned with the correct concept tags by a linear transformation matrix, and the accuracy was 52.0%. The results of VecMap were 36.0%, 48.0%, and 48.0% when supervised, semisupervised, and identical options were used. In the comparative experiment, 16 out of 25 word senses were aligned with the correct concept tags by both the MFS and corpus concatenation methods, and the accuracy was 64.0%. The accuracy of the random baseline, which is the method where each word sense was chosen at random, was 41.5%. The list of concept tags estimated by the linear transformation matrix, i.e., the best method of BWE in Table 7 , the MFS and corpus concatenation methods, and the oracle for 25 word senses of 9 words are shown in Table 8 . The correct concept tags are shown in bold. \"X-X-X-X\" in the word sense of \" (before)\" means a new word sense not listed in a dictionary, and in this research, it was considered as one of the word sense of the experiment.",
"cite_spans": [],
"ref_spans": [
{
"start": 717,
"end": 724,
"text": "Table 7",
"ref_id": "TABREF6"
},
{
"start": 1424,
"end": 1431,
"text": "Table 7",
"ref_id": "TABREF6"
},
{
"start": 1534,
"end": 1541,
"text": "Table 8",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.1.3."
},
{
"text": "According to Table 7 , the accuracies of the proposed the methods were lower than the accuracy of the MFS method and the corpus concatenation method. However, as mentioned above, in reality, one concept tag does not always correspond one sense tag. Sometimes one concept tag corresponds to plural sense tags and vice versa. We chose to make one-to-one correspondence for simplicity. From this perspective, the proposed methods have an advantage: the relations of the sense tags corresponding to a concept tag can be examined by mapping the sense embeddings to the vector space of the concept tags. Since the corpus concatenation method also uses word2vec, it also examine the relations of the sense tags but our method could be performed when we have only concept or word sense embeddings and do not have any tagged corpora. Also, in this research, we conducted the experiments using a corpus where two kinds of meaning tags are assigned. However, it is possible to use two different corpora for two meaning tag sets for our proposed methods, the use of BWE. In other words, we can conduct the experiments using two corpora, for example, a corpus assigned with concept tags from WLSP and another corpus assigned with word senses from Iwanami Kokugo Jiten. In that case, comparable corpora would be better than two monolingual corpora for BWE because the meanings of words should be similar to each other. Also, the accuracies may be lower when we use different two corpora because words do not share the contexts in two monolingual corpora. Furthermore, it is desirable to use a relatively large corpus for the experiments in this research because only the concepts or word senses of words appeared in the corpus are able to have a corresponding meaning. In this research, the experiments were performed on words that appeared 50 times or more in the corpus, but when the number of occurrences for each word sense was counted, there were four word senses that appeared only once. 
Since we used word2vec tool, it is preferable to use a corpus where all the meanings appear more than the threshold value 9 . We had a hypothesis that relatively large number of examples are required to generate meaning embeddings. Therefore, we examined how the correspondence accuracies between the word senses and the concepts differ depending on the occurrences of the word senses in the corpus. Figure 4 shows correct and incorrect numbers of the examples according to the occurrences of the word senses. For this figure, 25 word senses were grouped by occurrences so that each group has 5 word senses. The numbers of correct and incorrect answers are plotted on the vertical axis for each group and these groups are shown in order of the decreasing occurrences. The label of the bar graph in Figure 4 indicates \"minimum number of occurrences in each group\" \"maximum number of occurrences in each group\". Figure 4 , there was no correlation between the occurrences of the word senses and the correspondence accuracies in this research. Because both the concept tags and the word sense tags were manually annotated on BCCWJ, the accuracies of annotations are very high. However, since there are still few corpora with which two or more types of tags are assigned, we plan to use a tagger to automatically tag one type of meaning tags on a corpus with another type of meaning tags for the preprocessing of the proposed method for future work. However, in this case, the performance of the threshold value. Default setting is five. We set this value to one to acquire meaning vectors for the words that appeared only once. 0-0-1-0 1.2600 1.2600 1.2640 1.2640 0-0-2-0",
"cite_spans": [],
"ref_spans": [
{
"start": 13,
"end": 20,
"text": "Table 7",
"ref_id": "TABREF6"
},
{
"start": 2380,
"end": 2388,
"text": "Figure 4",
"ref_id": "FIGREF2"
},
{
"start": 2778,
"end": 2786,
"text": "Figure 4",
"ref_id": "FIGREF2"
},
{
"start": 2890,
"end": 2898,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5."
},
{
"text": "1.2600 1.2600 1.2640 1.2600 0-0-3-0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5."
},
{
"text": "1.2600 1.2600 1.2640 1.2600",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5."
},
{
"text": "(phone) 35881 0-0-1-0 1.4620 1.3122 1.3122 1.3122 0-0-2-0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5."
},
{
"text": "1.4620 1.3122 1.4620 1.4620",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5."
},
{
"text": "(place) 41150 0-0-1-0 1.3833 1.1700 1.1700 1.1700 0-0-2-0 1.3833 1.1700 1.3833 1.1700",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5."
},
{
"text": "(before) 48488 0-0-1-1 1.1740 1.1670 1.1740 1.1740 0-0-2-0 1.1650 1.1670 1.1740 1.1740 0-0-2-1 1.1635 1.1670 1.1635 1.1670 0-0-2-2 1.1635 1.1670 1.1740 1.1670 X-X-X-X 1.1635 1.1670 1.1650 1.1635 tagger should be considered to guarantee the quality of the automatic tagged corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5."
},
{
"text": "In this research, we described how to utilize bilingual word embeddings to obtain the correspondences of meanigs from two dictionaries in one language and investigated the effectiveness of the method. We used BCCWJ with concept tags from WLSP and sense tags from Iwanami Kokugo Jiten for the experiments. The experiments showed that the correspondence accuracies of the proposed methods were lower than MFS baseline or the corpus concatenation method. However, because our method utilizes the embedding vectors of the word senses, the relation of the sense tags corresponding to concept tags can be examined by mapping the sense embeddings to the vector space of the concept tags. Also, our method could be performed when we have only concept or word sense embeddings. However, it is necessary to expand the corpus for the further evaluation because the proposed method uses one corpus for both the training and the test and only the word senses or the concepts that appeared in the corpus are able to have correspondence. In addition, we would like to investigate further how the accuracy of this study changes when the corpus is expanded.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6."
},
{
"text": "We eliminated the compound words from the dictionary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "English translations inTable 1are quoted from Longman Dictionary of Contemporary English 5 .6 https://www.gsk.or.jp/catalog/gsk2010-a/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://code.google.com/archive/p/word2vec/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/artetxem/vecmap#publications",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Word2vec generates vectors only for the word (word senses or concepts in this research) that appeared equal to or more than a",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was supported by JSPS KAKENHI Grants Number 18K11421, 17H00917, and a project of the Center for Corpus Development, NINJAL.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Learning bilingual word embeddings with (almost) no bilingual data",
"authors": [
{
"first": "M",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Agirre",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "451--462",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Artetxe, M., Labaka, G., and Agirre, E. (2017). Learn- ing bilingual word embeddings with (almost) no bilin- gual data. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 451-462.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Generalizing and improving bilingual word embedding mappings with a multi-step framework of linear transformations",
"authors": [
{
"first": "M",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Agirre",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "5012--5019",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Artetxe, M., Labaka, G., and Agirre, E. (2018a). Gener- alizing and improving bilingual word embedding map- pings with a multi-step framework of linear transforma- tions. In Proceedings of the Thirty-Second AAAI Confer- ence on Artificial Intelligence, pages 5012-5019.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A robust self-learning method for fully unsupervised crosslingual mappings of word embeddings",
"authors": [
{
"first": "M",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Agirre",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "789--798",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Artetxe, M., Labaka, G., and Agirre, E. (2018b). A ro- bust self-learning method for fully unsupervised cross- lingual mappings of word embeddings. In Proceedings of the 56th Annual Meeting of the Association for Com- putational Linguistics (Volume 1: Long Papers), pages 789-798.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Multilingual models for compositional distributed semantics",
"authors": [
{
"first": "K",
"middle": [
"M"
],
"last": "Hermann",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1404.4641"
]
},
"num": null,
"urls": [],
"raw_text": "Hermann, K. M. and Blunsom, P. (2014). Multilingual models for compositional distributed semantics. arXiv preprint arXiv:1404.4641.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Inducing crosslingual distributed representations of words",
"authors": [
{
"first": "A",
"middle": [],
"last": "Klementiev",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Titov",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Bhattarai",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of COLING 2012",
"volume": "",
"issue": "",
"pages": "1459--1474",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Klementiev, A., Titov, I., and Bhattarai, B. (2012). In- ducing crosslingual distributed representations of words. Proceedings of COLING 2012, pages 1459-1474.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Balanced corpus of contemporary written japanese. Language resources and evaluation",
"authors": [
{
"first": "K",
"middle": [],
"last": "Maekawa",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Yamazaki",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Ogiso",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Maruyama",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ogura",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Kashino",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Koiso",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Yamaguchi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Tanaka",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Den",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "48",
"issue": "",
"pages": "345--371",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maekawa, K., Yamazaki, M., Ogiso, T., Maruyama, T., Ogura, H., Kashino, W., Koiso, H., Yamaguchi, M., Tanaka, M., and Den, Y. (2014). Balanced corpus of contemporary written japanese. Language resources and evaluation, 48(2):345-371.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "T",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of ICLR Workshop 2013",
"volume": "",
"issue": "",
"pages": "1--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikolov, T., Chen, K., Corrado, G., and Dean, J. (2013a). Efficient estimation of word representations in vector space. In Proceedings of ICLR Workshop 2013, pages 1-12.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Exploiting similarities among languages for machine translation",
"authors": [
{
"first": "T",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Q",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1309.4168"
]
},
"num": null,
"urls": [],
"raw_text": "Mikolov, T., Le, Q. V., and Sutskever, I. (2013b). Ex- ploiting similarities among languages for machine trans- lation. arXiv preprint arXiv:1309.4168.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "T",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of NIPS 2013",
"volume": "",
"issue": "",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikolov, T., Sutskever, I., Chen, K., Corrado, G., and Dean, J. (2013c). Distributed representations of words and phrases and their compositionality. In Proceedings of NIPS 2013, pages 1-9.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Linguistic regularities in continuous space word representations",
"authors": [
{
"first": "T",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Tau Yih",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Zweig",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of NAACL 2013",
"volume": "",
"issue": "",
"pages": "746--751",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikolov, T., tau Yih, W., and Zweig, G. (2013d). Linguis- tic regularities in continuous space word representations. In Proceedings of NAACL 2013, pages 746-751.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Word List by Semantic Principles. Shuuei Shuppan",
"authors": [],
"year": 1964,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "National Institute for Japanese Language and Linguistics. (1964). Word List by Semantic Principles. Shuuei Shup- pan, In Japanese.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Alignment table between word list by semantic principles and annotated corpus of iwanami japanese dictionary fifth edition",
"authors": [
{
"first": "P",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Kondo",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Moriyama",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ogiwara",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Asahara",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Language Resources Workshop",
"volume": "",
"issue": "",
"pages": "337--342",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wu, P., Kondo, M., Moriyama, N., Ogiwara, A., and Asa- hara, M. (2019). Alignment table between word list by semantic principles and annotated corpus of iwanami japanese dictionary fifth edition 2004 . In Proceedings of the Language Resources Workshop 2019, pages 337- 342.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Distributed word representation learning for cross-lingual dependency parsing",
"authors": [
{
"first": "M",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Guo",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Eighteenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "119--129",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiao, M. and Guo, Y. (2014). Distributed word represen- tation learning for cross-lingual dependency parsing. In Proceedings of the Eighteenth Conference on Computa- tional Natural Language Learning, pages 119-129.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Bilingual word embeddings for phrase-based machine translation",
"authors": [
{
"first": "W",
"middle": [
"Y"
],
"last": "Zou",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1393--1398",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zou, W. Y., Socher, R., Cer, D., and Manning, C. D. (2013). Bilingual word embeddings for phrase-based machine translation. In Proceedings of the 2013 Con- ference on Empirical Methods in Natural Language Pro- cessing, pages 1393-1398.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"text": "Tree Structure of Iwanami Kokugo Jiten",
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"uris": null,
"text": "Tree Structure of WLSP Figure 3: Word-sense-tag and Concept-tag Sentences 4.1.2",
"type_str": "figure"
},
"FIGREF2": {
"num": null,
"uris": null,
"text": "Numbers of correct and incorrect answers according to occurrences of the word senses Despite our hypothesis, according to",
"type_str": "figure"
},
"TABREF0": {
"num": null,
"type_str": "table",
"text": "Word Senses and Their Corresponding Sense Tags of \" (Child or Children)\" from Iwanami Kokugo Jiten 4 Son/daughter. A son or daughter of any age.",
"content": "<table><tr><td>Sense Tag</td><td>Word Sense</td></tr><tr><td colspan=\"2\">17877-0-0-1-0 &lt;1&gt;</td></tr><tr><td/><td>Young person. Someone who is not yet an</td></tr><tr><td/><td>adult. Kid.</td></tr><tr><td colspan=\"2\">17877-0-0-2-0 &lt;2&gt;</td></tr></table>",
"html": null
},
"TABREF1": {
"num": null,
"type_str": "table",
"text": "",
"content": "<table><tr><td colspan=\"4\">Concept-tags and Their Corresponding Class, Division, Section of \"</td><td>(child or children) from WLSP</td></tr><tr><td>Concept number</td><td>Class</td><td colspan=\"2\">Division Section</td><td>Article</td></tr><tr><td>1.2050</td><td>Nominal words</td><td>Agent</td><td>Human</td><td>Young or old</td></tr><tr><td>1.2130</td><td>Nominal words</td><td>Agent</td><td colspan=\"2\">Family Child or descendant</td></tr><tr><td colspan=\"3\">linear projection matrix W was learned when we used a lin-</td><td/><td/></tr><tr><td colspan=\"3\">ear transformation matrix. VecMap is an implementation</td><td/><td/></tr><tr><td colspan=\"3\">of a framework of Artetxe et al. to learn cross-lingual word</td><td/><td/></tr><tr><td colspan=\"3\">embedding mappings (Artetxe et al., 2017)(Artetxe et al.,</td><td/><td/></tr><tr><td>2018a)(Artetxe et al., 2018b).</td><td/><td/><td/><td/></tr></table>",
"html": null
},
"TABREF2": {
"num": null,
"type_str": "table",
"text": "",
"content": "<table><tr><td colspan=\"2\">: Statistic Data of BCCWJ</td></tr><tr><td>Number of Word tokens</td><td>340,995</td></tr><tr><td>Number of Unique Words</td><td>25,321</td></tr><tr><td>Number of Unique Word Senses</td><td>26,713</td></tr><tr><td>Number of Unique Concepts</td><td>3,164</td></tr></table>",
"html": null
},
"TABREF3": {
"num": null,
"type_str": "table",
"text": "",
"content": "<table><tr><td colspan=\"2\">: Settings of word2vec</td></tr><tr><td>Parameters</td><td>Settings</td></tr><tr><td>Dimensionality</td><td>200</td></tr><tr><td colspan=\"2\">Learning Algorithm C-BoW</td></tr><tr><td>Window Size</td><td>5</td></tr><tr><td>Number of Epochs</td><td>5</td></tr><tr><td>Batch Size</td><td>1,000</td></tr><tr><td>min-count</td><td>1</td></tr></table>",
"html": null
},
"TABREF4": {
"num": null,
"type_str": "table",
"text": "",
"content": "<table><tr><td colspan=\"2\">: Learning Parameters of Linear Transformation</td></tr><tr><td>Matrix</td><td/></tr><tr><td>Parameters</td><td>Settings</td></tr><tr><td>Dimensionality</td><td>200 200</td></tr><tr><td colspan=\"2\">Optimization Algorithm Adam</td></tr><tr><td>Number of Epochs</td><td>118</td></tr></table>",
"html": null
},
"TABREF5": {
"num": null,
"type_str": "table",
"text": "",
"content": "<table><tr><td/><td/><td>: Parameters of VecMap</td><td/></tr><tr><td>Option</td><td>Parameter</td><td colspan=\"2\">Default Setting of Specific Option General Default Setting</td></tr><tr><td>Supervised</td><td>Batch size</td><td>1000</td><td>10000</td></tr><tr><td colspan=\"2\">Semi-supervised Self-Learning</td><td>TRUE</td><td>FALSE</td></tr><tr><td colspan=\"2\">Semi-supervised Vocabulary cutoff</td><td>200,000</td><td>0</td></tr><tr><td colspan=\"2\">Semi-supervised csls neibourhood</td><td>10</td><td>0</td></tr><tr><td>Identical</td><td>Self-Learning</td><td>TRUE</td><td>FALSE</td></tr><tr><td>Identical</td><td>Vocabulary cutoff</td><td>200,000</td><td>0</td></tr><tr><td>Identical</td><td>csls neibourhood</td><td>10</td><td>0</td></tr></table>",
"html": null
},
"TABREF6": {
"num": null,
"type_str": "table",
"text": "",
"content": "<table><tr><td colspan=\"2\">: Accuracies of Each Method</td></tr><tr><td>Method</td><td>Accuracy</td></tr><tr><td>Linear Transformation Matrix</td><td>52.0 %</td></tr><tr><td>VecMap Supervised</td><td>36.0 %</td></tr><tr><td>VecMap Semi-supervised</td><td>48.0 %</td></tr><tr><td>VecMap Identical</td><td>48.0 %</td></tr><tr><td>MFS</td><td>64.0 %</td></tr><tr><td>Corpus Concatenation</td><td>64.0 %</td></tr><tr><td>Random</td><td>41.5 %</td></tr></table>",
"html": null
},
"TABREF7": {
"num": null,
"type_str": "table",
"text": "Correspondence Table of\"Iwanami Kokugo Jiten\" and \"WLSP\"",
"content": "<table><tr><td>Words</td><td colspan=\"2\">Word Numbers Word Senses</td><td colspan=\"4\">Concept Numbers Linear transformation Matrix MFS Corpus Concatenation Oracle</td></tr><tr><td/><td/><td>0-0-1-0</td><td>1.1110</td><td>1.1110</td><td>1.1110</td><td>1.1110</td></tr><tr><td>(relationship)</td><td>9667</td><td>0-0-2-0 0-0-3-0</td><td>1.3500 1.1110</td><td>1.1110 1.1110</td><td>1.1110 1.1110</td><td>1.1110 1.1110</td></tr><tr><td>(technology)</td><td>10703</td><td>0-0-1-0 0-0-2-0</td><td>1.3421 1.3421</td><td>1.3850 1.3850</td><td>1.3850 1.3850</td><td>1.3850 1.3421</td></tr><tr><td>(field)</td><td>15615</td><td>0-0-1-0 0-0-2-0</td><td>1.2620 1.2620</td><td>1.2620 1.2620</td><td>1.1700 1.2620</td><td>1.1700 1.2620</td></tr><tr><td>(child)</td><td>17877</td><td>0-0-1-0 0-0-2-0</td><td>1.2130 1.2130</td><td>1.2050 1.2050</td><td>1.2130 1.2130</td><td>1.2050 1.2130</td></tr><tr><td/><td/><td>0-0-1-0</td><td>1.1600</td><td>1.1600</td><td>1.1962</td><td>1.1600</td></tr><tr><td>(time)</td><td>20676</td><td>0-0-2-0 0-0-3-0</td><td>1.1962 1.1600</td><td>1.1600 1.1600</td><td>1.1962 1.1600</td><td>1.1962 1.1600</td></tr><tr><td/><td/><td>0-0-4-0</td><td>1.1962</td><td>1.1600</td><td>1.1600</td><td>1.1600</td></tr><tr><td>(market)</td><td>21128</td><td/><td/><td/><td/><td/></tr></table>",
"html": null
}
}
}
}