| { |
| "paper_id": "2020", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T09:00:46.113990Z" |
| }, |
| "title": "Composing Word Vectors for Japanese Compound Words Using Bilingual Word Embeddings", |
| "authors": [ |
| { |
| "first": "Teruo", |
| "middle": [], |
| "last": "Hirabayashi", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Kanako", |
| "middle": [], |
| "last": "Komiya", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "kanako.komiya.nlp@vc.ibaraki.ac.jp" |
| }, |
| { |
| "first": "Masayuki", |
| "middle": [], |
| "last": "Asahara", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "masayu-a@ninjal.ac.jp" |
| }, |
| { |
| "first": "Hiroyuki", |
| "middle": [], |
| "last": "Shinnou", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "hiroyuki.shinnou.0828@vc.ibaraki.ac.jp" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "This study conducted an experiment to compare the word embeddings of a compound word and a word in Japanese on the same vector space using bilingual word embeddings. Because Japanese does not have word delimiters between words; thus various word definitions exist according to dictionaries and corpora. We divided one corpus into words on the basis of two definitions, namely, shorter and ordinary words and longer compound words, and regarded two word-sequences as a parallel corpus of different languages. We then generated word embeddings from the corpora of these languages and mapped the vectors into the common space using monolingual mapping methods, a linear transformation matrix, and VecMap. We evaluated our methods by synonym ranking using a thesaurus. Furthermore, we conducted experiments of two comparative methods: (1) a method where the compound words were divided into words and the word embeddings were averaged and (2) a method where the word embeddings of the latter words are regarded as those of the compound words. The VecMap results with the supervised option outperformed that with the identical option, linear transformation matrix, and the latter word method, but could not beat the average method.", |
| "pdf_parse": { |
| "paper_id": "2020", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "This study conducted an experiment to compare the word embeddings of a compound word and a word in Japanese on the same vector space using bilingual word embeddings. Because Japanese does not have word delimiters between words; thus various word definitions exist according to dictionaries and corpora. We divided one corpus into words on the basis of two definitions, namely, shorter and ordinary words and longer compound words, and regarded two word-sequences as a parallel corpus of different languages. We then generated word embeddings from the corpora of these languages and mapped the vectors into the common space using monolingual mapping methods, a linear transformation matrix, and VecMap. We evaluated our methods by synonym ranking using a thesaurus. Furthermore, we conducted experiments of two comparative methods: (1) a method where the compound words were divided into words and the word embeddings were averaged and (2) a method where the word embeddings of the latter words are regarded as those of the compound words. The VecMap results with the supervised option outperformed that with the identical option, linear transformation matrix, and the latter word method, but could not beat the average method.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Japanese words have many definitions because Japanese does not have word delimiters between words, and word boundaries are unspecific. Therefore, the Japanese dictionary defines words individ-ually. Japanese has different word definitions according to each corpus and dictionary. The long unit for compound words and the short unit for words in UniDic 1 (Maekawa et al., 2010) developed by the National Institute for Japanese Language and Linguistics (NINJAL) are some of them. For example, \"", |
| "cite_spans": [ |
| { |
| "start": 354, |
| "end": 376, |
| "text": "(Maekawa et al., 2010)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": ", ichigo-gari, strawberry picking\" is defined as one word (short unit), whereas \"", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": ", budou-gari, grape picking\" is defined as a compound word (long unit) with two words (short unit) 2 in UniDic. Due to the limit of the dictionary's coverage, a morphological analyzer using UniDic treats \" , ichigo-gari, strawberry picking\" as one word and \"", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": ", budou-gari, grape picking\" as two words, making it impossible to directly compare the word meanings of these two words via word embeddings.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Therefore, to address the word unit discrepancy issue, this study proposes the usage of bilingual word embeddings (BWEs), which is usually used for mapping the word embeddings of two different languages into the same vector space, to map the word embeddings of long and short units into a common vector space. Using the BWE makes it easy to compare the word embeddings of \" , ichigo-gari, strawberry picking\" and \" , budou-gari, grape picking\" because both are on the same vector space. This situation is more convenient for many application systems like an information retrieval system.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "According to a survey of cross-lingual word embedding models 3 , the BWE is classified into four groups according to how cross-lingual word embeddings are made.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The first approach is monolingual mapping. This approach initially trains monolingual word embeddings and learns a transformation matrix that maps representations in one language to those of the other language. Mikolov et al. (2013) showed that vector spaces can encode meaningful relations between words and that the geometric relations that hold between words are similar across languages. They did not assume the use of specific language; thus their method can be used to extend and refine dictionaries for any language pairs.", |
| "cite_spans": [ |
| { |
| "start": 211, |
| "end": 232, |
| "text": "Mikolov et al. (2013)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The second approach is pseudo-cross-lingual. This approach creates a pseudo-cross-lingual corpus by mixing contexts of different languages. Xiao and Guo (2014) proposed the first pseudo-cross-lingual method that utilized translation pairs. They first translated all words that appeared in the source language corpus into the target language using Wiktionary. They then filtered out the noises of these pairs and trained the model with this corpus, in which the pairs were replaced with placeholders to ensure that the translations of the same word have the same vector representation.", |
| "cite_spans": [ |
| { |
| "start": 140, |
| "end": 159, |
| "text": "Xiao and Guo (2014)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The third approach is cross-lingual training. This approach trains their embeddings on a parallel corpus and optimizes a cross-lingual constraint between the embeddings of different languages that encourages embeddings of similar words to be close to each other in a shared vector space. Hermann and Blunsom (2014) trained two models to output sentence embeddings for input sentences in two different languages. They retrained these models with sentence embeddings using a least squares method.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The final approach is joint optimization, which not only considers a cross-lingual constraint but also jointly optimizes monolingual and cross-lingual objectives. Klementiev et al. (2012) performed the first research using joint optimization. Zou et al. (2013) used a matrix factorization approach to learn crosslingual word representations for English and Chinese and utilized the representations for a machine translation task. In this study, we used the first approach, monolingual mapping.", |
| "cite_spans": [ |
| { |
| "start": 163, |
| "end": 187, |
| "text": "Klementiev et al. (2012)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The nearest works to this research are those of Komiya et al. (2019) and Kouno and Komiya (2020) . Komiya et al. (2019) composed word embeddings for long units from the two word embeddings of short units using a feed-forward neural network system. They classified the dependency relations of two short units into 13 groups and trained a composition model for each dependency relation. Meanwhile, Kouno and Komiya (2020) performed the multitask learning of the composition of word embeddings and the classification of dependency relations.", |
| "cite_spans": [ |
| { |
| "start": 48, |
| "end": 68, |
| "text": "Komiya et al. (2019)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 73, |
| "end": 96, |
| "text": "Kouno and Komiya (2020)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 99, |
| "end": 119, |
| "text": "Komiya et al. (2019)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 396, |
| "end": 419, |
| "text": "Kouno and Komiya (2020)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We utilized the BWE herein for the same purpose. To the best of our knowledge, our study is the first to use the BWE to map the word embeddings of different word delimitation definitions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The BWE is usually used for cross-lingual applications (e.g., machine translation).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Methods", |
| "sec_num": "3" |
| }, |
| { |
| "text": "In this study, we mapped the word embeddings of short and long units into the common vector space for a comparison. short units are language units defined from the perspective of morphology (Ogura et al., 2007) , whereas long units are those defined based on a Japanese base phrase unit, bunsetsu (Fujiike et al., 2008) . A long unit consists of one or more short units. For the BWE, we utilized the linear transformation matrix and the VecMap 4 .", |
| "cite_spans": [ |
| { |
| "start": 190, |
| "end": 210, |
| "text": "(Ogura et al., 2007)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 297, |
| "end": 319, |
| "text": "(Fujiike et al., 2008)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Methods", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We used monolingual mapping comprising two steps. First, monolingual word embeddings were trained for each language. We regarded the corpora of different term units as the corpora of two different languages and mapped them to a common vector space such that the word embeddings of the words whose meanings were similar to each other in two languages can be brought closer. The geometrical relations that hold between words are similar across languages; thus a vector space of a language can be transformed into that of another language using a linear projection. We adapted hereikn two methods of the BWE, namely, linear transformation matrix and VecMap. A linear projection matrix W was learned when we used a linear transformation matrix. VecMap is an implementation of a framework of Artetxe et al. (2017) to learn cross-lingual word embedding mappings (Artetxe et al., 2018a) (Artetxe et al., 2018b) .", |
| "cite_spans": [ |
| { |
| "start": 787, |
| "end": 808, |
| "text": "Artetxe et al. (2017)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 856, |
| "end": 879, |
| "text": "(Artetxe et al., 2018a)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 880, |
| "end": 903, |
| "text": "(Artetxe et al., 2018b)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bilingual Word Embeddings", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "We conducted the following experiments when a linear transformation matrix was learned:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Linear Transformation Matrix", |
| "sec_num": "3.1.1" |
| }, |
| { |
| "text": "1. Generate short and long unit corpora and learn short or long unit embeddings for each corpus from them using word2vec (cf. Figure 1 ).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 126, |
| "end": 134, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Linear Transformation Matrix", |
| "sec_num": "3.1.1" |
| }, |
| { |
| "text": "2. Learn a linear projection matrix W from the vector space of the short units to that of the long units using pairs of embeddings for common words generated in the last step.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Linear Transformation Matrix", |
| "sec_num": "3.1.1" |
| }, |
| { |
| "text": "3. Apply matrix W to the short unit embeddings and obtain the projected long unit embeddings for them.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Linear Transformation Matrix", |
| "sec_num": "3.1.1" |
| }, |
| { |
| "text": "VecMap was used as another method of the BWE. We projected the vector space of the short units into that of the long units when we used the linear transformation matrix. However, VecMap projected both the vector spaces of the short and long units into a new vector space. The two options (i.e., supervised and identical) were compared. The supervised VecMap uses the specified words, whereas the identical VecMap uses identical words in two languages as the projection seeds. Therefore, the seed words of the supervised VecMap were the same as the linear transformation matrix but those of the identical VecMap were different.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "VecMap", |
| "sec_num": "3.1.2" |
| }, |
| { |
| "text": "We used NWJC2vec (Shinnou et al., 2017) for the word embeddings of the short units and the Balanced Corpus of Contemporary Japanese (BCCWJ) for the word embeddings of the long units using word2vec.", |
| "cite_spans": [ |
| { |
| "start": 17, |
| "end": 39, |
| "text": "(Shinnou et al., 2017)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4" |
| }, |
| { |
| "text": "NWJC2vec is a set of word embeddings generated from the 25 billion word scale NWJC-2014-4Q dataset (Asahara et al., 2014) , which is an enor-mous Japanese corpus, NINJAL Web Japanese Corpus (NWJC), developed using the word2vec tool. The summary statistics for the NWJC-2014-4Q data and the parameters used to generate the word embeddings are respectively presented in Tables 1 and 2 . We used continuous bag-of-words (CBOW) as a model architecture to produce the word embeddings.", |
| "cite_spans": [ |
| { |
| "start": 99, |
| "end": 121, |
| "text": "(Asahara et al., 2014)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 368, |
| "end": 383, |
| "text": "Tables 1 and 2", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Word Embeddings", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "BCCWJ is the 100 million word scale balanced corpus that contains texts from multiple domains constructed by NINJAL. Each text in this corpus has short ande long unit versions. The summary statistics for BCCWJ are listed in Table 3 . The word2vec settings for training the word embeddings with BC-CWJ are summarized in Table 4 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 224, |
| "end": 231, |
| "text": "Table 3", |
| "ref_id": "TABREF2" |
| }, |
| { |
| "start": 319, |
| "end": 326, |
| "text": "Table 4", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Word Embeddings", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "NWJC2vec contains morphological information, but the word embeddings generated for the long units using BCCWJ do not contain them. Therefore, the word embeddings for the short units can be differentiated from the words with the same spellings but are different parts of speech, whereas those for the long units cannot. Consequently, for some words, the word embeddings for the short units of some words had multiple vectors, but we still directly used them.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Embeddings", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "The learning parameters of the linear transformation matrix are shown in Table 5 . We used a 200-by-200 dimensional linear transformation matrix. We used Adam as the optimizer of loss function and iterated the training for 1,164 epochs. We decided on the number of epochs according to the preliminary experiments using 55,630 words randomly extracted from the training data. We averaged the best number of five trials. The vocabulary size of the word embeddings for BCCWJ and NWJC and the seed words we used for the linear transformation matrix is shown in Table 6 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 73, |
| "end": 80, |
| "text": "Table 5", |
| "ref_id": "TABREF5" |
| }, |
| { |
| "start": 557, |
| "end": 564, |
| "text": "Table 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Bilingual Word Embeddings", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "We used the default settings for the VecMap tool for each option. The default settings of the parameters of each specific option and their general default settings are listed in Table 7 . The vocabulary size of the word embeddings for BCCWJ and NWJC and the seed words used for VecMap is presented in Table 8.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 178, |
| "end": 185, |
| "text": "Table 7", |
| "ref_id": "TABREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Bilingual Word Embeddings", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "The number of long units decreased for VecMap compared with the linear transformation matrix be- cause of the limitation of the machine power. We used 278,143 seed words and 11,662 compound words annotated with a concept number for the evaluation, which resulted to a total of 289,805 words.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bilingual Word Embeddings", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "We evaluated our methods by the ranking of synonyms using a thesaurus. Using a thesaurus, we can evaluate the similarity of concepts referring knowledge of people. However, if we directly use cosine similarity between concepts, the thresholds are difficult to decide. Therefore, we used the ranking among the nodes of the thesaurus. We used \"Word List Figure 2 . The WLSP has a tree structure; thus, we assumed that the concepts belonging to the same node or synonyms were similar to each other. ", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 352, |
| "end": 360, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Vocabulary size (Number of word tokens) BCCWJ (long unit) 2,745,657 NWJC2vec (short unit) 1,534,957 Seed words 278,143 Table 6 : Vocabulary size (number of word tokens) of the word embeddings for BCCWJ and NWJC and seed words for the linear transformation matrix Figure 3 : Example of the nodes of the WLSP An example of the WLSP nodes is presented in Figure 3 . In this figure, we assumed that hot dog was closer to hamburger than water or pencil. We used hot dog instead of long term like \" , grape picking\" and hamburger instead of short term like \" , strawberry picking\" for example. We used water and pencil as short terms in this example.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 119, |
| "end": 126, |
| "text": "Table 6", |
| "ref_id": null |
| }, |
| { |
| "start": 263, |
| "end": 271, |
| "text": "Figure 3", |
| "ref_id": null |
| }, |
| { |
| "start": 352, |
| "end": 360, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Corpus", |
| "sec_num": null |
| }, |
| { |
| "text": "We evaluated the mapped word embeddings on the basis of this assumption and subsequently defined \"long term\" and \"short term.\" A compound word that is a long unit and consists of two short units is referred to as \"long term,\" whereas a word that is a short unit with no long unit is referred to as \"short term.\"", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Corpus", |
| "sec_num": null |
| }, |
| { |
| "text": "All the NWJC or BCCWJ words were not listed on the WLSP; thus we had two compound word conditions for evaluation: (1) the compound word should be a long term listed on the WLSP, and 2its constituents of it should be short terms listed on the WLSP. Hereinafter, wl i denotes the compound word, and ws i 1 and ws i 2 denote the constituents. The evaluation procedures are as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Procedure", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "1. For each long term wl i , identify a node N i (0) to which the long term belongs in WLSP.", |
| "cite_spans": [ |
| { |
| "start": 49, |
| "end": 52, |
| "text": "(0)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Procedure", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "N i (0) includes synonyms of wl i and both long and short terms. We assumed that every node has at least two words such that the similarity between them can be calculated. For example, if wl i is the word hot dog, the corresponding node N i (0) includes synonyms such as hamburger. In Figure 3 , N i (0) is Node 1.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 285, |
| "end": 293, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Evaluation Procedure", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "2. Calculate s i (0), which is the average similarity between the word embeddings of wl i and all the short terms in N i (0), using the mapped word embeddings.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Procedure", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "For this step, we calculated s i (0), which is the average similarity between the word embeddings of hot dog and those of hamburger and other concepts in N i (0) (Node 1). We used the cosine similarity for the similarity and the arithmetic mean to average the similarity.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Procedure", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "N i (1)...N i (n) of N i (0).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Obtain sibling nodes", |
| "sec_num": "3." |
| }, |
| { |
| "text": "A sibling nodes N i (1)...N i (n) include a node that contains a word, such as water, and another node that contains a word such as pencil. In Figure 3, N i (1) ...N i (n) includes Nodes 2 and 3.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 143, |
| "end": 160, |
| "text": "Figure 3, N i (1)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Obtain sibling nodes", |
| "sec_num": "3." |
| }, |
| { |
| "text": "4. Similarly, calculate s i (k), which is the average similarity between the word embeddings of wl i and those of all the short terms in node N i (k)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Obtain sibling nodes", |
| "sec_num": "3." |
| }, |
| { |
| "text": "5. Obtain the ranking of s i (0) in s i (0)...s i (n).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Obtain sibling nodes", |
| "sec_num": "3." |
| }, |
| { |
| "text": "We used 11,459 long terms for the evaluation because 11,662 long terms and their constituent short", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Obtain sibling nodes", |
| "sec_num": "3." |
| }, |
| { |
| "text": "Default setting of specific option General sefault setting Supervised Batch size 1,000 10,000 Identical Self-learning TRUE FALSE Identical Vocabulary cutoff 200,000 0 Identical csls neibourhood 10 0 terms were annotated with a concept number, but 203 of them had un-annotated synonyms in the node to which the word belongs (N i (0)). The number of nodes we used was 881 after excluding 14 nodes that included a word with no word embeddings. We performed two comparative methods, namely, average and latter word methods. For the average method, the word embeddings of a long term were calculated as the average of its constituent short terms, that is, the average of the word embeddings of ws i 1 and ws i 2 was used. For the latter word method, the word embeddings of the latter short term were regarded as the word embeddings of the long term, that is, the word embeddings of ws i 2 were used.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parameter", |
| "sec_num": null |
| }, |
| { |
| "text": "The average rankings of the correct node according to each method are shown in Table 9 Table 9 shows that the best method among the three proposed methods is VecMap with the supervised option. The ranking of the correct node when the method was used was 131.98th. The number of nodes we used was 881; thus, if the node is randomly selected, the ranking would be 440th or 441st. Therefore, VecMap outperformed the random baseline and the latter word method (Table 9) . However, the average method known as the strong comparative method was the best among all the methods tested. BWEs could not beat it. This result indicates that the additive compositionality holds for many long units. For future work, Skipgram can be tried instead of CBOW algorithm. Also, other word embeddings such as Glove could be another option. Theoretically, we believe that our methods can be applied even if the dimensionalities of two embeddings are different,but should be tested to know the real results.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 79, |
| "end": 86, |
| "text": "Table 9", |
| "ref_id": "TABREF9" |
| }, |
| { |
| "start": 87, |
| "end": 94, |
| "text": "Table 9", |
| "ref_id": "TABREF9" |
| }, |
| { |
| "start": 456, |
| "end": 465, |
| "text": "(Table 9)", |
| "ref_id": "TABREF9" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results and Discussion", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "In this study, we mapped word the embeddings of a compound word and word in Japanese into the same vector space using the BWE. We used the linear transformation matrix and VecMap as the BWE methods. VecMap with the supervised option outperformed one baseline, which was the method where the word embeddings of the latter constituent word are regarded as the word embeddings of the compound word but could not beat another baseline, which was the method where the average of the word embeddings of the constituents was used for the word embeddings of the compound word.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "https://unidic.ninjal.ac.jp/ (In Japanese) 2 means strawberries; means grapes; and means picking or hunting in Japanese.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://ruder.io/cross-lingual-embeddings/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://github.com/artetxem/vecmap#publications", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://pj.ninjal.ac.jp/corpus center/goihyo.html", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "This work was supported by JSPS KAKENHI Grants Number 18K11421, 17KK0002, and a project of the Younger Researchers Grants from Ibaraki University.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Learning bilingual word embeddings with (almost) no bilingual data", |
| "authors": [ |
| { |
| "first": "Mikel", |
| "middle": [], |
| "last": "Artetxe", |
| "suffix": "" |
| }, |
| { |
| "first": "Gorka", |
| "middle": [], |
| "last": "Labaka", |
| "suffix": "" |
| }, |
| { |
| "first": "Eneko", |
| "middle": [], |
| "last": "Agirre", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "451--462", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017. Learning bilingual word embeddings with (almost) no bilingual data. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguis- tics (Volume 1: Long Papers), pages 451-462.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Generalizing and improving bilingual word embedding mappings with a multi-step framework of linear transformations", |
| "authors": [ |
| { |
| "first": "Mikel", |
| "middle": [], |
| "last": "Artetxe", |
| "suffix": "" |
| }, |
| { |
| "first": "Gorka", |
| "middle": [], |
| "last": "Labaka", |
| "suffix": "" |
| }, |
| { |
| "first": "Eneko", |
| "middle": [], |
| "last": "Agirre", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence", |
| "volume": "", |
| "issue": "", |
| "pages": "5012--5019", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018a. Generalizing and improving bilingual word embed- ding mappings with a multi-step framework of lin- ear transformations. In Proceedings of the Thirty- Second AAAI Conference on Artificial Intelligence, pages 5012-5019.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings", |
| "authors": [ |
| { |
| "first": "Mikel", |
| "middle": [], |
| "last": "Artetxe", |
| "suffix": "" |
| }, |
| { |
| "first": "Gorka", |
| "middle": [], |
| "last": "Labaka", |
| "suffix": "" |
| }, |
| { |
| "first": "Eneko", |
| "middle": [], |
| "last": "Agirre", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "789--798", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018b. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. In Pro- ceedings of the 56th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), pages 789-798.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Archiving and analysing techniques of the ultra-large-scale web-based corpus project of NINJAL, Japan", |
| "authors": [ |
| { |
| "first": "Masayuki", |
| "middle": [], |
| "last": "Asahara", |
| "suffix": "" |
| }, |
| { |
| "first": "Kikuo", |
| "middle": [], |
| "last": "Maekawa", |
| "suffix": "" |
| }, |
| { |
| "first": "Mizuho", |
| "middle": [], |
| "last": "Imada", |
| "suffix": "" |
| }, |
| { |
| "first": "Sachi", |
| "middle": [], |
| "last": "Kato", |
| "suffix": "" |
| }, |
| { |
| "first": "Hikari", |
| "middle": [], |
| "last": "Konishi", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Alexandria", |
| "volume": "25", |
| "issue": "", |
| "pages": "129--148", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Masayuki Asahara, Kikuo Maekawa, Mizuho Imada, Sachi Kato, and Hikari Konishi. 2014. Archiving and analysing techniques of the ultra-large-scale web- based corpus project of ninjal, japan. Alexandria, 25(1-2):129-148.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Short-term unit analysis of the Balanced Corpus of Contemporary Japanese", |
| "authors": [ |
| { |
| "first": "Yumi", |
| "middle": [], |
| "last": "Fujiike", |
| "suffix": "" |
| }, |
| { |
| "first": "Hideki", |
| "middle": [], |
| "last": "Ogura", |
| "suffix": "" |
| }, |
| { |
| "first": "Toshinobu", |
| "middle": [], |
| "last": "Ogiso", |
| "suffix": "" |
| }, |
| { |
| "first": "Hanae", |
| "middle": [], |
| "last": "Koiso", |
| "suffix": "" |
| }, |
| { |
| "first": "Kiyotaka", |
| "middle": [], |
| "last": "Uchimoto", |
| "suffix": "" |
| }, |
| { |
| "first": "Satsuki", |
| "middle": [], |
| "last": "Soma", |
| "suffix": "" |
| }, |
| { |
| "first": "Takenori", |
| "middle": [], |
| "last": "Nakamura", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the NLP2008", |
| "volume": "", |
| "issue": "", |
| "pages": "931--934", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yumi Fujiike, Hideki Ogura, Toshinobu Ogiso, Hanae Koiso, Kiyotaka Uchimoto, Satsuki Soma, and Takenori Nakamura. 2008. Short-term unit analysis of balanced corpus of contemporary japanese. In Proceedings of the NLP2008, (In Japanese), pages 931-934.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Multilingual models for compositional distributed semantics", |
| "authors": [ |
| { |
| "first": "Karl", |
| "middle": ["Moritz"], |
| "last": "Hermann", |
| "suffix": "" |
| }, |
| { |
| "first": "Phil", |
| "middle": [], |
| "last": "Blunsom", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1404.4641" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Karl Moritz Hermann and Phil Blunsom. 2014. Multilin- gual models for compositional distributed semantics. arXiv preprint arXiv:1404.4641.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Inducing crosslingual distributed representations of words", |
| "authors": [ |
| { |
| "first": "Alexandre", |
| "middle": [], |
| "last": "Klementiev", |
| "suffix": "" |
| }, |
| { |
| "first": "Ivan", |
| "middle": [], |
| "last": "Titov", |
| "suffix": "" |
| }, |
| { |
| "first": "Binod", |
| "middle": [], |
| "last": "Bhattarai", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of COLING 2012", |
| "volume": "", |
| "issue": "", |
| "pages": "1459--1474", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alexandre Klementiev, Ivan Titov, and Binod Bhattarai. 2012. Inducing crosslingual distributed representa- tions of words. Proceedings of COLING 2012, pages 1459-1474.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Composing word vectors for japanese compound words using dependency relations", |
| "authors": [ |
| { |
| "first": "Kanako", |
| "middle": [], |
| "last": "Komiya", |
| "suffix": "" |
| }, |
| { |
| "first": "Takumi", |
| "middle": [], |
| "last": "Seitou", |
| "suffix": "" |
| }, |
| { |
| "first": "Minoru", |
| "middle": [], |
| "last": "Sasaki", |
| "suffix": "" |
| }, |
| { |
| "first": "Hiroyuki", |
| "middle": [], |
| "last": "Shinnou", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "CICLing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kanako Komiya, Takumi Seitou, Minoru Sasaki, and Hiroyuki Shinnou. 2019. Composing word vectors for japanese compound words using dependency rela- tions. CICLING. no 229.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Composition of word representation of long-term units from word representations of short-term units using multitask learning", |
| "authors": [ |
| { |
| "first": "Shinji", |
| "middle": [], |
| "last": "Kouno", |
| "suffix": "" |
| }, |
| { |
| "first": "Kanako", |
| "middle": [], |
| "last": "Komiya", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the NLP2020", |
| "volume": "", |
| "issue": "", |
| "pages": "209--212", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shinji Kouno and Kanako Komiya. 2020. Composi- tion of word representation of long-term units from word representations of short-term units using mul- titask learning. In Proceedings of the NLP2020, (In Japanese), pages 209-212.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Design, Compilation, and Preliminary Analyses of Balanced Corpus of Contemporary Written Japanese", |
| "authors": [ |
| { |
| "first": "Kikuo", |
| "middle": [], |
| "last": "Maekawa", |
| "suffix": "" |
| }, |
| { |
| "first": "Makoto", |
| "middle": [], |
| "last": "Yamazaki", |
| "suffix": "" |
| }, |
| { |
| "first": "Takehiko", |
| "middle": [], |
| "last": "Maruyama", |
| "suffix": "" |
| }, |
| { |
| "first": "Masaya", |
| "middle": [], |
| "last": "Yamaguchi", |
| "suffix": "" |
| }, |
| { |
| "first": "Hideki", |
| "middle": [], |
| "last": "Ogura", |
| "suffix": "" |
| }, |
| { |
| "first": "Wakako", |
| "middle": [], |
| "last": "Kashino", |
| "suffix": "" |
| }, |
| { |
| "first": "Toshinobu", |
| "middle": [], |
| "last": "Ogiso", |
| "suffix": "" |
| }, |
| { |
| "first": "Hanae", |
| "middle": [], |
| "last": "Koiso", |
| "suffix": "" |
| }, |
| { |
| "first": "Yasuharu", |
| "middle": [], |
| "last": "Den", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC 2010)", |
| "volume": "", |
| "issue": "", |
| "pages": "1483--1486", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kikuo Maekawa, Makoto Yamazaki, Takehiko Maruyama, Masaya Yamaguchi, Hideki Ogura, Wakako Kashino, Toshinobu Ogiso, Hanae Koiso, and Yasuharu Den. 2010. Design, Compilation, and Preliminary Analyses of Balanced Corpus of Contemporary Written Japanese. In Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC 2010), pages 1483-1486.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Balanced corpus of contemporary written Japanese", |
| "authors": [ |
| { |
| "first": "Kikuo", |
| "middle": [], |
| "last": "Maekawa", |
| "suffix": "" |
| }, |
| { |
| "first": "Makoto", |
| "middle": [], |
| "last": "Yamazaki", |
| "suffix": "" |
| }, |
| { |
| "first": "Toshinobu", |
| "middle": [], |
| "last": "Ogiso", |
| "suffix": "" |
| }, |
| { |
| "first": "Takehiko", |
| "middle": [], |
| "last": "Maruyama", |
| "suffix": "" |
| }, |
| { |
| "first": "Hideki", |
| "middle": [], |
| "last": "Ogura", |
| "suffix": "" |
| }, |
| { |
| "first": "Wakako", |
| "middle": [], |
| "last": "Kashino", |
| "suffix": "" |
| }, |
| { |
| "first": "Hanae", |
| "middle": [], |
| "last": "Koiso", |
| "suffix": "" |
| }, |
| { |
| "first": "Masaya", |
| "middle": [], |
| "last": "Yamaguchi", |
| "suffix": "" |
| }, |
| { |
| "first": "Makiro", |
| "middle": [], |
| "last": "Tanaka", |
| "suffix": "" |
| }, |
| { |
| "first": "Yasuharu", |
| "middle": [], |
| "last": "Den", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Language Resources and Evaluation", |
| "volume": "48", |
| "issue": "", |
| "pages": "345--371", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kikuo Maekawa, Makoto Yamazaki, Toshinobu Ogiso, Takehiko Maruyama, Hideki Ogura, Wakako Kashino, Hanae Koiso, Masaya Yamaguchi, Makiro Tanaka, and Yasuharu Den. 2014. Balanced corpus of con- temporary written japanese. Language resources and evaluation, 48(2):345-371.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Exploiting similarities among languages for machine translation", |
| "authors": [ |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| }, |
| { |
| "first": "Quoc", |
| "middle": ["V"], |
| "last": "Le", |
| "suffix": "" |
| }, |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1309.4168" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tomas Mikolov, Quoc V Le, and Ilya Sutskever. 2013. Exploiting similarities among languages for machine translation. arXiv preprint arXiv:1309.4168. National Institute for Japanese Language and Linguis- tics. 1964. Word List by Semantic Principles. Shuuei Shuppan, In Japanese.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Short-term unit analysis of the Balanced Corpus of Contemporary Japanese", |
| "authors": [ |
| { |
| "first": "Hideki", |
| "middle": [], |
| "last": "Ogura", |
| "suffix": "" |
| }, |
| { |
| "first": "Toshinobu", |
| "middle": [], |
| "last": "Ogiso", |
| "suffix": "" |
| }, |
| { |
| "first": "Hanae", |
| "middle": [], |
| "last": "Koiso", |
| "suffix": "" |
| }, |
| { |
| "first": "Yumi", |
| "middle": [], |
| "last": "Fujiike", |
| "suffix": "" |
| }, |
| { |
| "first": "Satsuki", |
| "middle": [], |
| "last": "Soma", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the NLP2007", |
| "volume": "", |
| "issue": "", |
| "pages": "720--723", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hideki Ogura, Toshinobu Ogiso, Hanae Koiso, Yumi Fujiike, and Satsuki Soma. 2007. Short-term unit analysis of balanced corpus of contemporary japanese. In Proceedings of the NLP2007, (In Japanese), pages 720-723.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "nwjc2vec: Word embedding data constructed from NINJAL Web Japanese Corpus", |
| "authors": [ |
| { |
| "first": "Hiroyuki", |
| "middle": [], |
| "last": "Shinnou", |
| "suffix": "" |
| }, |
| { |
| "first": "Masayuki", |
| "middle": [], |
| "last": "Asahara", |
| "suffix": "" |
| }, |
| { |
| "first": "Kanako", |
| "middle": [], |
| "last": "Komiya", |
| "suffix": "" |
| }, |
| { |
| "first": "Minoru", |
| "middle": [], |
| "last": "Sasaki", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Journal of Natural Language Processing", |
| "volume": "24", |
| "issue": "5", |
| "pages": "705--720", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hiroyuki Shinnou, Masayuki Asahara, Kanako Komiya, and Minoru Sasaki. 2017. nwjc2vec: Word embedding data constructed from NINJAL Web Japanese Corpus. Journal of Natural Language Processing (In Japanese), 24(5):705-720.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Distributed word representation learning for cross-lingual dependency parsing", |
| "authors": [ |
| { |
| "first": "Min", |
| "middle": [], |
| "last": "Xiao", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuhong", |
| "middle": [], |
| "last": "Guo", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the Eighteenth Conference on Computational Natural Language Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "119--129", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Min Xiao and Yuhong Guo. 2014. Distributed word representation learning for cross-lingual dependency parsing. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning, pages 119-129.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Bilingual word embeddings for phrase-based machine translation", |
| "authors": [ |
| { |
| "first": "Will", |
| "middle": ["Y"], |
| "last": "Zou", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Cer", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": ["D"], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1393--1398", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Will Y Zou, Richard Socher, Daniel Cer, and Christo- pher D Manning. 2013. Bilingual word embeddings for phrase-based machine translation. In Proceedings of the 2013 Conference on Empirical Methods in Nat- ural Language Processing, pages 1393-1398.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "type_str": "figure", |
| "text": "Figure 1: Short and long unit corpora. (Spilled table content: Number of URLs collected: 83,992,556; Number of sentences: 1,463,142,939; Number of words (tokens): 25,836,947,421)", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF1": { |
| "type_str": "figure", |
| "text": "Tree structure of the Word List by Semantic Principles (WLSP)", |
| "uris": null, |
| "num": null |
| }, |
| "TABREF0": { |
| "text": "Summary statistics for the NWJC-2014-4Q dataset", |
| "html": null, |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td>Parameters</td><td>Options</td><td>Settings</td></tr><tr><td>CBOW or skip-gram</td><td>-cbow</td><td>1</td></tr><tr><td>Dimensionality</td><td>-size</td><td>200</td></tr><tr><td>Window size</td><td colspan=\"2\">-window 8</td></tr><tr><td colspan=\"3\">Number of negative samples -negative 25</td></tr><tr><td>Hierarchical softmax</td><td>-hs</td><td>0</td></tr><tr><td colspan=\"2\">Minimum sample threshold -sample</td><td>1e-4</td></tr><tr><td>Number of iterations</td><td>-iter</td><td>15</td></tr></table>" |
| }, |
| "TABREF1": { |
| "text": "", |
| "html": null, |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td colspan=\"2\">: Parameters used to generate NWJC2vec</td></tr><tr><td>Number of text samples</td><td>172,675</td></tr><tr><td colspan=\"2\">Number of short units (tokens) 104,911,464</td></tr><tr><td>Number of long units (tokens)</td><td>83,585,665</td></tr></table>" |
| }, |
| "TABREF2": { |
| "text": "", |
| "html": null, |
| "num": null, |
| "type_str": "table", |
| "content": "<table/>" |
| }, |
| "TABREF4": { |
| "text": "Settings of word2vec", |
| "html": null, |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td>Parameters</td><td>Settings</td></tr><tr><td>Dimensionality</td><td>200 \u00d7 200</td></tr><tr><td colspan=\"2\">Optimization algorithm Adam</td></tr><tr><td>Number of epochs</td><td>1,164</td></tr></table>" |
| }, |
| "TABREF5": { |
| "text": "", |
| "html": null, |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td>: Learning parameters of the linear transformation matrix</td></tr><tr><td>5 as a thesaurus. The WLSP is a Japanese thesaurus that classifies and orders a word according to its meaning. One record is composed of the following elements: record ID number, lemma number, type of record, class, division, section, article, concept number, paragraph number, small paragraph number, word number, lemma with explanatory note, lemma without explanatory note, reading, and reverse reading. The concept number consists of a category, a medium item, and a classification item. The tree structure of the WLSP is shown in</td></tr></table>" |
| }, |
| "TABREF6": { |
| "text": "", |
| "html": null, |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td/><td>: Parameters of VecMap</td></tr><tr><td>Corpus</td><td>Vocabulary size</td></tr><tr><td>BCCWJ (long unit)</td><td>289,805</td></tr><tr><td>NWJC2vec (short unit)</td><td>1,534,957</td></tr><tr><td>Seed words</td><td>278,143</td></tr></table>" |
| }, |
| "TABREF7": { |
| "text": "Vocabulary size of word embeddings for BC-CWJ and NWJC and seed words for VecMap", |
| "html": null, |
| "num": null, |
| "type_str": "table", |
| "content": "<table/>" |
| }, |
| "TABREF9": { |
| "text": "Average rankings of the correct node according to method", |
| "html": null, |
| "num": null, |
| "type_str": "table", |
| "content": "<table/>" |
| } |
| } |
| } |
| } |