| { |
| "paper_id": "S01-1023", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T15:35:41.387085Z" |
| }, |
| "title": "ATR-SLT System for SENSEVAL-2 Japanese Translation Task", |
| "authors": [ |
| { |
| "first": "Tadashi", |
| "middle": [ |
| "K" |
| ], |
| "last": "Kumano", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "ATR Spoken Language Translation Research Laboratories", |
| "institution": "", |
| "location": { |
| "addrLine": "2-2-2 Hikaridai Seika-cho Soraku-gun Kyoto", |
| "postCode": "619-0288", |
| "country": "JAPAN" |
| } |
| }, |
| "email": "tadashi.kumano@atr.co.jp" |
| }, |
| { |
| "first": "Hideki", |
| "middle": [], |
| "last": "Kashioka", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "ATR Spoken Language Translation Research Laboratories", |
| "institution": "", |
| "location": { |
| "addrLine": "2-2-2 Hikaridai Seika-cho Soraku-gun Kyoto", |
| "postCode": "619-0288", |
| "country": "JAPAN" |
| } |
| }, |
| "email": "hideki.kashioka@atr.co.jp" |
| }, |
| { |
| "first": "Hideki", |
| "middle": [], |
| "last": "Tanaka", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "ATR Spoken Language Translation Research Laboratories", |
| "institution": "", |
| "location": { |
| "addrLine": "2-2-2 Hikaridai Seika-cho Soraku-gun Kyoto", |
| "postCode": "619-0288", |
| "country": "JAPAN" |
| } |
| }, |
| "email": "hideki.tanaka@atr.co.jp" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "We propose a translation selection system based on the vector space model. When each translation candidate of a word is given as a pair of expressions containing the word and its translation, selecting the translation of the word can be considered equivalent to selecting the expression having the most similar context among candidate expressions. The proposed method expresses the context information in \"context vectors\" constructed from content words co-occurring with the target word. Context vectors represent detailed information composed of lexical attributes (word forms, semantic codes, etc.) and syntactic relations (syntactic dependency, etc.) of the co-occurring words. We tested the proposed method with the SENSEVAL-2 Japanese translation task. Precision/recall was 45.8% against the gold standard in the experiment with the evaluation set.", |
| "pdf_parse": { |
| "paper_id": "S01-1023", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "We propose a translation selection system based on the vector space model. When each translation candidate of a word is given as a pair of expressions containing the word and its translation, selecting the translation of the word can be considered equivalent to selecting the expression having the most similar context among candidate expressions. The proposed method expresses the context information in \"context vectors\" constructed from content words co-occurring with the target word. Context vectors represent detailed information composed of lexical attributes (word forms, semantic codes, etc.) and syntactic relations (syntactic dependency, etc.) of the co-occurring words. We tested the proposed method with the SENSEVAL-2 Japanese translation task. Precision/recall was 45.8% against the gold standard in the experiment with the evaluation set.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "The SENSEVAL-2 Japanese translation task defines a sense of a Japanese word as an English translation. The same Japanese word in different contexts may have different English translations; therefore, translation ambiguity arises.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "A Translation Memory (henceforth TM) defining word senses was given to the task participants. Each target word has translation pairs of Japanese and English expressions as word sense candidates 1 . The target word is marked in the Japanese expression, but the corresponding part is unspecified in the English expression. Hence, selecting the most appropriate translation of the target Japanese word in the evaluation expression can be considered to be equivalent to selecting the expression with the most similar context in the TM. This is equivalent to the word sense disambiguation problem in a single language. 1 Each target word has 21.6 pairs on average.", |
| "cite_spans": [ |
| { |
| "start": 613, |
| "end": 614, |
| "text": "1", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Generally, word sense disambiguation uses context information, such as the frequency of words that co-occur with the target word. The context information is learned from correctly annotated training corpora. However, no training corpus was given for this task, and the given TM provided only short contexts because its expressions were rather incomplete. Therefore, instead of learning the words co-occurring with the target word from training corpora, we extract detailed information from the TM expressions as context information. We utilize the information of words co-occurring with the target word (context words) as shown below.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "95", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 lexical attributes (word form, part-of-speech, semantic codes in a thesaurus, etc.)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "95", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 syntactic relations to the target word (dependency relation, etc.)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "95", |
| "sec_num": null |
| }, |
| { |
| "text": "We employed the vector space model, which is used for text retrieval (Salton and McGill, 1983) to calculate the similarity between the context word information of evaluation expressions and that of the TM. The detailed context information is expressed as \"context vectors.\" We use cosine values between context vectors as a measure of similarity.", |
| "cite_spans": [ |
| { |
| "start": 69, |
| "end": 94, |
| "text": "(Salton and McGill, 1983)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "95", |
| "sec_num": null |
| }, |
| { |
| "text": "In this paper, we first explain how to construct \"context vectors,\" and then show the accuracy of the selection experiment against the correct data (the gold standard).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "95", |
| "sec_num": null |
| }, |
| { |
| "text": "Context Vectors 2.1 Context Vectors 2.1.1 Concept We will explain how to construct a context vector from an expression e1 containing the target word \"aida (interval)\", as an illustration. Figure 1 shows the expression, which contains the content words \"fuufu (married couple)\", \"kodomo (child)\",", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 188, |
| "end": 196, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Translation Selection Using", |
| "sec_num": "2" |
| }, |
| { |
| "text": "(Table 1 residue: the row for expression e2, \"shigoto-no aida-wo nutte mimai-ni iku\", with its context words shigoto, nutte, aida, mimai, and iku assigned to the syntactic relations weighted by \u03bbmodifying_TW, \u03bbmodified_by_TW, \u03bbtarget, and \u03bbfollow; \u03c6 marks empty slots.)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Translation Selection Using", |
| "sec_num": "2" |
| }, |
| { |
| "text": "and \"umareru (be born)\", and shows that the phrases containing these content words have some syntactic dependencies.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Translation Selection Using", |
| "sec_num": "2" |
| }, |
| { |
| "text": "We then prepare a table that enumerates all possible syntactic relations between the target word and context words, as in Table 1 . For each expression, we then insert the corresponding words into the column for each syntactic relation. For example, the row for e1 of Table 1 can be obtained by the enumeration of expression e1. If a syntactic relation is applicable to several words, such as the relation \"following words\" in Table 1 , all of them are enumerated in the same column. If no content word comes under the syntactic relation, the column is assigned empty (\u03c6).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 118, |
| "end": 125, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| }, |
| { |
| "start": 417, |
| "end": 424, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Translation Selection Using", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Each row of the table is designated the \"context vector\" Ce of the corresponding expression e.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Translation Selection Using", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In the preceding section, the table was explained as if it had context words as its elements, but in practice \"word attribute vectors\" of the context words are assigned to them. Hence, context vectors are concatenations of \"word attribute vectors.\" Each word attribute vector aw of a word w expresses lexical attributes of w, such as POS or semantic code. Word attribute vectors have a fixed number of dimensions, and each element has a non-negative value. The procedure for constructing word attribute vectors is described below in Section 2.1.3.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Calculation of Context Vectors", |
| "sec_num": "2.1.2" |
| }, |
| { |
| "text": "When several context words fall under the same syntactic relation, like kodomo and umareru as we can see in the \"following words\" relation in Table 1 , the word attribute vector assigned to the relation is calculated by selecting the maximum value for every vector component among the values of all words in that relation. The calculation, named vecmax, is defined as follows:", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 141, |
| "end": 148, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Calculation of Context Vectors", |
| "sec_num": "2.1.2" |
| }, |
| { |
| "text": "where", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Calculation of Context Vectors", |
| "sec_num": "2.1.2" |
| }, |
| { |
| "text": "vecmax_{i=1...m} ai = (b1, b2, ..., bn),", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Calculation of Context Vectors", |
| "sec_num": "2.1.2" |
| }, |
| { |
| "text": "ai is an n-dimensional vector, aij is the j-th element of vector ai, and", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Calculation of Context Vectors", |
| "sec_num": "2.1.2" |
| }, |
| { |
| "text": "bj = max_{i=1...m} aij.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Calculation of Context Vectors", |
| "sec_num": "2.1.2" |
| }, |
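As a concrete illustration of the vecmax calculation above, here is a minimal Python sketch (not from the original paper; the list-of-floats vector representation and the example values are assumptions):

```python
def vecmax(vectors):
    # Component-wise maximum over m equal-length vectors:
    # the j-th output component is b_j = max_i a_ij.
    return [max(components) for components in zip(*vectors)]

# Toy 3-dimensional word attribute vectors for two context words
# falling under the same syntactic relation (values are illustrative).
a_kodomo = [1.0, 0.0, 0.5]
a_umareru = [0.0, 1.0, 0.25]
merged = vecmax([a_kodomo, a_umareru])  # [1.0, 1.0, 0.5]
```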
| { |
| "text": "When joining word attribute vectors into a context vector, each word attribute vector is given a weight in order to obtain a certain ratio of vector components for each syntactic relation. This is necessary to specify the degree of contribution to the context vector according to the type of syntactic relation. For example, assuming that the ratio of the vector components is specified using \u03bbsyn_rel (syn_rel denotes a specific syntactic relation type) as shown in Table 1, the context vector Ce1 of the expression e1 will be calculated as follows: ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Calculation of Context Vectors", |
| "sec_num": "2.1.2" |
| }, |
| { |
| "text": "Ce1 = \u03bbmodifying_TW \u00b7 afuufu / |afuufu| \u2295 \u03bbmodified_by_TW \u00b7 aumareru / |aumareru| \u2295 ...", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Calculation of Context Vectors", |
| "sec_num": "2.1.2" |
| }, |
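The weighted joining of word attribute vectors into a context vector can be sketched in Python as follows (a hypothetical illustration: the unit-length normalization mirrors the |a| denominators in the formula, and all names and weights are assumptions):

```python
import math

def join_attribute_vectors(weighted_vectors):
    # weighted_vectors: list of (lambda_weight, attribute_vector) pairs,
    # one per syntactic relation. Each vector is normalized to unit
    # length, scaled by its relation weight, and concatenated.
    joined = []
    for lam, vec in weighted_vectors:
        norm = math.sqrt(sum(x * x for x in vec)) or 1.0  # all-zero slot guard
        joined.extend(lam * x / norm for x in vec)
    return joined

# Toy example: two syntactic relations weighted 2:1.
context = join_attribute_vectors([(2.0, [3.0, 4.0]), (1.0, [0.0, 1.0])])
# context == [1.2, 1.6, 0.0, 1.0]
```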
| { |
| "text": "For lexical attributes, we prepare another table similar to that for context words described in the previous section. As Table 2 shows, the table enumerates attributes for all appearing words, for each lexical attribute. For each word, values are assigned to the column corresponding to the lexical attribute. The value zero is assigned to the column when the lexical attribute is not applicable to the word. In Table 2 , the lexical attributes of each context word in expression e1 are expressed in each row. The row is called the \"word attribute vector\" aw of the corresponding word w. We employ the semantic codes of a Japanese thesaurus as the semantic attributes. A semantic code may have superordinates because a thesaurus represents semantic relations in a hierarchical tree structure. For example, the word fuufu has semantic codes on seven levels, from \"Noun 74\" on the leaf node to \"Noun 1\" at the top, in the thesaurus \"Nihongo Goi Taikei (Ikehara et al., 1997 )\" that we used. We treat all the semantic codes as semantic attributes of word attribute vectors, and assign values to the corresponding elements equally.", |
| "cite_spans": [ |
| { |
| "start": 951, |
| "end": 972, |
| "text": "(Ikehara et al., 1997", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 118, |
| "end": 125, |
| "text": "Table 2", |
| "ref_id": "TABREF1" |
| }, |
| { |
| "start": 411, |
| "end": 418, |
| "text": "Table 2", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Word Attribute Vectors", |
| "sec_num": "2.1.3" |
| }, |
| { |
| "text": "Each lexical attribute of a word attribute vector should be assigned a value, with the ratio of vector components for each word lexical attribute being the specific value \u03b7word_attr (word_attr de-", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Attribute Vectors", |
| "sec_num": "2.1.3" |
| }, |
| { |
| "text": "notes a specific word attribute type) in Table 2 . Since semantic attributes may have multiple components to be assigned values, each component should be normalized by the number of components (see Table 2 ).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 41, |
| "end": 48, |
| "text": "Table 2", |
| "ref_id": "TABREF1" |
| }, |
| { |
| "start": 196, |
| "end": 203, |
| "text": "Table 2", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Word Attribute Vectors", |
| "sec_num": "2.1.3" |
| }, |
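The construction of a word attribute vector with equally shared semantic-code values, as described above, might look like this in Python (a sketch under stated assumptions: the dimension sizes, one-hot POS encoding, and \u03b7 weights are illustrative, not the paper's actual configuration):

```python
def word_attribute_vector(pos_index, semantic_codes, n_pos, n_sem,
                          eta_pos=1.0, eta_sem=1.0):
    # One-hot POS part weighted by eta_pos, followed by a semantic part
    # where each of the word's codes (leaf node plus its superordinates
    # in the thesaurus) receives an equal share of eta_sem.
    vec = [0.0] * (n_pos + n_sem)
    vec[pos_index] = eta_pos
    share = eta_sem / len(semantic_codes)  # normalize by number of codes
    for code in semantic_codes:
        vec[n_pos + code] = share
    return vec

# fuufu-like toy word: POS index 0, semantic codes on 7 thesaurus levels.
a_word = word_attribute_vector(0, [0, 1, 2, 3, 4, 5, 6], n_pos=2, n_sem=7)
```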
| { |
| "text": "To select an appropriate translation for an evaluation expression containing a target Japanese word, we need to compare the context vector of the evaluation expression with the context vectors of all candidate Japanese expressions in the TM. We then choose the candidate whose cosine value to the context vector of the evaluation expression is the maximum. Each expression should have a unique context vector in order to compare context vectors. But context words, like target words, have ambiguity, and they have several candidates for semantic codes in the thesaurus. It seems unacceptable that the method requires disambiguation of context words before disambiguation of the target word. Therefore, we decided not to disambiguate context words before constructing the context vector. Instead, we construct \"context vector candidates\" from all combinations of the context word candidates. All combinations of the context vector candidates are used for calculating similarity, and the combination that has the maximum value is selected as the pair of the evaluation and the TM expressions. We can resolve ambiguity of context words when selecting the translation of the target word.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Translation Selection", |
| "sec_num": "2.2" |
| }, |
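The selection procedure described above can be sketched as follows (assumed Python, not the authors' implementation; the candidates are hypothetical (translation, context_vector) pairs):

```python
import math

def cosine(u, v):
    # Cosine similarity between two context vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def select_translation(eval_vector, candidates):
    # Choose the TM candidate whose context vector has the maximum
    # cosine similarity with that of the evaluation expression.
    best_translation, _ = max(candidates, key=lambda c: cosine(eval_vector, c[1]))
    return best_translation

# Toy example: two translation candidates for the target word.
choice = select_translation(
    [1.0, 0.2, 0.0],
    [("interval", [0.9, 0.1, 0.0]), ("during", [0.0, 0.0, 1.0])],
)
# choice == "interval"
```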
| { |
| "text": "3.1 Resources, etc.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Description of Participating System", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Our system used the following resources in addition to the given TM and evaluation set. Japanese Morphological Analyzer: JUMAN (Kurohashi and Nagao, 1998 ) Japanese Syntactic Analyzer:", |
| "cite_spans": [ |
| { |
| "start": 94, |
| "end": 120, |
| "text": "(Kurohashi and Nagao, 1998", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Description of Participating System", |
| "sec_num": "3" |
| }, |
| { |
| "text": "KNP (Kurohashi, 1998 ) Thesaurus:", |
| "cite_spans": [ |
| { |
| "start": 4, |
| "end": 20, |
| "text": "(Kurohashi, 1998", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Description of Participating System", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Nihongo Goi Taikei (Ikehara et al., 1997) ", |
| "cite_spans": [ |
| { |
| "start": 19, |
| "end": 41, |
| "text": "(Ikehara et al., 1997)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Description of Participating System", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The following parameters have significant effects on the accuracy of our method.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parameters", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "1. The \u03b7word_attr ratio of vector components specified for each word attribute when making word attribute vectors (Section 2.1.3)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parameters", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "2. The \u03bbsyn_rel ratio of the vector components specified for each syntactic relation when joining word attribute vectors into context vectors (Section 2.1.2) However, we did not optimize the parameters in our participating system, because of the task specification that no training corpus was given and because of time limitations in the course of system development. The parameters were set manually by considering their functions. All of the lexical and syntactic attributes, and the parameters representing the ratios between attributes, employed by our participating system are shown in Table 3 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 590, |
| "end": 597, |
| "text": "Table 3", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Parameters", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Our participating system marked both a precision and a recall of 45.8% against the correct data (the gold standard) in the evaluation corpus selection. However, our participating system had some serious bugs in the vector normalization process. After correcting the bugs, we ran another selection experiment using the same parameters described in Section 3.2. The accuracy of the corrected system was 49.3% (nouns: 50.0%, predicates: 48.5%).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We proposed a translation selection method for the SENSEVAL-2 Japanese translation task. The proposed method calculates the similarity between an evaluation expression containing the target word and Japanese expressions containing the same word in the TM. For calculating similarity, \"context vectors\" are constructed. Context vectors represent lexical attributes of context words and syntactic relations between context words and the target word. The system employing the proposed method achieved an accuracy of 49.3% after bug elimination.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "5 Summary", |
| "sec_num": "98" |
| }, |
| { |
| "text": "Future plans are as follows.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "5 Summary", |
| "sec_num": "98" |
| }, |
| { |
| "text": "1. To optimize the parameters using the gold standard. We would like to use the optimized parameters to study the relation between the type of context information and translation selection accuracy. In addition, we will examine whether the employed lexical and syntactic attributes are appropriate for the task.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "5 Summary", |
| "sec_num": "98" |
| }, |
| { |
| "text": "2. To apply machine learning methods to the task by preparing training corpora. We will make use of the proposed detailed context information, i.e., the lexical and syntactic attributes, in machine learning.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "5 Summary", |
| "sec_num": "98" |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Japanese Morphological Analysis System JUMAN version 3.61. Kyoto University", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Kurohashi", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Nagao", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S. Kurohashi and M. Nagao, 1998. Japanese Mor- phological Analysis System JUMAN version 3.61. Kyoto University. (in Japanese).", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Japanese Syntactic Analysis System KNP version 2.0 b6 user's manual", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Kurohashi", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S. Kurohashi, 1998. Japanese Syntactic Analysis System KNP version 2.0 b6 user's manual. Kyoto University. (in Japanese).", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Introduction to Modern Information Retrieval", |
| "authors": [ |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Salton", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [ |
| "J" |
| ], |
| "last": "Mcgill", |
| "suffix": "" |
| } |
| ], |
| "year": 1983, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "G. Salton and M. J. McGill. 1983. Introduction to Modern Information Retrieval. McGraw-Hill.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "uris": null, |
| "text": "fuufu-no aida-ni kodomo-ga umareru: Syntactic Dependencies in Expression e1", |
| "type_str": "figure", |
| "num": null |
| }, |
| "TABREF0": { |
| "text": "", |
| "type_str": "table", |
| "content": "<table><tr><td/><td/><td/><td/><td/><td/><td colspan=\"6\">: Context Vectors Construction</td><td/></tr><tr><td/><td/><td/><td/><td colspan=\"7\">Type ot syntactic relationship to the target word</td><td/><td/></tr><tr><td/><td colspan=\"4\">modifying target word in case relation: wu NU NJ ...</td><td colspan=\"4\">modihed by target word in case relation: wu NU JVJ . ..</td><td>target word</td><td>...</td><td>following words</td><td>all context words</td></tr><tr><td/><td>( e1)</td><td colspan=\"8\">fv:u.fu-no aida-ni kodomo-ga umarcru \":}cfrm (/) rs~ f: ~itt \"/J{ .ilEitL-0 (a baby is born to the couple)\"</td><td/><td/><td/></tr><tr><td>(</td><td>\u00a2</td><td>fuuju</td><td>\u00a2</td><td>\u00a2</td><td>\u00a2</td><td>\u00a2</td><td>umareru</td><td>\u00a2</td><td>aida</td><td>...</td><td>kodomo umareru</td><td>fuufu kodomo umareru</td></tr><tr><td/><td>(e2)</td><td colspan=\"3\">shigoto-no aida-wo nutte \".f\u00b1 $-(!) Fs~ ~ 'd;;</td><td>mirnai-ni</td><td>iku</td><td/><td/><td/><td/><td/><td/></tr></table>", |
| "num": null, |
| "html": null |
| }, |
| "TABREF1": { |
| "text": "Constructing Word Attribute Vectors", |
| "type_str": "table", |
| "content": "<table><tr><td>u-ma-re-ru</td></tr></table>", |
| "num": null, |
| "html": null |
| }, |
| "TABREF2": { |
| "text": "", |
| "type_str": "table", |
| "content": "<table><tr><td>: Employed Parameters</td></tr></table>", |
| "num": null, |
| "html": null |
| } |
| } |
| } |
| } |