{
"paper_id": "U12-1009",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:07:22.428650Z"
},
"title": "Segmentation and Translation of Japanese Multi-word Loanwords",
"authors": [
{
"first": "James",
"middle": [],
"last": "Breen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Melbourne",
"location": {}
},
"email": "jimbreen@gmail.com"
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Melbourne",
"location": {}
},
"email": ""
},
{
"first": "Francis",
"middle": [],
"last": "Bond",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Nanyang Technological University",
"location": {
"country": "Singapore"
}
},
"email": "bond@ieee.org"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The Japanese language has absorbed large numbers of loanwords from many languages, in particular English. As well as using single loanwords, compound nouns, multiword expressions (MWEs), etc. constructed from loanwords can be found in use in very large quantities. In this paper we describe a system which has been developed to segment Japanese loanword MWEs and construct likely English translations. The system, which leverages the availability of large bilingual dictionaries of loanwords and English n-gram corpora, achieves high levels of accuracy in discriminating between single loanwords and MWEs, and in segmenting MWEs. It also generates useful translations of MWEs, and has the potential to being a major aid to lexicographers in this area.",
"pdf_parse": {
"paper_id": "U12-1009",
"_pdf_hash": "",
"abstract": [
{
"text": "The Japanese language has absorbed large numbers of loanwords from many languages, in particular English. As well as using single loanwords, compound nouns, multiword expressions (MWEs), etc. constructed from loanwords can be found in use in very large quantities. In this paper we describe a system which has been developed to segment Japanese loanword MWEs and construct likely English translations. The system, which leverages the availability of large bilingual dictionaries of loanwords and English n-gram corpora, achieves high levels of accuracy in discriminating between single loanwords and MWEs, and in segmenting MWEs. It also generates useful translations of MWEs, and has the potential to being a major aid to lexicographers in this area.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The work described in this paper is part of a broader project to identify unrecorded lexemes, including neologisms, in Japanese corpora. Since such lexemes include the range of lexical units capable of inclusion in Japanese monolingual and bilingual dictionaries, it is important to be able to identify and extract a range of such units, including compound nouns, collocations and other multiword expressions (MWEs: Sag et al. (2002) , Baldwin and Kim (2009) ).",
"cite_spans": [
{
"start": 409,
"end": 433,
"text": "(MWEs: Sag et al. (2002)",
"ref_id": null
},
{
"start": 436,
"end": 458,
"text": "Baldwin and Kim (2009)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Unlike some languages, where there is official opposition to the incorporation of foreign words, Japanese has assimilated a large number of such words, to the extent that they constitute a sizeable proportion of the lexicon. For example, over 10% of the entries and sub-entries in the major Kenky\u016bsha New Japanese-English Dictionary (5th ed.) (Toshiro et al., 2003) are wholly or partly made up of loanwords. In addition there are several published dictionaries consisting solely of such loanwords. Estimates of the number of loanwords and particularly MWEs incorporating loanwords in Japanese range into the hundreds of thousands. While a considerable number of loanwords have been taken from Portuguese, Dutch, French, etc., the overwhelming majority are from English.",
"cite_spans": [
{
"start": 343,
"end": 365,
"text": "(Toshiro et al., 2003)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Loanwords are taken into Japanese by adapting the source language pronunciation to conform to the relatively restricted set of syllabic phonemes used in Japanese. Thus \"blog\" becomes burogu, and \"elastic\" becomes erasutikku. When written, the syllables of the loanword are transcribed in the katakana syllabic script (\u30d6\u30ed\u30b0, \u30a8\u30e9\u30b9\u30c6\u30a3\u30c3\u30af), which in modern Japanese is primarily used for this purpose. This use of a specific script means possible loanwords are generally readily identifiable in text and can be extracted without complex morphological analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
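Because loanwords are confined to a dedicated script, the extraction step the paragraph above describes amounts to scanning for katakana runs. A minimal sketch in Python (the character ranges follow the Unicode Katakana block; the function name `extract_katakana` is ours, for illustration only):

```python
import re

# Maximal runs of katakana (U+30A1-U+30FA) plus the long-vowel mark and
# iteration marks (U+30FC-U+30FE). The middle dot (U+30FB) is deliberately
# excluded, since it also serves as a list separator in Japanese text.
KATAKANA_RUN = re.compile(r"[\u30A1-\u30FA\u30FC-\u30FE]+")

def extract_katakana(text):
    """Return every maximal katakana run (candidate loanword) in text."""
    return KATAKANA_RUN.findall(text)

print(extract_katakana("私のブログはエラスティックです"))
# -> ['ブログ', 'エラスティック']
```

No morphological analysis is needed for this step; surrounding kanji and hiragana simply terminate each run.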
{
"text": "The focus of this study is on multiword loanwords. This is because there are now large collections of basic Japanese loanwords along with their translations, and it appears that many new loanwords are formed by adopting or assembling MWEs using known loanwords. As evidence of this, we can cite the numbers of katakana sequences in the the Google Japanese n-gram corpus (Kudo and Kazawa, 2007) . Of the 2.6 million 1-grams in that cor-pus, approximately 1.6 million are in katakana or other characters used in loanwords. 1 Inspection of those 1-grams indicates that once the words that are in available dictionaries are removed, the majority of the more common members are MWEs which had not been segmented during the generation of the corpus. Moreover the n-gram corpus also contains 2.6 million 2-grams and 900,000 3-grams written in katakana. Even after allowing for the multiple-counting between the 1, 2 and 3grams, and the imperfections in the segmentation of the katakana sequences, it is clear that the vast numbers of multiword loanwords in use are a fruitful area for investigation with a view to extraction and translation.",
"cite_spans": [
{
"start": 370,
"end": 393,
"text": "(Kudo and Kazawa, 2007)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the work presented in this paper we describe a system which has been developed to segment Japanese loanword MWEs and construct likely English translations, with the ultimate aim of being part of a toolkit to aid the lexicographer. The system builds on the availability of large collections of translated loanwords and a large English n-gram corpus, and in testing is performing with high levels of precision and recall.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There has not been a large amount of work published on the automatic and semiautomatic extraction and translation of Japanese loanwords. Much that has been reported has been in areas such as backtransliteration (Matsuo et al., 1996; Knight and Graehl, 1998; Bilac and Tanaka, 2004) , or on extraction from parallel bilingual corpora (Brill et al., 2001 ). More recently work has been carried out exploring combinations of dictionaries and corpora (Nakazawa et al., 2005) , although this lead does not seem to have been followed further.",
"cite_spans": [
{
"start": 211,
"end": 232,
"text": "(Matsuo et al., 1996;",
"ref_id": "BIBREF14"
},
{
"start": 233,
"end": 257,
"text": "Knight and Graehl, 1998;",
"ref_id": null
},
{
"start": 258,
"end": 281,
"text": "Bilac and Tanaka, 2004)",
"ref_id": null
},
{
"start": 333,
"end": 352,
"text": "(Brill et al., 2001",
"ref_id": "BIBREF2"
},
{
"start": 447,
"end": 470,
"text": "(Nakazawa et al., 2005)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Prior Work",
"sec_num": "2"
},
{
"text": "Both Bilac and Tanaka (2004) and Nakazawa et al. (2005) address the issue of segmentation of MWEs. This is discussed in 3.1 below.",
"cite_spans": [
{
"start": 5,
"end": 28,
"text": "Bilac and Tanaka (2004)",
"ref_id": null
},
{
"start": 33,
"end": 55,
"text": "Nakazawa et al. (2005)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Prior Work",
"sec_num": "2"
},
{
"text": "As mentioned above, loan words in Japanese are currently written in the katakana script. This is an orthographical convention that has been applied relatively strictly since the late 1940s, when major script reforms were carried out. Prior to then loanwords were also written using the hiragana syllabary and on occasions kanji (Chinese characters). The katakana script is not used exclusively for loanwords. Other usage includes: a. transcription of foreign person and place names and other named entities. Many Japanese companies use names which are transcribed in katakana. Chinese (and Korean) place names and person names, although they are usually available in kanji are often written in katakana transliterations; b. the scientific names of plants, animals, etc. c. onomatopoeic words and expressions, although these are often also written in hiragana; d. occasionally for emphasis and in some contexts for slang words, in a similar fashion to the use of italics in English. The proportion of katakana words that were not loanwords was measured by Brill et al. (2001) at about 13%. (The impact and handling of these is discussed briefly at the end of Section 4.)",
"cite_spans": [
{
"start": 1055,
"end": 1074,
"text": "Brill et al. (2001)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Role and Nature of Katakana Words in Japanese",
"sec_num": "3"
},
{
"text": "When considering the extraction of Japanese loan words from text, there are a number of issues which need to be addressed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Role and Nature of Katakana Words in Japanese",
"sec_num": "3"
},
{
"text": "As mentioned above, many loanwords appear in the form of MWEs, and their correct analysis and handling often requires separation into their composite words. In Japanese there is a convention that loanword MWEs have a \"middle-dot\" punctuation character (\u30fb) inserted between the components, however while this convention is usually followed in dictionaries, it is rarely applied elsewhere. Web search engines typically ignore this character when indexing, and a search for a very common MWE: \u30c8\u30de\u30c8\u30bd\u30fc\u30b9 tomatos\u014dsu \"tomato sauce\", reveals that it almost always appears as an undifferentiated string. Moreover, the situation is confused by the common use of the \u30fb character to separate items in lists, in a manner similar to a semi-colon in English. In practical terms, systems dealing with loanwords MWEs must be prepared to do their own segmentation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segmentation",
"sec_num": "3.1"
},
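Because dictionary forms carry the middle dot while web text almost never does, a practical first step is dot-insensitive matching between dictionary entries and surface strings. A minimal sketch in Python (the helper name `strip_middle_dots` is ours, for illustration only):

```python
# Dictionary forms mark MWE component boundaries with the middle dot
# (U+30FB); running text almost never does. Normalizing dictionary
# entries lets them be matched against undifferentiated surface strings.
def strip_middle_dots(term: str) -> str:
    return term.replace("\u30fb", "")

print(strip_middle_dots("トマト・ソース"))  # -> トマトソース ("tomato sauce")
```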
{
"text": "One approach to segmentation is to utilize a Japanese morphological analysis system. These have traditionally been weak in the area of segmentation of loanwords, and tend to default to treating long katakana strings as 1-grams. In testing a list of loanwords and MWEs using the ChaSen system , Bilac and Tanaka (2004) report a precision and recall of approximately 0.65 on the segmentation, with a tendency to undersegment being the main problem. Nakazawa et al. (2005) report a similar tendency with the JUMAN morphological analyzer (Kurohashi and Nagao, 1998). The problem was most likely due to the relatively poor representation of loanwords in the morpheme lexicons used by these systems. For example the IPADIC lexicon used at that time only had about 20,000 words in katakana, and many of those were proper nouns.",
"cite_spans": [
{
"start": 447,
"end": 469,
"text": "Nakazawa et al. (2005)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Segmentation",
"sec_num": "3.1"
},
{
"text": "In this study, we use the MeCab morphological analyzer (Kudo et al., 2004) with the recently-developed UniDic lexicon (Den et al., 2007) , as discussed below.",
"cite_spans": [
{
"start": 55,
"end": 74,
"text": "(Kudo et al., 2004)",
"ref_id": "BIBREF10"
},
{
"start": 118,
"end": 136,
"text": "(Den et al., 2007)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Segmentation",
"sec_num": "3.1"
},
{
"text": "As they were largely dealing with nonlexicalized words, Bilac and Tanaka 2004used a dynamic programming model trained on a relatively small (13,000) list of katakana words, and reported a high precision in their segmentation. Nakazawa et al. (2005) used a larger lexicon in combination with the JU-MAN analyzer and reported a similar high precision.",
"cite_spans": [
{
"start": 226,
"end": 248,
"text": "Nakazawa et al. (2005)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Segmentation",
"sec_num": "3.1"
},
{
"text": "A number of loanwords are taken from languages other than English.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Non-English Words",
"sec_num": "3.2"
},
{
"text": "The JMdict dictionary (Breen, 2004) has approximately 44,000 loanwords, of which 4% are marked as coming from other languages. Inspection of a sample of the 22,000 entries in the Gakken A Dictionary of Katakana Words (Kabasawa and Sat\u014d, 2003) indicates a similar proportion. (In both dictionaries loanwords from languages other than English are marked with their source language.) This relatively small number is known to cause some problems with generating translations through transliterations based on English, but the overall impact is not very significant.",
"cite_spans": [
{
"start": 217,
"end": 242,
"text": "(Kabasawa and Sat\u014d, 2003)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Non-English Words",
"sec_num": "3.2"
},
{
"text": "A number of katakana MWEs are constructions of two or more English words forming a term which does not occur in English. An example is \u30d0\u30fc\u30b8\u30e7\u30f3\u30a2\u30c3\u30d7 b\u0101joNappu \"version up\", meaning upgrading software, etc. These constructions are known in Japanese as \u548c\u88fd\u82f1\u8a9e wasei eigo \"Japanese-made English\". Inspection of the JMdict and Gakken dictionaries indicate they make up approximately 2% of katakana terms, and while a nuisance, are not considered to be a significant problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pseudo-English Constructions",
"sec_num": "3.3"
},
{
"text": "Written Japanese has a relatively high incidence of multiple surface forms of words, and this particularly applies to loan words. Many result from different interpretations of the pronunciation of the source language term, e.g. the word for \"diamond\" is both \u30c0\u30a4\u30e4\u30e2\u30f3\u30c9 daiyamoNdo and \u30c0\u30a4\u30a2\u30e2\u30f3\u30c9 daiamoNdo, with the two occurring in approximately equal proportions. (The JMdict dictionary records 10 variants for the word \"vibraphone\", and 9 each for \"whiskey\" and \"vodka\".) In some cases two different words have been formed from the one source word, e.g. the English word \"truck\" was borrowed twice to form \u30c8\u30e9\u30c3\u30af torakku meaning \"truck, lorry\" and \u30c8\u30ed\u30c3\u30b3 torokku meaning \"trolley, rail car\". Having reasonably complete coverage of alternative surface forms is important in the present project.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Orthographical Variants",
"sec_num": "3.4"
},
{
"text": "As our goal is the extraction and translation of loanword MWEs, we need to address the twin tasks of segmentation of the MWEs into their constituent source-language components, and generation of appropriate transla-tions for the MWEs as a whole. While the back-transliteration approaches in previous studies have been quite successful, and have an important role in handling single-word loanwords, we decided to experiment with an alternative approach which builds on the large lexicon and n-gram corpus resources which are now available. This approach, which we have labelled \"CLST\" (Corpus-based Loanword Segmentation and Translation) builds upon a direction suggested in Nakazawa et al. (2005) in that it uses a large English n-gram corpus both to validate alternative segmentations and select candidate translations. The three key resources used in CLST are: a. a dictionary of katakana words which has been assembled from: i. the entries with katakana headwords or readings in the JMdict dictionary; ii. the entries with katakana headwords in the Kenky\u016bsha New Japanese-English Dictionary; iii. the katakana entries in the Eijiro dictionary database; 2 iv. the katakana entries in a number of technical glossaries covering biomedical topics, engineering, finance, law, etc.; v. the named-entities in katakana from the JMnedict named-entity database. 3 This dictionary, which contains both base words and MWEs, includes short English translations which, where appropriate, have been split into identifiable senses. It contains a total of 270,000 entries. b. a collection of 160,000 katakana words drawn from the headwords of the dictionary above. It has been formed by splitting the known MWEs into their components where this can be carried out reliably; c. the Google English n-gram corpus 4 . 
This contains 1-grams to 5-grams collected from the Web in 2006, along with frequency counts. In the present project we use a subset of the corpus consisting only of case-folded alphabetic tokens. [Table 1 (Segmentation Example): the eight candidate segmentations of \u30bd\u30fc\u30b7\u30e3\u30eb\u30d6\u30c3\u30af\u30de\u30fc\u30af\u30b5\u30fc\u30d3\u30b9, from \u30bd\u30fc\u30b7\u30e3\u30eb\u30fb\u30d6\u30c3\u30af\u30de\u30fc\u30af\u30fb\u30b5\u30fc\u30d3\u30b9 to \u30bd\u30fc\u30fb\u30b7\u30e3\u30eb\u30fb\u30d6\u30c3\u30af\u30fb\u30de\u30fc\u30af\u30fb\u30b5\u30fc\u30fb\u30d3\u30b9.]",
"cite_spans": [
{
"start": 674,
"end": 696,
"text": "Nakazawa et al. (2005)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 2036,
"end": 2043,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Approach to Segmentation and MWE Translation",
"sec_num": "4"
},
{
"text": "The process of segmenting an MWE and deriving a translation is as follows: a. using the katakana words in (b) above, generate all possible segmentations of the MWE. A recursive algorithm is used for this. Table 1 shows the segments derived for the MWE \u30bd\u30fc\u30b7\u30e3\u30eb\u30d6\u30c3\u30af\u30de\u30fc\u30af\u30b5\u30fc\u30d3\u30b9 s\u014dsharubukkum\u0101kus\u0101bisu \"social bookmark service\". b. for each possible segmentation of an MWE, assemble one or more possible glosses as follows: i. take each element in the segmented MWE, extract the first gloss in the dictionary and assemble a composite potential translation by simply concatenating the glosses. Where there are multiple senses, extract the first gloss from each and assemble all possible combinations. (The first gloss is being used as lexicographers typically place the most relevant and succinct translation first, and this has been observed to be often the most useful when building composite glosses.) As examples, for \u30bd\u30fc\u30b7\u30e3\u30eb\u30fb\u30d6\u30c3\u30af\u30de\u30fc\u30af\u30fb\u30b5\u30fc\u30d3\u30b9 the element \u30b5\u30fc\u30d3\u30b9 has two senses \"service\" and \"goods or services without charge\", so the possible glosses were \"social bookmark service\" and \"social bookmark goods or services without charge\". For \u30bd\u30fc\u30b7\u30e3\u30eb\u30fb\u30d6\u30c3\u30af\u30fb\u30de\u30fc\u30af\u30fb\u30b5\u30fc\u30d3\u30b9 the element \u30de\u30fc\u30af has senses of \"mark\", \"paying attention\", \"markup\" and \"Mach\", so the potential glosses were \"social book mark service\", \"social book markup service\", \"social book Mach service\", etc. A total of 48 potential translations were assembled for this MWE. ii. where the senses are tagged as being affixes, also create combinations where the gloss is attached to the preceding or following gloss as appropriate. iii. if the entire MWE is in the dictionary, extract its gloss as well. 
It may seem unusual that a single sense is being sought for an MWE with polysemous elements. This comes about because in Japanese polysemous loanwords almost always arise from the borrowing of multiple source words. For example \u30e9\u30f3\u30d7 raNpu has three senses reflecting that it results from the borrowing of three distinct English words: \"lamp\", \"ramp\" and \"rump\". On the other hand, MWEs containing \u30e9\u30f3\u30d7, such as \u30cf\u30ed\u30b2\u30f3\u30e9\u30f3\u30d7 harogeNraNpu \"halogen lamp\" or \u30aa\u30f3\u30e9\u30f3\u30d7 oNraNpu \"on-ramp\", are almost invariably associated with one sense or another. c. attempt to match the potential translations with the English n-grams, and where a match does exist, extract the frequency data. For the example above, only \"social bookmark service\", which resulted from the \u30bd\u30fc\u30b7\u30e3\u30eb\u30fb\u30d6\u30c3\u30af\u30de\u30fc\u30af\u30fb\u30b5\u30fc\u30d3\u30b9 segmentation, was matched successfully; d. where match(es) result, choose the one with the highest frequency as both the most likely segmentation of the MWE and the candidate translation. The approach described above assumes that the term being analyzed is an MWE, when in fact it may well be a single word. In the case of as-yet unrecorded words we would expect that either no segmentation is accepted or that any possible segmentations have relatively low frequencies associated with the potential translations, and hence can be flagged for closer inspection. As some of the testing described below involves deciding whether a term is or is not an MWE, we have enabled the system to handle single terms as well by checking the unsegmented term against the dictionary and extracting n-gram frequency counts for the glosses. This enables the detection and rejection of possible spurious segmentations. 
As an example of this, the word \u30dc\u30fc\u30eb\u30c8 b\u014druto \"vault\" occurs in one of the test files described in the following section. A possible segmentation (\u30dc\u30fc\u30fb\u30eb\u30c8) was generated with potential translations of \"bow root\" and \"baud root\". The first of these occurs in the English 2-grams with a frequency of 63; however, \"vault\" itself has a very high frequency in the 1-grams, so the segmentation would be rejected.",
"cite_spans": [],
"ref_spans": [
{
"start": 205,
"end": 212,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Approach to Segmentation and MWE Translation",
"sec_num": "4"
},
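The segment-gloss-score procedure described in this section can be sketched compactly. The toy lexicon, gloss lists and n-gram counts below are illustrative stand-ins for the 160,000-word katakana collection and the Google English n-grams used by CLST; all names are ours, not the paper's:

```python
# Minimal sketch of the CLST pipeline: recursively segment a katakana MWE
# against a lexicon, assemble candidate glosses by concatenating one gloss
# per segment, and keep the candidate with the highest n-gram frequency.
LEXICON = {
    "ソーシャル": ["social"],
    "ブックマーク": ["bookmark"],
    "ブック": ["book"],
    "マーク": ["mark", "markup"],
    "サービス": ["service"],
}

NGRAM_FREQ = {  # hypothetical counts standing in for the Google corpus
    "social bookmark service": 1500,
    "social book mark service": 3,
}

def segmentations(word):
    """Recursively generate every covering of `word` by lexicon entries."""
    if not word:
        yield []
        return
    for entry in LEXICON:
        if word.startswith(entry):
            for rest in segmentations(word[len(entry):]):
                yield [entry] + rest

def candidate_glosses(segments):
    """Concatenate one gloss per segment, in all combinations (step b.i)."""
    glosses = [""]
    for seg in segments:
        glosses = [g + " " + s if g else s for g in glosses for s in LEXICON[seg]]
    return glosses

def best_translation(word):
    """Pick the segmentation whose gloss has the highest n-gram frequency."""
    best = None
    for segs in segmentations(word):
        for gloss in candidate_glosses(segs):
            freq = NGRAM_FREQ.get(gloss, 0)
            if freq and (best is None or freq > best[2]):
                best = (segs, gloss, freq)
    return best

print(best_translation("ソーシャルブックマークサービス"))
# -> (['ソーシャル', 'ブックマーク', 'サービス'], 'social bookmark service', 1500)
```

The same machinery covers the single-word check: if an unsegmented term is itself in the dictionary and its gloss has a much higher 1-gram frequency than any segmented candidate (the "vault" example above), the segmentation is rejected.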
{
"text": "As pointed out above, a number of katakana words are not loanwords. For the most part these would not be handled by the CLST segmentation/translation process as they would not be reduced to a set of known segments, and would be typically reported as failures. The transliteration approaches in earlier studies also have problems with these words. Some of the non-loanwords, such as scientific names of plants, animals, etc. or words written in katakana for emphasis, can be detected and filtered prior to attempted processing simply by comparing the katakana form with the equivalent hiragana form found in dictionaries. Some of the occurrences of Chinese and Japanese names in text can be detected at extraction time, as such names are often written in forms such as \"...\u91d1\u937e\u6ccc(\u30ad\u30e0\u30b8\u30e7\u30f3\u30d4\u30eb)...\". 5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach to Segmentation and MWE Translation",
"sec_num": "4"
},
{
"text": "Evaluation of the CLST system was carried out in two stages: testing the segmentation using data used in previous studies to ensure it was discriminating between single loanwords and MWEs, and testing against a collection of MWEs to evaluate the quality of the translations proposed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "The initial tests of CLST were of the segmentation function and the identification of single words/MWEs. We were fortu- Table 2 : Results from Segmentation Tests nate to be able to use the same data used by Bilac and Tanaka (2004) , which consisted of 150 out-of-lexicon katakana terms from the EDR corpus (EDR, 1995) and 78 from the NTCIR-2 test collection (Kando et al., 2001 ). The terms were hand-marked as to whether they were single words or MWEs. Unfortunately we detected some problems with this marking, for example \u30b7\u30a7\u30fc\u30af\u30b9\u30d4\u30a2 sh\u0113kusupia \"Shakespeare\" had been segmented (shake + spear) whereas \u30db\u30fc\u30eb\u30d0\u30fc\u30cb\u30f3\u30b0 h\u014drub\u0101niNgu \"hole burning\" had been left as a single word. We considered it inappropriate to use this data without amending these terms. As a consequence of this we are not able to make a direct comparison with the results reported in Bilac and Tanaka (2004) . Using the corrected data we analyzed the two datasets and report the results in Table 2 . We include the results from analyzing the data using MeCab/UniDic as well for comparison. The precision and recall achieved was higher than that reported in Bilac and Tanaka (2004) . As in Bilac and Tanaka (2004) , we calculate the scores as follows: N is the number of terms in the set, c is the number of terms correctly segmented or identified as 1-grams, e is the number of terms incorrectly segmented or identified, and n = c + e. Recall is calculated as c N , precision as c n , and the F-measure as 2\u00d7precision\u00d7recall precision+recall . As can be seen, our CLST approach has achieved a high degree of accuracy in identifying 1-grams and segmenting the MWEs. Although it was not part of the test, it also proposed the correct translations for almost all the MWEs. 
The less-than-perfect recall is entirely due to the few cases where either no segmentation was proposed, or where the proposed segmentation could not be validated with the English n-grams.",
"cite_spans": [
{
"start": 207,
"end": 230,
"text": "Bilac and Tanaka (2004)",
"ref_id": null
},
{
"start": 306,
"end": 317,
"text": "(EDR, 1995)",
"ref_id": "BIBREF5"
},
{
"start": 358,
"end": 377,
"text": "(Kando et al., 2001",
"ref_id": "BIBREF7"
},
{
"start": 845,
"end": 868,
"text": "Bilac and Tanaka (2004)",
"ref_id": null
},
{
"start": 1118,
"end": 1141,
"text": "Bilac and Tanaka (2004)",
"ref_id": null
},
{
"start": 1150,
"end": 1173,
"text": "Bilac and Tanaka (2004)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 120,
"end": 127,
"text": "Table 2",
"ref_id": null
},
{
"start": 951,
"end": 958,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Segmentation",
"sec_num": "5.1"
},
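The scoring definitions in the section above translate directly into code. A small helper using the paper's symbols (N, c, e, with n = c + e); the counts in the usage line are hypothetical, not the paper's reported figures:

```python
# Evaluation scores as defined in the text: N = terms in the test set,
# c = correctly segmented or identified, e = incorrectly analyzed,
# n = c + e (terms for which some analysis was produced).
def scores(N, c, e):
    n = c + e
    recall = c / N
    precision = c / n
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, f_measure

# Hypothetical counts, for illustration only.
p, r, f = scores(N=150, c=140, e=5)
print(f"precision={p:.3f} recall={r:.3f} F={f:.3f}")
```

Note that recall penalizes terms for which no analysis was produced at all (they count against N but not n), which is exactly the failure mode described above for unvalidated segmentations.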
{
"text": "The performance of MeCab/UniDic is interesting, as it also has achieved a high level of accuracy. This is despite the UniDic lexicon only having approximately 55,000 katakana words, and the fact that it is operating outside the textual context for which it has been trained. Its main shortcoming is that it tends to over-segment, which is a contrast to the performance of ChaSen/IPADIC reported in Bilac and Tanaka (2004) where undersegmentation was the problem.",
"cite_spans": [
{
"start": 398,
"end": 421,
"text": "Bilac and Tanaka (2004)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Segmentation",
"sec_num": "5.1"
},
{
"text": "The second set of tests of CLST was directed at developing translations for MWEs. The initial translation tests were carried out on two sets of data, each containing 100 MWEs. The sets of data were obtained as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation",
"sec_num": "5.2"
},
{
"text": "a. the 100 highest-frequency MWEs were selected from the Google Japanese 2grams. The list of potential MWEs had to be manually edited as the 2grams contain a large number of oversegmented words, e.g. \u30a2\u30a4\u30b3\u30f3 aikoN \"icon\" was split: \u30a2\u30a4\u30b3+\u30f3, and \u30aa\u30fc\u30af\u30b7\u30e7\u30f3 \u014dkushoN \"auction\" was split \u30aa\u30fc\u30af+\u30b7\u30e7\u30f3; b. the katakana sequences were extracted from a large collection of articles from 1999 in the Mainichi Shimbun (a Japanese daily newspaper), and the 100 highest-frequency MWEs extracted. After the data sets were processed by CLST the results were examined to determine if the segmentations had been carried out correctly, and to assess the quality of the proposed translations. The translations were graded into three groups: (1) acceptable as a dictionary gloss, (2) understandable, but in need of improvement, and (3) wrong or inadequate. An example of a translation graded as 2 is \u30de\u30a4\u30ca\u30b9\u30a4\u30aa\u30f3 mainasuioN \"minus ion\", where \"negative ion\" would be better, and one graded as 3 is \u30d5\u30ea\u30fc\u30de\u30fc\u30b1\u30c3\u30c8 fur\u012bm\u0101ketto \"free market\", where the correct translation is \"flea market\". For the most part the translations receiving a grading of 2 were the same as would have been produced by a back-transliteration system, and in many cases they were the wasei eigo constructions described above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation",
"sec_num": "5.2"
},
{
"text": "Some example segmentations, possible translations and gradings are in The assessments of the segmentation and the gradings of the translations are given in Table 4 . The precision, recall and F measures have been calculated on the basis that a grade of 2 or better for a translation is a satisfactory outcome.",
"cite_spans": [],
"ref_spans": [
{
"start": 156,
"end": 163,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Translation",
"sec_num": "5.2"
},
{
"text": "A brief analysis was conducted on samples of 25 MWEs from each test set to ascertain whether they were already in dictionaries, or the degree to which they were suitable for inclusion in a dictionary. The dictionaries used for this evaluation were the commercial Kenkyusha Online Dictionary Service 6 which has eighteen Japanese, Japanese-English and English-Japanese dictionaries in its search tool, and the free WWWJDIC online dictionary, 7 which has the JMdict and JMnedict dictionaries, as well as numerous glossaries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation",
"sec_num": "5.2"
},
{
"text": "Of the 50 MWEs sampled: a. 34 (68%) were in dictionaries; b. 11 (22%) were considered suitable for inclusion in a dictionary. In some cases the generated translation was not considered appropriate without some modification, i.e. it had been categorized as \"2\"; c. 3 (6%) were proper names (e.g. hotels,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation",
"sec_num": "5.2"
},
{
"text": "6 http://kod.kenkyusha.co.jp/service/ 7 http://www.edrdg.org/cgibin/wwwjdic/wwwjdic?1C software packages); d. 2 (4%) were not considered suitable for inclusion in a dictionary as they were simple collocations such as \u30e1\u30cb\u30e5\u30fc\u30a8\u30ea\u30a2 meny\u016beria \"menu area\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation",
"sec_num": "5.2"
},
{
"text": "As the tests described above were carried out on sets of frequently-occurring MWEs, it was considered appropriate that some further testing be carried out on less common loanword MWEs. Therefore an additional set of 100 lower-frequency MWEs which did not occur in the dictionaries mentioned above were extracted from the Mainichi Shimbun articles and were processed by the CLST system. Of these 100 MWEs: a. 1 was not successfully segmented; b. 83 of the derived translations were classified as \"1\" and 16 as \"2\"; c. 8 were proper names.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation",
"sec_num": "5.2"
},
{
"text": "The suitability of these MWEs for possible inclusion in a bilingual dictionary was also evaluated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation",
"sec_num": "5.2"
},
{
"text": "In fact the overwhelming majority of the MWEs were relatively straightforward collocations, e.g. \u30de\u30e9\u30bd\u30f3\u30e9\u30f3\u30ca\u30fc marasoNraNn\u0101 \"marathon runner\" and \u30ed\u30c3\u30af\u30b3\u30f3\u30b5\u30fc\u30c8 rokkukoNs\u0101to \"rock concert\", and were deemed to be not really appropriate as dictionary entries. 5 terms were assessed as being dic-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation",
"sec_num": "5.2"
},
{
"text": "In addition to katakana, loanwords use the \u30fc (ch\u014doN) character for indicating lengthened vowels, and on rare occasions the \u30fd and \u30fe syllable repetition characters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.eijiro.jp/e/index.htm 3 http://www.csse.monash.edu.au/~jwb/ enamdict_doc.html 4 http://www.ldc.upenn.edu/Catalog/ CatalogEntry.jsp?catalogId=LDC2006T13",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Kim Jong-Pil, a former South Korean politician.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "tionary candidates. Several of these, e.g. \u30b4\u30fc\u30eb\u30c9\u30d7\u30e9\u30f3 g\u014drudopuraN \"gold plan\" and \u30a8\u30fc\u30b9\u30b9\u30c8\u30e9\u30a4\u30ab\u30fc \u0113susutoraik\u0101 \"ace striker\" were category 2 translations, and their possible inclusion in a dictionary would largely be because their meanings are not readily apparent from the component words, and an expanded gloss would be required.Some points which emerge from the analysis of the results of the tests described above are: a. to some extent, the Google n-gram test data had a bias towards the types of constructions favoured by Japanese webpage designers, e.g. \u30b7\u30e7\u30c3\u30d4\u30f3\u30b0\u30c8\u30c3\u30d7 shoppiNgutoppu \"shopping top\", which possibly inflated the proportion of translations being scored with a 2; b. some of the problems leading to a failure to segment the MWEs were due to the way the English n-gram files were constructed. Words with apostrophes were split, so that \"men's\" was recorded as a bigram: \"men+'s\". This situation is not currently handled in CLST, which led to some of the segmentation failures, e.g. with \u30e1\u30f3\u30ba\u30a2\u30a4\u30c6\u30e0 meNzuaitemu \"men's item\";",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "annex",
"sec_num": null
},
{
"text": "In this paper we have described the CLST (Corpus-based Loanword Segmentation and Translation) system which has been developed to segment Japanese loanword MWEs and construct likely English translations. The system, which leverages the availability of large bilingual dictionaries of loanwords and English n-gram corpora, is achieving high levels of accuracy in discriminating between single loanwords and MWEs, and in segmenting MWEs. It is also generating useful translations of MWEs, and has the potential to being a major aide both to lexicography in this area, and to translating. The apparent success of an approach based on a combination of large corpora and relatively simple heuristics is consistent with the conclusions reached in a number of earlier investigations (Banko and Brill, 2001; Lapata and Keller, 2004) .Although the CLST system is performing at a high level, there are a number of areas where refinement and experimentation on possible enhancements can be carried out. They include:a. instead of using the \"first-gloss\" heuristic, experiment with using all available glosses. This would be at the price of increased processing time, but may improve the performance of the segmentation and translation; b. align the searching of the n-gram corpus to cater for the manner in which words with apostrophes, etc. are segmented. At present this is not handled correctly; c. tune the presentation of the glosses in the dictionaries so that they will match better with the contents of the n-gram corpus. At present the dictionary used is simply a concatenation of several sources, and does not take into account such things as the n-gram corpus having hyphenated words segmented; d. extend the system by incorporating a back-transliteration module such as that reported in Bilac and Tanaka (2004) . This would cater for single loanwords and thus provide more complete coverage.",
"cite_spans": [
{
"start": 775,
"end": 798,
"text": "(Banko and Brill, 2001;",
"ref_id": "BIBREF2"
},
{
"start": 799,
"end": 823,
"text": "Lapata and Keller, 2004)",
"ref_id": "BIBREF12"
},
{
"start": 1787,
"end": 1810,
"text": "Bilac and Tanaka (2004)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "IPADIC version 2.7.0 User's Manual",
"authors": [
{
"first": "Masayuki",
"middle": [],
"last": "Asahara",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2003,
"venue": "NAIST, Information Science Division",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Masayuki Asahara and Yuji Matsumoto. 2003. IPADIC version 2.7.0 User's Manual (in Japanese). NAIST, Information Science Divi- sion.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Multiword expressions",
"authors": [
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
},
{
"first": "Su",
"middle": [
"Nam"
],
"last": "Kim",
"suffix": ""
}
],
"year": 2009,
"venue": "Handbook of Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Timothy Baldwin and Su Nam Kim. 2009. Mul- tiword expressions. In Nitin Indurkhya and Fred J. Damerau, editors, Handbook of Natural Language Processing. CRC Press, Boca Raton, USA, 2nd edition.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Scaling to very very large corpora for natural language disambiguation",
"authors": [
{
"first": "Michele",
"middle": [],
"last": "Banko",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Brill",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 39th Annual Meeting of the ACL and 10th Conference of the EACL (ACL-EACL 2001)",
"volume": "",
"issue": "",
"pages": "65--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michele Banko and Eric Brill. 2001. Scaling to very very large corpora for natural language dis- ambiguation. In Proceedings of the 39th Annual Meeting of the ACL and 10th Conference of the EACL (ACL-EACL 2001), Toulouse, France. Slaven Bilac and Hozumi Tanaka. 2004. A Hy- brid Back-transliteration System for Japanese. In Proceedings of the 20th international con- ference on Computational Linguistics, COLING '04, Geneva, Switzerland. James Breen. 2004. JMdict: a Japanese- Multilingual Dictionary. In Proceedings of the COLING-2004 Workshop on Multilingual Re- sources, pages 65-72, Geneva, Switzerland. Eric Brill, Gary Kacmarcik, and Chris Brock- ett. 2001. Automatically Harvesting Katakana-",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "English Term Pairs from Search Engine Query Logs",
"authors": [],
"year": 2001,
"venue": "Proceedings of the Sixth Natural Language Processing Pacific Rim Symposium",
"volume": "",
"issue": "",
"pages": "393--399",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "English Term Pairs from Search Engine Query Logs. In Proceedings of the Sixth Natural Language Processing Pacific Rim Symposium, November 27-30, 2001, Hitotsubashi Memo- rial Hall, National Center of Sciences, Tokyo, Japan, pages 393-399.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The development of an electronic dictionary for morphological analysis and its application to Japanese corpus linguistics",
"authors": [
{
"first": "Yasuharu",
"middle": [],
"last": "Den",
"suffix": ""
},
{
"first": "Toshinobu",
"middle": [],
"last": "Ogiso",
"suffix": ""
},
{
"first": "Hideki",
"middle": [],
"last": "Ogura",
"suffix": ""
},
{
"first": "Atsushi",
"middle": [],
"last": "Yamada",
"suffix": ""
},
{
"first": "Nobuaki",
"middle": [],
"last": "Minematsu",
"suffix": ""
}
],
"year": 2007,
"venue": "Japanese Linguistics",
"volume": "22",
"issue": "",
"pages": "101--123",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yasuharu Den, Toshinobu Ogiso, Hideki Ogura, Atsushi Yamada, Nobuaki Minematsu, Kiy- otaka Uchimoto, and Hanae Koiso. 2007. The development of an electronic dictionary for morphological analysis and its application to Japanese corpus linguistics (in Japanese). Japanese Linguistics, 22:101-123.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "EDR Electronic Dictionary Technical Guide",
"authors": [
{
"first": "",
"middle": [],
"last": "Edr",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "EDR, 1995. EDR Electronic Dictionary Technical Guide. Japan Electronic Dictionary Research Institute, Ltd. (in Japanese).",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A Dictionary of Katakana Words",
"authors": [
{
"first": "Y\u014dichi",
"middle": [],
"last": "Kabasawa",
"suffix": ""
},
{
"first": "Morio",
"middle": [],
"last": "Sat\u014d",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y\u014dichi Kabasawa and Morio Sat\u014d, editors. 2003. A Dictionary of Katakana Words. Gakken.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Overview of Japanese and English Information Retrieval Tasks (JEIR) at the Second NTCIR Workshop",
"authors": [
{
"first": "Noriko",
"middle": [],
"last": "Kando",
"suffix": ""
},
{
"first": "Kazuko",
"middle": [],
"last": "Kuriyama",
"suffix": ""
},
{
"first": "Masaharu",
"middle": [],
"last": "Yoshioka",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the Second NTCIR Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noriko Kando, Kazuko Kuriyama, and Masaharu Yoshioka. 2001. Overview of Japanese and En- glish Information Retrieval Tasks (JEIR) at the Second NTCIR Workshop. In Proceedings of the Second NTCIR Workshop, Jeju, Korea. Kevin Knight and Jonathan Graehl. 1998.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Japanese Web N-gram Corpus version 1",
"authors": [
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "Hideto",
"middle": [],
"last": "Kazawa",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taku Kudo and Hideto Kazawa. 2007. Japanese Web N-gram Corpus version 1. http://www. ldc.upenn.edu/Catalog/docs/LDC2009T08/.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Applying conditional random fields to Japanese morphological analysis",
"authors": [
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "Kaoru",
"middle": [],
"last": "Yamamoto",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing (EMNLP 2004)",
"volume": "",
"issue": "",
"pages": "230--237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taku Kudo, Kaoru Yamamoto, and Yuji Mat- sumoto. 2004. Applying conditional random fields to Japanese morphological analysis. In Proceedings of the 2004 Conference on Em- pirical Methods in Natural Language Process- ing (EMNLP 2004), pages 230-237, Barcelona, Spain.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Nihongo keitai-kaiseki sisutemu JUMAN [Japanese morphological analysis system JU-MAN] version 3.5",
"authors": [
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
},
{
"first": "Makoto",
"middle": [],
"last": "Nagao",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sadao Kurohashi and Makoto Nagao. 1998. Nihongo keitai-kaiseki sisutemu JUMAN [Japanese morphological analysis system JU- MAN] version 3.5. Technical report, Kyoto University. (in Japanese).",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The web as a baseline: Evaluating the performance of unsupervised web-based models for a range of NLP tasks",
"authors": [
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Keller",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Human Langauge Techinology Conference and Conference on Empirical Methods in National Language Processing (HLT/NAACL-2004)",
"volume": "",
"issue": "",
"pages": "121--128",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mirella Lapata and Frank Keller. 2004. The web as a baseline: Evaluating the performance of unsupervised web-based models for a range of NLP tasks. In Proceedings of the Human Langauge Techinology Conference and Confer- ence on Empirical Methods in National Lan- guage Processing (HLT/NAACL-2004), pages 121-128, Boston, USA.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Japanese Morphological Analysis System ChaSen Version 2.3.3 Manual. Technical report",
"authors": [
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
},
{
"first": "Akira",
"middle": [],
"last": "Kitauchi",
"suffix": ""
},
{
"first": "Tatsuo",
"middle": [],
"last": "Yamashita",
"suffix": ""
},
{
"first": "Yoshitaka",
"middle": [],
"last": "Hirano",
"suffix": ""
},
{
"first": "Hiroshi",
"middle": [],
"last": "Matsuda",
"suffix": ""
},
{
"first": "Kazuma",
"middle": [],
"last": "Takaoka",
"suffix": ""
},
{
"first": "Masayuki",
"middle": [],
"last": "Asahara",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuji Matsumoto, Akira Kitauchi, Tatsuo Ya- mashita, Yoshitaka Hirano, Hiroshi Matsuda, Kazuma Takaoka, and Masayuki Asahara. 2003. Japanese Morphological Analysis System ChaSen Version 2.3.3 Manual. Technical re- port, NAIST.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Translation of 'katakana' words using an English dictionary and grammar",
"authors": [
{
"first": "Yoshihiro",
"middle": [],
"last": "Matsuo",
"suffix": ""
},
{
"first": "Mamiko",
"middle": [],
"last": "Hatayama",
"suffix": ""
},
{
"first": "Satoru",
"middle": [],
"last": "Ikehara",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the Information Processing Society of Japan",
"volume": "53",
"issue": "",
"pages": "65--66",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshihiro Matsuo, Mamiko Hatayama, and Satoru Ikehara. 1996. Translation of 'katakana' words using an English dictionary and grammar (in Japanese). In Proceedings of the Information Processing Society of Japan, volume 53, pages 65-66.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Automatic Acquisition of Basic Katakana Lexicon from a Given Corpus",
"authors": [
{
"first": "Toshiaki",
"middle": [],
"last": "Nakazawa",
"suffix": ""
},
{
"first": "Daisuke",
"middle": [],
"last": "Kawahara",
"suffix": ""
},
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 2nd International Joint Conference on Natural Language Processing (IJCNLP-05)",
"volume": "",
"issue": "",
"pages": "682--693",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Toshiaki Nakazawa, Daisuke Kawahara, and Sadao Kurohashi. 2005. Automatic Acquisi- tion of Basic Katakana Lexicon from a Given Corpus. In Proceedings of the 2nd International Joint Conference on Natural Language Process- ing (IJCNLP-05), pages 682-693, Jeju, Korea.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Multiword expressions: A pain in the neck for NLP",
"authors": [
{
"first": "A",
"middle": [],
"last": "Ivan",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Sag",
"suffix": ""
},
{
"first": "Francis",
"middle": [],
"last": "Baldwin",
"suffix": ""
},
{
"first": "Ann",
"middle": [],
"last": "Bond",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Copestake",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Flickinger",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 3rd International Conference on Intelligent Text Processing and Computational Linguistics (CICLing-2002)",
"volume": "",
"issue": "",
"pages": "1--15",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan A. Sag, Timothy Baldwin, Francis Bond, Ann Copestake, and Dan Flickinger. 2002. Multi- word expressions: A pain in the neck for NLP. In Proceedings of the 3rd International Confer- ence on Intelligent Text Processing and Com- putational Linguistics (CICLing-2002), pages 1-15, Mexico City, Mexico.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Kenky\u00fbsha New Japanese-English Dictionary",
"authors": [],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Watanabe Toshiro, Edmund Skrzypczak, and Paul Snowdon (eds). 2003. Kenky\u00fbsha New Japanese-English Dictionary, 5th Edition. Kenky\u00fbsha.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"content": "<table><tr><td>MWE</td><td>Segmentation</td><td colspan=\"3\">Possible Translation Frequency Grade</td></tr><tr><td>\u30ed\u30b0\u30a4\u30f3\u30d8\u30eb\u30d7</td><td>\u30ed\u30b0\u30a4\u30f3\u30fb\u30d8\u30eb\u30d7</td><td>login help</td><td>541097</td><td>1</td></tr><tr><td>\u30ed\u30b0\u30a4\u30f3\u30d8\u30eb\u30d7</td><td>\u30ed\u30b0\u30fb\u30a4\u30f3\u30fb\u30d8\u30eb\u30d7</td><td>log in help</td><td>169972</td><td>-</td></tr><tr><td colspan=\"2\">\u30ad\u30fc\u30ef\u30fc\u30c9\u30e9\u30f3\u30ad\u30f3\u30b0 \u30ad\u30fc\u30ef\u30fc\u30c9\u30fb\u30e9\u30f3\u30ad\u30f3\u30b0</td><td>keyword ranking</td><td>39818</td><td>1</td></tr><tr><td colspan=\"3\">\u30ad\u30fc\u30ef\u30fc\u30c9\u30e9\u30f3\u30ad\u30f3\u30b0 \u30ad\u30fc\u30fb\u30ef\u30fc\u30c9\u30fb\u30e9\u30f3\u30ad\u30f3\u30b0 key word ranking</td><td>74</td><td>-</td></tr><tr><td>\u30ad\u30e3\u30ea\u30a2\u30a2\u30c3\u30d7</td><td>\u30ad\u30e3\u30ea\u30a2\u30fb\u30a2\u30c3\u30d7</td><td>career up</td><td>13043</td><td>2</td></tr><tr><td>\u30ad\u30e3\u30ea\u30a2\u30a2\u30c3\u30d7</td><td>\u30ad\u30e3\u30ea\u30a2\u30fb\u30a2\u30c3\u30d7</td><td>carrier up</td><td>2552</td><td>-</td></tr><tr><td>\u30ad\u30e3\u30ea\u30a2\u30a2\u30c3\u30d7</td><td>\u30ad\u30e3\u30ea\u30a2\u30fb\u30a2\u30c3\u30d7</td><td>career close up</td><td>195</td><td>-</td></tr><tr><td>\u30ad\u30e3\u30ea\u30a2\u30a2\u30c3\u30d7</td><td>\u30ad\u30e3\u30ea\u30a2\u30fb\u30a2\u30c3\u30d7</td><td>career being over</td><td>188</td><td>-</td></tr><tr><td>\u30ad\u30e3\u30ea\u30a2\u30a2\u30c3\u30d7</td><td>\u30ad\u30e3\u30ea\u30a2\u30fb\u30a2\u30c3\u30d7</td><td>carrier increasing</td><td>54</td><td>-</td></tr></table>",
"html": null,
"type_str": "table",
"text": "",
"num": null
},
"TABREF2": {
"content": "<table><tr><td>Data</td><td>Failed</td><td colspan=\"3\">Translation Grades</td><td/></tr><tr><td>Set</td><td colspan=\"2\">Segmentations 1</td><td>2</td><td>3</td><td colspan=\"2\">Precision Recall F</td></tr><tr><td>Google</td><td>9</td><td colspan=\"2\">66 24</td><td>1</td><td>98.90</td><td>90.00 94.24</td></tr><tr><td>Mainichi (Set 1)</td><td>3</td><td colspan=\"2\">77 19</td><td>1</td><td>98.97</td><td>96.00 97.46</td></tr><tr><td>Mainichi (Set 2)</td><td>1</td><td colspan=\"2\">83 16</td><td>0</td><td>100.00</td><td>99.00 99.50</td></tr></table>",
"html": null,
"type_str": "table",
"text": "Sample Segmentations and Translations",
"num": null
},
"TABREF3": {
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "Results from Translation Tests",
"num": null
}
}
}
}