{ "paper_id": "O13-2000", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:04:01.228471Z" }, "title": "Lexical Coverage in Taiwan Mandarin Conversation", "authors": [ { "first": "Shu-Chuan", "middle": [], "last": "Tseng", "suffix": "", "affiliation": { "laboratory": "", "institution": "Academia Sinica", "location": { "settlement": "Taipei", "country": "Taiwan" } }, "email": "tsengsc@gate.sinica.edu.tw" }, { "first": "Joseph", "middle": [ "Z" ], "last": "Chang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Academia Sinica", "location": { "settlement": "Taipei", "country": "Taiwan" } }, "email": "" }, { "first": "Jason", "middle": [ "S" ], "last": "Chang", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Tsing Hua University", "location": { "settlement": "Hsinchu", "country": "Taiwan" } }, "email": "jason.jschang@gmail.com" }, { "first": "Jyh-Shing", "middle": [ "Roger" ], "last": "Jang", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Taiwan University", "location": { "country": "Taiwan" } }, "email": "jang@csie.ntu.edu.tw" }, { "first": "Dipankar", "middle": [], "last": "Das", "suffix": "", "affiliation": { "laboratory": "", "institution": "Academia Sinica", "location": { "settlement": "Taipei", "country": "Taiwan" } }, "email": "" }, { "first": "Sivaji", "middle": [], "last": "Bandyopadhyay", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Tsing Hua University", "location": { "settlement": "Hsinchu", "country": "Taiwan" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Information about the lexical capacity of the speakers of a specific language is indispensable for empirical and experimental studies on the human behavior of using speech as a communicative means. 
Unlike the increasing number of gigantic text- or web-based corpora that have been developed in recent decades, publicly distributed spoken resources, especially conversations, are few in number. This article studies the lexical coverage of a corpus of Taiwan Mandarin conversations recorded in three speaking scenarios. A wordlist based on this corpus has been prepared and provides information about frequency counts of words and parts of speech processed by an automatic system. Manual post-editing of the results was performed to ensure the usability and reliability of the wordlist. Syllable information was derived by automatically converting the Chinese characters to a conventional romanization scheme, followed by manual correction of conversion errors and disambiguation of homographs. As a result, the wordlist contains 405,435 ordinary words and 57,696 instances of discourse particles, markers, fillers, and feedback words. Lexical coverage in Taiwan Mandarin conversation is revealed and is compared with a balanced corpus of texts in terms of words, syllables, and word categories.", "pdf_parse": { "paper_id": "O13-2000", "_pdf_hash": "", "abstract": [ { "text": "Information about the lexical capacity of the speakers of a specific language is indispensable for empirical and experimental studies on the human behavior of using speech as a communicative means. Unlike the increasing number of gigantic text- or web-based corpora that have been developed in recent decades, publicly distributed spoken resources, especially conversations, are few in number. This article studies the lexical coverage of a corpus of Taiwan Mandarin conversations recorded in three speaking scenarios. A wordlist based on this corpus has been prepared and provides information about frequency counts of words and parts of speech processed by an automatic system. Manual post-editing of the results was performed to ensure the usability and reliability of the wordlist. 
Syllable information was derived by automatically converting the Chinese characters to a conventional romanization scheme, followed by manual correction of conversion errors and disambiguation of homographs. As a result, the wordlist contains 405,435 ordinary words and 57,696 instances of discourse particles, markers, fillers, and feedback words. Lexical coverage in Taiwan Mandarin conversation is revealed and is compared with a balanced corpus of texts in terms of words, syllables, and word categories.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Exchange and communication of thoughts are mainly performed by producing and perceiving/interpreting words, whether in text or speech. In spite of philosophical debates on the concept of words, it is more or less accepted by most of the disciplines working with languages that one way of exploring the lexical capacity of the users of a specific language is to examine the distribution of words collected in a large-scale balanced corpus. Different from the lexical entries listed in a dictionary, corpus data provide information about the lexical knowledge of language users that resembles their experiences and abilities in a realistic context. Of this information, word frequency counts are the simplest and most basic. Nevertheless, they are directly associated with the lexical capacity of language users in a given scenario. Word frequency is one of the most essential kinds of information when implementing language-related technology tools and systems. Once a reliable word list is available, different computational models can be developed or applied to examine the role lexical knowledge plays in using a language (Baayen, 2001) . 
For pedagogical purposes, word counts based on real corpus data will help prepare authentic learning materials for first and second language learners (Xiao et al., 2009; Knowles, 1990; McCarthy, 1999) . For research purposes, empirical information about lexical capacity is indispensable for constructing stimuli and testing hypotheses for word- or phonology-related psycholinguistic experiments (Wepman & Lozar, 1973) . In each kind of application using the word distribution information mentioned above, it is important that the sources from which we obtain the information resemble the distribution of word tokens and types in the authentic language input available to language users.", "cite_spans": [ { "start": 1160, "end": 1174, "text": "(Baayen, 2001)", "ref_id": "BIBREF2" }, { "start": 1327, "end": 1346, "text": "(Xiao et al., 2009;", "ref_id": null }, { "start": 1347, "end": 1361, "text": "Knowles, 1990;", "ref_id": "BIBREF13" }, { "start": 1362, "end": 1377, "text": "McCarthy, 1999)", "ref_id": "BIBREF15" }, { "start": 1572, "end": 1594, "text": "(Wepman & Lozar, 1973)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Nearly a century ago, Thorndike (1921) listed the 10,000 most widely used English words based on a 4.6-million-word corpus consisting of 41 different sources, which included children's literature, the Bible, classics, elementary school textbooks, and newspapers. A later version extended the list to 30,000 words (Thorndike & Lorge, 1944) . The main purpose of these earliest wordlists was to provide word information for teaching English. Nowadays, taking advantage of the latest technology, the amount and scale of textual corpora collected via digital resources in recent decades have become enormous. The British National Corpus (BNC) contains 100 million English words. Of the corpus data, 90% were based on written texts (Leech et al., 2001) . 
The first released version of the American National Corpus (ANC) contained 11.5 million English words, 70% of which were written texts (Reppen & Ide, 2004) . Both the BNC and the ANC are balanced corpora. They consist of texts collected from different producers and genres, also including transcripts of spoken language. Purely textual corpora, such as the English Gigaword and the Chinese Gigaword, distributed by the Linguistic Data Consortium (LDC), are mostly collections of newspaper articles, reflecting a specific kind of language user behavior. Nevertheless, to reflect the lexical capacity of language users in natural speech communication, we need a corpus of \"naturally produced\" conversations with different sociolinguistic designs of speaker relationships and different conversation types. Compared with textual corpora, however, corpora of this kind are considerably more difficult to obtain.", "cite_spans": [ { "start": 315, "end": 340, "text": "(Thorndike & Lorge, 1944)", "ref_id": null }, { "start": 739, "end": 759, "text": "(Leech et al., 2001)", "ref_id": "BIBREF14" }, { "start": 897, "end": 917, "text": "(Reppen & Ide, 2004)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Collecting and processing speech data cannot be accomplished automatically. The cost of preparing spoken corpora is high, especially when dealing with natural conversations. The types of spoken corpora vary to a large degree, ranging from reading a list of words/texts, telling a story, executing a task, to free conversation. To take English as an example, a number of conversational corpora have been collected for educational, clinical, or experimental studies of spoken word distribution (French et al., 1930; Howes, 1964; Howes, 1966) . 
They have attracted intense attention because they provide the most realistic materials for studying how people converse to exchange thoughts and communicate. During the last twenty years, the scale and the application of spoken corpora have been enormously extended. Svartvik and Quirk (1980) published a corpus of English conversation, later known as the London-Lund Corpus of English Conversation. A word frequency count of 190,000 words from the corpus was published four years later (Brown, 1984) . Later, a part of the BNC also contained conversations, with a focus on a balanced socio-geographic sampling of speakers of English (Crowdy, 1993) .", "cite_spans": [ { "start": 545, "end": 566, "text": "(French et al., 1930;", "ref_id": "BIBREF9" }, { "start": 567, "end": 579, "text": "Howes, 1964;", "ref_id": "BIBREF11" }, { "start": 580, "end": 592, "text": "Howes, 1966)", "ref_id": "BIBREF12" }, { "start": 872, "end": 897, "text": "Svartvik and Quirk (1980)", "ref_id": null }, { "start": 1092, "end": 1105, "text": "(Brown, 1984)", "ref_id": "BIBREF4" }, { "start": 1239, "end": 1253, "text": "(Crowdy, 1993)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "With the growing number of spoken corpora being or having been processed, the technology and the concept of how to prepare spoken corpora have also changed accordingly, owing to the extensive application possibilities and the available software (Gibbon et al., 1997) . 
Newly developed spoken corpora are, for instance, transcribed with annotation schemes marking targeted linguistic phenomena, time-aligned with speech signals at different linguistic levels, and automatically processed for word segmentation and part-of-speech tagging of the transcripts; these practices have opened new horizons for how spoken corpora can be used for academic and educational purposes.", "cite_spans": [ { "start": 246, "end": 267, "text": "(Gibbon et al., 1997)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "This paper studies the lexical coverage of a Taiwan Mandarin conversational corpus based on the derived Taiwan Mandarin Spoken Wordlist and compares it with the Sinica Corpus (Chen & Huang, 1996) , which is currently the largest POS-tagged text corpus of Taiwan Mandarin. This section gives an introduction to how the conversational corpus has been collected and processed and how the wordlist has been prepared.", "cite_spans": [ { "start": 175, "end": 195, "text": "(Chen & Huang, 1996)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Taiwan Mandarin Spoken Wordlist", "sec_num": "2." }, { "text": "The Taiwan Mandarin Conversational Corpus (the TMC Corpus, hereafter) is composed of three sub-corpora of Taiwan Mandarin conversations, which have been processed at the Institute of Linguistics, Academia Sinica (Tseng, 2004) . The Mandarin Conversational Dialogue Corpus (the MCDC) is a collection of 30 free conversations between speakers who were meeting for the first time (37 females and 23 males, with ages between 16 and 45). The project was executed in 2001. One year later, 30 of the MCDC speakers were recruited again to record conversations with a person they knew well for the next two corpus collection projects. As a result, 33 female and 27 male speakers whose ages ranged from 14 to 63 participated in the project. 
The Mandarin Topic-oriented Conversation Corpus (the MTCC) is a collection of topic-specific conversations on selected news or events that took place in 2001. The Mandarin Map Task Corpus (the MMTC) is a collection of task-oriented dialogues, basically following the Map Task design (Anderson et al., 1991) . Different from the MTCC and the MMTC, the free conversations in the MCDC were more formal, as the conversation partners were strangers. The final version of the TMC Corpus consists of 85 conversations, approximately 42 hours of speech recording. Five conversations were not included in the TMC Corpus because the participants spoke Taiwan Southern Min instead of Taiwan Mandarin to their conversation partners most of the time in their conversations. General information about the corpora is summarized in Table 1 . From the viewpoint of speaker relationship, the TMC Corpus contains conversations between strangers and conversations between people who are familiar with each other. From the viewpoint of the speaking situation, the TMC Corpus includes three different scenarios: free conversations, topic-specific conversations, and task-oriented conversations. That is, the TMC Corpus provides speech data of a variety of speaker groups communicating in different speaking styles and situations.", "cite_spans": [ { "start": 212, "end": 225, "text": "(Tseng, 2004)", "ref_id": null }, { "start": 1054, "end": 1077, "text": "(Anderson et al., 1991)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 1586, "end": 1593, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Taiwan Mandarin Conversational Corpus", "sec_num": "2.1" }, { "text": "The speech content of the 85 conversations was orthographically transcribed and carefully cross-checked. Words were transcribed in traditional Chinese characters. Pauses and paralinguistic sounds, such as inhalation, coughing, and laughter, were indicated in the transcripts. 
Items that are often used in spoken discourse, such as discourse particles, discourse markers, fillers, and feedback words, were transcribed with capital letters for two reasons. On the one hand, we wanted to distinguish these items from ordinary words due to their pragmatic function in conversation. On the other hand, it is not always possible to find the correct, or widely accepted, characters to transcribe these groups of items. For example, well-conventionalized characters are available in the writing system for most of the discourse particles (Chao 1965) originating from Mandarin Chinese, such as: A \u554a, AI YA \u54ce\u5440, AI YOU \u5509\u5466, BA \u5427, E/EP \u5443, EN \u55ef, HAI \u55e8, HE \u5475, HEI \u563f, HWA \u5629, LA \u5566, LIE/LEI \u54a7, LO \u56c9, MA \u561b, NOU/NO \u558f, O \u5594/\u5662/\u54e6, OU \u5662, WA \u54c7, WA SAI \u54c7\uf96c, YE \u8036, YI \u54a6, and YOU \u5466. Nevertheless, some of the very common particles in contemporary Taiwan Mandarin conversation, such as EIN, HAN, HEIN, HO, HYO, and HAIN, originate from Taiwanese Southern Min, a major dialect spoken in Taiwan. For these particles, no widely acceptable characters are available to transcribe them. Capital letters indicating the pronunciation were used to transcribe discourse particles of this kind. Different from discourse particles, discourse markers noted in our transcribing system are originally lexical items, i.e. regular words with a matching character in the writing system. When their original semantic meaning is lost and their use becomes essentially pragmatic in conversation, however, they are regarded as a kind of discourse marker. Their function is similar to that of the discourse markers that are generally defined, e.g. well, but, and ok (Schiffrin, 1988) , marking the emerging structure of conversation. 
In principle, they are used by a speaker to keep the floor or to stall for more time to think of what to say next. Among the discourse markers annotated in the TMC Corpus, NA is the most frequently used marker. Originally, \u90a3 (NA) was a demonstrative determiner, meaning \"that\". As a discourse marker, however, it sometimes appears before a proper noun, which is grammatically incorrect in the case of a determiner. This usage illustrates the difference between \u90a3 (NA) as a determiner and as a discourse marker. As a result, we noted discourse markers of this specific group, including NA, NE, NA GE, NE GE, NEI GE, SHEN ME, and ZHE GE.", "cite_spans": [ { "start": 830, "end": 841, "text": "(Chao 1965)", "ref_id": "BIBREF5" }, { "start": 1971, "end": 1988, "text": "(Schiffrin, 1988)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Corpus Transcription", "sec_num": "2.2" }, { "text": "The third type, fillers and feedback words, themselves do not involve any concrete semantic meaning. Fillers function as discourse markers in a similar way, indicating hesitations in the speech flow (Shriberg, 1994) . Feedback words are used as a response signal to the conversation partner. Different foci on spoken discourse may lead to diverse terminologies and classifications of lexical items; for instance, Chao (1965) regards some of the fillers and feedback words as interjections, carrying specific intonation contours. Nevertheless, in the TMC Corpus, our primary goal was to develop a coherent transcription convention for conversation. Basically, we transcribed them according to their syllable structure, because the surface forms of fillers and feedback words are systematically similar. Prosodic realization may add further pragmatic interpretations to fillers and feedback words. Nevertheless, in the transcription system, we do not make further distinctions. 
There are four different sub-groups of fillers and feedback words: zero onset + schwa + dental nasal coda (UHN, UHNN, UHNHN), zero onset + schwa + bilabial nasal coda (UHM, UHMM, UHMHM), dental nasal onset + schwa + dental nasal coda (NHN, NHNN, NHNHN), and bilabial nasal onset + schwa + bilabial nasal coda (MHM, MHMM, MHMHM, MHMHMHM, MHMHMHMHM). When they are produced with more than one syllable, each syllable is represented by a repeated H. A repeated nasal coda indicates a prolongation of the coda.", "cite_spans": [ { "start": 195, "end": 211, "text": "(Shriberg, 1994)", "ref_id": null }, { "start": 405, "end": 416, "text": "Chao (1965)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 1081, "end": 1099, "text": "(UHN, UHNN, UHNHN)", "ref_id": null }, { "start": 1210, "end": 1228, "text": "(NHN, NHNN, NHNHN)", "ref_id": null } ], "eq_spans": [], "section": "GE.", "sec_num": null }, { "text": "Foreign words, such as English or Japanese, are either written in their original writing convention or the equivalent romanization. Speech stretches containing pronunciation variants and code switching are transcribed in such a way that the meaning of the speech content is written in the Taiwan Mandarin writing convention.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GE.", "sec_num": null }, { "text": "The orthographic transcription of the corpus is presented in PRAAT with two tiers (Boersma & Weenink, 2012) . The first tier gives information about the speaker identity and the sequence number of the speaker's turn in a coded way, and the transcription of the speech content is presented on the second tier. The boundaries of all speaker turns are time-aligned with the speech signal. Figure 1 is an extract from the MCDC sub-corpus. 
", "cite_spans": [ { "start": 82, "end": 107, "text": "(Boersma & Weenink, 2012)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 386, "end": 394, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Time-aligned Transcripts in PRAAT", "sec_num": "2.3" }, { "text": "Word boundaries in the Chinese texts are not marked by spaces. In order to prepare the wordlist of the TMC Corpus, we applied the CKIP word segmentation and POS tagging system to automatically process the transcripts (Chen & Huang, 1996) . The POS tagset developed by the CKIP team is listed in Table 2 (CKIP, 1998) . Slightly modifying the tagset, we added nominal expressions and idioms to the category S, because they act as independent sentences in conversation from both syntactic and pragmatic points of view and they should not be regarded as any one of the other POS categories. With regard to the input format, the CKIP system was originally designed to process individual sentences. For processing the TMC Corpus, the content of each speaker turn was used as individual input to run the CKIP system. As the majority of the corpus data are long speaker turns of more than one sentence, difficulties may arise in word segmentation and POS tagging. In this regard, manual post-editing would be necessary. Table 2 . 
The CKIP POS Tagset.", "cite_spans": [ { "start": 217, "end": 237, "text": "(Chen & Huang, 1996)", "ref_id": "BIBREF6" }, { "start": 303, "end": 315, "text": "(CKIP, 1998)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 295, "end": 302, "text": "Table 2", "ref_id": "TABREF33" }, { "start": 1013, "end": 1055, "text": "Taiwan Mandarin Conversation 7 Table 2", "ref_id": "TABREF14" } ], "eq_spans": [], "section": "Word Segmentation and POS Tagging", "sec_num": "2.4" }, { "text": "Adjectives: non-predicative adjective (A). Adverbs: adverb (D), quantitative adverb (Da), pre-verbal adverb of degree (Dfa), post-verbal adverb of degree (Dfb), sentential adverb (Dk), aspectual adverb (Di)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adjectives", "sec_num": null }, { "text": "Given a body of language data, whether in the form of text or speech, the lexical coverage revealed from the data varies according to producer- and genre-related factors. Each individual collection of a corpus is only representative of the specific producer group under a given condition of language production. The Sinica Corpus is a balanced corpus of texts containing different genres. In the design of the TMC Corpus, we have attempted to cover a variety of formal and informal speaking situations by the arrangement of conversation partners (strangers vs. familiar persons) and different speaking scenarios by the arrangement of tasks (free conversation, map task, and topic-specific). It is clear that the TMC Corpus and the Sinica Corpus are not directly and completely comparable in terms of producers and genres. Nevertheless, the TMC Corpus and the Sinica Corpus were compiled by adopting the same word segmentation and POS tagging system, and they are currently the largest conversational and textual corpora available for Taiwan Mandarin. 
For this reason, when we examine the lexical coverage of the TMC Corpus, we compare it with the Sinica Corpus to explore the similarities and differences between words produced in the form of conversation and text. Wordlists derived from these two corpora were used: the Taiwan Mandarin Spoken Wordlist and the Word List with Accumulated Word Frequency in Sinica Corpus 3.0 (CKIP, 1998) . In order to collect information about syllables as well, we ran the same automatic conversion program on the Word List with Accumulated Word Frequency in Sinica Corpus 3.0.", "cite_spans": [ { "start": 1408, "end": 1431, "text": "Corpus 3.0 (CKIP, 1998)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Lexical Coverage in Conversational and Text Corpus", "sec_num": "3." }, { "text": "The results, however, were not manually checked for homograph errors, as was done for the TMC Corpus. For the current study, we have cleaned up errors we found in the wordlist of the Sinica Corpus, so the statistics summarized in Table 3 may be slightly different from the official ones published by the CKIP team. As one can see, the Sinica Corpus is about ten times bigger than the TMC Corpus.", "cite_spans": [], "ref_spans": [ { "start": 290, "end": 297, "text": "Table 3", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Lexical Coverage in Conversational and Text Corpus", "sec_num": "3." }, { "text": "Corpus coverage of different vocabulary sizes in both corpora is listed in Table 4 . 
The top 2000 word types in the TMC Corpus make up about 90% of the overall word tokens, whereas they only account for 70% of word tokens in the Sinica Corpus. McCarthy (1999: 236) has made a comparable proposal that \"\u2026 a round-figure pedagogical target of the first 2000 words in order of frequency will safely cover the everyday core with some margin for error.\" Counting homographs with different POS categories as distinct word types, 1,117 among the top 2000 word types occur in both corpora, including nouns, verbs, adverbs, conjunctions, determinatives, prepositions, pronouns, non-predicative adjectives, particles, the structural particle DE, and the copula SHI. These 1,117 word types shared in the top 2000 list of both corpora account for 81% of the TMC corpus coverage and 58% of the Sinica corpus coverage. A selection of these 1,117 word types, the (approximately) top 100 words in both corpora, is listed in Appendix A. They may be regarded as the core vocabulary that is required for functional communication in the form of conversation and text. For educational purposes, this core vocabulary may be the target words for teaching practice and materials to focus on (Xiao et al., 2009; Tao, 2009) . The word distribution in both corpora is presented in terms of cumulative frequency in Figure 2 . To achieve 90% corpus coverage, the first 15,000 frequency-ranked word types in the Sinica Corpus and the first 2,000 in the TMC Corpus are required. In proportion to each corpus's total word types, 27% of the observed word types in the Sinica Corpus and 12% in the TMC Corpus account for the majority of the lexical coverage of each corpus. This may suggest that these two different vocabulary sets are required for fluent communication in the form of text and conversation. The number of word types differs greatly between the two corpora, i.e. 15,000 versus 2,000. 
Nevertheless, if we view the number of characters involved in the two vocabulary sets, there are 2,964 different characters in the case of the Sinica Corpus and 1,065 in the TMC Corpus. A Chinese character is normally also a morpheme in Mandarin Chinese and is equivalent to a tone-specified syllable. The large number of homophones in Chinese leads to asymmetry between the number of tone-specified syllables from the phonological point of view and the number of characters from the orthographic point of view. The vocabulary sets required for fluent communication above are equivalent to 1,065 tone-specified syllables for text (1,120 for the Sinica Corpus in total) and 654 for conversation (1,076 for the TMC Corpus in total). ", "cite_spans": [ { "start": 244, "end": 264, "text": "McCarthy (1999: 236)", "ref_id": null }, { "start": 1274, "end": 1293, "text": "(Xiao et al., 2009;", "ref_id": null }, { "start": 1294, "end": 1304, "text": "Tao, 2009)", "ref_id": null } ], "ref_spans": [ { "start": 75, "end": 82, "text": "Table 4", "ref_id": "TABREF2" }, { "start": 1400, "end": 1408, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Word coverage", "sec_num": "3.1" }, { "text": "In the TMC Corpus, 1,076 different tone-specified syllable types were produced. In the Sinica Corpus, it was 1,120. Apparently, there is no clear difference between the text and conversation corpora in this regard, as shown in Figure 3 . Similarly, to account for 90% of the corpus coverage, 300 tone-specified syllable types are required in the TMC Corpus and 400 are required in the Sinica Corpus. Moreover, if we disregard tone information, the number of syllable structures is 390 in the TMC Corpus, and 392 in the Sinica Corpus. This is almost the same in both corpora. Among them, 385 syllable structures were found in both corpora and the other 15 syllable structures appeared in only one of the corpora. 
The syllable figures in both wordlists suggest that the capacity of phonologically different syllables (with or without consideration of lexical tones) in Taiwan Mandarin used in text and conversation is of similar size. Nevertheless, the number of tone-specified syllables does not equal the number of characters, or morphemes in Mandarin, as we mentioned earlier. For use in the form of text or conversation, the discrepancy is noticeable, as the vocabulary sets required for fluent communication differ significantly: 1,065 for text and 654 for conversation. ", "cite_spans": [], "ref_spans": [ { "start": 227, "end": 235, "text": "Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Syllable Coverage", "sec_num": "3.2" }, { "text": "The proportions of the 14 categories of the CKIP POS tags in both corpora are summarized in Table 5 . The occurrences of nouns and verbs in the Sinica Corpus make up nearly 90% of the word tokens, suggesting that a certain percentage of nouns and verbs appear quite often in the Sinica Corpus. In contrast, the percentage of verbs and nouns in the TMC Corpus is only 45%. Words of the other categories, such as adverbs, pronouns, determinatives, prepositions, and conjunctions, were used significantly more often in conversation than in text. ", "cite_spans": [], "ref_spans": [ { "start": 92, "end": 99, "text": "Table 5", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Distribution of Word Category", "sec_num": "3.3" }, { "text": "The tokens per type of verb and noun in the Sinica Corpus are high because the corpus share of tokens is high and that of types is rather low, as shown in Figure 4 . This may be due to the topics and the types of the articles included in the corpus, as the Sinica Corpus contains a large number of literary texts. In contrast, the other word categories cover a much lower share of tokens but a higher share of types. In the TMC Corpus, a complementary distribution was observed. 
Verbs and nouns account for wider corpus coverage in terms of types than in terms of tokens. This suggests that different tasks and scenarios of conversations may elicit different vocabularies. The other word categories, mostly function words, account for more tokens than types. In particular, the use of adverbs is different in conversation and in text. This, to a certain degree, is similar to the distribution found in a comparative study of spoken and written corpora of Swedish (Allwood, 1998) . Adverbs, like the other function word categories (conjunctions and prepositions), were used more frequently in the spoken corpus than in the written corpus. Nevertheless, unlike in Taiwan Mandarin, pronouns and verbs were the most frequently produced categories in Swedish text and spoken corpora. The reason may lie in the characteristics of Chinese syntax: zero anaphora is a frequently observed phenomenon in Chinese sentences, so pronouns are often omitted in text; in interactive communication situations such as conversation, however, pronouns are often used for addressing people. As observed in the comparison of text and conversation, pronouns only make up 0.18% of the overall word tokens in the Sinica Corpus, but 10% in the TMC Corpus.", "cite_spans": [ { "start": 958, "end": 973, "text": "(Allwood, 1998)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 155, "end": 163, "text": "Figure 4", "ref_id": "FIGREF12" } ], "eq_spans": [], "section": "Figure 4. Parts of speech in conversational and text corpus.", "sec_num": null }, { "text": "Interaction in conversation is often marked by pragmatic indicators, such as prosodic prominence, or by the use of discourse items, such as particles or feedback items. In this regard, conversation clearly differs from text. This section is concerned with corpus coverage of discourse-related items in the TMC Corpus. Relative to their small number of types, discourse items were produced much more frequently than ordinary words. 
The ratio of ordinary-word occurrences to discourse-item occurrences in the TMC Corpus is approximately eight to one. That is, on average, a speech stretch of eight words is accompanied by at least one discourse item. These items mark discourse-relevant positions in conversation, and they usually are produced with distinctive prosodic patterns to indicate the structure of a spoken discourse. With regard to information delivery, they may be considered a kind of redundancy. Their main function is to express the attitudes (particles), the fluency (markers and fillers), and the attention (feedback words) of the speakers. Without these discourse-related items, a conversation would be more like a scripted dialogue. For academic purposes, we need to investigate these discourse items, because they function as a kind of juncture between concepts and also as markers of emerging patterns in conversation. As listed in Table 6 , the tokens-per-type figure for discourse markers is 1,835, which is very high compared with ordinary words in the corpus. This suggests that the performance of automatic speech recognition systems working with conversation can be improved in an economical and efficient way by incorporating information about the position of these discourse-related items (syntactic or prosodic) and their phonetic representation. Discourse particles are likewise produced more often than the top 1,000 word types in the TMC Corpus, at 342 tokens per type. The numbers of distinct types of discourse particles and markers are small, but the tokens per type are high. Furthermore, fillers and feedback words have a limited number of phonetic variants, as their phonetic representations are systematically predictable. Thus, they can be studied in terms of their phonetic forms, pronunciation variations, and their relationship to contextual information. Feedback words normally mark the structure of speaker turn changes. 
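The tokens-per-type figures quoted above are straightforward corpus statistics; as a minimal sketch (the toy token lists below are hypothetical, not the actual TMC data), they can be computed as:

```python
from collections import Counter

def tokens_per_type(tokens):
    """Ratio of token count to distinct-type count for a list of word tokens."""
    types = Counter(tokens)
    return len(tokens) / len(types)

# Hypothetical toy data: discourse particles repeat far more often per type
# than ordinary words, mirroring the pattern reported for the TMC Corpus.
ordinary = ["喝", "咖啡", "喝", "茶", "今天", "天氣"]
particles = ["啊", "啊", "喔", "啊", "喔", "啊"]

print(tokens_per_type(ordinary))   # 6 tokens / 5 types = 1.2
print(tokens_per_type(particles))  # 6 tokens / 2 types = 3.0
```

A high tokens-per-type value means a small set of items does a large share of the work, which is exactly the profile of discourse particles and markers described here.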
Automatic detection of the discourse items would significantly enhance the understanding of conversation content and structure.", "cite_spans": [], "ref_spans": [ { "start": 1358, "end": 1365, "text": "Table 6", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Discourse-related Items in the TMC Corpus", "sec_num": "3.4" }, { "text": "Spoken language is performed differently in different speaking situations. To understand the lexical capacity of language users, no matter what purposes we have in mind, we need to base our investigations on realistic language data. The ideal corpus of this kind should take into account the versatility of speaker groups, conversation types, and speaking situations. In other words, it needs to be balanced among a variety of sociolinguistic settings. The concept of a balanced corpus for texts needs modification to be used for speech, as a balanced corpus of spoken data should also involve the spontaneous and interactive behavior of the speakers in specific speaking situations. Furthermore, the processing and presentation of speech corpora go beyond the consideration of the meta-data structures of text corpora. The transcribing convention needs to deal with the diversity of spoken phenomena in spontaneous speech. Alignment with the speech signal needs to be conducted manually or automatically to increase the value of speech corpus applications for language technology systems and language teaching tools. It is unlikely that the study of lexical coverage based on the Taiwan Mandarin Conversational Corpus represents the capacity of all Taiwan Mandarin speakers in all kinds of speaking situations. Nevertheless, we presented an attempt to provide empirical data for this line of research. With this data, we hope to extend our understanding of how and why humans are capable of conversing through words. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "4." 
}, { "text": "The phrase translation problem is critical to many cross-language tasks, including statistical machine translation, cross-lingual information retrieval, and multilingual terminology (Bian & Chen, 2000; Kupiec, 1993) . Such systems typically use a bilingual lexicon or a parallel corpus to obtain phrase translations. Nevertheless, the out-of-vocabulary (OOV) problem is difficult to overcome, even with a very large training corpus, due to the Zipfian nature of word distributions and the fact that new words, technical terms, and named entities arise frequently. On the other hand, the advent of the Internet has led to an unprecedented buildup of multilingual texts. Specifically, there is an abundance of webpages consisting of mixed-code text, namely text written in more than one language. We observe that mixed-code webpages typically are written in one language but interspersed with some sentential or phrasal translations written in another language. By retrieving and identifying such translation counterparts on the Web, we can cope with the OOV problem caused by the limited coverage of dictionaries and parallel corpora.", "cite_spans": [ { "start": 182, "end": 201, "text": "(Bian & Chen, 2000;", "ref_id": "BIBREF18" }, { "start": 202, "end": 215, "text": "Kupiec, 1993)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Consider a Wikipedia title, \"Named-entity recognition\". The best places to find the Chinese translations for this technical term are probably not some parallel corpus or dictionary, but rather mixed-code webpages that mention it in both Chinese and English. The following example is a snippet returned by the Bing search engine for the query \"named entity recognition\" requesting Chinese-language webpages:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." 
}, { "text": "http://zh.wikipedia.org/zh-hk/\u554f\u7b54\u7cfb\u7d71: \u5f9e\u7cfb\u7d71\u5167\u90e8\u4f86\u770b\uff0c\u554f\u7b54\u7cfb\u7d71\u4f7f\u7528\u4e86\u5927\u91cf\u6709\u5225\u65bc\u50b3\u7d71\u8cc7\u8a0a\u6aa2\u7d22\u7cfb\u7d71\u81ea\u7136\u8a9e\u8a00\u8655\u7406\u6280\u8853\uff0c\u5982\u81ea\u7136\u8a9e\u8a00\u5256\u6790(Natural Language Parsing)\u3001\u554f\u984c\u5206\u985e(Question Classification)\u3001\u5c08\u540d\u8fa8\u8b58(Named Entity Recognition)\u7b49\u7b49\u3002", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In this snippet, the author mentioned several technical terms in Chinese (e.g., \u81ea\u7136\u8a9e\u8a00\u5256\u6790 ziran yuyan poxi, \u554f\u984c\u5206\u985e wenti fenlei, and \u5c08\u540d\u8fa8\u8b58 zhuanming bianshi), followed by the source terms in brackets (Natural Language Parsing, Question Classification, and Named Entity Recognition, respectively). The term-translation pairs in the above example follow the parenthetical translation surface pattern in the form of \"Chinese translation (English term)\". This pattern is only one of many surface patterns found on the Web that may indicate a term-translation pair. In the following examples, we show different surface patterns of translation pairs found on the Web, with Chinese translations underlined and the counterpart English terms italicized: For a given English term, such translations can be extracted by classifying the Chinese characters in the snippets as either part of the translation or not. Intuitively, we can cast the problem as a sequence labeling task. To be effective, we need to associate each token (i.e., Chinese character or word) with some features that characterize the likelihood of the token being part of the translation. 
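The parenthetical surface pattern "Chinese translation (English term)" can be matched with a simple regular expression. The sketch below is illustrative only: the CJK character range and the pattern are our simplifications, not this paper's method, which trains a CRF precisely because fixed patterns like this one are noisy and incomplete.

```python
import re

# Matches "Chinese translation (English term)" pairs in mixed-code text.
# The CJK range \u4e00-\u9fff is a simplification; half-width and
# full-width parentheses are both accepted.
PAREN_PAIR = re.compile(
    r"([\u4e00-\u9fff]+)\s*[(（]\s*([A-Za-z][A-Za-z ]*[A-Za-z])\s*[)）]"
)

def extract_pairs(snippet):
    return PAREN_PAIR.findall(snippet)

snippet = "如自然語言剖析(Natural Language Parsing)、問題分類(Question Classification)等等。"
print(extract_pairs(snippet))
# [('如自然語言剖析', 'Natural Language Parsing'), ('問題分類', 'Question Classification')]
```

Note how the first match greedily sweeps in 如 ("such as"), which is not part of the translation: exactly the kind of boundary noise that motivates character-level sequence labeling instead of fixed patterns.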
For example, by exploiting some external knowledge sources (e.g., bilingual dictionaries), we can derive that the Chinese character \"\u8fa8\" (bian) in the Chinese word \"\u8fa8\u8b58\" (bian-shi, recognition) is likely to be part of the translation of \"named entity recognition.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In this paper, we present a new method that automatically obtains such labeled data and generates features for training a conditional random fields (CRF) model that is capable of identifying translations or transliterations in mixed-code snippets returned by search engines (e.g., Google or Bing). The system uses a small set of phrase-translation pairs to obtain search engine snippets that may contain both an English term and its Chinese translation. The snippets then are tagged automatically to train a CRF sequence labeler. We describe the training process in more detail in Section 4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "At run-time, we start with a given phrase (e.g., \"named-entity recognition\"), which is transformed into a query set up to retrieve webpages in the target language (e.g., Chinese). We then retrieve mixed-code snippets returned by the search engine and extract translations within the snippets. The identified translations can be used to supplement a bilingual terminology bank (e.g., adding multilingual titles to existing Wikipedia); alternatively, they can be used as additional training data for a machine translation system, as described in Lin, Zhao, Van Durme, and Pa\u015fca (2008) .", "cite_spans": [ { "start": 550, "end": 588, "text": "Lin, Zhao, Van Durme, and Pa\u015fca (2008)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." 
}, { "text": "Most previous works focus on extracting translation pairs where the counterpart terms appear near one another in the webpage, based on a limited set of short patterns. In our approach, we extract term and translation pairs that are near or far apart, and are not limited by a set of predefined patterns. We have evaluated our method based on English-Chinese language links in Wikipedia as the gold standard. Results show that our method produces output for 80% of the test cases with an exact-match precision of 43%, outperforming previous works.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The rest of the paper is organized as follows. In Section 2, we survey related work that also aimed to mine translations from the Web. In Section 3, we briefly describe the resources we make use of. In Section 4, we describe in detail the problem statement and the proposed method. Finally, we report evaluation results and error analysis in Section 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In machine translation, a source text is typically translated one sentence at a time, while cross-lingual information retrieval involves phrasal translation. The methods proposed for phrase translation in the literature rely on handcrafted bilingual dictionaries, transliteration tables, or bilingual corpora. For example, Knight and Graehl (1998) described and evaluated a multi-stage machine translation method for performing backwards transliteration of Japanese names and technical terms into English, while Bian and Chen (2000) described cross-language information access to multilingual collections on the Internet. Smadja, McKeown, and Hatzivassiloglou (1996) proposed an algorithm for producing collocation and translation pairs, including noun and verb phrases, in bilingual corpora. 
Similarly, Kupiec (1993) proposed an algorithm for finding noun phrase correspondences in bilingual corpora for bilingual lexicography and machine translation. Koehn and Knight (2003) described a noun phrase translation subsystem that improves word-based statistical machine translation methods.", "cite_spans": [ { "start": 330, "end": 354, "text": "Knight and Graehl (1998)", "ref_id": "BIBREF28" }, { "start": 519, "end": 539, "text": "Bian and Chen (2000)", "ref_id": "BIBREF18" }, { "start": 639, "end": 683, "text": "Smadja, McKeown, and Hatzivassiloglou (1996)", "ref_id": "BIBREF40" }, { "start": 821, "end": 834, "text": "Kupiec (1993)", "ref_id": "BIBREF30" }, { "start": 968, "end": 991, "text": "Koehn and Knight (2003)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "Some methods in the literature also have aimed to exploit mixed-code webpages for word and phrase translation. Nagata, Saito, and Suzuki (2001) presented a system for finding English translations for a given Japanese technical term in search engine results. Their method extracts English phrases appearing near the given Japanese term, and it scores translation candidates based on co-occurrence counts and location. Cao and Li (2002) proposed an EM algorithm for finding translations of base noun phrases on the Web. Kwok et al. (2005) focused on named entity phrases and implemented a cross-lingual name finder based on Chinese-English webpages. Wu, Lin, and Chang (2005) proposed a method for learning a set of surface patterns to find terms and translations occurring at a short distance from one another. Mixed-code webpage snippets were obtained by querying a search engine with English terms for Chinese webpages. They discovered that the most frequent pattern is one in which the translation is immediately followed by the source term, with a coverage rate of 46%. 
Their results also indicate that the stricter parenthetical pattern covers less than 30% of the translation instances.", "cite_spans": [ { "start": 111, "end": 143, "text": "Nagata, Saito, and Suzuki (2001)", "ref_id": "BIBREF37" }, { "start": 417, "end": 434, "text": "Cao and Li (2002)", "ref_id": "BIBREF19" }, { "start": 518, "end": 536, "text": "Kwok et al. (2005)", "ref_id": "BIBREF31" }, { "start": 648, "end": 673, "text": "Wu, Lin, and Chang (2005)", "ref_id": "BIBREF42" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "Researchers also have explored the hyperlinks in webpages as a source of bilingual data. Lu, Chien, and Lee (2004) proposed a method for mining terms and translations from anchor text directly or transitively. In a follow-up project, Cheng et al. (2004) proposed a method for translating unknown queries with web corpora for cross-language information retrieval. Similarly, Gravano and Henzinger (2006) also proposed systems and methods for using anchor text as parallel corpora for cross-language information retrieval.", "cite_spans": [ { "start": 83, "end": 108, "text": "Lu, Chien, and Lee (2004)", "ref_id": "BIBREF35" }, { "start": 228, "end": 247, "text": "Cheng et al. (2004)", "ref_id": "BIBREF21" }, { "start": 368, "end": 396, "text": "Gravano and Henzinger (2006)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "In a study more closely related to our work, Lin et al. (2008) proposed a method that performs word alignment between Chinese translations and English phrases within parentheses in crawled webpages. Their paper also proposed a novel and automatic evaluation method based on Wikipedia. The main difference from our work is that the alignment process in Lin et al. (2008) is done heuristically using a competitive linking algorithm proposed by Melamed (2000) , while we use a learning-based approach to align words and phrases. 
Moreover, in their method, only parenthetical translations are considered. With only the parenthetical pattern, their method is able to extract a significant number of translation pairs from crawled webpages without a given list of target English phrases. By restricting extraction to parenthetical surface patterns, however, their method may fail to capture many translation pairs in webpages, including term-translation pairs that are further apart. In our work, we exploit surface patterns differently, as a soft constraint in a CRF model, and use an approach similar to Lin et al. (2008) to evaluate our results.", "cite_spans": [ { "start": 45, "end": 62, "text": "Lin et al. (2008)", "ref_id": "BIBREF34" }, { "start": 352, "end": 369, "text": "Lin et al. (2008)", "ref_id": "BIBREF34" }, { "start": 442, "end": 456, "text": "Melamed (2000)", "ref_id": "BIBREF36" }, { "start": 1073, "end": 1090, "text": "Lin et al. (2008)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "In contrast to previous work in phrase and query translation, we present a learning-based approach that uses annotated data to develop the system. Nevertheless, we do not require human intervention to prepare the training data, but instead make use of language links in Wikipedia to obtain the training data automatically. The annotated data is further augmented with features indicative of translation and transliteration relations obtained from external lexical knowledge sources publicly available on the Web. The trained CRF sequence labeler then is used to find translations on the Web for a given term.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "In this work, we rely on several resources that are available on the Internet. 
These resources are used for different purposes: the seed data are used for obtaining and labeling training data, the gold standard is used for automatic evaluation, and the external knowledge sources are used for generating features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Resources", "sec_num": "3." }, { "text": "Wikipedia is an online encyclopedia compiled by volunteers around the world. Anyone on the Internet can edit existing entries or create new entries to add to Wikipedia. Owing to the number of its participants, Wikipedia has achieved both high quantity and a quality comparable to traditional encyclopedias compiled by experts (Giles, 2005) . Due to these reasons, Wikipedia has become the largest and most popular reference tool.", "cite_spans": [ { "start": 326, "end": 339, "text": "(Giles, 2005)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Wikipedia", "sec_num": "3.1" }, { "text": "We extracted bilingual title pairs from the English and Chinese editions of Wikipedia as the gold standard for evaluation and as seeds to automatically collect and label training data from the Internet by querying search engines. Entries on the same topic among different language editions of Wikipedia are interlinked via the so-called language links. Nevertheless, only a small percentage of English articles are linked to editions of other languages. The Chinese Wikipedia contains only 398,206 articles, making it roughly one-tenth the size of the English Wikipedia. Furthermore, only 5% of the entries in the English Wikipedia contain language links to their Chinese counterparts. The proposed method can be used to find the translations of those English terms, thus speeding up the process of building a more complete multilingual Wikipedia. 
As will be described in Section 4, we extracted the titles of English-Chinese article pairs connected by language links for training and testing purposes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Wikipedia", "sec_num": "3.1" }, { "text": "The content of Wikipedia is freely downloadable online. 1 We used the Google Freebase Wikipedia Extraction (WEX) instead of the official raw dump. The WEX is a processed version of the official dump, with the Wikipedia syntax transformed into XML. The WEX database can be freely downloaded online. 2", "cite_spans": [ { "start": 56, "end": 57, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Wikipedia", "sec_num": "3.1" }, { "text": "WordNet is a freely available, handcrafted lexical semantic database for English. 3 Developed since 1985 at Princeton University by a team of cognitive scientists, WordNet was originally intended to support psycholinguistic research. Over the years, WordNet has become increasingly popular in the fields of information retrieval, natural language processing, and artificial intelligence. Through each release, WordNet has grown into a comprehensive database of concepts in the English language. As of today, the stable 3.0 version of WordNet contains 207,000 semantic relations between 150,000 words organized in over 115,000 senses.", "cite_spans": [ { "start": 82, "end": 83, "text": "3", "ref_id": "BIBREF130" } ], "ref_spans": [], "eq_spans": [], "section": "WordNet", "sec_num": "3.2" }, { "text": "Senses in WordNet are represented as synonym sets (synsets). A synset with a definition contains one or more words, or lemmas, that express the same meaning. In addition, WordNet provides other information for each synset, including example sentences and estimated frequency. 
For example, the synset {block, city_block} is defined as a rectangular area in a city surrounded by streets, whereas the synset {block, cube} is defined as a three-dimensional shape with six square or rectangular sides. WordNet also records various semantic relations between its senses. These relations include hypernyms, hyponyms, coordinate terms, holonyms, and meronyms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WordNet", "sec_num": "3.2" }, { "text": "The Sinica Bilingual WordNet is part of the publicly accessible Sinica Bilingual Ontological WordNet (Sinica BOW) (Huang, 2003) . In this work, we treat the Sinica Bilingual WordNet as a bilingual dictionary, and use it as an external knowledge source to generate features for training the CRF model.", "cite_spans": [ { "start": 114, "end": 127, "text": "(Huang, 2003)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Sinica Bilingual WordNet", "sec_num": "3.3" }, { "text": "The Sinica Bilingual WordNet is a hand-crafted English-Chinese version of the original Princeton WordNet 1.6. It was compiled by collecting all possible Chinese translations of a synset's lemmas from various online bilingual dictionaries before a team of translators manually edited the acquired translations. For each synset, the translators selected at most three appropriate lexicalized words as translation equivalents. The Sinica BOW system is freely accessible online. 4 The Sinica Bilingual WordNet database can also be licensed for download. 5 ", "cite_spans": [ { "start": 479, "end": 480, "text": "4", "ref_id": "BIBREF131" }, { "start": 554, "end": 555, "text": "5", "ref_id": "BIBREF132" } ], "ref_spans": [], "eq_spans": [], "section": "Sinica Bilingual WordNet", "sec_num": "3.3" 
While the Sinica Bilingual WordNet mainly contains common nouns, the NICT database mainly contains technical terms and proper nouns. By combining the two resources, we can generate translational features covering both common nouns and proper nouns.", "cite_spans": [ { "start": 82, "end": 83, "text": "6", "ref_id": "BIBREF133" } ], "ref_spans": [], "eq_spans": [], "section": "NICT Bilingual Technical Term Database", "sec_num": "3.4" }, { "text": "The NICT Bilingual Technical Term Database is maintained by committees in the National Academy for Educational Research of Taiwan (formerly the National Institute for Compilation and Translation). The goal is to pursue more uniform and standardized translations for technical terms used in textbooks, patents, national standards, and open-source software. It contains over 1.1 million Chinese-English term translation pairs arranged into 72 categories (Table 9 ) and is kept up to date by constantly including more terms. Any user can suggest a new term and translation to the committees to be added to the database.", "cite_spans": [], "ref_spans": [ { "start": 449, "end": 457, "text": "(Table 9", "ref_id": "TABREF17" } ], "eq_spans": [], "section": "NICT Bilingual Technical Term Database", "sec_num": "3.4" }, { "text": "In 2006, Google published an n-gram dataset based on public webpages, licensed through the Linguistic Data Consortium. The Google Web 1T corpus is a 24 GB (gzip compressed) corpus that consists of n-grams, ranging from unigrams to five-grams, generated from approximately 1 trillion words in publicly accessible Web pages. 
In this work, we use the Web 1T corpus to select, for manual evaluation, unlinked entries in the English Wikipedia that occur with high frequency on the Web.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Google Web 1T N-grams", "sec_num": "3.5" }, { "text": "Submitting an English phrase (e.g., \"named-entity recognition\") to search engines to find translations or transliterations is a good strategy used by many translators (Quah, 2006) . Unfortunately, the user has to sift through snippets to find the translations. Such translations usually exhibit characteristics related to word translation, word transliteration, surface patterns, and proximity to the occurrences of the given phrase. To find translations for a given term on the Web, a promising approach is automatically learning to extract phrasal translations or transliterations of a given query using the conditional random fields (CRF) model. To avoid human effort in preparing annotated data for training the model, we use an automatic procedure to retrieve and tag mixed-code search engine snippets using a set of bilingual Wikipedia titles. We also propose using external knowledge sources (i.e., bilingual dictionaries, name lists, and terminology banks) to generate translational and transliterational features.", "cite_spans": [ { "start": 166, "end": 178, "text": "(Quah, 2006)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "4." }, { "text": "We focus on the issue of finding translations in mixed-code snippets returned by a search engine. The translations are identified, tallied, ranked, and returned as the output of the system. The returned translations can be used to supplement existing multilingual terminology banks, or used as additional training data for a machine translation system. 
Therefore, our goal is to return several reasonably precise translations that are available on the Web for the given phrase.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Statement", "sec_num": "4.1" }, { "text": "Problem Statement: Given a phrasal term P and a full-text search engine SE (e.g., Bing or Google) that operates over a mixed-code document collection (e.g., the Web), our goal is to retrieve a probable translation T of P via SE.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Statement", "sec_num": "4.1" }, { "text": "For this, we extract a set of translation candidates, c_1, ..., c_m, from a set of mixed-code snippets, s_1, ..., s_n, returned by SE, such that these candidates are likely to be translations T of P.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Statement", "sec_num": "4.1" }, { "text": "In the rest of this section, we describe our solution to this problem. First, we briefly introduce the Conditional Random Fields (CRF) model in Section 4.2. We describe a strategy (see Figure 1 ) for obtaining training data for identifying translations in snippets returned by SE (Section 4.3.2). This strategy relies on a set of term-translation pairs for training, derived from Wikipedia language links (Section 4.3.1). We will also describe our method for exploiting external knowledge sources to generate translation features (Section 4.3.2), transliteration features (Section 4.3.3), and distance features (Section 4.3.4) for sequence labeling. 
Finally, in Section 4.4, we describe how to extract and filter translations at run-time by applying the trained sequence labeler.", "cite_spans": [], "ref_spans": [ { "start": 257, "end": 265, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "on the Web based on Conditional Random Fields", "sec_num": null }, { "text": "Sequence labeling is the task of assigning labels from a finite set of categories to a sequence of observations. This problem is encountered in the field of computational linguistics, as well as in many other fields, including bioinformatics, speech recognition, and pattern recognition.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional Random Fields", "sec_num": "4.2" }, { "text": "Traditionally, the sequence labeling problem is often solved using the Hidden Markov Model (HMM) or the Maximum Entropy Markov Model (MEMM). Both HMM and MEMM are directed graphical models in which every outcome is conditioned on the corresponding observation node and the previous outcomes (i.e., the Markov property).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional Random Fields", "sec_num": "4.2" }, { "text": "Conditional Random Fields (CRF), proposed by Lafferty, McCallum, and Pereira (2001) , is considered the state-of-the-art sequence labeling algorithm. One of the major differences of CRF is that it is modeled as an undirected graph. For sequence labeling, the CRF graph is structured as an undirected linear chain (linear-chain CRF). CRF obeys the Markov property with respect to the undirected graph, as every outcome is conditioned on its neighboring outcomes and potentially the entire observation sequence. In our case, the outcomes are B, I, O labels that indicate a sequence of Chinese characters in the search engine snippets that is likely the translation or transliteration of the given English term. 
The observable information for sequence labeling consists of the characters in the snippets themselves and the three types of features we generate. ", "cite_spans": [ { "start": 45, "end": 83, "text": "Lafferty, McCallum, and Pereira (2001)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Conditional Random Fields", "sec_num": "4.2" }, { "text": "We attempt to learn to find translations or transliterations for given phrases on the Web. For this, we make use of language links in Wikipedia to obtain seed data, retrieve mixed-code snippets returned by a search engine, and augment feature values based on external knowledge sources. Our learning process is shown in Figure 1 .", "cite_spans": [], "ref_spans": [ { "start": 320, "end": 328, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Preparing Data for CRF Classifier", "sec_num": "4.3" }, { "text": "In the first stage of the training phase, we extracted Wikipedia English titles and their Chinese counterparts using the language links as the seed data for training. We use the English titles to query a search engine (e.g., Google or Bing) with the target Web page language set to Chinese. This strategy biases the search engine toward returning Chinese web pages interspersed with some English phrases. We then automatically labeled each Chinese character in the returned snippets, using the common BIO notation, with B, I, O indicating the beginning, inside, and outside of translations, respectively (e.g., \u652f\u63f4\u5411\u91cf\u6a5f zhiyuan-xiangliang-ji). An additional E tag is used to indicate the occurrences of the given term (e.g., support vector machine). The output of this stage is a set of tagged snippets that can be used to train a statistical sequence classifier for identifying translations. A sample of two tagged snippets, automatically generated from bilingual Wikipedia titles, is shown in Figure 3 . 
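The automatic labeling step described above can be sketched as follows, assuming both the Chinese translation and the English term are known from the Wikipedia seed pair (the function and variable names are ours, hypothetical, not from the paper):

```python
def bio_tag(snippet_tokens, translation_chars, term_words):
    """Label each snippet token: B/I over the known translation span,
    E over occurrences of the given English term's words, O otherwise."""
    tags = ["O"] * len(snippet_tokens)
    n, m = len(snippet_tokens), len(translation_chars)
    # Mark the Chinese translation with B at its start and I inside.
    for i in range(n - m + 1):
        if snippet_tokens[i:i + m] == translation_chars:
            tags[i] = "B"
            for j in range(i + 1, i + m):
                tags[j] = "I"
    # Mark the English term's words with E, case-insensitively.
    term = {w.lower() for w in term_words}
    for i, tok in enumerate(snippet_tokens):
        if tok.lower() in term:
            tags[i] = "E"
    return tags

tokens = ["提", "出", "的", "支", "持", "向", "量", "機",
          "(", "support", "vector", "machine", ")"]
print(bio_tag(tokens, ["支", "持", "向", "量", "機"],
              ["support", "vector", "machine"]))
# ['O', 'O', 'O', 'B', 'I', 'I', 'I', 'I', 'O', 'E', 'E', 'E', 'O']
```

The resulting B I I I I O E E E shape is exactly the parenthetical surface pattern discussed in the text, encoded as a label sequence rather than a hand-written rule.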
The E tags are designed to provide proximity cues for labeling the translation and to capture common surface patterns of the phrase and translation in mixed-code data. For example, in Figure 3 , the translation \u652f\u6301\u5411\u91cf\u6a5f (zhichi xiangliang ji) is tagged with one B tag and four I tags,", "cite_spans": [], "ref_spans": [ { "start": 988, "end": 996, "text": "Figure 3", "ref_id": "FIGREF3" }, { "start": 1180, "end": 1188, "text": "Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Retrieving and Tagging Snippets", "sec_num": "4.3.1" }, { "text": "1. \u20261995/O \u5e74/O \u63d0/O \u51fa/O \u7684/O \u652f/B \u6301/I \u5411/I \u91cf/I \u6a5f/I (/O support/E vector/E machine/E\uff0c/O SVM/O)/O \u4ee5/O \u8a13/O \u7df4/O \u2026 2. \u2026\u767c/O \u5149/O \u539f/O \u7406/O \u4e0d/O \u540c/O\u3002/O \u5149/B \u901a/I \u91cf/I luminous/E flux/E \u5149/O \u6e90/O \u5728/O \u55ae/O \u4f4d/O \u6642/O \u9593/O \u2026", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Retrieving and Tagging Snippets", "sec_num": "4.3.1" }, { "text": "followed by the left parenthesis and three E tags. The translation \u5149\u901a\u91cf (guangtong liang) is tagged with one B tag and two I tags, immediately followed by two E tags. Such sequences (i.e., B I I I I O E E E and B I I E E) are two of many common patterns.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "on the Web based on Conditional Random Fields", "sec_num": "29" }, { "text": "Note that we do not attempt to produce word alignment information, as done in Lin et al. (2008) . In contrast, we only use the BIO labeling scheme to indicate phrasal translations, leading to a smaller number of parameters to be estimated during the training process.", "cite_spans": [ { "start": 78, "end": 95, "text": "Lin et al. 
(2008)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "on the Web based on Conditional Random Fields", "sec_num": "29" }, { "text": "We generate translation features using external bilingual resources with the \u03c6\u00b2 score proposed by Gale and Church (1991) to measure the correlation between an English word and a Chinese character: \u03c6\u00b2(e, f) = (ad \u2212 bc)\u00b2 / ((a+b)(a+c)(b+d)(c+d)) (1), where e is an English word and f is a Chinese character occurring in bilingual phrase pairs, a is the number of entries containing both e and f, b the number containing e but not f, c the number containing f but not e, and d the number containing neither. We use Table 1 with only three entries to explain how the probabilities are calculated. We treat each entry in the dictionary as an event, and calculate the probability of each Chinese character and English word by counting the number of events containing them, as shown in Table 2 . Similarly, we can calculate the joint probability of an English word and a Chinese character by counting their co-occurrences in the dictionary. In Table 3 , we show the contingency tables calculated by counting co-occurrences in Bilingual WordNet and the NICT termbank for (\u5411 xiang, vector) , (\uf97e liang, vector) , and (\u6a5f ji, machine). The statistical association between an English word (e.g., vector) and its translation (e.g., \u5411 (xiang)) is indicated by the high count of co-occurrences, as well as the lower values of the two off-diagonal cells. From the contingency tables, we can calculate the corresponding \u03c6\u00b2 scores for \u5411 (xiang), \uf97e (liang), and \u6a5f (ji): 0.06530, 0.02880, and 0.09068. 
", "cite_spans": [ { "start": 99, "end": 121, "text": "Gale and Church (1991)", "ref_id": "BIBREF23" }, { "start": 842, "end": 859, "text": "(\u5411 xiang, vector)", "ref_id": null }, { "start": 862, "end": 879, "text": "(\uf97e liang, vector)", "ref_id": null } ], "ref_spans": [ { "start": 296, "end": 303, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 563, "end": 570, "text": "Table 2", "ref_id": "TABREF33" }, { "start": 721, "end": 728, "text": "Table 3", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Generating Translation Features", "sec_num": "4.3.2" }, { "text": "feat_translation(f) = 9 + log(max_{e \u2208 E} \u03c6\u00b2(e, f)) (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating Translation Features", "sec_num": "4.3.2" }, { "text": "where e is a word in the given English phrase E, and f is the Chinese character in a snippet. This feature value is rounded to a whole number in order to limit the number of distinct feature values. In Table 4 , we show the \u03c6\u00b2 scores of each Chinese character in snippets from searching Google with the given terms, i.e., support vector machine and luminous flux. Notice that there are some noisy feature values in the second example: the Chinese characters in the word \u767c\u5149 (faguang, glow or illuminate) have non-zero \u03c6\u00b2 scores. However, the tagger can potentially overcome such noise by relying on other features, such as the distance feature (Section 4.3.4). Moreover, in most cases there are multiple snippets for a given term, from which we can confidently identify the translations with higher frequencies. 
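The feature-value computation of Equation (2) can be sketched as below. The log base and the clipping at zero are our assumptions, chosen so that values fall in roughly the 0-8 range seen in Figure 4; the function name is illustrative:

```python
import math

def translation_feature(phrase_words, f, phi2_table):
    """Integer translation feature of Chinese character f:
    9 + log of the best phi-squared score over words of the query phrase,
    rounded to a whole number and clipped at zero (assumed details)."""
    best = max(phi2_table.get((e, f), 0.0) for e in phrase_words)
    if best <= 0.0:
        return 0  # no association with any query word
    return max(0, round(9 + math.log2(best)))
```

For example, the quoted score 0.06530 for (vector, \u5411) maps to a feature value of 5, matching the value tagged on \u5411 in the sample snippets.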
As an example, we", "cite_spans": [], "ref_spans": [ { "start": 202, "end": 209, "text": "Table 4", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Generating Translation Features", "sec_num": "4.3.2" }, { "text": "show two snippets tagged with translation features in Figure 4 . In this example, the translation characters are given feature values ranging from 2 to 7, while non-translation ones are mostly 0.", "cite_spans": [], "ref_spans": [ { "start": 54, "end": 62, "text": "Figure 4", "ref_id": "FIGREF12" } ], "eq_spans": [], "section": "on the Web based on Conditional Random Fields", "sec_num": "31" }, { "text": "1. ... 1995/0 \uf98e/0 \u63d0/0 \u51fa/0 \u7684/0 \u652f/7 \u6301/2 \u5411/6 \uf97e/5 \u6a5f/7 ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "on the Web based on Conditional Random Fields", "sec_num": "31" }, { "text": "(/0 support/E vector/E machine/E\uff0c/0 SVM/0 )/0 \u4ee5/0 \u8a13/0 \uf996/0 ... 2. ... \u767c/0 \u5149/5 \u539f/0 \uf9e4/0 \uf967/0 \u540c/0 \u3002/0 \u5149/5 \u901a/7 \uf97e/5 luminous/E flux/E \u5149/5 \u6e90/0 \u5728/0 \u55ae/0 \u4f4d/0 \u6642/0 \u9593/0 ...", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "on the Web based on Conditional Random Fields", "sec_num": "31" }, { "text": "We generate the additional features related to transliteration using some external knowledge resources. It is important to include transliteration in the feature set, since many named entities or technical terms are transliterated in full or partially into a foreign language. Thus, the translation feature described in Section 4.3.2 alone is not enough. 
For this, we collect transliterated titles from the entries connected with language links across the English and the Chinese Wikipedia to calculate correlation between the target transliteration characters and English sublexical strings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating Transliteration Features", "sec_num": "4.3.3" }, { "text": "We observed that names of persons and geographic locations are mostly transliterated, and that the entries titled with names of persons or locations can be extracted easily from Wikipedia using the categories of each entry. As will be described in Section 5, we extracted Wikipedia articles tagged with categories that match \"Birth in ...\" to find articles describing a person, and categories that matches \"Cities in ...\" and \"Capitals in ...\" to find titles describing a geographic location. We show some named entities in Table 6 . ", "cite_spans": [], "ref_spans": [ { "start": 524, "end": 531, "text": "Table 6", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Generating Transliteration Features", "sec_num": "4.3.3" }, { "text": "\u55ac\u5e03\u65af qiao-bu-si jobs j-o-bs, j-ob-s, jo-b-s \u74ca\u55ac qiong-qiao jonjo j-onjo, jo-njo, jon-jo, jonj-o \u55ac\u745f\u592b qiao-se-fu joseph j-o-seph, j-os-eph, j-ose-ph, j-osep-h, jo-s-eph, jo-se-ph, jo-sep-h, jos-e-ph, ... \u55ac\u51e1\u5c3c qiao-fan-ni giovanni g-i-ovanni, g-io-vanni, g-iov-anni, ..., gio-va-nni, gio-van-ni, gio-vann-i, ...", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating Transliteration Features", "sec_num": "4.3.3" }, { "text": "\u55ac\u51e1\u5c3c qiao-fan-ni gio-van-ni \uf925\u55ac\uf9dd\u7d0d la-qiao-li-na ra-joe-li-na \u5967\u55ac\u4e9e ao-qiao-ya o-cho-a", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating Transliteration Features", "sec_num": "4.3.3" }, { "text": "After obtaining the transliteration pairs from Wikipedia, we align the Chinese and English syllables. 
In Chinese, every character represents one syllable. However, the counterpart \"syllables\" in an English word are not as easy to determine. These counterparts are not syllables in the regular sense, for some counterpart \"syllables\" may contain a single consonant. We assume that every extracted Chinese and English transliteration pair contains the same number of syllables, i.e., equal to the number of Chinese characters. We also assume the syllables are transliterated in order. Under these assumptions, we can segment an English word into a number of segments equal to the number of characters in its Chinese transliteration, and align the English segments and Chinese characters in order. For example, as shown in Table 5 , the English name Joseph is transliterated into three Chinese characters, or syllables, \u55ac\u745f\u592b qiao-se-fu; therefore, the possible segmentations include: j-o-seph, j-os-eph, j-ose-ph, j-osep-h, jo-s-eph, jo-se-ph, jo-sep-h, jos-e-ph, ..., etc. We use the Expectation-Maximization (EM) algorithm to estimate the conditional probabilities P(f|e) modeling the correlation between the Romanized Chinese characters and their English counterparts. For Chinese characters that have ambiguous pronunciations, we use the Romanization of the most frequent pronunciation according to the Chinese Electronic Dictionary from Academia Sinica, available for download from the Association for Computational Linguistics and Chinese Language Processing (ACLCLP). 8 In the E-step, the expected log-likelihood of each segmentation candidate is evaluated using the current estimate of P(f|e). In the M-step, the conditional probability estimates are updated based on the maximum likelihood estimation (MLE) of the E-step. A few examples of the segmentation results are shown in Table 6 . 
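The candidate enumeration underlying this alignment can be sketched directly: an English name is cut into as many contiguous, non-empty segments as there are Chinese characters, and segments align to characters in order. The function name is illustrative:

```python
from itertools import combinations

def segmentations(word, k):
    """Yield all ways to cut `word` into k non-empty contiguous segments,
    i.e. all placements of k-1 cut points between its letters."""
    n = len(word)
    for cuts in combinations(range(1, n), k - 1):
        bounds = (0,) + cuts + (n,)
        yield [word[bounds[i]:bounds[i + 1]] for i in range(k)]
```

For Joseph and its three-character transliteration \u55ac\u745f\u592b, this yields the ten candidates j-o-seph through jo-se-ph through jos-e-ph; the E-step of EM then weights each candidate by its likelihood under the current P(f|e) estimates.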
After aligning the syllables in the transliteration pairs, we then calculate the conditional probability of the Romanized Chinese character and its English counterpart. Example output of three Romanized Chinese characters and their top English counterparts is shown in Table 7 .", "cite_spans": [ { "start": 1002, "end": 1079, "text": "os-eph, j-ose-ph, j-osep-h, jo-s-eph, jo-se-ph, jo-sep-h, jos-e-ph, ..., etc.", "ref_id": null }, { "start": 1583, "end": 1584, "text": "8", "ref_id": null } ], "ref_spans": [ { "start": 829, "end": 836, "text": "Table 5", "ref_id": "TABREF3" }, { "start": 1912, "end": 1919, "text": "Table 6", "ref_id": "TABREF4" }, { "start": 2191, "end": 2198, "text": "Table 7", "ref_id": "TABREF14" } ], "eq_spans": [], "section": "Generating Transliteration Features", "sec_num": "4.3.3" }, { "text": "Nevertheless, generating transliteration features for each Chinese character (Romanized) tends to produce a lot of false positives. Therefore, we assume that a named entity is transliterated into at least two Chinese characters, and generate the transliteration features of a Chinese character taking into consideration the preceding and following characters. Admittedly, we probably missed some transliteration cases, such as Jean and \u7434 (qin), but that represents a small loss.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "on the Web based on Conditional Random Fields", "sec_num": null }, { "text": "In general, this strategy works quite well for our purpose. 
For example, given the character sequence \u55ac\u5e03\u65af (qiao-bu-si) and the term Steve Jobs, to calculate the transliteration score for the Chinese character \u5e03 (bu), we calculate the probability of \u55ac\u5e03 (qiao-bu) and \u5e03\u65af (bu-si) being part of a transliteration of Steve or Jobs:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "on the Web based on Conditional Random Fields", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(bu | steve) = max(P(qiao-bu | steve), P(bu-si | steve)); P(bu | jobs) = max(P(qiao-bu | jobs), P(bu-si | jobs))", "eq_num": "(3)" } ], "section": "on the Web based on Conditional Random Fields", "sec_num": null }, { "text": "To calculate the conditional probability of the Chinese bi-character \u55ac\u5e03 qiao-bu given the English term jobs, we generate all substrings xy of jobs into which qiao-bu can be transliterated:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "on the Web based on Conditional Random Fields", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(qiao-bu | jobs) = argmax_{xy \u2208 jobs} P(qiao | x) P(bu | y), where xy \u2208 jobs denotes that the string xy is a substring of jobs", "eq_num": "(4)" } ], "section": "on the Web based on Conditional Random Fields", "sec_num": null }, { "text": "With this probabilistic value, we then generate the transliteration feature values in a similar way as described in Section 4.3.2:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "on the Web based on Conditional Random Fields", "sec_num": null }, { "text": "feat_transliteration(f) = 9 + log(argmax_{e \u2208 E} P(f | e))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "on the Web based on Conditional Random Fields", "sec_num": null }, { "text": "1. ... \u6cd5-fa/0 \u570b-guo/0 \uf9f7-li/0 \u9ad4-ti/2 \u4e3b-zhu/2 \u7fa9-yi/0 \u756b-hua/0 \u5bb6-jia/4 \u55ac-qiao/7 \u6cbb-zhi/7 \uff0e/0 \u5e03-bu/8 \uf925-la/8 \u514b-ke/4 (/0 georges/E braque/E)/0 ... 2. ... \u7b2c-di/0 62/0 \u5c46-jie/0 \u827e-ai/3 \u7f8e-mei/3 \u734e-jiang/0 \u9812-ban/0 \u734e-jiang/0 \u5178-dian/0 \uf9b6-li/0 \u300b/0(/0 the/0 62nd/0 Emmy/E Award/E )/0 ...", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "on the Web based on Conditional Random Fields", "sec_num": null }, { "text": "We show two examples of the data tagged with transliteration feature values in Figure 5 . In the first example, given the phrase Georges Braque, the name of a French painter, the goal is to find its Chinese transliteration \"\u55ac\u6cbb\uff0e\u5e03\uf925\u514b (qiao-zhi bu-la-ke)\". The respective feature scores for each of the characters in the transliteration are 7 7 0 8 8 4.
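The bigram scoring of Equations (3) and (4) can be sketched as follows. The probability table in the test is made up for illustration; in the paper, P(f|e) comes from the EM-aligned Wikipedia name pairs:

```python
def bigram_score(c1, c2, word, p):
    """Best score, over substrings of `word` split as x + y, of P(c1|x) * P(c2|y)
    (the maximization of Equation 4)."""
    best = 0.0
    for i in range(len(word)):
        for j in range(i + 2, len(word) + 1):  # substring word[i:j] of length >= 2
            for m in range(i + 1, j):          # split into x = word[i:m], y = word[m:j]
                score = p.get((c1, word[i:m]), 0.0) * p.get((c2, word[m:j]), 0.0)
                best = max(best, score)
    return best

def char_score(prev_c, c, next_c, phrase_words, p):
    """Equation (3): a character's score is the best bigram score of its
    (prev, c) and (c, next) contexts against any word of the English phrase."""
    best = 0.0
    for w in phrase_words:
        if prev_c is not None:
            best = max(best, bigram_score(prev_c, c, w, p))
        if next_c is not None:
            best = max(best, bigram_score(c, next_c, w, p))
    return best
```

Requiring a bigram context is exactly the design choice described above: it suppresses false positives from single characters at the cost of missing one-character transliterations such as Jean and \u7434 (qin).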
The symbol \"\uff0e\", with a feature value of zero, is commonly used in Chinese name transliteration to mark the boundary between the first and last names of a foreign name, and it can be identified as part of the answer by its surrounding transliteration feature scores and the surface pattern. Also in the first example, the Chinese character \u5bb6 (jia), the second syllable of \u756b\u5bb6 (hua-jia, painter), has a noisy non-zero feature value of four, due to the fact that the English syllable geo is often transliterated into the Chinese syllable jia. In the second example, the given phrase is Emmy Award, where the first part of the phrase, Emmy, is transliterated into \u827e\u7f8e (ai-mei), and the second part, Award, is translated into \u734e (jiang). The Chinese characters \u827e and \u7f8e both have a feature value of 3, while all other characters in the example have a feature value of zero. We also show this example tagged with all types of feature values we generate in Table 8 .", "cite_spans": [], "ref_spans": [ { "start": 79, "end": 87, "text": "Figure 5", "ref_id": null }, { "start": 1284, "end": 1291, "text": "Table 8", "ref_id": "TABREF15" } ], "eq_spans": [], "section": "Figure 5. Example of transliteration features given Georges Braque to find the Chinese transliteration \"\u55ac\u6cbb\uff0e\u5e03\uf925\u514b\" and given Emmy Award to find \"\u827e\u7f8e\u734e\"", "sec_num": null }, { "text": "Finally, we generate the distance features and train a CRF model. The distance feature is intended to exploit the fact that translations tend to occur near the source term, as pointed out in Nagata et al. (2001) and Wu et al. (2005) . Therefore, we incorporated the distance as an additional feature type, to impose a soft constraint on the locational relations between a translation and its English counterpart.", "cite_spans": [ { "start": 191, "end": 211, "text": "Nagata et al. (2001)", "ref_id": "BIBREF37" }, { "start": 216, "end": 232, "text": "Wu et al.
(2005)", "ref_id": "BIBREF42" } ], "ref_spans": [], "eq_spans": [], "section": "Generating Distance Features", "sec_num": "4.3.4" }, { "text": "An example with all three kinds of features and labels is shown in Table 8 . This example shows that the given term Emmy Award has a Chinese counterpart that is part transliteration (Emmy with the transliteration \u827e\u7f8e ai-mei) and part translation (Award with the translation \u734e jiang). This is a typical case that our method is designed to handle using both translation and transliteration features.", "cite_spans": [], "ref_spans": [ { "start": 70, "end": 77, "text": "Table 8", "ref_id": "TABREF15" } ], "eq_spans": [], "section": "Generating Distance Features", "sec_num": "4.3.4" }, { "text": "\u5c46 0 0 12 O \u827e 3 0 11 B (Emmy) \u7f8e 3 0 10 I (Award) \u734e 0 5 9 I \u9812 0 0 8 O (awarding) \u734e 0 0 7 O \u5178 0 0 6 O (ceremony) \uf9b6 0 0 5 O \u300b 0 0 4 O ( 0 0 3 O the 0 0 2 O 62nd 0 0 1 O Emmy 0 0 0 E Award 0 0 0 E ) 0 0 -1 O", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating Distance Features", "sec_num": "4.3.4" }, { "text": "Once the CRF model is automatically trained, we attempt to find translations for a given phrase using the procedure in Figure 6 .", "cite_spans": [], "ref_spans": [ { "start": 119, "end": 127, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Runtime Translation Extraction", "sec_num": "4.4" }, { "text": "In", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Runtime Translation Extraction", "sec_num": "4.4" }, { "text": "Step 1, the system submits the given phrase as a query to a search engine (SE) to retrieve snippets. Then, for each token in each snippet, we generate three kinds of features (Step 2). This process is exactly the same as in the training phase. In Step 3, we run the CRF model on the snippets to generate labels.
Then, in Step 4, we extract the Chinese strings with a sequence of B, I, ..., I tags as translation candidates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Runtime Translation Extraction", "sec_num": "4.4" }, { "text": "Step 5, we compute the frequency of all of the candidates identified in all snippets, and output the candidate with the highest frequency. When there is a tie with multiple candidates with the same highest frequency, one of them is randomly selected as the output.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finally, in", "sec_num": null }, { "text": "Procedure FindTranslation(P, SE):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finally, in", "sec_num": null }, { "text": "(1) Retrieve snippets by submitting P as a query to SE. (2) Generate the three types of feature values for each token in each snippet. (3) Run the trained CRF model on the snippets to produce labels. (4) Extract Chinese strings tagged with a B I ... I sequence as candidates. (5) Output the candidate with the highest redundancy (frequency). (In case of a tie, randomly select one of the most frequent.) Figure 6 . Pseudocode of the runtime phase.", "cite_spans": [], "ref_spans": [ { "start": 181, "end": 189, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Finally, in", "sec_num": null }, { "text": "We extracted the titles of English and Chinese articles that are connected through language links in Wikipedia using the Wikipedia dump created on 2010/08/16 (Google, 2010). We used a short list of stop words based on the rules pointed out by Lin et al. (2008) to exclude titles that are for administrative or other purposes. We obtained a total of 155,310 article pairs, from which we randomly selected 13,150 and 2,181 titles as seeds to obtain the training and test data, respectively, as described in Section 4.3.1. We then used the English-Chinese Bilingual WordNet 9 and NICT terminology bank (terms.nict.gov.tw/download_main.php) to generate translational features, in an effort to cover both common nouns and technical terms.
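The runtime procedure of Figure 6 can be sketched as below, with two stand-in callables: `search` returns snippets for a query and `tag` returns (token, label) pairs from the trained CRF. Both are our assumptions standing in for the search engine and the CRF decoder:

```python
from collections import Counter

def find_translation(phrase, search, tag):
    """Steps 1-5 of Figure 6: retrieve snippets, label them, collect
    B I ... I runs, and return the most frequent candidate."""
    counts = Counter()
    for snippet in search(phrase):             # Step 1: retrieve snippets
        candidate = []
        for token, label in tag(snippet, phrase):  # Steps 2-3: features + CRF labels
            if label == 'B':                   # Step 4: start of a candidate run
                candidate = [token]
            elif label == 'I' and candidate:   # continue the run
                candidate.append(token)
            else:                              # run ended: count the candidate
                if candidate:
                    counts[''.join(candidate)] += 1
                candidate = []
        if candidate:
            counts[''.join(candidate)] += 1
    # Step 5: highest-redundancy candidate (ties broken arbitrarily here)
    return counts.most_common(1)[0][0] if counts else None
```

Counting candidate frequency across all returned snippets is what lets redundant, correct translations outvote the occasional mislabeled run.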
The bilingual WordNet, translated from the original Princeton WordNet 1.6 has 99,642 synset entries, each with multiple lemmas and multiple translations, forming a total of some 850,000 translation pairs. The NICT database has over 1.1 million term translation pairs in 72 categories and covers a wide variety of different fields. See Table 9 for the numbers of entries in each of the 72 categories. For transliterational features, we extracted person or location entries in Wikipedia using such categories as \"Birth in ...\" to find titles for a person, and categories such as \"Cities in ...\" and \"Capitals in ...\" to find titles for a geographic location. A total of some 15,000 bilingual person names and 24,000 bilingual place names were obtained and forced aligned.", "cite_spans": [ { "start": 243, "end": 260, "text": "Lin et al. (2008)", "ref_id": "BIBREF34" } ], "ref_spans": [ { "start": 1069, "end": 1076, "text": "Table 9", "ref_id": "TABREF17" } ], "eq_spans": [], "section": "Evaluation", "sec_num": "5." }, { "text": "To compare our method with previous work, we used a similar evaluation procedure as described in Lin et al. (2008) . We ran the system and produced the translations for these 2,181 test data, and we automatically evaluated the results using the metrics of coverage and exact match precision based on the Wikipedia language links. We removed all search snippets from the wikipedia.org domain to ensure a strict separation of training and test datasets. This precision rate is an underestimation since a term may have many alternative translations that do not match exactly with the single reference translation. To obtain a more accurate estimate of the real precision rate, we resorted to manual evaluation.", "cite_spans": [ { "start": 97, "end": 114, "text": "Lin et al. 
(2008)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "on the Web based on Conditional Random Fields", "sec_num": null }, { "text": "We selected a small part of the 2,181 English phrases and manually evaluated the results. We report the results of automatic evaluation in Section 5.1 and the results of manual evaluation in Section 5.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "on the Web based on Conditional Random Fields", "sec_num": null }, { "text": "In this section, we describe the evaluation based on the set of 2,181 English-Chinese title pairs extracted from Wikipedia as the gold standard and automatically evaluate coverage (applicability) and exact match precision. Coverage is measured by the percentage of titles for which the proposed system produces some translations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic Evaluation", "sec_num": "5.1" }, { "text": "When translations were extracted, we selected the most frequent translations as output, and checked for exact match against the reference answer. Table 10 shows the results we obtained as compared to the results reported by Lin et al. (2008) .", "cite_spans": [ { "start": 224, "end": 241, "text": "Lin et al. (2008)", "ref_id": "BIBREF34" } ], "ref_spans": [ { "start": 146, "end": 154, "text": "Table 10", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Automatic Evaluation", "sec_num": "5.1" }, { "text": "We explored the performance differences of the systems employing different set of features. 
The systems evaluated are as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic Evaluation", "sec_num": "5.1" }, { "text": "\u2022 Full: the proposed system trained with all feature types.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic Evaluation", "sec_num": "5.1" }, { "text": "\u2022 -TL : the proposed system trained without the transliteration feature.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic Evaluation", "sec_num": "5.1" }, { "text": "\u2022 -TR : the proposed system trained without the translation feature.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic Evaluation", "sec_num": "5.1" }, { "text": "\u2022 -TL-TR : the proposed system only using the distance feature. No external knowledge used. \u2022 NICT : the freely available NICT technical term bilingual dictionary with 1,138,653 translation pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic Evaluation", "sec_num": "5.1" }, { "text": "Notice that, although Lin et al. (2008) also used bilingual Wikipedia title pairs for evaluation, they used an earlier snapshot of Wikipedia and worked with full webpages crawled from the Internet without a list of given terms. We worked with the list of English terms given as input, but worked only with search engine snippets. In the previous work, all of the bilingual title pairs extracted from Wikipedia were used for evaluation. In our work, only a portion of the title pairs were used for evaluation and the rest were used for generating the training data. It is often difficult to compare systems with different experimental settings. Nevertheless, the evaluation results seem to indicate that the proposed method compares favorably with the results reported in the previous work.", "cite_spans": [ { "start": 22, "end": 39, "text": "Lin et al. 
(2008)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Automatic Evaluation", "sec_num": "5.1" }, { "text": "With a given target English term as input, the proposed system uses a search engine to retrieve a limited set of relevant webpages, and attempts to find the Chinese translation within the retrieved text. The proposed system extracts translations in all cases without being limited by a set of a few surface patterns, and has a significantly higher coverage and precision rate than the previous method that relies on parenthetic patterns only.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic Evaluation", "sec_num": "5.1" }, { "text": "As shown in Table 10 , we found that using external knowledge to generate features improves system performance significantly. Adding the translation feature (-TL) or the transliteration feature (-TR) improves exact match precision by about 6% and 16%, respectively. Due to the fact that many Wikipedia titles are fully or partially transliterated into Chinese, the transliteration feature was found to be more important than the translation feature.", "cite_spans": [], "ref_spans": [ { "start": 12, "end": 20, "text": "Table 10", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Automatic Evaluation", "sec_num": "5.1" }, { "text": "The results also clearly show that finding translations on the Web has the advantage of better coverage than simply looking up phrases in a terminology bank (with a coverage rate of 24%), or a bilingual dictionary (with a coverage rate of 11%). 
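The two automatic metrics of Section 5.1 can be sketched as follows; the function and argument names are illustrative, with coverage as the share of test terms for which any output is produced and precision as the share of produced outputs that exactly match the single Wikipedia reference:

```python
def evaluate(outputs, references):
    """Coverage and exact-match precision against one reference per term.
    `outputs` maps each test term to its top candidate or None."""
    produced = [(t, o) for t, o in outputs.items() if o is not None]
    coverage = len(produced) / len(references)
    exact = sum(1 for t, o in produced if references.get(t) == o)
    precision = exact / len(produced) if produced else 0.0
    return coverage, precision
```

As the text notes, exact match against a single reference underestimates real precision, since many terms have several valid alternative translations; that is what motivates the manual evaluation in Section 5.2.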
Although using the NICT terminology bank or LDC bilingual dictionary directly has the worst performance, using them as external knowledge sources improves the performance of the CRF model significantly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic Evaluation", "sec_num": "5.1" }, { "text": "Overall, the full system performed the best, finding translations for 8 out of 10 phrases with an average exact match precision rate of over 40%. Nearly 60% of the exact matches appear in the Top 5 candidates. Leaving out the transliteration feature degraded the precision rate by 16%, far more than leaving out the translation feature. This is to be expected, since English Wikipedia has considerably more named entities with transliterated counterparts in Chinese.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic Evaluation", "sec_num": "5.1" }, { "text": "In this section, we present two sets of manual evaluation. In Section 5.2.1, we manually evaluate the results produced by the full system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Manual Evaluation", "sec_num": "5.2" }, { "text": "Since an English phrase is often translated into several Chinese counterparts, evaluation based on exact match against a single reference answer leads to under-estimation. Therefore, we asked a human judge to examine and mark the output of our full system. The judge was instructed to mark each output as A: correct translation alternative, B: correct translation but with a difference sense from the reference, P: partially correct translation, and E: incorrect translation. Table 11 shows 24 randomly selected translations that do not match the relevant reference translations. Half of the translations (12) are correct translations (A and B), while a third (8) are partially correct translation (P). Notice that it is a common practice to translate only the surname of a foreign person. 
So, four of the eight partial translations may be considered as correct.", "cite_spans": [], "ref_spans": [ { "start": 476, "end": 484, "text": "Table 11", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Error Analysis on Automatic Evaluation", "sec_num": "5.2.1" }, { "text": "In Table 12 , we show extracted candidates and frequency counts for 8 example terms. Translation candidates are marked using the same A, B, P, and E tags as in Table 11 , plus an additional tag, M, to indicate an exact match. For the given term money laundering, the system extracted 27 exact matches (\u6d17\u9322), 2 correct alternatives (\u6d17\u9ed1\u9322), and only 1 erroneous output from 30 snippets returned from the search engine. While technical terms like money laundering tend to have literal translations and result in more exact matches, movie titles are often translated into Chinese with completely different meanings. For example, the official Chinese title in Taiwan for the movie Music and Lyrics is \"K\u6b4c\u60c5\u4eba\" (meaning karaoke-song-lover). Given such a title as input, the system was able to extract 18 partial matches and 2 exact matches based on surface patterns and a modest translation feature value for music and \u6b4c (ge, song). For the given term colony, the system extracted \u83cc\uf918 (colony of fungi or bacteria), a correct translation with a different sense. Other extracted answers include the transliteration \u79d1\uf90f\u5c3c\u6d77\u5cf6\u9152\u5e97 (Island Colony), the name of a hotel, and the exact-match translation \u6b96\u6c11\u5730 (foreign-controlled territory). For the given term bubble sort, the partial translation \u6392\u5e8f (sort) is the top-1 translation (with a count of 20), while the top-2 to top-5 are either exact-match or acceptable translations. 
Note that this learning-based approach to mining translations and transliterations on the Web is an original contribution of our work. Previous works, such as Wu et al. (2005) and Lin et al. (2008) , simply used occurrence statistics to identify translations, which is roughly equivalent to our translational or transliterational features (see Section 4.3.2 and Section 4.3.3) . While Lin et al. used prefixes of 3 letters to provide a makeshift model of transliteration, we model the name-transliteration relations directly using an EM algorithm. Moreover, we also take their pattern of appearance into account to allow more effective extraction of relevant translations with the distance feature (see Section 4.3.4) . It is important to note that combining features inherent in the training data with features derived from external knowledge sources in a machine learning model allows us to cover more relevant translations, while filtering out many invalid candidates. ", "cite_spans": [ { "start": 1662, "end": 1678, "text": "Wu et al. (2005)", "ref_id": "BIBREF42" }, { "start": 1681, "end": 1698, "text": "Lin et al. (2008)", "ref_id": "BIBREF34" }, { "start": 1845, "end": 1877, "text": "Section 4.3.2 and Section 4.3.3)", "ref_id": null }, { "start": 2199, "end": 2213, "text": "Section 4.3.4)", "ref_id": null } ], "ref_spans": [ { "start": 3, "end": 11, "text": "Table 12", "ref_id": "TABREF0" }, { "start": 160, "end": 168, "text": "Table 11", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Error Analysis on Automatic Evaluation", "sec_num": "5.2.1" }, { "text": "We have presented a new method for mining translations on the Web for a given term. In our work, we use a set of terms and translations as seeds to obtain mixed-code snippets returned by a search engine, such as Google or Bing. 
We then automatically convert the snippets into a tagged sequence of tokens, automatically augment the data with features obtained from external knowledge sources, and automatically train a CRF model for sequence labeling. At runtime, we submit a query consisting of the given term to a search engine, tag the returned snippets using the trained model, and finally extract and rank the translation candidates for output. Preliminary experiments and evaluations show that our method cleanly combines various features, resulting in an integrated, learning-based system capable of finding both term translations and transliterations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6." }, { "text": "Many avenues exist for future research and improvement of our system. For example, existing query expansion methods could be implemented to retrieve more webpages containing translations of the given phrases (Zhang et al., 2005) . Translation features related to word parts (e.g., -lite in the term zeolite) could be used to improve identification of translations. Additionally, an interesting direction to explore is to identify phrase types and lengths (e.g., base NP and NP prep. NP) and train type-specific CRF models for better results. In addition, natural language processing techniques such as word stemming, word lemmatization, or derivational morphological transformation could also be attempted to improve recall and precision.", "cite_spans": [ { "start": 208, "end": 228, "text": "(Zhang et al., 2005)", "ref_id": "BIBREF43" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6." }, { "text": "Another interesting direction to explore is using a robot to crawl webpages and filter mixed-code data to derive the translation features. 
With the crawled web pages, we can extract translations offline, without having to work with a search engine and its limited returned snippets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6." }, { "text": "Yet another direction of research would be to enhance the effectiveness of translation features by working on the level of Chinese words instead of characters. For that, we could either use an existing, general-purpose word segmenter or carry out self-organized word segmentation (Sproat & Shih, 1990) to produce word-based translation features. ", "cite_spans": [ { "start": 280, "end": 301, "text": "(Sproat & Shih, 1990)", "ref_id": "BIBREF41" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6." }, { "text": "The term Machine Translation is a standard name for computerized systems responsible for the production of translations from one natural language into another with or without human assistance. It is a sub-field of computational linguistics that investigates the use of computer software to translate text or speech from one natural language to another. Many attempts are being made all over the world to develop machine translation systems for various languages using rule-based as well as statistical approaches. Development of a full-fledged bilingual machine translation (MT) system for any two natural languages with limited electronic resources and tools is a challenging and demanding task. In order to achieve reasonable translation quality in open-domain tasks, corpus-based machine translation approaches require large amounts of parallel corpora, which are not always available, especially for less resourced language pairs. On the other hand, the rule-based machine translation process is extremely time-consuming and difficult, and it fails to accurately analyze large corpora of unrestricted text. 
Even though there have been efforts towards building English to Indian language and Indian language to Indian language translation systems, unfortunately, we do not have an efficient translation system as of today. The literature shows that there have been many attempts in MT for English to Indian languages and Indian languages to Indian languages. At present, a number of government and private sector projects are working towards developing full-fledged MT for Indian languages. This paper gives a brief description of the various approaches and major machine translation developments in India.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null }, { "text": "MT refers to the use of computers to automate some of the tasks or the entire task of translating between human languages. Development of a full-fledged bilingual MT system for any two natural languages with limited electronic resources and tools is a challenging and demanding task. Many attempts are being made all over the world to develop MT systems for various languages using rule-based as well as statistical approaches. MT systems can be designed either specifically for two particular languages, called a bilingual system, or for more than a single pair of languages, called a multilingual system. A bilingual system may be either unidirectional, from one Source Language (SL) into one Target Language (TL), or may be bidirectional. Multilingual systems are usually designed to be bidirectional, but most bilingual systems are unidirectional. MT methodologies are commonly categorized as direct, transfer, and Interlingua. The methodologies differ in the depth of analysis of the SL and the extent to which they attempt to reach a language-independent representation of meaning or intent between the source and target languages. Barriers to good-quality MT output can be attributed to ambiguity in natural languages. 
Ambiguity can be classified into two types: structural ambiguity and lexical ambiguity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "India is a linguistically rich area. It has 18 constitutional languages, which are written in 10 different scripts. Hindi is the official language of the Union. Many of the states have their own regional language, which is either Hindi or one of the other constitutional languages. In addition, English is very widely used for media, commerce, science and technology, and education, although only about 5% of the world's population speaks English as a first language. In such a situation, there is a large market for translation between English and the various Indian languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Even though MT in India started more than two decades ago, it is still an ongoing process. The third section of this paper discusses various approaches used in English to Indian language and Indian language to Indian language MT systems. The fourth section gives a brief explanation of different MT attempts for English to Indian languages and Indian languages to Indian languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The major changeovers in MT systems are as shown in Figure 1 . The theory of MT pre-dates computers, with philosophers Leibniz's and Descartes' ideas of using codes to relate words between languages in the seventeenth century (Hutchins et al., 1993) . The early 1930s saw the first patents for 'translating machines'. Georges Artsrouni was issued a patent in France in July 1933. He developed a device, which he called a 'cerveau m\u00e9canique' (mechanical brain), that could translate between languages using four components: memory, a keyboard for input, a search method, and an output mechanism. 
The search method was basically a dictionary look-up in the memory; therefore, Hutchins is reluctant to call it a translation system. Another early proposal was similar to the Apertium system in its use of a bilingual dictionary and a three-stage process: first, a native-speaking human editor of the SL pre-processed the text; then, the machine performed the translation; and, finally, a native-speaking human editor of the TL post-edited the text (Hutchins et al., 1993; Hutchins et al., 2000) .", "cite_spans": [ { "start": 224, "end": 247, "text": "(Hutchins et al., 1993)", "ref_id": "BIBREF60" }, { "start": 1006, "end": 1029, "text": "(Hutchins et al., 1993;", "ref_id": "BIBREF60" }, { "start": 1030, "end": 1052, "text": "Hutchins et al., 2000)", "ref_id": "BIBREF62" } ], "ref_spans": [ { "start": 52, "end": 60, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "History of MT", "sec_num": "2." }, { "text": "After the birth of the computer, the Electronic Numerical Integrator and Computer (ENIAC), research began in 1947 on using computers as aids for translating natural languages (Hutchins et al., 2005) . In 1949, Weaver wrote a memorandum, putting forward various proposals (based on the wartime successes in code breaking) on the developments in information theory and speculation about universal principles underlying natural languages (Weaver et al., 1999) . The first public demonstration of MT, the Georgetown-IBM experiment, proved deceptively promising and encouraged financing of further research in the field. In the decade of optimism, from 1954 to 1966, researchers made many predictions of imminent 'breakthroughs'. In 1966, the Automated Language Processing Advisory Committee (ALPAC) report was submitted, which concluded that there were no straightforward solutions to the 'semantic barriers'. The ALPAC committee could not find any \"pressing need for MT\" nor \"an unfulfilled need for translation (ALPAC et al., 1996) \". 
This report brought MT research to its knees, suspending virtually all research in the United States of America (USA), while some research continued in Canada, France, and Germany (Hutchins et al., 2005) . After the ALPAC report, MT was almost ignored from 1966 to 1980. In the 1954 Georgetown-IBM experiment mentioned above (IBM, 1954) , over 60 Russian sentences had been translated smoothly into English using 6 rules and a bilingual dictionary consisting of 250 Russian words, with rule-signs assigned to words with more than one meaning. Although Professor Leon Dostert had cautioned that this experimental demonstration was only a scientific sample, or \"a Kitty Hawk of electronic translation,\" a wide variety of MT systems emerged after 1980 from various countries, and research continued on more advanced methods and techniques. Those systems mostly were comprised of indirect translations or used an 'interlingua' as an intermediary. In the 1990s, Statistical Machine Translation (SMT), pioneered in IBM's CANDIDE system, and what is now known as Example-based Machine Translation (EBMT) saw the light of day. At this time, the focus of MT began to shift somewhat from pure research to practical application using a hybrid approach. Moving towards the change of the millennium, MT became more readily available to individuals via online services and software for their personal computers.", "cite_spans": [ { "start": 169, "end": 192, "text": "(Hutchins et al., 2005)", "ref_id": "BIBREF61" }, { "start": 591, "end": 612, "text": "(Weaver et al., 1999)", "ref_id": "BIBREF85" }, { "start": 1012, "end": 1032, "text": "(ALPAC et al., 1996)", "ref_id": null }, { "start": 1215, "end": 1238, "text": "(Hutchins et al., 2005)", "ref_id": "BIBREF61" }, { "start": 2134, "end": 2145, "text": "(IBM, 1954)", "ref_id": "BIBREF63" } ], "ref_spans": [], "eq_spans": [], "section": "History of MT", "sec_num": "2." 
}, { "text": "Generally, MT is classified into seven broad categories: rule-based, statistical-based, hybrid-based, example-based, knowledge-based, principle-based, and online interactive methods. The first three MT approaches are the most widely used and earliest methods. The literature shows that there have been fruitful attempts using all of these approaches for the development of English to Indian language as well as Indian language to Indian language systems. At present, most MT-related research is based on statistical and example-based approaches. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MT Approaches", "sec_num": "3." }, { "text": "In the field of MT, the rule-based approach is the first strategy that was developed. A Rule-Based Machine Translation (RBMT) system consists of a collection of rules, called grammar rules, a bilingual or multilingual lexicon, and software programs to process the rules.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule-based Approach", "sec_num": "3.1" }, { "text": "Building RBMT systems entails a huge human effort to code all of the linguistic resources, such as source-side part-of-speech taggers and syntactic parsers, bilingual dictionaries, source-to-target transliteration, a TL morphological generator, structural transfer, and reordering rules. Nevertheless, an RBMT system is always extensible and maintainable. Rules play a major role in various stages of translation, such as syntactic processing, semantic interpretation, and contextual processing of language. Generally, rules are written with linguistic knowledge gathered from linguists. Direct MT, transfer-based MT, Interlingua MT, and dictionary-based MT are the four different approaches that come under the RBMT category. In the case of English to Indian language and Indian language to Indian language MT systems, there have been fruitful attempts with all four approaches. 
The main idea behind these rule-based approaches is as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule-based Approach", "sec_num": "3.1" }, { "text": "In the direct translation method, the SL text is analysed structurally up to the morphological level, and the system is designed for a specific source and target language pair (Noone et al., 2003) . The performance of a direct MT system depends on the quality and quantity of the source-target language dictionaries, morphological analysis, text processing software, and word-by-word translation with minor grammatical adjustments on word order and morphology.", "cite_spans": [ { "start": 164, "end": 184, "text": "(Noone et al., 2003;", "ref_id": "BIBREF72" } ], "ref_spans": [], "eq_spans": [], "section": "Direct Translation", "sec_num": "3.1.1" }, { "text": "The next stage of progress in the development of MT systems was the Interlingua approach, where translation is performed by first representing the SL text in an intermediary (semantic) form called the Interlingua. The advantage of this approach is that the Interlingua is a language-independent representation from which translations can be generated to different TLs. Thus, the translation consists of two stages, where the SL is first converted into the Interlingua (IL) form before translation from the IL to the TL. The main advantage of this Interlingua approach is that the analyzer and parser for the SL are independent of the generator for the TL. There are two main drawbacks to the Interlingua approach. The first is the difficulty of defining the Interlingua. The second is that the Interlingua does not take advantage of similarities between languages, such as in translation between Dravidian languages. 
Nevertheless, the advantage of the Interlingua approach is that it is economical in situations where translation among multiple languages is involved (Shachi et al., 2001) .", "cite_spans": [ { "start": 1059, "end": 1079, "text": "(Shachi et al., 2001", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Interlingua Based Translation", "sec_num": "3.1.2" }, { "text": "Starting with the shallowest level at the bottom, direct transfer is made at the word level. Moving upward through syntactic and semantic transfer approaches, the translation occurs on representations of the source sentence structure and meaning, respectively. Finally, at the interlingual level, the notion of transfer is replaced with a single underlying representation, called the 'Interlingua', that represents both the source and target texts simultaneously. Moving up the triangle reduces the amount of work required to traverse the gap between languages, at the cost of increasing the required amount of analysis and synthesis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interlingua Based Translation", "sec_num": "3.1.2" }, { "text": "Because of the disadvantages of the Interlingua approach, a better rule-based translation approach was developed, called the transfer approach. Recently, many research groups have been using this third approach for their MT systems, both abroad and in India. On the basis of the structural differences between the source and target language, a transfer system can be broken down into three different stages: i) Analysis, ii) Transfer, and iii) Generation. In the first stage, the SL parser is used to produce the syntactic representation of an SL sentence. In the next stage, the result of the first stage is converted into equivalent TL-oriented representations. 
In the final step of this translation approach, a TL morphological generator is used to produce the final TL texts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transfer Based Translation", "sec_num": "3.1.3" }, { "text": "The statistical approach comes under Empirical Machine Translation (EMT) systems, which rely on large parallel aligned corpora. Statistical machine translation is a data-oriented statistical framework for translating text from one natural language to another based on the knowledge and statistical models extracted from bilingual corpora. In statistical MT, bilingual or multilingual textual corpora of the source and target language or languages are required. A supervised or unsupervised statistical machine learning algorithm is used to build statistical tables from the corpora, and this process is called learning or training (Zhang et al., 2006) . The statistical tables consist of statistical information, such as the characteristics of well-formed sentences and the correlation between the languages. During translation, the collected statistical information is used to find the best translation for the input sentences, and this translation step is called the decoding process. There are three different statistical approaches in MT: word-based translation, phrase-based translation, and the hierarchical phrase-based model. The idea behind SMT comes from information theory. A document is translated according to the probability distribution p(e|f), which is the probability of translating a sentence f in the SL F (for example, English) into a sentence e in the TL E (for example, Kannada).", "cite_spans": [ { "start": 641, "end": 661, "text": "(Zhang et al., 2006)", "ref_id": "BIBREF87" } ], "ref_spans": [], "eq_spans": [], "section": "Statistical-based Approach", "sec_num": "3.2" }, { "text": "The problem of modeling the probability distribution p(e|f) has been approached in a number of ways. 
One intuitive approach is to apply Bayes' theorem. That is, if p(f|e) and p(e) denote the translation model and the language model, respectively, then p(e|f) \u221d p(f|e)p(e). The translation model p(f|e) is the probability that the source sentence is the translation of the target sentence, or the way sentences in E get converted to sentences in F. The language model p(e) is the probability of seeing that TL string, or the kind of sentences that are likely in the language E. This decomposition is attractive, as it splits the problem into two sub-problems. Finding the best translation e* is done by picking the candidate e that gives the highest probability p(e)p(f|e), as shown in Equation 1. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical-based Approach", "sec_num": "3.2" }, { "text": "Even though phrase-based models have emerged as the most successful method for SMT, they do not handle syntax in a natural way. Reordering of phrases during translation is typically managed by distortion models in SMT. Nevertheless, this reordering process is entirely unsatisfactory, especially for language pairs that differ a lot in terms of word order. In the proposed project, the problem of structural differences between source and target languages is overcome successfully with a reordering task. We have also proven that, with the use of morphological information, especially for a morphologically rich language like Kannada, the training data size can be reduced considerably with an improvement in performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical-based Approach", "sec_num": "3.2" }, { "text": "As the name suggests, the words in an input sentence are translated word by word individually, and these words finally are arranged in a specific way to get the target sentence. The alignment between the words in the input and output sentences normally follows certain patterns in word-based translation. 
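As a toy instance of this noisy-channel decomposition, consider scoring a handful of candidate translations with hand-invented model probabilities (all numbers below are illustrative assumptions, not estimates from any corpus):

```python
import math

# Invented probabilities for three candidate translations e of a
# fixed source sentence f: a language model p(e) and a translation
# model p(f|e).
lm = {"he is small": 0.4, "it is small": 0.5, "it is little": 0.1}
tm = {"he is small": 0.2, "it is small": 0.3, "it is little": 0.3}

def best_translation(candidates):
    """Pick e* = argmax_e p(e) * p(f|e), working in log space as
    real decoders do to avoid numerical underflow."""
    return max(candidates, key=lambda e: math.log(lm[e]) + math.log(tm[e]))

print(best_translation(lm))  # 'it is small' (0.5 * 0.3 beats 0.4 * 0.2)
```

The split into two sub-problems shows up directly: the language model rewards fluent target sentences, while the translation model rewards faithfulness to the source, and the decoder simply maximizes their product.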
This approach was the very first attempt at statistical MT and is comparatively simple and efficient. The main disadvantage of this approach is the oversimplified word-by-word translation of sentences, which may reduce the performance of the translation system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Based Translation", "sec_num": "3.2.1" }, { "text": "A more accurate SMT approach, called phrase-based translation (Koehn et al., 2003) , was introduced, where each source and target sentence is divided into separate phrases instead of words before translation. The alignment between the phrases in the input and output sentences normally follows certain patterns, which is very similar to word-based translation. Even though phrase-based models result in better performance than word-based translation, they did not improve the modeling of sentence order patterns. The alignment model is based on flat reordering patterns, and experiments show that this reordering technique may perform well with local phrase orders but not as well with long sentences and complex orders.", "cite_spans": [ { "start": 62, "end": 82, "text": "(Koehn et al., 2003)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Phrase Based Translation", "sec_num": "3.2.2" }, { "text": "By considering the drawbacks of the previous two methods, Chiang (2005) developed a more sophisticated SMT approach, called the hierarchical phrase-based model. The advantage of this approach is that hierarchical phrases have recursive structures instead of simple phrases. 
This higher level of abstraction further improved the accuracy of the SMT system.", "cite_spans": [ { "start": 53, "end": 66, "text": "Chiang (2005)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Phrase Based model", "sec_num": "3.2.3" }, { "text": "By taking advantage of both statistical and rule-based translation methodologies, a new approach was developed, called the hybrid approach, which has proven to have better efficiency in the area of MT systems. At present, several government and private MT groups use this hybrid approach to develop translation from source to target language based on both rules and statistics. The hybrid approach can be used in a number of different ways. In some cases, translations are performed in the first stage using a rule-based approach, followed by adjusting or correcting the output using statistical information. Alternatively, rules are used to pre-process the input data as well as to post-process the output of a statistical translation system. This latter technique is better than the former and has more power, flexibility, and control in translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hybrid-based Translation", "sec_num": "3.3" }, { "text": "Hybrid approaches integrating more than one MT paradigm are receiving increasing attention. The METIS-II MT system is an example of hybridization around the EBMT framework; it avoids the usual need for parallel corpora by using a bilingual dictionary (similar to that found in most RBMT systems) and a monolingual corpus in the TL (Dirix et al., 2005) . An example of hybridization around the rule-based paradigm is given by Oepen. 
It integrates statistical methods within an RBMT system to choose the best translation from a set of competing hypotheses (translations) generated using rule-based methods (Oepen et al., 2007) .", "cite_spans": [ { "start": 331, "end": 351, "text": "(Dirix et al., 2005)", "ref_id": "BIBREF54" }, { "start": 604, "end": 624, "text": "(Oepen et al., 2007)", "ref_id": "BIBREF73" } ], "ref_spans": [], "eq_spans": [], "section": "Hybrid-based Translation", "sec_num": "3.3" }, { "text": "In SMT, Koehn and Hoang integrate additional annotations at the word level into the translation models in order to better learn some aspects of the translation that are best explained on a morphological, syntactic, or semantic level (Koehn et al., 2007) . Hybridization around the statistical approach to MT is provided by Groves and Way; they combine both corpus-based methods into a single MT system by incorporating phrases (sub-sentential chunks) from both EBMT and SMT into an SMT system (Groves et al., 2005) . A different hybridization happens when an RBMT system and an SMT system are used in a cascade; Simard proposed an approach, analogous to that of Dugast, using an SMT system as an automatic post-editor of the translations produced by an RBMT system (Simard et al., 2007) (Dugast et al., 2007) .", "cite_spans": [ { "start": 233, "end": 253, "text": "(Koehn et al., 2007)", "ref_id": "BIBREF66" }, { "start": 493, "end": 514, "text": "(Groves et al., 2005)", "ref_id": "BIBREF58" }, { "start": 765, "end": 786, "text": "(Simard et al., 2007)", "ref_id": "BIBREF79" }, { "start": 787, "end": 808, "text": "(Dugast et al., 2007)", "ref_id": "BIBREF55" } ], "ref_spans": [], "eq_spans": [], "section": "Hybrid-based Translation", "sec_num": "3.3" }, { "text": "The example-based translation approach, proposed by Makoto Nagao in 1984, is based on analogical reasoning between translation examples. 
At run time, an example-based translation system is characterized by its use of a bilingual corpus as its main knowledge base. The example-based approach comes under the EMT paradigm, which relies on large parallel aligned corpora.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example-based translation", "sec_num": "3.4" }, { "text": "Example-based translation is essentially translation by analogy. An EBMT system is given a set of sentences in the SL (from which one is translating) and their corresponding translations in the TL, and uses those examples to translate other, similar source-language sentences into the TL. The basic premise is that, if a previously translated sentence occurs again, the same translation is likely to be correct again. EBMT systems are attractive in that they require a minimum of prior knowledge; therefore, they are quickly adaptable to many language pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example-based translation", "sec_num": "3.4" }, { "text": "A restricted form of example-based translation is available commercially, known as a translation memory. In a translation memory, as the user translates text, the translations are added to a database, and when the same sentence occurs again, the previous translation is inserted into the translated document. This saves the user the effort of re-translating that sentence, and is particularly effective when translating a new revision of a previously-translated document.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example-based translation", "sec_num": "3.4" }, { "text": "More advanced translation memory systems will also return close but inexact matches, on the assumption that editing the translation of the close match will take less time than generating a translation from scratch. 
ALEPH, wEBMT, English to Turkish, English to Japanese, English to Sanskrit, and PanEBMT are some examples of example-based MT systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example-based translation", "sec_num": "3.4" }, { "text": "Knowledge-Based Machine Translation (KBMT) is characterized by a heavy emphasis on functionally complete understanding of the source text prior to the translation into the target text. KBMT does not require total understanding, but assumes that an interpretation engine can achieve successful translation into several languages. KBMT is implemented on the Interlingua architecture; it differs from other interlingual techniques in the depth with which it analyzes the SL and its reliance on explicit knowledge of the world.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge-Based MT", "sec_num": "3.5" }, { "text": "KBMT must be supported by world knowledge and by linguistic semantic knowledge about the meanings of words and their combinations. Thus, a specific language is needed to represent the meaning of languages. Once the SL is analyzed, it is run through the augmenter. It is the knowledge base that converts the source representation into an appropriate target representation before synthesizing the target sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge-Based MT", "sec_num": "3.5" }, { "text": "KBMT systems provide high-quality translations. Nevertheless, they are quite expensive to produce due to the large amount of knowledge needed to accurately represent sentences in different languages. The English-Vietnamese MT system is one example of a KBMT system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "57", "sec_num": null }, { "text": "Principle-Based Machine Translation (PBMT) systems employ parsing methods based on the Principles & Parameters Theory of Chomsky's Generative Grammar. 
The parser generates a detailed syntactic structure that contains lexical, phrasal, grammatical, and thematic information. It also focuses on robustness, language-neutral representations, and deep linguistic analyses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Principle-Based MT", "sec_num": "3.6" }, { "text": "In PBMT, the grammar is thought of as a set of language-independent, interactive, well-formed principles and a set of language-dependent parameters. Thus, for a system that uses n languages, one must have n parameter modules and a principles module. This makes it well-suited for use with the interlingual architecture.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Principle-Based MT", "sec_num": "3.6" }, { "text": "PBMT parsing methods differ from rule-based approaches, which, although efficient in many circumstances, have the drawback of language-dependence and an exponential increase in rules in a multilingual translation system. PBMT systems provide broad coverage of many linguistic phenomena but lack the deep knowledge about the translation domain that KBMT and EBMT systems employ. Another drawback of current PBMT systems is the lack of an efficient method for applying the different principles. UNITRAN is one example of a PBMT system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Principle-Based MT", "sec_num": "3.6" }, { "text": "In this interactive translation system, the user is allowed to suggest the correct translation to the translator online. This approach is very useful in situations where the context of a word is unclear and there exist many possible meanings for a particular word. 
In such cases, the ambiguity can be resolved with the interpretation of the user.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Online Interactive Systems", "sec_num": "3.7" }, { "text": "The first public Russian-to-English MT system (Manning et al., 2003) was presented at Georgetown University in 1954 with a vocabulary size of around 250 words. Since then, many research projects have been devoted to MT. Nevertheless, as the complexity of the linguistic phenomena involved in the translation process, together with the computational limitations of the time, became apparent, enthusiasm faded quickly. Also, the results of two negative reports, namely 'Bar-Hillel' and 'ALPAC,' had a dramatic impact on MT research in that decade.", "cite_spans": [ { "start": 36, "end": 58, "text": "(Manning et al., 2003)", "ref_id": "BIBREF68" } ], "ref_spans": [], "eq_spans": [], "section": "Major MT Developments in India: A Literature Survey", "sec_num": "4." }, { "text": "During the 1970s, the focus of MT activity switched from the United States to Canada and Europe, especially due to the growing demands for translations within their multicultural societies. 'M\u00e9t\u00e9o,' a fully-automatic system translating weather forecasts, enjoyed great success in Canada. Meanwhile, the European Commission installed a French-English MT system called 'Systran'. Other research projects, such as 'Eurotra,' 'Ariane,' and 'Susy,' broadened the scope of MT objectives and techniques. The rule-based approaches emerged as the correct path to successful MT quality. Throughout the 1980s, many different types of MT systems appeared, with the most prevalent being those using an intermediate semantic language, such as the 'Interlingua' approach. 
}, { "text": "Lately, various researchers have shown better translation quality with the use of phrase-based translation. Most competitive SMT systems, such as those of CMU, IBM, ISI, and Google, use phrase-based models with good results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Major MT Developments in India: A Literature Survey", "sec_num": "4." }, { "text": "In the early 1990s, the statistical methods that IBM researchers had successfully applied to speech recognition were carried over into purely statistical MT models (Manning et al., 2003). The drastic increase in computational power and the increasing availability of written translated texts allowed the development of statistical and other corpus-based MT approaches. Many academic tools turned into useful commercial translation products, and several translation engines were quickly offered on the World Wide Web.", "cite_spans": [ { "start": 159, "end": 181, "text": "(Manning et al., 2003)", "ref_id": "BIBREF68" } ], "ref_spans": [], "eq_spans": [], "section": "Major MT Developments in India: A Literature Survey", "sec_num": "4." }, { "text": "Today, there is a growing demand for high-quality automatic translation. Almost all of the research community has moved towards corpus-based techniques, which have systematically outperformed traditional knowledge-based techniques in most performance comparisons. Every year, more research groups embark on SMT experimentation, and there is renewed optimism regarding future progress within the community.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Major MT Developments in India: A Literature Survey", "sec_num": "4." }, { "text": "MT for Indian languages is an emerging research area in NLP that started more than a decade ago. There have been a number of attempts at MT from English to Indian languages and between Indian languages using different approaches.
The literature shows that the earliest published work was undertaken by Chakraborty in 1966 (Noone et al., 2003). Many government and private sector researchers, as well as individuals, are actively involved in the development of MT systems and have generated some reasonable MT systems. Some of these MT systems are at the advanced prototype or technology transfer stage, and the rest have been newly initiated. The main developments in Indian language MT systems are as follows.", "cite_spans": [ { "start": 333, "end": 353, "text": "(Noone et al., 2003)", "ref_id": "BIBREF72" } ], "ref_spans": [], "eq_spans": [], "section": "Major MT Developments in India: A Literature Survey", "sec_num": "4." }, { "text": "ANGLABHARTI is a multilingual machine-aided translation project for translation from English to Indian languages, primarily Hindi, based on a pattern-directed approach (Durgesh et al., 2000; Sinha et al., 1995; Ajai et al., 2009; Manning et al., 2003; Sudip et al., 2005). The strategy in this MT system lies above the transfer approach but below the interlingua approach. In the first stage, pattern-directed parsing is performed on the English SL sentence, which generates a 'pseudo-target' that is applicable to a set of Indian languages. Word sense ambiguity in the SL sentence is also resolved by a number of semantic tags. In order to transform the pseudo TL into the corresponding TL, the system uses a separate text generator module. After correcting all ill-formed target sentences, a post-editing package is used to make the final corrections. Even though it is a general-purpose system, at present it has been applied mainly in the domain of public health.
The ANGLABHARTI approach is currently implemented for English to Hindi translation as AnglaHindi, which is web-enabled (http://anglahindi.iitk.ac.in) and has obtained good domain-specific results for health campaigns, successfully translating many pamphlets and medical booklets. At present, further research work is under way to extend this approach to English to Telugu/Tamil translation. The project is primarily based at IIT-Kanpur, in collaboration with ER&DCI, Noida, and has been funded by TDIL. Professor RMK Sinha, Indian Institute of Technology, Kanpur is leading this MT project.", "cite_spans": [ { "start": 176, "end": 198, "text": "(Durgesh et al., 2000;", "ref_id": null }, { "start": 199, "end": 218, "text": "Sinha et al., 1995;", "ref_id": "BIBREF82" }, { "start": 219, "end": 237, "text": "Ajai et al., 2009;", "ref_id": null }, { "start": 238, "end": 259, "text": "Manning et al., 2003;", "ref_id": "BIBREF68" }, { "start": 260, "end": 345, "text": "Sudip et al., 2005)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "ANGLABHARTI by Indian Institute of Technology, Kanpur (1991)", "sec_num": "4.1" }, { "text": "The disadvantages of the previous system are addressed by the ANGLABHARTI-II MT architecture (Sinha et al., 2003). A different approach, hybridization through a Generalized Example-Base (GEB) in addition to a Raw Example-Base (REB), is used to improve translation performance. Compared to the previous approach, this system first attempts a match in the REB and GEB before invoking the rule-base at the time of actual usage. Automated pre-editing and paraphrasing steps are further improvements in the proposed new translation approach.
The system is designed so that the various submodules are pipelined in order to achieve greater accuracy and robustness.", "cite_spans": [ { "start": 110, "end": 130, "text": "(Sinha et al., 2003)", "ref_id": "BIBREF81" } ], "ref_spans": [], "eq_spans": [], "section": "ANGLABHARTI -II by Indian Institute of Technology, Kanpur (2004)", "sec_num": "4.2" }, { "text": "At present, the ANGLABHARTI technology has been transferred under the ANGLABHARTI Mission into eight different sectors across the country (Sudip et al., 2005).", "cite_spans": [ { "start": 138, "end": 158, "text": "(Sudip et al., 2005)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "ANGLABHARTI -II by Indian Institute of Technology, Kanpur (2004)", "sec_num": "4.2" }, { "text": "ANUBHARATI is a recently started MT system aimed at translating from Hindi to English (Durgesh et al., 2000; Sinha et al., 1995; Ajai et al., 2009; Sudip et al., 2005). Similar to the ANGLABHARTI MT system, ANUBHARATI is also based on machine-aided translation, in which a variation of the example-based approach, called template-based or hybrid example-based MT (HEBMT), is used. The literature shows that a prototype version of the MT system has been developed and the project is being extended to develop a complete system. The HEBMT approach combines the essentials of the pattern-based and example-based approaches.
One more advantage of the ANUBHARATI system is that it provides a generic translation model suitable for translation between any pair of Indian languages with only a minor addition of modules.", "cite_spans": [ { "start": 86, "end": 108, "text": "(Durgesh et al., 2000;", "ref_id": null }, { "start": 109, "end": 128, "text": "Sinha et al., 1995;", "ref_id": "BIBREF82" }, { "start": 129, "end": 147, "text": "Ajai et al., 2009;", "ref_id": null }, { "start": 148, "end": 167, "text": "Sudip et al., 2005)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "ANUBHARATI by Indian Institute of Technology, Kanpur (1995)", "sec_num": "4.3" }, { "text": "ANUBHARATI-II is a revised version of ANUBHARATI that overcomes most of the drawbacks of the earlier architecture through a varying degree of hybridization of different paradigms (Sudip et al., 2005) . The main intention of this system is to support translation from Hindi to any other Indian language, using a generalized hierarchical example-based approach. Although the earlier versions did not produce the expected results, both revised systems have been implemented successfully with good results. Professor RMK Sinha, Indian Institute of Technology, Kanpur is leading this MT project.", "cite_spans": [ { "start": 180, "end": 200, "text": "(Sudip et al., 2005)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "ANUBHARATI-II by Indian Institute of Technology, Kanpur (2004)", "sec_num": "4.4" }, { "text": "To utilize the close similarity among Indian languages for MT, another translation system called Anusaaraka (Durgesh et al., 2000; Sudip et al., 2005) was introduced, which is based on the principles of Paninian Grammar (PG). Anusaaraka is a machine-aided translation system that also provides language access between these languages.
At present, this system is applied to children's stories, and an alpha version of the system has already been developed for language access from five regional languages (Punjabi, Bengali, Telugu, Kannada, and Marathi) into Hindi. The Anusaaraka MT approach mainly consists of two modules (Manning et al., 2003; Bharati et al., 1997). The first module is called Core Anusaaraka, which is based on language knowledge, and the second is a domain-specific module that is based on statistical knowledge, world knowledge, etc. That is, the idea behind Anusaaraka differs from other systems in that the total load is divided into parts: the machine carries out the language-based analysis of the text, and the remaining work, such as knowledge-based analysis or interpretation, is performed by the reader. The Anusaaraka project was funded by TDIL, started at IIT Kanpur, and later shifted mainly to the Centre for Applied Linguistics and ", "cite_spans": [ { "start": 108, "end": 130, "text": "(Durgesh et al., 2000;", "ref_id": null }, { "start": 131, "end": 150, "text": "Sudip et al., 2005)", "ref_id": null }, { "start": 628, "end": 650, "text": "(Manning et al., 2003;", "ref_id": "BIBREF68" }, { "start": 651, "end": 672, "text": "Bharati et al., 1997)", "ref_id": "BIBREF48" } ], "ref_spans": [], "eq_spans": [], "section": "Anusaaraka by Indian Institute of Technology, Kanpur and University of Hyderabad", "sec_num": "4.5" }, { "text": "The Anusaaraka system from English to Hindi preserves the basic principles of information preservation and load distribution of the original Anusaaraka (Manning et al., 2003; Bharati et al., 1997). To analyze the source text, it uses a modified version of the XTAG-based super tagger and light dependency analyzer developed at the University of Pennsylvania.
The advantage of this system is that, after the completion of the source text analysis, the user may read the output and can always fall back to a simpler output if the system produces wrong output or fails to produce output.", "cite_spans": [ { "start": 148, "end": 170, "text": "(Manning et al., 2003;", "ref_id": "BIBREF68" }, { "start": 171, "end": 192, "text": "Bharati et al., 1997)", "ref_id": "BIBREF48" } ], "ref_spans": [], "eq_spans": [], "section": "Anusaaraka System from English to Hindi", "sec_num": "4.6" }, { "text": "MaTra is an English to Indian languages (at present, Hindi) human-assisted translation system based on a transfer approach, using a frame-like structured representation and resolving ambiguities through rule-based and heuristic approaches (Durgesh et al., 2000; Sudip et al., 2005; Manning et al., 2003). MaTra is an innovative system that provides an intuitive GUI, in which the user can visually inspect the analysis of the system and can provide disambiguation information to produce a single correct translation. Even though the MaTra system is intended to be general-purpose, it has been applied mainly in the domains of news, annual reports, and technical phrases. MaTra is an ongoing project, and the system currently is able to translate domain-specific simple sentences. Current development is aimed at covering other types of sentences.
The Natural Language group of the Knowledge Based Computer Systems (KBCS) division at the National Centre for Software Technology (NCST), Mumbai (currently CDAC, Mumbai) has undertaken the task of developing the MaTra system, funded by TDIL.", "cite_spans": [ { "start": 240, "end": 262, "text": "(Durgesh et al., 2000;", "ref_id": null }, { "start": 263, "end": 282, "text": "Sudip et al., 2005;", "ref_id": null }, { "start": 283, "end": 304, "text": "Manning et al., 2003)", "ref_id": "BIBREF68" } ], "ref_spans": [], "eq_spans": [], "section": "MaTra (2004)", "sec_num": "4.7" }, { "text": "The Mantra MT system is intended to perform translation, for the domains of gazette notifications pertaining to government appointments and parliamentary proceeding summaries, between English and Indian languages as well as from Indian languages to English, where the source and TL grammars are represented using the Lexicalized Tree Adjoining Grammar (LTAG) formalism (Durgesh et al., 2000; Sudip et al., 2005) . The added advantage of this system is", "cite_spans": [ { "start": 359, "end": 381, "text": "(Durgesh et al., 2000;", "ref_id": null }, { "start": 382, "end": 401, "text": "Sudip et al., 2005)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "MANTRA by Centre for Development of Advanced Computing, Bangalore (1999)", "sec_num": "4.8" }, { "text": "The KB Chandrasekhar Research Centre of Anna University at Chennai is active in the area of Tamil NLP. A Tamil-Hindi language accessor has been built using the Anusaaraka formalism (Durgesh et al., 2000; Sudip et al., 2005; Manning et al., 2003). The group has developed a Tamil-Hindi machine-aided translation system under the supervision of Prof.
CN Krishnan, with a reported performance of 75%.", "cite_spans": [ { "start": 181, "end": 203, "text": "(Durgesh et al., 2000;", "ref_id": null }, { "start": 204, "end": 223, "text": "Sudip et al., 2005;", "ref_id": null }, { "start": 224, "end": 245, "text": "Manning et al., 2003)", "ref_id": "BIBREF68" } ], "ref_spans": [], "eq_spans": [], "section": "Tamil-Hindi Anusaaraka MT", "sec_num": "4.11" }, { "text": "Recently, the NLP group also developed a prototype English-Tamil human-aided MT system (Manning et al., 2003; Dwivedi et al., 2010). The system mainly consists of three major components: an English morphological analyzer, a mapping unit, and the Tamil language morphological generator.", "cite_spans": [ { "start": 90, "end": 112, "text": "(Manning et al., 2003;", "ref_id": "BIBREF68" }, { "start": 113, "end": 134, "text": "Dwivedi et al., 2010)", "ref_id": "BIBREF56" } ], "ref_spans": [], "eq_spans": [], "section": "English-Tamil machine Aided Translation system", "sec_num": "4.12" }, { "text": "This project was developed jointly by the Indian Institute of Science, Bangalore, and the International Institute of Information Technology, Hyderabad, in collaboration with Carnegie Mellon University, based on an example-based approach (Sudip et al., 2005; Dwivedi et al., 2010). An experimental system has been released for experiments, trials, and user feedback and is publicly available.", "cite_spans": [ { "start": 232, "end": 252, "text": "(Sudip et al., 2005;", "ref_id": null }, { "start": 253, "end": 273, "text": "Dwivedi et al., 2010", "ref_id": "BIBREF56" } ], "ref_spans": [], "eq_spans": [], "section": "SHIVA MT System for English to Hindi", "sec_num": "4.13" }, { "text": "This is a recently started project that was also developed jointly by the Indian Institute of Science, Bangalore, and the International Institute of Information Technology, Hyderabad, in collaboration with Carnegie Mellon University (Sudip et al., 2005; Dwivedi et al., 2010).
The system follows a hybrid approach, combining rule-based and statistical approaches. An experimental system for English to Hindi, Marathi, and Telugu is publicly available for experiments, trials, and user feedback (Durgesh et al., 2000; Sudip et al., 2005; Manning et al., 2003; Dwivedi et al., 2010). The system has inbuilt dictionaries in specific domains and supports post-editing. If the corresponding target word is not present in the lexicon, the system has a facility to transliterate that source word into the target language. The system runs on Windows, and a demonstration version of the system is publicly available.", "cite_spans": [ { "start": 225, "end": 245, "text": "(Sudip et al., 2005;", "ref_id": null }, { "start": 246, "end": 267, "text": "Dwivedi et al., 2010)", "ref_id": "BIBREF56" }, { "start": 495, "end": 516, "text": "Durgesh et al., 2000;", "ref_id": null }, { "start": 517, "end": 536, "text": "Sudip et al., 2005;", "ref_id": null }, { "start": 537, "end": 558, "text": "Manning et al., 2003;", "ref_id": "BIBREF68" }, { "start": 559, "end": 580, "text": "Dwivedi et al., 2010)", "ref_id": "BIBREF56" } ], "ref_spans": [], "eq_spans": [], "section": "SHAKTI MT System for English to Hindi, Marathi and Telugu", "sec_num": "4.14" }, { "text": "A statistics-based English to Indian languages (mainly Hindi) MT system was started by IBM India Research Lab at New Delhi, using the same approach as its existing work on other languages (Durgesh et al., 2000; Manning et al., 2003).", "cite_spans": [ { "start": 189, "end": 211, "text": "(Durgesh et al., 2000;", "ref_id": null }, { "start": 212, "end": 233, "text": "Manning et al., 2003)", "ref_id": "BIBREF68" } ], "ref_spans": [], "eq_spans": [], "section": "English-Hindi Statistical MT", "sec_num": "4.16" }, { "text": "A rule-based English to Hindi Machine Aided Translation system was developed by Jadavpur University, Kolkata, under the supervision of Prof. Sivaji Bandyopadhyay (Durgesh et al., 2000).
The system uses the transfer-based approach, and work is currently under way on a domain-specific MT system for news sentences.", "cite_spans": [ { "start": 162, "end": 184, "text": "(Durgesh et al., 2000)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "English-Hindi MAT for news sentences", "sec_num": "4.17" }, { "text": "Under the supervision of Prof. Sivaji Bandyopadhyay, a hybrid-based MT system for English to Bengali was developed at Jadavpur University, Kolkata, in 2004 (Dwivedi et al., 2010). The current version of the system works at the sentence level.", "cite_spans": [ { "start": 139, "end": 155, "text": "Kolkata, in 2004", "ref_id": null }, { "start": 156, "end": 177, "text": "(Dwivedi et al., 2010", "ref_id": "BIBREF56" } ], "ref_spans": [], "eq_spans": [], "section": "A hybrid MT system for English to Bengali", "sec_num": "4.18" }, { "text": "In 2004, Prof. Sinha and Prof. Thakur developed a standard Hindi-English MT system called Hinglish by incorporating an additional level into the existing ANGLABHARTI-II and ANUBHARTI-II systems (Dwivedi et al., 2010). The system produced satisfactory results in more than 90% of the cases, except in cases involving polysemous verbs.", "cite_spans": [ { "start": 192, "end": 214, "text": "(Dwivedi et al., 2010)", "ref_id": "BIBREF56" } ], "ref_spans": [], "eq_spans": [], "section": "Hinglish MT system", "sec_num": "4.19" }, { "text": "An example-based MT system for English to Hindi, Kannada, and Tamil, as well as Kannada to Tamil (Dwivedi et al., 2010), was developed by Balajapally et al. (2006). A set of bilingual dictionaries comprising a sentence dictionary, phrase dictionary, word dictionary, and phonetic dictionary of parallel corpora of sentences, phrases, words, and phonetic mappings of words is used for the MT.
A corpus of the 75,000 most commonly used English-{Hindi, Kannada, Tamil} sentence pairs is used for the MT.", "cite_spans": [ { "start": 83, "end": 105, "text": "(Dwivedi et al., 2010)", "ref_id": "BIBREF56" }, { "start": 135, "end": 160, "text": "Balajapally et al. (2006)", "ref_id": "BIBREF47" } ], "ref_spans": [], "eq_spans": [], "section": "English to (Hindi, Kannada, Tamil) and Kannada to Tamil language-pair EBMT system (2006)", "sec_num": "4.20" }, { "text": "A Punjabi to Hindi MT system using a direct word-to-word translation approach was developed by Josan and Lehal at Punjabi University, Patiala, with a reported accuracy of 92.8% (Dwivedi et al., 2010). In addition to the Punjabi-Hindi lexicon and morphological analysis, the system also consists of modules that support word sense disambiguation, transliteration, and post-processing.", "cite_spans": [ { "start": 167, "end": 189, "text": "(Dwivedi et al., 2010)", "ref_id": "BIBREF56" } ], "ref_spans": [], "eq_spans": [], "section": "Punjabi to Hindi MT system (2007)", "sec_num": "4.21" }, { "text": "A consortium of institutions (including IIIT Hyderabad, University of Hyderabad, CDAC (Noida, Pune), Anna University, KBC, Chennai, IIT Kharagpur, IIT Kanpur, IISc Bangalore, IIIT Alahabad, Tamil University, Jadavpur University) started to develop MT systems among Indian languages, called Sampark, and in 2009 released experimental systems for {Punjabi, Urdu, Tamil, Marathi} to Hindi and Tamil-Hindi (Dwivedi et al., 2010).", "cite_spans": [ { "start": 413, "end": 435, "text": "(Dwivedi et al., 2010)", "ref_id": "BIBREF56" } ], "ref_spans": [], "eq_spans": [], "section": "MT System among Indian language -Sampark (2009)", "sec_num": "4.22" }, { "text": "Using a phrasal example-based approach, Jadavpur University developed a domain-specific system for translating English news to Bengali, called ANUBAAD; the current system works at the sentence level (Sudip et al., 2005).
The university has also started to develop a translation system for English news headlines to Bengali using a semantics-based example-based approach. Using the same architecture, the university also developed an MT system for English-Hindi, which currently works at the simple sentence level. Recently, the university also started to develop MT systems from Indian languages (Bengali, Manipuri) to English. These translation systems are being developed under the supervision of Prof. Sivaji Bandyopadhyay. The university uses these translation systems for guiding students and researchers who work in the MT area.", "cite_spans": [ { "start": 190, "end": 210, "text": "(Sudip et al., 2005)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "English to Bengali (ANUBAAD) and English to Hindi MT System by Jadavpur University", "sec_num": "4.23" }, { "text": "Utkal University, Bhuvaneshwar, is working on an English-Oriya MT system, OMTrans, under the supervision of Prof. Sanghamitra Mohanty (Sudip et al., 2005; Manning et al., 2003). In addition to the parser and the Oriya Morphological Analyser (OMA), the system also contains an N-gram based word sense disambiguation module.", "cite_spans": [ { "start": 131, "end": 151, "text": "(Sudip et al., 2005;", "ref_id": null }, { "start": 152, "end": 173, "text": "Manning et al., 2003)", "ref_id": "BIBREF68" } ], "ref_spans": [], "eq_spans": [], "section": "Oriya MT System (OMTrans) by Utkal University, Vanivihar", "sec_num": "4.24" }, { "text": "The Department of Mathematics, IIT Delhi, under the supervision of Professor Niladri Chatterjee, developed an example-based English-Hindi MT system (Sudip et al., 2005).
They have developed divergence algorithms for identifying translation divergences in the English to Hindi example-based system and a systematic scheme for retrieval from the English-Hindi example base.", "cite_spans": [ { "start": 147, "end": 167, "text": "(Sudip et al., 2005)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "English-Hindi EBMT system by IIT Delhi", "sec_num": "4.25" }, { "text": "Using the machine-aided translation approach, a domain-specific system for translating public-health-related sentences from English to Hindi was developed (Manning et al., 2003). The system supports post-editing and reports 60% performance.", "cite_spans": [ { "start": 174, "end": 196, "text": "(Manning et al., 2003)", "ref_id": "BIBREF68" } ], "ref_spans": [], "eq_spans": [], "section": "Machine Aided Translation by Centre for Development of Advanced Computing (CDAC), Noida", "sec_num": "4.26" }, { "text": "Goyal and Lehal of Punjabi University, Patiala, developed a Hindi to Punjabi MT system based on a direct word-to-word translation approach (Goyal et al., 2009; Dwivedi et al., 2010).", "cite_spans": [ { "start": 139, "end": 159, "text": "(Goyal et al., 2009;", "ref_id": "BIBREF57" }, { "start": 160, "end": 181, "text": "Dwivedi et al., 2010)", "ref_id": "BIBREF56" } ], "ref_spans": [], "eq_spans": [], "section": "Hindi to Punjabi MT system (2009)", "sec_num": "4.27" }, { "text": "The system consists of the following modules: pre-processing, a word-to-word Hindi-Punjabi lexicon, morphological analysis, word sense disambiguation, transliteration, and post-processing. They have also developed an evaluation approach for a Hindi to English translation system and have reported 95% accuracy.
Work is still being carried out to achieve a better system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hindi to Punjabi MT system (2009)", "sec_num": "4.27" }, { "text": "Ruvan Weerasinghe developed an SMT approach to Sinhala-Tamil language translation (Weerasinghe et al., 2011). This work reports on SMT-based translation between language pairs such as Sinhala-Tamil and English-Sinhala. The experimental results show that current models perform significantly better for the Sinhala-Tamil pair than for the English-Sinhala pair, suggesting that SMT works better for languages that are not too distantly related to each other.", "cite_spans": [ { "start": 82, "end": 108, "text": "(Weerasinghe et al., 2011)", "ref_id": "BIBREF86" } ], "ref_spans": [], "eq_spans": [], "section": "A Statistical MT Approach to Sinhala-Tamil Language (2011)", "sec_num": "4.28" }, { "text": "Dr. Vasu Renganathan, University of Pennsylvania, developed an interactive approach for an English-Tamil MT system on the Web (Samir et al., 2010). The system is based on a rule-based approach, containing around five thousand words in the lexicon and a number of transfer rules used for mapping English structures to Tamil structures. It is interactive in that users can update the system by adding more words to the lexicon and rules to the rule-base.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An Interactive Approach for English-Tamil MT System on the Web (2002)", "sec_num": "4.29" }, { "text": "Samir Kr. Borgohain and Shivashankar B. Nair introduced a new MT approach for a Pictorially Grounded Language (PGL) based on pictorial knowledge (Samir et al., 2010). In this approach, symbols of both the source and the target languages are grounded in a common set of images and animations. PGL is a graphical language and acts as a conventional intermediate language representation.
While preserving the inherent meaning of the SL, the translation mechanism can also scale to a larger set of languages. The translation system is implemented in such a way that images and objects are tagged with both the source and target language equivalents, which makes reverse translation much easier.", "cite_spans": [ { "start": 149, "end": 169, "text": "(Samir et al., 2010)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Translation system using pictorial knowledge representation (2010)", "sec_num": "4.30" }, { "text": "This is an attempt to develop a statistics-based MT system for English to Malayalam by a group of MTech students under the guidance of Dr. K P Soman (Rahul et al., 2009). In this approach, they showed that an SMT-based system can be improved by incorporating rule-based reordering and morphological information of the source and target languages.", "cite_spans": [ { "start": 150, "end": 170, "text": "(Rahul et al., 2009)", "ref_id": "BIBREF75" } ], "ref_spans": [], "eq_spans": [], "section": "Rule-based Reordering and Morphological Processing For English-Malayalam SMT (2009)", "sec_num": "4.31" }, { "text": "A pilot SMT-based English to Telugu MT system called \"enTel\" was developed by Anitha Nalluri and Vijayanand Kommaluri, based on the Johns Hopkins University Open Source Architecture (JOSHUA) (Anitha et al., 2011). A Telugu parallel corpus from Enabling Minority Language Engineering (EMILLE), developed by CIIL Mysore, and the English to Telugu dictionary developed by Charles Philip Brown are used for training the translation system.", "cite_spans": [ { "start": 194, "end": 215, "text": "(Anitha et al., 2011)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "SMT using Joshua (2011)", "sec_num": "4.32" }, { "text": "The NLP team, including Prashanth Balajapally, Phanindra Bandaru, Ganapathiraju, N.
Balakrishnan, and Raj Reddy, introduced a multilingual book reader interface for DLI that supports transliteration and good-enough translation (Prashanth), based on transliteration, word-to-word translation, and full-text translation for Indian languages. This is a simple, inexpensive tool that exploits the similarity between Indian languages. The tool can be useful for beginners who can understand their mother tongue or other Indian languages but cannot read the script, and for an average reader who has the domain expertise. The tool can also be used for translating either documents or queries for multilingual search.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multilingual Book Reader", "sec_num": "4.33" }, { "text": "Vamshi Ambati and U Rohini proposed a hybrid approach to example-based MT (EBMT) for English to Indian languages that makes use of SMT methods and minimal linguistic resources (Ambati et al., 2007). Work is currently under way to develop English to Hindi and other Indian language translation systems based on a manual dictionary and a statistical dictionary built with an SMT tool, using an example database consisting of parallel source and target sentences.", "cite_spans": [ { "start": 164, "end": 185, "text": "(Ambati et al., 2007)", "ref_id": "BIBREF45" } ], "ref_spans": [], "eq_spans": [], "section": "A Hybrid Approach to EBMT for English to Indian Languages (2007)", "sec_num": "4.34" }, { "text": "Ananthakrishnan Ramanathan, Pushpak Bhattacharyya, Jayprasad Hegde, Ritesh M. Shah, and M. Sasikumar proposed a new idea to improve the performance of SMT-based MT by incorporating syntactic and morphological processing (Ananthakrishnan).
In this context, they showed that the performance of a baseline phrase-based system can be substantially improved by (i)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SMT by Incorporating Syntactic and Morphological Processing", "sec_num": "4.35" }, { "text": "reordering the source (English) sentence as per target (Hindi) syntax, and (ii) using the suffixes of target (Hindi) words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SMT by Incorporating Syntactic and Morphological Processing", "sec_num": "4.35" }, { "text": "This is a very different approach to MT, intended for the dissemination of information to deaf people in India, proposed by Tirthankar Dasgupta, Sandipan Dandpat, and Anupam Basu (Harshawardhan et al., 2011). At present, a prototype version of English to Indian Sign Language has been developed, and the ISL syntax is represented based on the Lexical Functional Grammar (LFG) formalism.", "cite_spans": [ { "start": 193, "end": 220, "text": "Harshawardhan et al., 2011)", "ref_id": "BIBREF59" } ], "ref_spans": [], "eq_spans": [], "section": "Prototype MT System from Text-To-Indian Sign Language (ISL)", "sec_num": "4.36" }, { "text": "In the proposed work, a different approach that makes use of karaka relations for sentence comprehension is used in a frame-based translation system for Dravidian languages (Idicula et al., 1999). Two pattern-directed, application-oriented experiments were conducted, and the same meaning representation technique was used in both cases. In the first experiment, translation is done from a free word order language to a fixed word order one, where both the source and destination are natural languages. In the second experiment, however, the TL is an artificial language with a rigid syntax.
Even though there is a difference in the generation of the target sentence, the results obtained in both experiments are encouraging.", "cite_spans": [ { "start": 179, "end": 201, "text": "(Idicula et al., 1999)", "ref_id": "BIBREF64" } ], "ref_spans": [], "eq_spans": [], "section": "An Adaptable Frame based system for Dravidian language Processing (1999)", "sec_num": "4.37" }, { "text": "CALTS, in collaboration with IIIT, Hyderabad; Telugu University, Hyderabad; and Osmania University, Hyderabad, developed an English-Telugu and a Telugu-Tamil MT system under the supervision of Prof. Rajeev Sangal (CALTS). The English-Telugu system uses an English-Telugu machine-aided translation lexicon of 42,000 words and a wordform synthesizer for Telugu. The Telugu-Tamil MT system was developed based on the resources available at CALTS: a Telugu morphological analyzer, a Tamil generator, a verb sense disambiguator, and a Telugu-Tamil machine-aided translation dictionary. The performance of the systems is encouraging, and they handle source sentences of varying complexity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "English-Telugu T2T MT and Telugu-Tamil MT System (2004)", "sec_num": "4.38" }, { "text": "R. Mahesh K. Sinha proposed a different strategy for deriving English to Urdu translation using an English to Hindi MT system (R. Mahesh et al., 2009) . In the proposed method, an English-Hindi lexical database is used to collect all possible Hindi words and phrases. These words and phrases are further augmented by including their morphological variations and attaching all possible postpositions. Urdu is structurally very close to Hindi, and this augmented list is used to provide the mapping from Hindi to Urdu. 
The advantage of this translation system is that the grammatical analysis of English provides all the information needed for the Hindi to Urdu mapping, so no part-of-speech tagging, chunking, or parsing of Hindi is used for translation.", "cite_spans": [ { "start": 130, "end": 150, "text": "Mahesh et al., 2009)", "ref_id": "BIBREF67" } ], "ref_spans": [], "eq_spans": [], "section": "Developing English-Urdu MT Via Hindi (2009)", "sec_num": "4.39" }, { "text": "Kommaluri Vijayanand, S. Choudhury and Pranab Ratna proposed an automatic bilingual MT system for Bengali to Assamese using an example-based approach (Kommaluri et al., 2002) . They used a manually created aligned bilingual corpus built by feeding in real examples using pseudo code. The quality of the translation was improved by preprocessing the longer input sentences and also via backtracking techniques. Since the grammatical structure of Bengali and Assamese is very similar, lexical word groups are required.", "cite_spans": [ { "start": 143, "end": 167, "text": "(Kommaluri et al., 2002)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Bengali-Assamese automatic MT system-VAASAANUBAADA (2002)", "sec_num": "4.40" }, { "text": "The Computational Engineering and Networking research centre of Amrita School of Engineering, Coimbatore, proposed an English-Tamil translation system. The system is based on a phrase-based approach, incorporating concept labeling using a translation memory of parallel corpora (Harshawardhan et al., 2011 ). The translation system consists of 50,000 English-Tamil parallel sentences, 5000 proverbs, and 1000 idioms and phrases, with a dictionary containing more than 200,000 technical words and 100,000 general words. 
The system has an accuracy of 70%.", "cite_spans": [ { "start": 276, "end": 303, "text": "(Harshawardhan et al., 2011", "ref_id": "BIBREF59" } ], "ref_spans": [], "eq_spans": [], "section": "Phrase based English-Tamil Translation System by Concept Labeling using Translation Memory (2011)", "sec_num": "4.41" }, { "text": "This work is aimed at improving the translation quality of an MT system by simplifying complex input sentences for an English to Tamil MT system (Poornima et al., 2011) . In order to simplify complex sentences based on connectives, such as relative pronouns or coordinating and subordinating conjunctions, a rule-based technique is proposed. In this approach, a complex sentence is expressed as a list of sub-sentences while the meaning remains unaltered. The simplification task can be used as a preprocessing tool for MT, where the initial splitting is based on delimiters and the simplification is based on connectives.", "cite_spans": [ { "start": 149, "end": 172, "text": "(Poornima et al., 2011)", "ref_id": "BIBREF74" } ], "ref_spans": [], "eq_spans": [], "section": "Rule-based Sentence Simplification for English to Tamil MT System (2011)", "sec_num": "4.42" }, { "text": "Using morphology and dependency relations, a Manipuri to English bidirectional SMT system was developed by Thoudam Doren Singh and Sivaji Bandyopadhyay (Doren Singh et al., 2010) . The system uses a domain-specific parallel corpus of 10,350 news sentences for training, and the system is tested with 500 sentences.", "cite_spans": [ { "start": 121, "end": 178, "text": "Singh and Sivaji Bandyopadhyay (Doren Singh et al., 2010)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Manipuri-English Bidirectional SMT Systems (2010)", "sec_num": "4.43" }, { "text": "P.J. Antony, P. Unnikrishnan and Dr. K.P Soman proposed an SMT system for English to Kannada by incorporating syntactic and morphological information (Unnikrishnan et al., 2010) . 
In order to increase the performance of the translation system, they introduced a new approach to creating the parallel corpus. The main ideas they implemented and proved effective in the English to Kannada SMT system are: (i) reordering the English source sentence according to Dravidian syntax, (ii) using root-suffix separation on both English and Dravidian words, and (iii) using morphological information, which substantially reduces the corpus size required for training the system. The results show that significant improvements are possible by incorporating syntactic and morphological information into the corpus. From the experiments, they found that the proposed translation system successfully works for almost all simple sentences in their twelve tense forms and their negative forms.", "cite_spans": [ { "start": 150, "end": 177, "text": "(Unnikrishnan et al., 2010)", "ref_id": "BIBREF83" } ], "ref_spans": [], "eq_spans": [], "section": "English to Kannada SMT System (2010)", "sec_num": "4.44" }, { "text": "This system is an effort of the English to Indian Language MT (EILMT) consortium. Anuvadaksh is a system that allows translation of text from English into six other Indian languages, i.e. Hindi, Urdu, Oriya, Bangla, Marathi, and Tamil. Anuvadaksh, being a consortium-based project, follows a hybrid approach that is designed to work with platform- and technology-independent modules.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Anuvadaksh", "sec_num": "4.45" }, { "text": "This system has been developed to facilitate the multi-lingual community, initially in the domain-specific expressions of tourism, and it would subsequently foray into various other domains in a phase-wise manner. It integrates four MT technologies:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Anuvadaksh", "sec_num": "4.45" }, { "text": "Tree-Adjoining-Grammar (TAG) based MT. SMT. Analyze and Generate rules (Anlagen) based MT. 
Example-based MT (EBMT).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Anuvadaksh", "sec_num": "4.45" }, { "text": "Google Translate is a free translation service that provides instant translations between 57 different languages. Google Translate generates a translation by looking for patterns in hundreds of millions of documents to help decide on the best translation. By detecting patterns in documents that have already been translated by human translators, Google Translate makes guesses as to what an appropriate translation should be. This process of seeking patterns in large amounts of text is called \"SMT\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Google Translate", "sec_num": "4.46" }, { "text": "An English to Assamese MT system is in progress (Sudhir et al., 2007) . The following activities are underway in this direction.", "cite_spans": [ { "start": 48, "end": 69, "text": "(Sudhir et al., 2007)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "English to Assamese MT System", "sec_num": "4.47" }, { "text": "\u2022 The graphical user interface of the MT system has been re-designed. It now allows the display of Assamese text. Modifications have been made in the Java modules.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "English to Assamese MT System", "sec_num": "4.47" }, { "text": "\u2022 The existing Susha encoding scheme has been used. In addition, a new Assamese font set has been created according to the Susha font set. 
The system is now able to properly display the consonants, vowels, and matras of Assamese characters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "English to Assamese MT System", "sec_num": "4.47" }, { "text": "\u2022 The mapping of the Assamese keyboard with that of Roman has been worked out.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "English to Assamese MT System", "sec_num": "4.47" }, { "text": "\u2022 The process of entering Assamese words (equivalents of English words) in the lexical database (nouns and verbs) is in progress.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "English to Assamese MT System", "sec_num": "4.47" }, { "text": "The system basically follows a rule-based approach and relies on a bilingual English to Assamese dictionary. The dictionary-supported generation of Assamese text from English text is a major stage in this MT. Each entry in the dictionary is supplied with inflectional information about the English lexeme and all of its Assamese equivalents. The dictionary is annotated for morphological, syntactic, and partially semantic information. It currently can handle translation of simple sentences from English to Assamese. The dictionary contains around 5000 root words. The system simply translates source language texts to the corresponding target language texts phrase by phrase by means of bilingual dictionary lookup.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "English to Assamese MT System", "sec_num": "4.47" }, { "text": "Tamil University, Tanjore, initiated a machine-oriented translation from Russian to Tamil during 1983 -1984 under the leadership of Vice-Chancellor Dr. V.I Subramaniam (Sudhir et al., 2007) . It was taken up as an experimental project to study and compare Tamil with Russian in order to translate Russian scientific text into Tamil. 
A team consisting of a linguist, a Russian language scholar, and a computer scientist was entrusted with this project. During the preliminary survey, the source language (Russian) and Tamil were compared thoroughly in terms of style, syntax, morphology, etc.", "cite_spans": [ { "start": 73, "end": 98, "text": "Russian-Tamil during 1983", "ref_id": null }, { "start": 99, "end": 104, "text": "-1984", "ref_id": "BIBREF4" }, { "start": 165, "end": 186, "text": "(Sudhir et al., 2007)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Tamil University MT System", "sec_num": "4.48" }, { "text": "Bharathidasan University, Tamilnadu, is working on translation between languages belonging to the same family, such as Tamil-Malayalam translation (Sudhir et al., 2007) . The MT system consists of the following modules, which are in progress.", "cite_spans": [ { "start": 147, "end": 168, "text": "(Sudhir et al., 2007)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Tamil-Malayalam MT System", "sec_num": "4.49" }, { "text": "Lexical database-This will be a bilingual dictionary of root words. All the noun roots and verb roots are collected. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tamil-Malayalam MT System", "sec_num": "4.49" }, { "text": "This survey described machine translation (MT) techniques in a longitudinal and latitudinal way with an emphasis on MT development for Indian languages. Additionally, we tried to briefly describe the different existing approaches that have been used to develop MT systems. From the survey, we found that almost all existing Indian language MT projects are based on a statistical or hybrid approach. We also identified the following two reasons why most of the MT systems developed for Indian languages have followed the statistical and hybrid approaches. 
First, since Indian languages are morphologically rich and agglutinative in nature, rule-based approaches have failed in many situations to produce full-fledged MT systems. Second, the general benefits of statistical and hybrid approaches have encouraged researchers to choose these approaches to develop MT systems for Indian languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5." }, { "text": "In psychology and in common use, emotion is an aspect of a person's mental state of being, normally based on or tied to the person's internal (physical) and external (social) sensory feeling (Zhang et al., 2008) . Emotions, of course, are not linguistic features. Nevertheless, the most convenient access we have to them is through language (Strapparava & Valitutti, 2004) . The identification of the emotion expressed in a text with respect to the reader or writer is a challenging task (Yang et al., 2009) . A wide range of Natural Language Processing (NLP) tasks, from tracking users' emotions about products/events/politics as expressed in online forums or news to customer relationship management, use emotional information.", "cite_spans": [ { "start": 191, "end": 211, "text": "(Zhang et al., 2008)", "ref_id": "BIBREF128" }, { "start": 341, "end": 372, "text": "(Strapparava & Valitutti, 2004)", "ref_id": null }, { "start": 486, "end": 505, "text": "(Yang et al., 2009)", "ref_id": "BIBREF126" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Currently, emails, blogs, chat rooms, online forums, and even Twitter are being considered as effective communication substrates for analyzing the reactions to emotional catalysts. A blog is a communicative and informative repository of text-based emotional content on the Web 2.0 (Yang et al., 2007) . In particular, blog posts contain instant views, updated views, or influenced views regarding single or multiple topics. 
Many blogs act as online diaries in which bloggers report their daily activities and surroundings. Sometimes, the blog posts are annotated by other bloggers. In addition, a large collection of blog data is suitable for any machine learning framework.", "cite_spans": [ { "start": 277, "end": 296, "text": "(Yang et al., 2007)", "ref_id": "BIBREF125" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "It has been observed that three major components are crucial in determining the emotional slants from different perspectives: Emotional Expression, Holder, and Topic. Thus, the determination of the emotion holder and topics from the text helps us track and distinguish users' emotions separately on the same or different topics. The emotional expression (word or phrase) is the subjective counterpart that can be expressed by a directly affective word (\"John is really happy enough\") or using some indirect notion (\"Dream of music is in their eyes and hearts\"). The source or holder of an emotional expression is the speaker, the writer, or the experiencer (Wiebe et al., 2005) . Extraction of the emotion holder is important for discriminating between emotions that are viewed from different perspectives (Seki, 2007) . By grouping opinion holders of different stances on diverse social and political issues, we can gain a better understanding of the relationships among countries or among organizations (Kim & Hovy, 2006) . The topic, however, is the real-world object, event, or abstract entity that is the primary subject of the emotion or opinion intended by the holder (Stoyanov & Cardie, 2008a) . 
The topic depends on the context in which its associated emotional expression occurs (Stoyanov & Cardie, 2008b) . For example: Rashed said that he was remembering this beautiful comic while reading your poem.", "cite_spans": [ { "start": 653, "end": 673, "text": "(Wiebe et al., 2005)", "ref_id": "BIBREF124" }, { "start": 801, "end": 813, "text": "(Seki, 2007)", "ref_id": "BIBREF117" }, { "start": 998, "end": 1016, "text": "(Kim & Hovy, 2006)", "ref_id": "BIBREF107" }, { "start": 1164, "end": 1190, "text": "(Stoyanov & Cardie, 2008a)", "ref_id": "BIBREF120" }, { "start": 1274, "end": 1299, "text": "(Stoyanov & Cardie, 2008b", "ref_id": "BIBREF121" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Emotional Expression: \u09b8\u09c1 n\u09b0 \u09c7\u0995\u09d7\u09a4\u09c1 \u0995 (beautiful comic), Holder: < writer, \u09b0\u09be\u09c7\u09b6\u09a6 (Rashed) >, Topic: \u0995\u09bf\u09ac\u09a4\u09be (poem).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In the above example, along with the emotional expression and topic, the writer of the blog post is also considered a default holder according to our assumption, which is based on the nested source hypothesis (Wiebe et al., 2005) . Emotional sentences may or may not contain a direct clue for the emotional expression. There are certain example sentences that contain an emotional expression without a holder (Tar Abhinoy ta satyoi khub", "cite_spans": [ { "start": 212, "end": 232, "text": "(Wiebe et al., 2005)", "ref_id": "BIBREF124" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "akorshoniyo chilo [His acting was really attractive]). Nevertheless, the sentence contains a topic (Tar Abhinoy [His acting]). Sometimes, even the emotional expressions represent the potential topics. 
For example, the Bengali sentence, \"Ami Ramer doohkho koshte kende pheli.\" [I burst into tears at the sorrow of Ram] contains the text \"doohkho koshte\" [sorrow], which is treated as both the emotional expression and the topic. With the above examples and problems in mind, we hypothesize that the notion of user-topic co-references will facilitate both the manual and automatic identification of emotional views. Presently, we have assumed that the holder and topic are emotion co-referent if they share the same emotional expression.", "cite_spans": [ { "start": 8, "end": 15, "text": "Holder,", "ref_id": null }, { "start": 16, "end": 28, "text": "and Topic 81", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The present task deals with the identification of users' emotions on different topics from an annotated Bengali blog corpus (Das & Bandyopadhyay, 2010a) . Each sentence of the corpus is annotated with the emotional components, such as emotional expression (word/phrase), intensity (high, general, and low), associated holder, topic(s), and a sentential tag from Ekman's six emotion classes (anger, disgust, fear, happy, sad, and surprise) .", "cite_spans": [ { "start": 124, "end": 152, "text": "(Das & Bandyopadhyay, 2010a)", "ref_id": "BIBREF95" }, { "start": 386, "end": 434, "text": "(anger, disgust, fear, happy, sad, and surprise)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In this project, a simple rule-based baseline system is developed for identifying the emotional expressions, holders, and topics. The expressions are identified from shallow parsed sentences using the Bengali WordNet Affect lists (Das & Bandyopadhyay, 2010b) . A simple part-of-speech (POS) tag-based pattern matching technique is employed to identify the emotion holders and topics with respect to the emotional expressions. 
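The POS tag-based neighborhood matching can be sketched roughly as follows. This is an illustrative sketch only: the chunk labels (NP, JJP, VGF), the nearest-NP-on-either-side rule, and the function name find_holder_and_topic are simplifying assumptions for illustration, not the paper's actual patterns.

```python
# Illustrative sketch (not the paper's exact rules): pick the nearest NP
# chunk before the emotional-expression chunk as the holder candidate and
# the nearest NP chunk after it as the topic candidate.

def find_holder_and_topic(chunks, expr_index):
    """chunks: list of (chunk_label, text) pairs from a shallow parse.
    expr_index: index of the chunk holding the emotional expression."""
    holder = topic = None
    # Scan leftward from the expression for the closest NP chunk (holder).
    for i in range(expr_index - 1, -1, -1):
        if chunks[i][0] == "NP":
            holder = chunks[i][1]
            break
    # Scan rightward from the expression for the closest NP chunk (topic).
    for i in range(expr_index + 1, len(chunks)):
        if chunks[i][0] == "NP":
            topic = chunks[i][1]
            break
    return holder, topic

# Toy chunk sequence mirroring the "Rashed ... beautiful comic ... poem" example:
chunks = [("NP", "Rashed"), ("VGF", "said"), ("JJP", "beautiful comic"),
          ("NP", "poem")]
print(find_holder_and_topic(chunks, 2))  # ('Rashed', 'poem')
```

A real pattern set would also use NE tags and case markers to prefer animate holder candidates, which this toy rule ignores.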
The presence of emotion holders and topics in the shallow chunks immediately neighboring their corresponding emotional expressions provides the co-reference clues for the baseline system. The co-reference among the emotional expressions, holders, and topics is measured using Krippendorff's (2004) \u03b1 metric. The error analysis suggests that the rich morphology and free phrase order of Bengali restrict the baseline system in capturing the holder and topic as well as in disambiguating them in complex, compound, and passive sentences.", "cite_spans": [ { "start": 226, "end": 254, "text": "(Das & Bandyopadhyay, 2010b)", "ref_id": "BIBREF96" }, { "start": 710, "end": 731, "text": "Krippendorff's (2004)", "ref_id": "BIBREF110" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Thus, a Support Vector Machine (SVM) (Joachims, 1998) based supervised classifier is employed as well for co-reference identification. In this classifier, each of the input vectors containing an emotional expression, associated holder, and topic is prepared from each of the annotated Bengali blog sentences. The feature vector is prepared based on the information present in the sentences, containing lexical, syntactic, semantic, rhetorical, and overlapping features (word, part-of-speech (POS), and Named Entity (NE)). Considering each of the input vectors as a unit to be coded in terms of the values of a variable, the standard Krippendorff's (2004) \u03b1 metric produces a satisfactory score that outperforms the baseline system on the test set. This observation suggests that the adoption of error handling features, along with the features for syntax, semantics, and rhetorical structure, reasonably improves the performance of the co-reference identification. Different types of error cases have been analyzed, and we employed different rule-based post-processing techniques to solve the error cases. 
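The agreement measure used above can be made concrete with a minimal sketch of Krippendorff's alpha for nominal data, computed from a coincidence matrix. This is a simplified illustration under stated assumptions (every unit fully coded, nominal distance metric); real implementations also handle missing values and other metrics, and the function name is our own.

```python
# Minimal sketch of Krippendorff's alpha for nominal data via the
# coincidence matrix: alpha = 1 - Do/De, where Do is the observed and
# De the expected disagreement.
from collections import defaultdict
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """units: list of lists, each inner list holding the labels assigned
    to one unit by the different coders."""
    coincidence = defaultdict(float)
    for labels in units:
        m = len(labels)
        if m < 2:
            continue  # a unit coded by fewer than two coders adds no pairs
        # Each ordered pair of values within a unit contributes 1/(m-1).
        for a, b in permutations(labels, 2):
            coincidence[(a, b)] += 1.0 / (m - 1)
    n_total = sum(coincidence.values())
    marginals = defaultdict(float)
    for (a, _), v in coincidence.items():
        marginals[a] += v
    observed = sum(v for (a, b), v in coincidence.items() if a != b)
    expected = sum(marginals[a] * marginals[b]
                   for a in marginals for b in marginals if a != b)
    if expected == 0:
        return 1.0  # every value identical: no expected disagreement
    return 1.0 - (n_total - 1) * observed / expected

# Two coders labelling three units, agreeing on two of them:
print(round(krippendorff_alpha_nominal([["a", "a"], ["b", "b"], ["a", "b"]]), 4))  # 0.4444
```

Note the (n - 1) small-sample correction in the expected term, which is what distinguishes alpha from Scott's pi for two coders.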
The rest of this paper is organized as follows. Section 2 describes the related work. The baseline system is described in Section 3. The supervised framework with feature analysis is discussed in Section 4. Experiments and associated results are specified in Section 5. The error analysis and post-processing techniques are discussed in Section 6. Finally, Section 7 concludes the paper.", "cite_spans": [ { "start": 37, "end": 53, "text": "(Joachims, 1998)", "ref_id": "BIBREF105" }, { "start": 629, "end": 650, "text": "Krippendorff's (2004)", "ref_id": "BIBREF110" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The current trend in the emotion analysis area is exploring machine learning techniques (Sebastiani, 2002; Mishne & Rijke, 2006 ) that consider the problem as text categorization or analogous to topic classification, which underscores the difference between machine learning methods and human-produced baseline models (Alm et al., 2005) . The affective text shared task on news headlines for emotion and valence level identification at SemEval 2007 has drawn focus to this field (Strapparava & Mihalcea, 2007) . In order to estimate affect in text, the model proposed by Neviarouskaya et al. (2007) processes symbolic cues and employs natural language processing techniques.", "cite_spans": [ { "start": 88, "end": 106, "text": "(Sebastiani, 2002;", "ref_id": "BIBREF116" }, { "start": 107, "end": 127, "text": "Mishne & Rijke, 2006", "ref_id": "BIBREF112" }, { "start": 318, "end": 336, "text": "(Alm et al., 2005)", "ref_id": "BIBREF88" }, { "start": 479, "end": 509, "text": "(Strapparava & Mihalcea, 2007)", "ref_id": "BIBREF122" }, { "start": 572, "end": 599, "text": "Neviarouskaya et al. (2007)", "ref_id": "BIBREF113" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." 
}, { "text": "Prior work in identification of opinion holders has sometimes identified only a single opinion per sentence (Bethard et al., 2004) and sometimes several opinions (Choi, 2005) . Identification of opinion holders for Question Answering with a supporting annotation task was attempted in Wiebe et al. (2005) . Before that, another work on labeling the arguments of the verbs with their semantic roles using a novel frame matching technique was carried out in Swier and Stevenson (2004) . Based on the traditional perspectives, another work discussed in Hu et al. (2006) uses an emotion knowledge base for extracting the emotion holder. The machine learning based classification task for \"not holder,\" \"weak holder,\" \"medium holder,\" or \"strong holder\" is described in Evans (2007) . Kim and Hovy (2006) identified the opinion holder with the topic from media text using semantic role labeling. An anaphor resolution based opinion holder identification method exploiting lexical and syntactic information from online news documents was attempted by Kim et al. (2007) . The syntactic models of identifying the emotion holder for English emotional verbs were developed in Das and Bandyopadhyay (2010d) .", "cite_spans": [ { "start": 108, "end": 130, "text": "(Bethard et al., 2004)", "ref_id": "BIBREF91" }, { "start": 162, "end": 174, "text": "(Choi, 2005)", "ref_id": "BIBREF92" }, { "start": 285, "end": 304, "text": "Wiebe et al. (2005)", "ref_id": "BIBREF124" }, { "start": 456, "end": 482, "text": "Swier and Stevenson (2004)", "ref_id": "BIBREF123" }, { "start": 550, "end": 566, "text": "Hu et al. (2006)", "ref_id": "BIBREF104" }, { "start": 765, "end": 777, "text": "Evans (2007)", "ref_id": "BIBREF103" }, { "start": 780, "end": 799, "text": "Kim and Hovy (2006)", "ref_id": "BIBREF107" }, { "start": 1045, "end": 1062, "text": "Kim et al. 
(2007)", "ref_id": "BIBREF106" }, { "start": 1166, "end": 1195, "text": "Das and Bandyopadhyay (2010d)", "ref_id": "BIBREF98" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "In the related field of opinion topic extraction, different researchers have contributed their efforts (Kobayashi et al., 2004; Nasukawa et al., 2003; Popescu & Etzioni, 2005) . Nevertheless, these works are based on lexicon lookup and are applied to the domain of product reviews. The topic annotation task on the MPQA corpus is described in Stoyanov and Cardie (2008) .", "cite_spans": [ { "start": 103, "end": 127, "text": "(Kobayashi et al., 2004;", "ref_id": "BIBREF108" }, { "start": 128, "end": 150, "text": "Nasukawa et al., 2003;", "ref_id": "BIBREF127" }, { "start": 151, "end": 175, "text": "Popescu & Etzioni, 2005)", "ref_id": "BIBREF115" }, { "start": 344, "end": 370, "text": "Stoyanov and Cardie (2008)", "ref_id": "BIBREF121" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "The method of identifying an opinion with its holder and topic from online news is described in Kim and Hovy (2006) . The model extracts opinion topics associated with a specific argument position for subjective expressions signaled by verbs and adjectives. Similarly, verb-based argument extraction and associated topic identification are considered in the present system. Nevertheless, opinion topic identification differs from topic segmentation (Choi, 2000) . 
The opinion topics are not necessarily spatially coherent, as there may be two opinions in the same sentence on different topics, as well as opinions on the same topic that are separated by opinions that do not share that topic (Stoyanov & Cardie, 2008) .", "cite_spans": [ { "start": 96, "end": 115, "text": "Kim and Hovy (2006)", "ref_id": "BIBREF107" }, { "start": 362, "end": 369, "text": "Holder,", "ref_id": null }, { "start": 370, "end": 382, "text": "and Topic 83", "ref_id": null }, { "start": 481, "end": 493, "text": "(Choi, 2000)", "ref_id": null }, { "start": 723, "end": 748, "text": "(Stoyanov & Cardie, 2008)", "ref_id": "BIBREF121" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "The authors established such a hypothesis by applying the technique of co-reference identification for topic annotation. In the case of our present system, the building of fine-grained topic knowledge based on the rhetorical structure and the segmentation of topics using different types of lexical, syntactic, and overlapping features substantially reduce the problem of emotion topic distinction. It must be mentioned that the proposed method obtains a moderately more reliable alpha score in comparison to some related results in Stoyanov and Cardie (2008a) .", "cite_spans": [ { "start": 530, "end": 557, "text": "Stoyanov and Cardie (2008a)", "ref_id": "BIBREF120" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "Moreover, all of the aforementioned works have been attempted for English. Recent studies show that non-native English speakers support the growing use of the Internet 1 . In addition, a rapidly growing number of web users from multilingual communities have focused attention on improving multilingual search engines with respect to sentiment or emotion. This raises the demand for emotion analysis for languages other than English. 
Bengali is the sixth most popular language in the world 2 , the second in India, and the national language of Bangladesh, but it is less computerized than English. Work on emotion analysis in Bengali has started recently (Das & Bandyopadhyay, 2009a; 2010a) . A comparative evaluation of the features on equivalent domains for the Bengali and English languages can be found in Das and Bandyopadhyay (2009b) . To the best of our knowledge, at present, no such user-topic co-reference analysis of emotion has been conducted for Bengali or for other Indian languages. Thus, we believe that this work would meet the demands of user-topic focused emotion analysis systems.", "cite_spans": [ { "start": 657, "end": 685, "text": "(Das & Bandyopadhyay, 2009a;", "ref_id": "BIBREF93" }, { "start": 686, "end": 692, "text": "2010a)", "ref_id": "BIBREF95" }, { "start": 809, "end": 838, "text": "Das and Bandyopadhyay (2009b)", "ref_id": "BIBREF94" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "A simple rule-based system has been designed to identify the emotional expression, holder, and topic from the sentences and their co-references. A simple neighboring chunk consideration approach, which assumes that the emotional expression, holder, and topic appear as neighboring chunks in a sentence, has been introduced to identify the co-reference among the three components. The details of the system are as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline System", "sec_num": "3." }, { "text": "The blog sentences are passed through an open-source Bengali shallow parser 3 . This shallow parser gives different kinds of morphological information (root, lexical category of the root, gender, number, person, case, vibhakti, tam, suffixes, etc.) that help in identifying the lexical patterns of the emotional expressions. The shallow parsed sentences are preprocessed to generate simplified lexical patterns (as shown below). 
We search for each of the component words of the chunks in the Bengali WordNet Affect lists (Das & Bandyopadhyay, 2010b) . If any word present in a chunk is an emotion word (e.g. \u09c7\u0995\u09d7\u09a4\u09c1 \u0995 koutuk 'comic'), all of the words present in that extracted chunk are treated as candidate seeds for an emotional expression. Identification of an emotional expression containing a single emotion word is straightforward. Nevertheless, we include all of the words of a chunk in order to identify long emotional expressions. Consecutive words that appear in a chunk and contain at least one emotion word also form an emotional expression. An example of a shallow parsed result follows.", "cite_spans": [ { "start": 77, "end": 78, "text": "3", "ref_id": "BIBREF130" }, { "start": 143, "end": 240, "text": "(root, lexical category of the root, gender, number, person, case, vibhakti, tam, suffixes, etc.)", "ref_id": null }, { "start": 519, "end": 547, "text": "(Das & Bandyopadhyay, 2010b)", "ref_id": "BIBREF96" } ], "ref_spans": [], "eq_spans": [], "section": "Identifying Emotional Expression:", "sec_num": null }, { "text": "((JJP \u09b8\u09c1 n\u09b0 'sundar' [beautiful] JJ )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Identifying Emotional Expression:", "sec_num": null }, { "text": "( NP \u09c7\u0995\u09d7\u09a4\u09c1 \u0995\u099f\u09be 'koutukta' [comic] NN )) Different synonyms for a Bengali verb having the same sense are separated using \",\" and different senses are separated using \";\" in the dictionary. The synonyms, including similar senses of the target verb, were extracted from the dictionary and yielded a set called the English Equivalent Synset (EES). 
In the above example, two English Equivalent Synsets (EES)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "English Equivalent Synset Identification:", "sec_num": null }, { "text": "are extracted for the conjunct verb \u0986\u09a8n ananda \u0995\u09b0\u09be kara 'enjoy'.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "English Equivalent Synset Identification:", "sec_num": null }, { "text": "It has also been found that each English Equivalent Synset (EES) occurs in a separate class of English VerbNet (Kipper-Schuler, 2005) . VerbNet associates the semantics of a verb with its syntactic frames and combines traditional lexical semantic information, such as thematic roles and semantic predicates, with selectional restrictions. Member verbs in the same VerbNet class share common syntactic frames; thus, they are believed to have the same syntactic behavior. The VerbNet files containing member verbs and possible subcategorization frames are stored in XML format. Hence, the XML files of VerbNet were pre-processed to build a general list that contains all verbs, their classes, and their possible subcategorization frames (primary as well as secondary). This preprocessed list was searched to extract the subcategorization frames present for each verb (e.g. love) of the English Equivalent Synsets (EES) corresponding to the Bengali verb. 
These extracted subcategorization frames are believed to be the valid set of argument structures for the Bengali verbs (Banerjee et al., 2010) .", "cite_spans": [ { "start": 122, "end": 144, "text": "(Kipper-Schuler, 2005)", "ref_id": "BIBREF109" }, { "start": 1096, "end": 1119, "text": "(Banerjee et al., 2010)", "ref_id": "BIBREF90" } ], "ref_spans": [], "eq_spans": [], "section": "English Equivalent Frame Identification:", "sec_num": null }, { "text": "Frame Matching: On the other hand, the shallow parsed Bengali sentences are passed through a rule-based phrasal-head extraction module to identify the phrase-level argument structures of the sentences corresponding to the position of the verbs. The extracted head part of every phrase from a parsed sentence is considered a component of its sentential argument structure. If an acquired argument structure for a Bengali emotional sentence matches any of the available extracted frames of English VerbNet, the thematic role-based holder (Experiencer, Agent, Actor, Beneficiary, etc.) and topic (Topic, Theme, Event, etc.) information associated with the English frame syntax is mapped to the appropriate slot of the acquired Bengali argument structure. Tag conversion routines were developed to transform the POS of the system-generated argument structures into the POS of the VerbNet frames. The phrase-level similarity between these two languages helps in identifying the subcategorization frames (Banerjee et al., 2009) . An example follows. The acquired argument structure contains a sentential complement \"S\" started by \u09c7\u09af -je with a DET-type POS; it is acquired for the Bengali conjunct verb a\u09a8\u09c1 \u09ad\u09ac \u0995\u09b0\u09be anubhab kara 'feel'. One of the extracted VerbNet frame syntaxes containing a -that type sentential complement for the equivalent English verb feel is [ < S-that (Sentential -that Complement)>]. 
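The frame-matching step can be sketched as follows. This is a simplified, hypothetical reconstruction: the dict-based frame and phrase representations, the function `match_frame`, and the role labels attached to individual slots are assumptions for illustration, not the actual VerbNet XML schema or the authors' implementation.

```python
# Sketch: compare the POS sequence of an acquired Bengali argument structure
# with the syntax of each extracted VerbNet frame; on a match, map the frame's
# thematic roles (Experiencer, Topic, ...) onto the Bengali phrases.

def match_frame(acquired_structure, verbnet_frames):
    """Return a role -> phrase mapping for the first frame whose syntax matches."""
    for frame in verbnet_frames:
        pattern = [slot["pos"] for slot in frame["syntax"]]
        if [phrase["pos"] for phrase in acquired_structure] == pattern:
            return {
                slot["role"]: phrase["text"]
                for slot, phrase in zip(frame["syntax"], acquired_structure)
                if slot.get("role")              # keep thematic slots only
            }
    return None                                  # no VerbNet frame matched

# Toy example: NP V S-that, as for 'feel' with a sentential -that complement.
frames = [{"syntax": [{"pos": "NP", "role": "Experiencer"},
                      {"pos": "V", "role": None},
                      {"pos": "S-that", "role": "Stimulus"}]}]
acquired = [{"pos": "NP", "text": "Rashed"},
            {"pos": "V", "text": "anubhab kara"},
            {"pos": "S-that", "text": "je ..."}]
print(match_frame(acquired, frames))
```

A production version would also run the tag conversion routines mentioned above, since the shallow parser's POS inventory differs from the VerbNet frame vocabulary.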
As the acquired argument structure matches the extracted VerbNet frame syntax, the holder-related roles (e.g. Experiencer) associated with the VerbNet frame were mapped to the equivalent phrase in the acquired argument structure of the Bengali sentence. The phrase (\u09b0\u09be\u09c7\u09b6\u09a6) is now considered a candidate emotion holder. Additionally, the sentential complement portion is also passed through the syntactic model to obtain any implicit emotion holders. The case markers in Bengali are required to identify the emotion holders, as they give useful hints for capturing the selectional restrictions that play a key role in distinguishing the emotion holders from other valid alternatives.", "cite_spans": [ { "start": 547, "end": 593, "text": "(Experiencer, Agent, Actor, Beneficiary, etc.)", "ref_id": null }, { "start": 604, "end": 631, "text": "(Topic, Theme, Event, etc.)", "ref_id": null }, { "start": 1009, "end": 1032, "text": "(Banerjee et al., 2009)", "ref_id": "BIBREF89" } ], "ref_spans": [], "eq_spans": [], "section": "English Equivalent Frame Identification:", "sec_num": null }, { "text": "\u09b0\u09be\u09c7\u09b6\u09a6", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "English Equivalent Frame Identification:", "sec_num": null }, { "text": "Emotion/Affect Words (EW): The presence of a word in the Bengali WordNet Affect lists (Das & Bandyopadhyay, 2010b) identifies the emotion/affect words. 
The tagged affect words are used as both lexical and semantic features when handling the emotional expressions.", "cite_spans": [ { "start": 86, "end": 114, "text": "(Das & Bandyopadhyay, 2010b)", "ref_id": "BIBREF96" } ], "ref_spans": [], "eq_spans": [], "section": "Semantic Features", "sec_num": "4.3" }, { "text": "The Bengali SentiWordNet was developed by replacing each word entry in the synonymous set of the English SentiWordNet (Esuli & Sebastiani, 2006) with its possible Bengali synsets using the English to Bengali bilingual dictionary that was developed as part of the EILMT project 8 . The chunks containing JJ (adjective) and RB (adverb) tagged elements were considered to be intensifiers. If an intensifier was found in the SentiWordNet, then its positive and negative scores were retrieved from the SentiWordNet. Each intensifier is classified into the positive (INTFpos) or negative (INTFneg) list, depending on which average retrieved score is higher. The intensifiers play an important role in identifying the lexical association among the component words of an emotional expression and linking the emotion components based on their POS (Das & Bandyopadhyay, 2010a) , and have been considered as semantic features for the emotional expressions.", "cite_spans": [ { "start": 118, "end": 144, "text": "(Esuli & Sebastiani, 2006)", "ref_id": "BIBREF102" }, { "start": 866, "end": 894, "text": "(Das & Bandyopadhyay, 2010a)", "ref_id": "BIBREF95" } ], "ref_spans": [], "eq_spans": [], "section": "Intensifiers (INTF):", "sec_num": null }, { "text": "The present task acquires the rhetorical components, such as locus, nucleus, and satellite (Mann & Thompson, 1988) , from a sentence, as these rhetorical clues help in identifying the individual topic spans. The part of the text span containing an annotated emotional expression is considered the locus. 
Primarily, the separation of nucleus from satellite is done based on the punctuation marks (,), (!), (?). Frequently used discourse markers (\u09c7\u09af\u09c7\u09b9\u09a4\u09c1 jehetu 'as,' \u09c7\u09af\u09ae\u09a8 jemon 'e.g.,' \u0995\u09be\u09b0\u09a3 karon 'because,' \u09ae\u09be\u09c7\u09a8 mane 'means' ) and causal verbs (\u0998\u099f\u09be\u09df ghotay 'caused') also act as useful clues if they are explicitly specified in the text. Stoyanov and Cardie (2008a) mentioned that the topic depends on the context in which its associated emotional expression occurs. If any word of an emotional expression co-occurs with any word element of the nucleus or satellite in the same shallow chunk, the feature is considered a common rhetoric similarity. Otherwise, the feature is considered a distinctive rhetoric similarity. The features aim to separate emotion topics from non-emotion topics as well as the individual topic from an overlapped topic region.", "cite_spans": [ { "start": 91, "end": 114, "text": "(Mann & Thompson, 1988)", "ref_id": "BIBREF111" }, { "start": 636, "end": 663, "text": "Stoyanov and Cardie (2008a)", "ref_id": "BIBREF120" } ], "ref_spans": [], "eq_spans": [], "section": "Rhetoric Features", "sec_num": "4.4" }, { "text": "Word Overlap: This feature is true if any two topic spans contain a common word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overlapping Features", "sec_num": "4.5" }, { "text": "Part-of-Speech Overlap: The verb, noun, adjective, and adverb are considered as overlapping informative constituents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overlapping Features", "sec_num": "4.5" }, { "text": "NP Co-reference: This binary feature is True if the two chunks contain NPs that are determined to be co-referent by applying a rule of common rhetoric similarity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overlapping Features", "sec_num": "4.5" }, { "text": "considered based on a 
threshold of 0.5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overlapping Features", "sec_num": "4.5" }, { "text": "The nominal alpha metric produced an \u03b1 score of 0.53 between the annotated and system-generated data. Generally, the alpha (\u03b1) score aims to probabilistically capture the agreement of annotated data and separate it from chance agreement. The baseline score achieved for the overall agreement was 0.53, which is below the generally accepted level, while \u03b1 for the supervised system was 0.63, which is moderately acceptable and reliable. The scores of \u03b1 for the baseline system and supervised system, along with some important features, their combinations, and pruning steps, are shown in Table 1 . The \u03b1 score loses its probabilistic interpretation due to the way it is adapted to the problem of co-reference classification. It is observed that the score of \u03b1 increased rapidly when the syntactic, rhetorical, and overlapping features were included. The overlapping features also cause problems because of the free phrase order of the Bengali language. The overlapping context of emotional expression and topic generates errors. Nevertheless, the application of Named Entities (NEs) reduces the problem of distinguishing holder and topic. ", "cite_spans": [], "ref_spans": [ { "start": 593, "end": 600, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Overlapping Features", "sec_num": "4.5" }, { "text": "The error analysis was conducted on the development set of 630 sentences. We incorporated different rule-based post-processing techniques for handling the error cases, and the system achieved an alpha score of 0.67. Four types of error cases were identified, and four different rules were proposed to reduce them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "6."
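The agreement measure used throughout the evaluation can be sketched as follows: a minimal implementation of Krippendorff's alpha for nominal data, assuming exactly two annotators and no missing values (the function name and the data layout are assumptions for this sketch, not the authors' evaluation code).

```python
from collections import Counter
from itertools import permutations

def nominal_alpha(coder1, coder2):
    """Krippendorff's alpha, nominal level, two coders, no missing values."""
    o = Counter()                        # coincidence matrix o[(c, k)]
    for a, b in zip(coder1, coder2):
        o[(a, b)] += 1
        o[(b, a)] += 1
    n_c = Counter()                      # marginal frequency of each value
    for (a, _), count in o.items():
        n_c[a] += count
    n = sum(n_c.values())                # = 2 * number of units
    d_o = sum(count for (a, b), count in o.items() if a != b) / n
    # Expected disagreement; undefined (division by zero) if only one value occurs.
    d_e = sum(n_c[a] * n_c[b] for a, b in permutations(n_c, 2)) / (n * (n - 1))
    return 1 - d_o / d_e

print(nominal_alpha(["a", "a", "b", "b"], ["a", "a", "b", "a"]))
```

Perfect agreement yields alpha = 1, chance-level agreement yields alpha near 0, and systematic disagreement can go negative, which matches the interpretation of the 0.53 and 0.63 scores reported above.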
}, { "text": "Case 1: Appositive Use: The implicit emotion holders may be present in a sentence. (e.g.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "6." }, { "text": "\u09b0\u09be\u09ae 'Ram' in the case of \u09b0\u09be\u09c7\u09ae\u09b0 \u09b8\u09c1 \u0996 'Ram's pleasure'). The identification of the emotion holder at the sentence level requires knowledge of two basic constraints (implicit and explicit) separately. The explicit constraints identify the single prominent emotion holder that is directly involved with the emotional expression, whereas the implicit constraints identify all direct and indirect nested sources as emotion holders. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "6." }, { "text": "We considered the suffixes that are determined from the shallow parsed phrases to identify the appositive cases. In the above example, the appositive case (e.g. \u09b0\u09be\u09c7\u09ae\u09b0 \u09b8\u09c1 \u0996 (Ram's pleasure)) is also identified and placed in the vector by removing the inflectional suffix (-e\u09b0 -er in this case). Sometimes, the vibhakti and tam information also play effective roles in identifying emotion holders.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Solution:", "sec_num": null }, { "text": "Case 2: Anaphoric Presence of Holders: Another similar problem is identified in the above example. The emotion holders are sometimes referred to via anaphors. Sometimes, the candidate anaphors are linked with the emotional expressions instead of the actual emotion holders. 
The actual emotion holder \u09c7\u0997\u09a6\u09c1 \u099a\u09be\u099a\u09be 'Gedu ChaCha' expresses the emotion in a clause that is represented by the anaphor \u0986\u09bf\u09ae ami 'I' in another clause.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Solution:", "sec_num": null }, { "text": "The sentences of user comments in the adopted blog corpus contain a special default phrasal pattern that helps in identifying the emotion holders ([ ] e.g. \u09c7\u0997\u09a6\u09c1 \u099a\u09be\u099a\u09be \u09ac\u09c7\u09b2: (Gedu ChaCha bole), \u09b0\u09be\u09c7\u09b6\u09a6 \u09ac\u09c7\u09b2\u09c7\u099b\u09a8 (Rashed bolechen), and \u09b8\u09be\u09df\u09a8 \u09ac\u09c7\u09b2\u09c7\u099b (Sayan bolechhe)).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Solution:", "sec_num": null }, { "text": "Hence, if a pronoun is present with an emotional expression, the preceding Named Entities of such a default phrasal pattern are considered as the emotion holders.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Solution:", "sec_num": null }, { "text": "The complex or compound sentences contain more than one clause, and each of the clauses may contain individual emotional expressions. The holders and topics associated with the emotional expressions in all of the clauses require a fine-grained study of the sentential structures. The following example shows that two emotional expressions (\u09a6\u09c1 \u0983\u0996 dookkha 'sorrow' and \u0986\u09a8n ananda 'happy') contain two different holders and topics: Though Gedu Chacha has sorrow, Chachi lives happily with all. Solution: As the complex or compound sentences contain more than one clause and each of the clauses contains individual emotional expressions, we consider the sentential rhetorical structure. 
Instead of identifying rhetorical relations (Mann & Thompson, 1988) , the present task acquires the rhetorical components, such as locus, nucleus, and satellite from a sentence, as these rhetoric clues help in identifying the individual holder and topics associated in each clause of the sentence. The part of the text span containing the emotional expression is considered as locus. Primarily, the separation of nucleus from satellite is done based on the punctuation marks (,), (!), (?). Frequently used discourse markers (\u09c7\u09af\u09c7\u09b9\u09a4\u09c1 jehetu 'as,' \u09c7\u09af\u09ae\u09a8 jemon 'e.g.,' \u0995\u09be\u09b0\u09a3 karon 'because,' \u09ae\u09be\u09c7\u09a8 mane 'means') and causal verbs (\u0998\u099f\u09be\u09df ghotay 'caused') also are useful clues if they are explicitly specified in the text and present in a manually prepared seed list. If any word in the emotional expression co-occurs with any word element of the nucleus or satellite in the same chunk, the feature is considered a common rhetoric similarity.", "cite_spans": [ { "start": 702, "end": 725, "text": "(Mann & Thompson, 1988)", "ref_id": "BIBREF111" } ], "ref_spans": [], "eq_spans": [], "section": "Case 3: Multiple Holders and Topics:", "sec_num": null }, { "text": "Otherwise, the feature is considered a distinctive rhetoric similarity. The chunks identified by the syntactic system as the holder and topic and tagged as common rhetoric similarity are only considered for each of the clauses of a sentence. For this reason, all possible holders and topics associated to all of the clauses of a sentence are identified by the syntactic system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Case 3: Multiple Holders and Topics:", "sec_num": null }, { "text": "Overlapping Topic Spans: It is observed that the emotion topics containing single word tokens are identified more easily than multi word topics. 
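The rhetorical segmentation described above (punctuation-based nucleus/satellite splitting, backed by a seed list of discourse markers) can be sketched as follows. The function name, the fallback logic, and the transliterated marker list are assumptions taken from the examples in the text, not the authors' actual implementation.

```python
import re

# Transliterated discourse markers from the text: 'as', 'e.g.', 'because', 'means'.
DISCOURSE_MARKERS = {"jehetu", "jemon", "karon", "mane"}

def split_nucleus_satellite(sentence):
    """Split on , ! ? first; fall back to a discourse-marker boundary."""
    parts = [p.strip() for p in re.split(r"[,!?]", sentence) if p.strip()]
    if len(parts) > 1:
        return parts[0], parts[1:]          # nucleus, satellite spans
    tokens = sentence.split()
    for i, token in enumerate(tokens):
        if token in DISCOURSE_MARKERS and i > 0:
            return " ".join(tokens[:i]), [" ".join(tokens[i:])]
    return sentence.strip(), []             # no boundary found

nucleus, satellites = split_nucleus_satellite("ami khushi, karon gan bhalo")
```

In the described system, each resulting span would then be checked for chunk-level co-occurrence with the emotional expression to assign the common vs. distinctive rhetoric similarity feature.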
Sometimes, the emotion-related topics coexist with other potential non-emotional topics. As the topics may consist of multi-word strings, the text spans denoting the topic spans create problems in identifying the emotion topic span among other non-emotional topic spans. In the following example, the emotional expression \u0986\u09a8n ananda 'enjoy' is related to the topics \u0997\u09be\u09a8 gan 'song' and \u09bf\u099f\u09bf\u09ad TV 'television'. The baseline system additionally captures \u09aci boi 'book', which is a potential but non-emotion topic.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Case 4:", "sec_num": null }, { "text": "\u09a4\u09c1 \u09bf\u09ae \u09c7\u09a4\u09be \u09aci \u09aa\u09dc\u09c7\u09a4i \u09a8\u09be, e\u0996\u09a8 \u09c7\u09a6\u0996\u09bf\u099b \u09a4\u09c1 \u09bf\u09ae \u0997\u09be\u09a8, \u09bf\u099f\u09bf\u09ad \u09c7\u09a4o (tumi) (to) (boi) (portei) (na), (ekhon) (dekhchi) (tumi) (gan), (TV) (teo) \u0986\u09a8n \u09aa\u09beo\u09a8\u09be\u0964 (ananda) (paona)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Case 4:", "sec_num": null }, { "text": "You never used to read books; now we notice that you also don't enjoy song/television.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Case 4:", "sec_num": null }, { "text": "The topic of an opinion depends on the context in which its associated opinion expression occurs (Stoyanov & Cardie, 2008a) . The common rhetoric similarity feature helps the syntactic system by aiming to separate emotion topics from non-emotion topics and to separate the overlapping possibilities of discrete emotion topic spans from non-topical contiguous regions. 
If the identified topic chunks are tagged with common rhetoric similarity, the chunks are classified as emotional topics and separated from non-topical elements in a sentence. The improvements at some important steps by incorporating the rule based post-processing techniques are shown in Table 2 . It is observed that the simple rules have substantially reduced errors and have improved the performance of the system satisfactorily. The application of the post-processing techniques also achieves an alpha score of 0.6721 on the test set. ", "cite_spans": [ { "start": 97, "end": 123, "text": "(Stoyanov & Cardie, 2008a)", "ref_id": "BIBREF120" } ], "ref_spans": [ { "start": 657, "end": 664, "text": "Table 2", "ref_id": "TABREF33" } ], "eq_spans": [], "section": "Solution:", "sec_num": null }, { "text": "The automatic extraction of emotional expressions, sentential emotion holders, and topics from Bengali blog data is accomplished in the present task. The supervised implementation of the system shows improvement over the rule-based baseline because the rule-based system fails to capture the implicit textual clues whereas the supervised system captures the clues in terms of combined features. The evaluation of the co-reference using Krippendorff's alpha is helpful in diagnosing the importance of the three emotional components. The rule-based post-processing techniques for reducing the error cases have shown substantial improvement in the performance of the system. From the overall analysis, it is observed that the identification of emotional co-reference is helpful in identifying user-topic relations. The handling of metaphors and their impact in detecting sentence level emotion is not considered. Future analysis concerning the time based emotional change can be used for topic model representation. The need for co-reference requires that the presence of indirect affective clues can also be traced with the help of the holder and topic. 
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7." }, { "text": "The Taiwan Mandarin Spoken Wordlist has been publicly distributed and can be freely downloaded from the website http://mmc.sinica.edu.tw/resources_e_01.htm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://en.wikipedia.org/wiki/Wikipedia:Database_download 2 http://wiki.freebase.com/wiki/WEX 3 http://wordnet.princeton.edu/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://BOW.sinica.edu.tw/ 5 http://www.aclclp.org.tw/doc/bw_agr_e.PDF 6 http://terms.nict.gov.tw/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.ldc.upenn.edu/Catalog/CatalogEntry.jsp?catalogId=LDC2006T13", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.aclclp.org.tw/use_ced.php", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.aclclp.org.tw/doc/bw_agr_e.PDF", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Joseph Z.Chang et al.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Kitty Hawk, North Carolina, USA was the site for the world's first successful powered human flight by the Wright brothers. 
\"Kitty Hawk\" references generally meant a break-through success in its early stages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.internetworldstats.com/stats.htm 2 http://www.ethnologue.com/ethno_docs/distribution.asp?by=size 3 http://ltrc.iiit.ac.in/showfile.php?filename=downloads/shallow_parser.php", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "www.amarblog.com/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://chasen-org/~taku/software/yamcha/ 6 http://chasen.org/~taku/software/TinySVM/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://home.uchicago.edu/~cbs2/banglainstruction.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "English to Indian Languages Machine Translation (EILMT) is a TDIL project undertaken by the consortium of different premier institutes and sponsored by MCIT, Govt. of India.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The author is grateful to the useful comments provided by two anonymous reviewers of the International Journal of Computational Linguistics and Chinese Language Processing. The author also sincerely thanks to the team members who have been working on the corpus data along the years. The study presented in this article is funded by the National Digital Archives Project and the National Science Council under Grant NSC-100-2410-H-001-093.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null }, { "text": "that the system can also preserve the formatting of input Word documents across the translation. 
After the successful development of MANTRA-Rajyasabha, translation for language pairs such as Hindi-English and Hindi-Bengali has already started using the Mantra approach. The Mantra project is being developed under the supervision of Dr. Hemant Darbari and is funded by TDIL and the Department of Official Languages, Ministry of Home Affairs, Government of India.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "annex", "sec_num": null }, { "text": "Using the Universal Clause Structure Grammar (UCSG) formalism, the Computer and Information Sciences Department at the University of Hyderabad, under the supervision of Prof. K. Narayana Murthy, developed a domain-specific English-Kannada MT system (Durgesh et al., 2000; Sudip et al., 2005; Manning et al., 2003) . This UCSG-based system follows a transfer-based approach and has been applied to the translation of government circulars. The system works at the sentence level and requires post-editing. In the first step of translation, the source (English) sentence is analysed and parsed using the UCSG parser (developed by Dr. K. Narayana Murthy). Then, using translation rules, an English-Kannada bilingual dictionary, and a network-based Kannada Morphological Generator (developed by Dr. K. Narayana Murthy), the system translates into Kannada. This project has been funded by the Government of Karnataka, and work is ongoing to improve the performance of the system. 
Later, the same approach was applied to English-Telugu translation.", "cite_spans": [ { "start": 232, "end": 272, "text": "Kannada MT system (Durgesh et al., 2000;", "ref_id": null }, { "start": 273, "end": 292, "text": "Sudip et al., 2005;", "ref_id": null }, { "start": 293, "end": 314, "text": "Manning et al., 2003)", "ref_id": "BIBREF68" } ], "ref_spans": [], "eq_spans": [], "section": "UCSG-based English-Kannada MT by University of Hyderabad", "sec_num": "4.9" }, { "text": "Universal Networking Language (UNL) MT between English, Hindi, and Marathi is based on the Interlingua approach (Durgesh et al., 2000; Sudip et al., 2005; Manning et al., 2003) . Under the supervision of Prof. Pushpak Bhattacharya, IIT Bombay is the Indian participant in UNL, which is an international project of the United Nations University, aimed at developing an Interlingua for all major human languages in the world. In UNL-based MT, the knowledge of the SL is captured or converted into UNL form and reconverted from UNL to the TL, like Hindi and Marathi. The SL information is represented sentence by sentence, which is later converted into a hypergraph having concepts as nodes and relations as directed arcs (Shachi et al., 2002) . The document knowledge is expressed in three dimensions: word knowledge, conceptual knowledge, and attribute labels. Suffix database: Inflectional suffixes, derivative suffixes, plural markers, tense markers, sariyai, case suffixes, relative participle markers, verbal participle markers, etc. will be compiled. Morphological Analyzer: This is designed to analyze the constituents of the words. 
It will help to segment the words into stems and inflectional markers. Syntactic Analyzer: The syntactic analyzer will find the syntactic category of each word.", "cite_spans": [ { "start": 112, "end": 134, "text": "(Durgesh et al., 2000;", "ref_id": null }, { "start": 135, "end": 154, "text": "Sudip et al., 2005;", "ref_id": null }, { "start": 155, "end": 176, "text": "Manning et al., 2003)", "ref_id": "BIBREF68" }, { "start": 722, "end": 743, "text": "(Shachi et al., 2002)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "UNL-based MT between English, Hindi and Marathi by Indian Institute of Technology, Mumbai", "sec_num": "4.10" }, { "text": "Each of the sentences is passed through a Named Entity Recognizer (Ekbal & Bandyopadhyay, 2008) to identify the named entities in that sentence. If any word is tagged as a named entity (NE), a feature is assigned for either emotion holder or topic. If, however, the word is present in the satellite and not tagged with the emotion holder (EH) feature, the word is selected as a potential candidate for topic. This distinguishing feature is considered for identifying the holder and topic separately from an NE-overlapped context.", "cite_spans": [ { "start": 66, "end": 95, "text": "(Ekbal & Bandyopadhyay, 2008)", "ref_id": "BIBREF100" } ], "ref_spans": [], "eq_spans": [], "section": "Named Entity (NE):", "sec_num": null }, { "text": "Combining multiple features, as opposed to using a single feature, generally yields a reasonable performance enhancement in any classification system. The impact of different features and their combinations was measured on the development set of 630 sentences. Different unigram and bi-gram context features (word and POS-tag level) and their combinations were generated from the training corpus as well. 
We added each feature into the active feature list one at a time to see if its inclusion in the existing feature set improved the F-Score of the system on the development set. The final active feature set was applied to the test data. During the SVM-based training phase, the current token word with the three previous and three following words and their corresponding POS, along with negation or intensifier, were selected as context features for that word. We used Krippendorff's (2004) alpha (as discussed in Section 3) for measuring the performance of the system. The importance of incorporating the features was examined through Information Gain (InfoGain). All of the results were obtained by the 10-fold cross validation method.", "cite_spans": [ { "start": 890, "end": 911, "text": "Krippendorff's (2004)", "ref_id": "BIBREF110" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": "5." }, { "text": "This decision technique was used to measure the importance of a feature (X) with respect to the class attribute (Y). Formally, the information gain of a feature X with respect to a class attribute Y is the reduction in uncertainty about the value of Y when we know the value of X: InfoGain(Y; X) = Entropy(Y) - Entropy(Y | X), where X and Y are discrete variables taking values {x_1, x_2, ..., x_m} and {y_1, y_2, ..., y_n}, respectively. The Entropy(Y) is defined as: Entropy(Y) = -sum_{i=1..n} P(Y = y_i) log_2 P(Y = y_i). The conditional entropy of Y given X is defined as: Entropy(Y | X) = -sum_{j=1..m} P(X = x_j) sum_{i=1..n} P(Y = y_i | X = x_j) log_2 P(Y = y_i | X = x_j). The features with high Information Gain (InfoGain) reduce the uncertainty about a class to the minimum.
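The InfoGain criterion just defined can be computed directly from feature/class value pairs. The sketch below is illustrative (function names and the toy data are assumptions); the real system computes this per feature over the training corpus.

```python
from collections import Counter
from math import log2

def entropy(values):
    """Entropy of the empirical distribution of a list of discrete values."""
    n = len(values)
    return -sum(c / n * log2(c / n) for c in Counter(values).values())

def info_gain(xs, ys):
    """InfoGain(Y; X) = Entropy(Y) - Entropy(Y | X) for paired samples."""
    n = len(ys)
    conditional = 0.0
    for x in set(xs):
        subset = [y for xi, y in zip(xs, ys) if xi == x]   # P(Y | X = x)
        conditional += len(subset) / n * entropy(subset)
    return entropy(ys) - conditional

# A perfectly predictive binary feature recovers the full class entropy.
print(info_gain([0, 0, 1, 1], ["a", "a", "b", "b"]))
```

A feature that tells us nothing about the class yields an InfoGain of 0, which is why low-InfoGain features such as the causal verbs could be pruned.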
In our experiment on the development set, all of the features except the features for the causal verbs and distinctive rhetoric similarity achieved a high Information Gain (InfoGain).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Information Gain Based Pruning (IGBP):", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Some frequency based differences between spoken and written Swedish", "authors": [ { "first": "J", "middle": [], "last": "Allwood", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the XVIth Scandinavian Conference of Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Allwood, J. (1998). Some frequency based differences between spoken and written Swedish. In Proceedings of the XVIth Scandinavian Conference of Linguistics, Department of Linguistics, University of Turku.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "The HCRC Map Task Corpus", "authors": [ { "first": "A", "middle": [ "H" ], "last": "Anderson", "suffix": "" }, { "first": "M", "middle": [], "last": "Bader", "suffix": "" }, { "first": "E", "middle": [ "G" ], "last": "Bard", "suffix": "" }, { "first": "E", "middle": [], "last": "Boyle", "suffix": "" }, { "first": "G", "middle": [], "last": "Doherty", "suffix": "" }, { "first": "S", "middle": [], "last": "Garrod", "suffix": "" }, { "first": "S", "middle": [], "last": "Isard", "suffix": "" }, { "first": "J", "middle": [], "last": "Kowtko", "suffix": "" }, { "first": "J", "middle": [], "last": "Mcallister", "suffix": "" }, { "first": "J", "middle": [], "last": "Miller", "suffix": "" }, { "first": "C", "middle": [], "last": "Sotillo", "suffix": "" }, { "first": "H", "middle": [ "S" ], "last": "Thompson", "suffix": "" }, { "first": "R", "middle": [], "last": "Weiner", "suffix": "" 
} ], "year": 1991, "venue": "", "volume": "24", "issue": "", "pages": "351--366", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anderson, A. H., Bader, M., Bard, E. G., Boyle, E., Doherty, G., Garrod, S., Isard, S., Kowtko, J., McAllister, J., Miller, J., Sotillo, C., Thompson, H. S., & Weiner, R. (1991). The HCRC Map Task Corpus. Language and Speech, 24(4), 351-366.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Word Frequency Distribution", "authors": [ { "first": "R", "middle": [ "H" ], "last": "Baayan", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Baayan, R. H. (2001). Word Frequency Distribution. Kluwer Academic Publishers. Dordrecht/Boston/London.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Praat: doing phonetics by computer", "authors": [ { "first": "P", "middle": [], "last": "Boersma", "suffix": "" }, { "first": "D", "middle": [], "last": "Weenink", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Boersma, P. & Weenink, D. (2012). Praat: doing phonetics by computer. http://www.fon.hum.uva/praat 5.3.16.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A frequency count of 190,000 words in the London-Lund Corpus of English Conversation", "authors": [ { "first": "G", "middle": [ "D" ], "last": "Brown", "suffix": "" } ], "year": 1984, "venue": "Behavior Research Methods, Instruments, & Computers", "volume": "16", "issue": "6", "pages": "502--532", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brown, G. D. (1984). A frequency count of 190,000 words in the London-Lund Corpus of English Conversation. 
Behavior Research Methods, Instruments, & Computers, 16(6), 502-532.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A Spoken Grammar of Chinese", "authors": [ { "first": "Y", "middle": [ "R" ], "last": "Chao", "suffix": "" } ], "year": 1965, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chao, Y. R. (1965). A Spoken Grammar of Chinese. University of California Press.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "\"The SINICA CORPUS\": Design methodology for balanced corpora", "authors": [ { "first": "K.-J", "middle": [], "last": "Chen", "suffix": "" }, { "first": "C.-R", "middle": [], "last": "Huang", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the Eleventh Pacific Asia Conference on Language, Information and Computation", "volume": "", "issue": "", "pages": "167--176", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, K.-J. & Huang, C.-R. (1996). \"The SINICA CORPUS\": Design methodology for balanced corpora. In Proceedings of the Eleventh Pacific Asia Conference on Language, Information and Computation, 167-176.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "The Sinica Corpus 3.0. The Chinese Knowledge Information Processing Group -technical report 98-04", "authors": [ { "first": "", "middle": [], "last": "Ckip", "suffix": "" } ], "year": 1998, "venue": "Academia Sinica", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "CKIP. (1998). The Sinica Corpus 3.0. The Chinese Knowledge Information Processing Group -technical report 98-04. Academia Sinica. (In Chinese)", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Spoken corpus design. Literary and Linguistic Computing", "authors": [ { "first": "S", "middle": [], "last": "Crowdy", "suffix": "" } ], "year": 1993, "venue": "", "volume": "8", "issue": "", "pages": "259--265", "other_ids": {}, "num": null, "urls": [], "raw_text": "Crowdy, S. (1993).
Spoken corpus design. Literary and Linguistic Computing, 8(4), 259-265.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "The words and sounds of telephone conversations", "authors": [ { "first": "N", "middle": [], "last": "French", "suffix": "" }, { "first": "C", "middle": [ "W" ], "last": "Carter", "suffix": "" }, { "first": "W", "middle": [], "last": "Koenig", "suffix": "" } ], "year": 1930, "venue": "Bell System Tech Journal", "volume": "9", "issue": "", "pages": "290--324", "other_ids": {}, "num": null, "urls": [], "raw_text": "French, N., Carter, C. W., & Koenig, W. (1930). The words and sounds of telephone conversations. Bell System Tech Journal, 9, 290-324.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Handbook of Standards and Resources for Spoken Language Systems", "authors": [ { "first": "D", "middle": [], "last": "Gibbon", "suffix": "" }, { "first": "R", "middle": [], "last": "Moore", "suffix": "" }, { "first": "", "middle": [], "last": "Winski", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gibbon, D., Moore, R., & Winski, R. (Eds.) (1997). Handbook of Standards and Resources for Spoken Language Systems. Mouton de Gruyter.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Application of the word frequency concept to aphasia", "authors": [ { "first": "D", "middle": [], "last": "Howes", "suffix": "" } ], "year": 1964, "venue": "Ciba Foundation Symposium", "volume": "", "issue": "", "pages": "47--75", "other_ids": {}, "num": null, "urls": [], "raw_text": "Howes, D. (1964). Application of the word frequency concept to aphasia. In A. V. S. de Reuck and M. O'Connor, Disorders of Language (Ciba Foundation Symposium). 
London: Churchill, 47-75.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "A word count of spoken English", "authors": [ { "first": "D", "middle": [], "last": "Howes", "suffix": "" } ], "year": 1966, "venue": "Journal of Verbal Learning and Verbal Behavior", "volume": "5", "issue": "6", "pages": "572--606", "other_ids": {}, "num": null, "urls": [], "raw_text": "Howes, D. (1966). A word count of spoken English. Journal of Verbal Learning and Verbal Behavior, 5(6), 572-606.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "The use of spoken and written corpora in the teaching of language and linguistics", "authors": [ { "first": "G", "middle": [], "last": "Knowles", "suffix": "" } ], "year": 1990, "venue": "Literary and Linguistic Computing", "volume": "5", "issue": "1", "pages": "45--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "Knowles, G. (1990). The use of spoken and written corpora in the teaching of language and linguistics. Literary and Linguistic Computing, 5(1), 45-48.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Word Frequencies in Written and Spoken English -Based on the British National Corpus", "authors": [ { "first": "G", "middle": [], "last": "Leech", "suffix": "" }, { "first": "P", "middle": [], "last": "Rayson", "suffix": "" }, { "first": "A", "middle": [], "last": "Wilson", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Leech, G., Rayson, P., & Wilson, A. (2001). Word Frequencies in Written and Spoken English -Based on the British National Corpus. 
Pearson Education Limited.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "What constitutes a basic vocabulary for spoken communication?", "authors": [ { "first": "M", "middle": [], "last": "Mccarthy", "suffix": "" } ], "year": 1999, "venue": "Studies in English Language and Literature", "volume": "1", "issue": "", "pages": "233--249", "other_ids": {}, "num": null, "urls": [], "raw_text": "McCarthy, M. (1999). What constitutes a basic vocabulary for spoken communication? Studies in English Language and Literature, 1, 233-249.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "The American National Corpus: Overall Goals and the First Release", "authors": [ { "first": "R", "middle": [], "last": "Reppen", "suffix": "" }, { "first": "N", "middle": [], "last": "Ide", "suffix": "" } ], "year": 2004, "venue": "Journal of English Linguistics", "volume": "32", "issue": "", "pages": "105--113", "other_ids": {}, "num": null, "urls": [], "raw_text": "Reppen, R. & Ide, N. (2004). The American National Corpus: Overall Goals and the First Release. Journal of English Linguistics, 32, 105-113.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Discourse Markers", "authors": [ { "first": "D", "middle": [], "last": "Schiffrin", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Schiffrin, D. (1988). Discourse Markers. Cambridge University Press.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Cross-language information access to multilingual collections on the internet", "authors": [ { "first": "G.-W", "middle": [], "last": "Bian", "suffix": "" }, { "first": "H.-H", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2000, "venue": "Journal of the American Society for Information Science", "volume": "51", "issue": "3", "pages": "281--296", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bian, G.-W., & Chen, H.-H. (2000).
Cross-language information access to multilingual collections on the internet. Journal of the American Society for Information Science, 51(3), 281-296.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Base noun phrase translation using web data and the em algorithm", "authors": [ { "first": "Y", "middle": [], "last": "Cao", "suffix": "" }, { "first": "H", "middle": [], "last": "Li", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 19th international conference on computational linguistics", "volume": "1", "issue": "", "pages": "1--7", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cao, Y., & Li, H. (2002). Base noun phrase translation using web data and the em algorithm. In Proceedings of the 19th international conference on computational linguistics, volume 1, 1-7.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Learning to find translations and transliterations on the web", "authors": [ { "first": "J", "middle": [ "Z" ], "last": "Chang", "suffix": "" }, { "first": "J", "middle": [ "S" ], "last": "Chang", "suffix": "" }, { "first": "R", "middle": [ "J" ], "last": "Jang", "suffix": "" }, { "first": ".-S", "middle": [], "last": "", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 50th annual meeting of the association for computational linguistics", "volume": "2", "issue": "", "pages": "130--134", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chang, J. Z., Chang, J. S., & Jang, R. J.-S. (2012). Learning to find translations and transliterations on the web. 
In Proceedings of the 50th annual meeting of the association for computational linguistics, volume 2, 130-134.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Translating unknown queries with web corpora for cross-language information retrieval", "authors": [ { "first": "P.-J", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "J.-W", "middle": [], "last": "Teng", "suffix": "" }, { "first": "R.-C", "middle": [], "last": "Chen", "suffix": "" }, { "first": "J.-H", "middle": [], "last": "Wang", "suffix": "" }, { "first": "W.-H", "middle": [], "last": "Lu", "suffix": "" }, { "first": "L.-F", "middle": [], "last": "Chien", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 27th annual international acm sigir conference on research and development in information retrieval", "volume": "", "issue": "", "pages": "146--153", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cheng, P.-J., Teng, J.-W., Chen, R.-C., Wang, J.-H., Lu, W.-H., & Chien, L.-F. (2004). Translating unknown queries with web corpora for cross-language information retrieval. In Proceedings of the 27th annual international acm sigir conference on research and development in information retrieval, 146-153.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Wordnet: An electronic lexical database", "authors": [ { "first": "C", "middle": [], "last": "Fellbaum", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fellbaum, C. (1998). Wordnet: An electronic lexical database. 
MIT Press.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Identifying word correspondence in parallel texts", "authors": [ { "first": "W", "middle": [ "A" ], "last": "Gale", "suffix": "" }, { "first": "K", "middle": [ "W" ], "last": "Church", "suffix": "" } ], "year": 1991, "venue": "Proceedings of the workshop on speech and natural language", "volume": "", "issue": "", "pages": "152--157", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gale, W. A., & Church, K. W. (1991). Identifying word correspondence in parallel texts. In Proceedings of the workshop on speech and natural language, 152-157.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Internet encyclopaedias go head to head", "authors": [ { "first": "J", "middle": [], "last": "Giles", "suffix": "" } ], "year": 2005, "venue": "Freebase data dumps", "volume": "438", "issue": "", "pages": "900--901", "other_ids": {}, "num": null, "urls": [], "raw_text": "Giles, J. (2005). Internet encyclopaedias go head to head. Nature, 438(7070), 900-901. Google. (2010). Freebase data dumps (August 16th, 2010 ed.).", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Systems and methods for using anchor text as parallel corpora for cross-language information retrieval", "authors": [ { "first": "L", "middle": [], "last": "Gravano", "suffix": "" }, { "first": "M", "middle": [ "H" ], "last": "Henzinger", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gravano, L., & Henzinger, M. H. (2006). Systems and methods for using anchor text as parallel corpora for cross-language information retrieval (No. 
7146358).", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Sinica bow: integrating bilingual wordnet and sumo ontology", "authors": [ { "first": "C.-R", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2003, "venue": "Proceedings of 2003 International Conference on Natural Language Processing and Knowledge Engineering", "volume": "", "issue": "", "pages": "825--826", "other_ids": {}, "num": null, "urls": [], "raw_text": "Huang, C.-R. (2003). Sinica bow: integrating bilingual wordnet and sumo ontology. In Proceedings of 2003 International Conference on Natural Language Processing and Knowledge Engineering, 825-826.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Automatic extraction of named entity translingual equivalence based on multi-feature cost minimization", "authors": [ { "first": "F", "middle": [], "last": "Huang", "suffix": "" }, { "first": "S", "middle": [], "last": "Vogel", "suffix": "" }, { "first": "A", "middle": [], "last": "Waibel", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the acl 2003 workshop on multilingual and mixed-language named entity recognition", "volume": "15", "issue": "", "pages": "9--16", "other_ids": {}, "num": null, "urls": [], "raw_text": "Huang, F., Vogel, S., & Waibel, A. (2003). Automatic extraction of named entity translingual equivalence based on multi-feature cost minimization. In Proceedings of the acl 2003 workshop on multilingual and mixed-language named entity recognition, 15, 9-16.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Machine transliteration. Computational Linguistics", "authors": [ { "first": "K", "middle": [], "last": "Knight", "suffix": "" }, { "first": "J", "middle": [], "last": "Graehl", "suffix": "" } ], "year": 1998, "venue": "", "volume": "24", "issue": "", "pages": "599--612", "other_ids": {}, "num": null, "urls": [], "raw_text": "Knight, K., & Graehl, J. (1998). Machine transliteration. 
Computational Linguistics, 24(4), 599-612.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Feature-rich statistical translation of noun phrases", "authors": [ { "first": "P", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "K", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 41st annual meeting on association for computational linguistics", "volume": "1", "issue": "", "pages": "311--318", "other_ids": {}, "num": null, "urls": [], "raw_text": "Koehn, P., & Knight, K. (2003). Feature-rich statistical translation of noun phrases. In Proceedings of the 41st annual meeting on association for computational linguistics, volume 1, 311-318.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "An algorithm for finding noun phrase correspondences in bilingual corpora", "authors": [ { "first": "J", "middle": [], "last": "Kupiec", "suffix": "" } ], "year": 1993, "venue": "Proceedings of the 31st annual meeting on association for computational linguistics", "volume": "", "issue": "", "pages": "17--22", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kupiec, J. (1993). An algorithm for finding noun phrase correspondences in bilingual corpora. 
In Proceedings of the 31st annual meeting on association for computational linguistics, 17-22.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Chinet: a chinese name finder system for document triage", "authors": [ { "first": "K", "middle": [], "last": "Kwok", "suffix": "" }, { "first": "P", "middle": [], "last": "Deng", "suffix": "" }, { "first": "N", "middle": [], "last": "Dinstl", "suffix": "" }, { "first": "H", "middle": [], "last": "Sun", "suffix": "" }, { "first": "W", "middle": [], "last": "Xu", "suffix": "" }, { "first": "P", "middle": [], "last": "Peng", "suffix": "" }, { "first": "J", "middle": [], "last": "Doyon", "suffix": "" } ], "year": 2005, "venue": "Proceedings of 2005 international conference on intelligence analysis", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kwok, K., Deng, P., Dinstl, N., Sun, H., Xu, W., Peng, P., & Doyon., J. (2005). Chinet: a chinese name finder system for document triage. In Proceedings of 2005 international conference on intelligence analysis.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", "authors": [ { "first": "J", "middle": [ "D" ], "last": "Lafferty", "suffix": "" }, { "first": "A", "middle": [], "last": "Mccallum", "suffix": "" }, { "first": "F", "middle": [ "C N" ], "last": "Pereira", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the eighteenth international conference on machine learning", "volume": "", "issue": "", "pages": "282--289", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lafferty, J. D., McCallum, A., & Pereira, F. C. N. (2001). Conditional random fields: Probabilistic models for segmenting and labeling sequence data. 
In Proceedings of the eighteenth international conference on machine learning, 282-289.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Translating chinese romanized name into Chinese idiographic characters via corpus and web validation", "authors": [ { "first": "Y", "middle": [], "last": "Li", "suffix": "" }, { "first": "G", "middle": [], "last": "Grefenstette", "suffix": "" } ], "year": 2005, "venue": "Proceedings of coria 2005", "volume": "", "issue": "", "pages": "323--338", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li, Y., & Grefenstette, G. (2005). Translating chinese romanized name into Chinese idiographic characters via corpus and web validation. In Proceedings of coria 2005, 323-338.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Mining parenthetical translations from the web by word alignment", "authors": [ { "first": "D", "middle": [], "last": "Lin", "suffix": "" }, { "first": "S", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "B", "middle": [], "last": "Van Durme", "suffix": "" }, { "first": "M", "middle": [], "last": "Pa\u015fca", "suffix": "" } ], "year": 2008, "venue": "Proceedings of acl-08: Hlt, 994-1002. Learning to Find Translations and Transliterations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lin, D., Zhao, S., Van Durme, B., & Pa\u015fca, M. (2008). Mining parenthetical translations from the web by word alignment. In Proceedings of acl-08: Hlt, 994-1002. Learning to Find Translations and Transliterations 45", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Anchor text mining for translation of web queries: A transitive translation approach", "authors": [ { "first": "W.-H", "middle": [], "last": "Lu", "suffix": "" }, { "first": "L.-F", "middle": [], "last": "Chien", "suffix": "" }, { "first": "H.-J", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2004, "venue": "ACM Trans. Inf. 
Syst", "volume": "22", "issue": "2", "pages": "242--269", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lu, W.-H., Chien, L.-F., & Lee, H.-J. (2004). Anchor text mining for translation of web queries: A transitive translation approach. ACM Trans. Inf. Syst., 22(2), 242-269.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Models of translational equivalence among words", "authors": [ { "first": "I", "middle": [ "D" ], "last": "Melamed", "suffix": "" } ], "year": 2000, "venue": "Computational Linguistics", "volume": "26", "issue": "2", "pages": "221--249", "other_ids": {}, "num": null, "urls": [], "raw_text": "Melamed, I. D. (2000). Models of translational equivalence among words. Computational Linguistics, 26(2), 221-249.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Using the web as a bilingual dictionary", "authors": [ { "first": "M", "middle": [], "last": "Nagata", "suffix": "" }, { "first": "T", "middle": [], "last": "Saito", "suffix": "" }, { "first": "K", "middle": [], "last": "Suzuki", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the workshop on data-driven methods in machine translation", "volume": "14", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nagata, M., Saito, T., & Suzuki, K. (2001). Using the web as a bilingual dictionary. 
In Proceedings of the workshop on data-driven methods in machine translation, volume 14, 1-8.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Finding ideographic representations of Japanese names written in latin script via language identification and corpus validation", "authors": [ { "first": "Y", "middle": [], "last": "Qu", "suffix": "" }, { "first": "G", "middle": [], "last": "Grefenstette", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 42nd annual meeting on association for computational linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qu, Y., & Grefenstette, G. (2004). Finding ideographic representations of Japanese names written in latin script via language identification and corpus validation. In Proceedings of the 42nd annual meeting on association for computational linguistics.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Translation and technology", "authors": [ { "first": "C", "middle": [ "K" ], "last": "Quah", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Quah, C. K. (2006). Translation and technology. Palgrave Macmillan.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Translating collocations for bilingual lexicons: a statistical approach", "authors": [ { "first": "F", "middle": [], "last": "Smadja", "suffix": "" }, { "first": "K", "middle": [ "R" ], "last": "Mckeown", "suffix": "" }, { "first": "V", "middle": [], "last": "Hatzivassiloglou", "suffix": "" } ], "year": 1996, "venue": "Computational Linguistics", "volume": "22", "issue": "1", "pages": "1--38", "other_ids": {}, "num": null, "urls": [], "raw_text": "Smadja, F., McKeown, K. R., & Hatzivassiloglou, V. (1996). Translating collocations for bilingual lexicons: a statistical approach. 
Computational Linguistics, 22(1), 1-38.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "A statistical method for finding word boundaries in Chinese text", "authors": [ { "first": "R", "middle": [ "W" ], "last": "Sproat", "suffix": "" }, { "first": "C", "middle": [], "last": "Shih", "suffix": "" } ], "year": 1990, "venue": "Computer Processing of Chinese and Oriental Languages", "volume": "4", "issue": "4", "pages": "336--351", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sproat, R. W., & Shih, C. (1990). A statistical method for finding word boundaries in Chinese text. Computer Processing of Chinese and Oriental Languages, 4(4), 336-351.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Learning source-target surface patterns for web-based terminology translation", "authors": [ { "first": "J.-C", "middle": [], "last": "Wu", "suffix": "" }, { "first": "T", "middle": [], "last": "Lin", "suffix": "" }, { "first": "J", "middle": [ "S" ], "last": "Chang", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the acl 2005 on interactive poster and demonstration sessions", "volume": "", "issue": "", "pages": "37--40", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wu, J.-C., Lin, T., & Chang, J. S. (2005). Learning source-target surface patterns for web-based terminology translation. 
In Proceedings of the acl 2005 on interactive poster and demonstration sessions, 37-40.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Mining translations of oov terms from the web through cross-lingual query expansion", "authors": [ { "first": "Y", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "F", "middle": [], "last": "Huang", "suffix": "" }, { "first": "S", "middle": [], "last": "Vogel", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 28th annual international acm sigir conference on research and development in information retrieval", "volume": "", "issue": "", "pages": "669--670", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhang, Y., Huang, F., & Vogel, S. (2005). Mining translations of oov terms from the web through cross-lingual query expansion. In Proceedings of the 28th annual international acm sigir conference on research and development in information retrieval, 669-670.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Language and Machines: Computers in Translation and Linguistics. A report by the Automatic Language Processing Advisory Committee", "authors": [ { "first": "Alpac", "middle": [], "last": "Reference", "suffix": "" } ], "year": 1966, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Reference ALPAC. (1966). Language and Machines: Computers in Translation and Linguistics. A report by the Automatic Language Processing Advisory Committee (Tech. Rep. No. 
Publication 1416), 2101 Constitution Avenue, Washington D.C., 20418 USA: National Academy of Sciences, National Research Council.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "A Hybrid Approach to EBMT for Indian Languages", "authors": [ { "first": "V", "middle": [], "last": "Ambati", "suffix": "" }, { "first": "U", "middle": [], "last": "Rohini", "suffix": "" } ], "year": 2007, "venue": "ICON", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ambati, V., & Rohini, U. (2007). A Hybrid Approach to EBMT for Indian Languages. ICON 2007.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Translation Resources, Services and Tools for Indian Languages", "authors": [ { "first": "S", "middle": [], "last": "Badodekar", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Badodekar, S. (2003). Translation Resources, Services and Tools for Indian Languages. Computer Science and Engineering Department, Indian Institute of Technology, Mumbai, 400019, India.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Multilingual Book Reader: Transliteration, Word-to-Word Translation and Full-text Translation", "authors": [ { "first": "P", "middle": [], "last": "Balajapally", "suffix": "" }, { "first": "P", "middle": [], "last": "Bandaru", "suffix": "" }, { "first": "M", "middle": [], "last": "Ganapathiraju", "suffix": "" }, { "first": "N", "middle": [], "last": "Balakrishnan", "suffix": "" }, { "first": "R", "middle": [], "last": "Reddy", "suffix": "" } ], "year": 2006, "venue": "VAVA", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Balajapally, P., Bandaru, P., Ganapathiraju, M., Balakrishnan, N., & Reddy, R. (2006). Multilingual Book Reader: Transliteration, Word-to-Word Translation and Full-text Translation. 
In VAVA 2006.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "ANUSAARAKA: Machine Translation in Stages", "authors": [ { "first": "A", "middle": [], "last": "Bharati", "suffix": "" }, { "first": "V", "middle": [], "last": "Chaitanya", "suffix": "" }, { "first": "A", "middle": [ "P" ], "last": "Kulkarni", "suffix": "" }, { "first": "R", "middle": [], "last": "Sangal", "suffix": "" } ], "year": 1997, "venue": "A Quarterly in Artificial Intelligence", "volume": "10", "issue": "3", "pages": "22--25", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bharati, A., Chaitanya, V., Kulkarni, A. P., & Sangal, R. (1997). ANUSAARAKA: Machine Translation in Stages. A Quarterly in Artificial Intelligence, 10(3), 22-25.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Towards a Pictorially Grounded Language for Machine-Aided Translation", "authors": [ { "first": "S", "middle": [ "K" ], "last": "Borgohain", "suffix": "" }, { "first": "S", "middle": [ "B" ], "last": "Nair", "suffix": "" } ], "year": 2010, "venue": "International Journal on Asian Language Processing", "volume": "20", "issue": "3", "pages": "87--109", "other_ids": {}, "num": null, "urls": [], "raw_text": "Borgohain, S. K., & Nair, S. B. (2010). Towards a Pictorially Grounded Language for Machine-Aided Translation. International Journal on Asian Language Processing, 20 (3), 87-109.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "CALTS in collaboration with, IIIT Hyderabad. English-Telugu T2T Machine Translation and Telugu-Tamil Machine translation System. Indo-German Workshop on Language technologies", "authors": [], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "CALTS in collaboration with, IIIT Hyderabad. English-Telugu T2T Machine Translation and Telugu-Tamil Machine translation System. Indo-German Workshop on Language technologies, AU-KBC Research Centre, Chennai, 2004 . 
www.au-kbc.org/dfki/igws/Machine_Translation.ppt.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "An English to Indian Sign Language Machine Translation System", "authors": [ { "first": "T", "middle": [], "last": "Dasgupta", "suffix": "" }, { "first": "A", "middle": [], "last": "Basu", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dasgupta, T., & Basu, A. (2008). An English to Indian Sign Language Machine Translation System, www.cse.iitd.ac.in/embedded/assistech/Proceedings/P17.pdf.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "Prototype Machine Translation System From Text-To-Indian Sign Language", "authors": [ { "first": "T", "middle": [], "last": "Dasgupta", "suffix": "" }, { "first": "S", "middle": [], "last": "Dandpat", "suffix": "" }, { "first": "A", "middle": [], "last": "Basu", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the IJCNLP-08 Workshop on NLP for Less Privileged Languages", "volume": "", "issue": "", "pages": "19--26", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dasgupta, T., Dandpat, S., & Basu, A. (2008). Prototype Machine Translation System From Text-To-Indian Sign Language. In Proceedings of the IJCNLP-08 Workshop on NLP for Less Privileged Languages, 19-26.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "Interlingua-based English-Hindi Machine Translation and Language Divergence", "authors": [ { "first": "S", "middle": [], "last": "Dave", "suffix": "" }, { "first": "J", "middle": [], "last": "Parikh", "suffix": "" }, { "first": "P", "middle": [], "last": "Bhattacharya", "suffix": "" } ], "year": 2001, "venue": "Journal of Machine Translation", "volume": "16", "issue": "4", "pages": "251--304", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dave, S., Parikh, J., & Bhattacharya, P. (2001). Interlingua-based English-Hindi Machine Translation and Language Divergence. 
Journal of Machine Translation, 16(4), 251-304.", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "Metis II: Example-based machine translation using monolingual corpora -system description", "authors": [ { "first": "P", "middle": [], "last": "Dirix", "suffix": "" }, { "first": "I", "middle": [], "last": "Schuurman", "suffix": "" }, { "first": "V", "middle": [], "last": "Vandeghinste", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 2nd Workshop on Example-Based Machine Translation", "volume": "", "issue": "", "pages": "43--50", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dirix, P., Schuurman, I., & Vandeghinste V. (2005). Metis II: Example-based machine translation using monolingual corpora -system description. In Proceedings of the 2nd Workshop on Example-Based Machine Translation, 43-50.", "links": null }, "BIBREF55": { "ref_id": "b55", "title": "Statistical post-editing on SYSTRAN's rule-based translation system", "authors": [ { "first": "L", "middle": [], "last": "Dugast", "suffix": "" }, { "first": "J", "middle": [], "last": "Senellart", "suffix": "" }, { "first": "P", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the Second Workshop on SMT", "volume": "", "issue": "", "pages": "220--223", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dugast, L., Senellart, J., & Koehn, P. (2007). Statistical post-editing on SYSTRAN's rule-based translation system. In Proceedings of the Second Workshop on SMT, 220-223.", "links": null }, "BIBREF56": { "ref_id": "b56", "title": "Machine Translation System in Indian Perspectives", "authors": [ { "first": "S", "middle": [ "K" ], "last": "Dwivedi", "suffix": "" }, { "first": "P", "middle": [ "P" ], "last": "Sukhadeve", "suffix": "" } ], "year": 2010, "venue": "Journal of Computer Science", "volume": "6", "issue": "10", "pages": "1111--1116", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dwivedi, S. K., & Sukhadeve, P. P. (2010). 
Machine Translation System in Indian Perspectives. Journal of Computer Science, 6(10), 1111-1116.", "links": null }, "BIBREF57": { "ref_id": "b57", "title": "Evaluation of Hindi to Punjabi Machine Translation System", "authors": [ { "first": "V", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "G", "middle": [ "S" ], "last": "Lehal", "suffix": "" } ], "year": 2009, "venue": "IJCSI International Journal of Computer Science", "volume": "4", "issue": "1", "pages": "36--39", "other_ids": {}, "num": null, "urls": [], "raw_text": "Goyal, V., & Lehal, G. S. (2009). Evaluation of Hindi to Punjabi Machine Translation System. IJCSI International Journal of Computer Science, 4(1), 36-39.", "links": null }, "BIBREF58": { "ref_id": "b58", "title": "Hybrid example-based SMT: the best of both worlds", "authors": [ { "first": "D", "middle": [], "last": "Groves", "suffix": "" }, { "first": "A", "middle": [], "last": "Way", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the ACL Workshop on Building and Using Parallel Texts", "volume": "", "issue": "", "pages": "183--190", "other_ids": {}, "num": null, "urls": [], "raw_text": "Groves, D. & Way, A. (2005). Hybrid example-based SMT: the best of both worlds. In Proceedings of the ACL Workshop on Building and Using Parallel Texts, 183-190.", "links": null }, "BIBREF59": { "ref_id": "b59", "title": "Phrase based English -Tamil Translation System by Concept Labeling using Translation Memory", "authors": [ { "first": "R", "middle": [], "last": "Harshawardhan", "suffix": "" }, { "first": "M", "middle": [ "S" ], "last": "Augustine", "suffix": "" }, { "first": "K", "middle": [ "P" ], "last": "Soman", "suffix": "" } ], "year": 2011, "venue": "International Journal of Computer Applications", "volume": "20", "issue": "3", "pages": "1--6", "other_ids": {}, "num": null, "urls": [], "raw_text": "Harshawardhan, R., Augustine, M. S., & Soman, K. P. (2011). 
Phrase based English -Tamil Translation System by Concept Labeling using Translation Memory. International Journal of Computer Applications (0975 -8887), 20(3), 1-6.", "links": null }, "BIBREF60": { "ref_id": "b60", "title": "The first MT patents", "authors": [ { "first": "J", "middle": [], "last": "Hutchins", "suffix": "" } ], "year": 1993, "venue": "MT News International", "volume": "", "issue": "", "pages": "14--15", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hutchins, J. (1993). The first MT patents. MT News International, 14-15.", "links": null }, "BIBREF61": { "ref_id": "b61", "title": "The history of machine translation in a nutshell", "authors": [ { "first": "J", "middle": [], "last": "Hutchins", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hutchins, J. (2005). The history of machine translation in a nutshell. http://www.hutchinsweb.me.uk/Nutshell-2005.pdf.", "links": null }, "BIBREF62": { "ref_id": "b62", "title": "Petr Petrovich Troyanskii (1854-1950): A forgotten pioneer of mechanical translation. Machine translation", "authors": [ { "first": "W", "middle": [ "J" ], "last": "Hutchins", "suffix": "" }, { "first": "E", "middle": [], "last": "Lovtskii", "suffix": "" } ], "year": 2000, "venue": "", "volume": "15", "issue": "", "pages": "187--221", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hutchins, W. J., & Lovtskii, E. (2000). Petr Petrovich Troyanskii (1854-1950): A forgotten pioneer of mechanical translation. Machine translation, 15(3), 187-221.", "links": null }, "BIBREF63": { "ref_id": "b63", "title": "IBM Archives online: Press release", "authors": [ { "first": "", "middle": [], "last": "Ibm", "suffix": "" } ], "year": 1954, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "IBM. (1954). 701 Translator. 
IBM Archives online: Press release January 8th 1954, http://www-03.ibm.com/ibm/history/exhibits/701/701-translator.html.", "links": null }, "BIBREF64": { "ref_id": "b64", "title": "Design and Development of an Adaptable Frame-based System for Dravidian Language", "authors": [ { "first": "S", "middle": [ "M" ], "last": "Idicula", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Idicula, S. M. (1999). Design and Development of an Adaptable Frame-based System for Dravidian Language. Ph.D. thesis, Department of Computer Science, Cochin University of Science and Technology.", "links": null }, "BIBREF65": { "ref_id": "b65", "title": "Machine Aided Translation Systems: The Indian Scenario", "authors": [ { "first": "A", "middle": [], "last": "Jain", "suffix": "" } ], "year": 2009, "venue": "", "volume": "2", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jain, A. (2009). Machine Aided Translation Systems: The Indian Scenario. 2(6), 2009. www.iitk.ac.in/infocell/Archive/dirnov2/techno_machine.html.", "links": null }, "BIBREF66": { "ref_id": "b66", "title": "Factored translation models", "authors": [ { "first": "P", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "H", "middle": [], "last": "Hoang", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", "volume": "", "issue": "", "pages": "868--876", "other_ids": {}, "num": null, "urls": [], "raw_text": "Koehn, P. & Hoang, H. (2007). Factored translation models. In Proceedings of the 2007 Joint Conference on Empirical Methods. 
In NLP and Computational Natural Language Learning, 868-876.", "links": null }, "BIBREF67": { "ref_id": "b67", "title": "Developing English-Urdu Machine Translation Via Hindi", "authors": [ { "first": "R", "middle": [], "last": "Mahesh", "suffix": "" }, { "first": "K", "middle": [], "last": "Sinha", "suffix": "" } ], "year": 2009, "venue": "Third Workshop on Computational Approaches to Arabic Scriptbased Languages (CAASL3)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mahesh, R., & Sinha, K. (2009). Developing English-Urdu Machine Translation Via Hindi. In Third Workshop on Computational Approaches to Arabic Scriptbased Languages (CAASL3), MT Summit XII, Ottawa, Canada.", "links": null }, "BIBREF68": { "ref_id": "b68", "title": "Foundations of Statistical NLP", "authors": [ { "first": "C", "middle": [], "last": "Manning", "suffix": "" }, { "first": "H", "middle": [], "last": "Schutze", "suffix": "" } ], "year": 2003, "venue": "Proceedings of HLT/NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manning, C., & Schutze, H. (2003). Foundations of Statistical NLP. Proceedings of HLT/NAACL.", "links": null }, "BIBREF69": { "ref_id": "b69", "title": "Sanskrit Karaka Analyzer for Machine Translation", "authors": [ { "first": "S", "middle": [ "K" ], "last": "Mishra", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mishra, S. K. (2007). Sanskrit Karaka Analyzer for Machine Translation. PhD. Thesis, Jawaharlal Nehru University.", "links": null }, "BIBREF70": { "ref_id": "b70", "title": "SMT using Joshua: An approach to build 'enTel' system. 
Language in India, Special Volume:Problems of Parsing in Indian Languages", "authors": [ { "first": "A", "middle": [], "last": "Nalluri", "suffix": "" }, { "first": "V", "middle": [], "last": "Kommaluri", "suffix": "" } ], "year": 2011, "venue": "", "volume": "11", "issue": "", "pages": "1--6", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nalluri, A., & Kommaluri, V. (2011). SMT using Joshua: An approach to build 'enTel' system. Language in India, Special Volume:Problems of Parsing in Indian Languages, 11(5), 1-6. www.languageinindia.com.", "links": null }, "BIBREF71": { "ref_id": "b71", "title": "Use of Machine Translation in India: Current Status", "authors": [ { "first": "S", "middle": [], "last": "Naskar", "suffix": "" }, { "first": "S", "middle": [], "last": "Bandyopadhyay", "suffix": "" } ], "year": 2005, "venue": "Proceedings of MT SUMMIT X", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Naskar, S., & Bandyopadhyay, S. (2005). Use of Machine Translation in India: Current Status. In Proceedings of MT SUMMIT X; September 13-15, 2005, Phuket, Thailand.", "links": null }, "BIBREF72": { "ref_id": "b72", "title": "Machine Translation -A Transfer Approach, A project report", "authors": [ { "first": "G", "middle": [], "last": "Noone", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Noone, G. (2003). 
Machine Translation -A Transfer Approach, A project report, www.scss.tcd.ie/undergraduate/bacsll/bacsll_web/nooneg0203.pdf.", "links": null }, "BIBREF73": { "ref_id": "b73", "title": "Towards hybrid quality-oriented machine translation on linguistics and probabilities in MT", "authors": [ { "first": "S", "middle": [], "last": "Oepen", "suffix": "" }, { "first": "E", "middle": [], "last": "Velldal", "suffix": "" }, { "first": "J", "middle": [ "T" ], "last": "L\u00f8nning", "suffix": "" }, { "first": "P", "middle": [], "last": "Meurer", "suffix": "" }, { "first": "V", "middle": [], "last": "Rosen", "suffix": "" }, { "first": "D", "middle": [], "last": "Flickinger", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 11th Conference on Theoretical and Methodological Issues in Machine Translation", "volume": "", "issue": "", "pages": "144--153", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oepen, S., Velldal, E., L\u00f8nning, J. T., Meurer, P., Rosen, V., & Flickinger, D. (2007). Towards hybrid quality-oriented machine translation on linguistics and probabilities in MT. In Proceedings of the 11th Conference on Theoretical and Methodological Issues in Machine Translation, 144-153.", "links": null }, "BIBREF74": { "ref_id": "b74", "title": "Rule-based Sentence Simplification for English to Tamil Machine Translation System", "authors": [ { "first": "C", "middle": [], "last": "Poornima", "suffix": "" }, { "first": "V", "middle": [], "last": "Dhanalakshmi", "suffix": "" }, { "first": "M", "middle": [ "A" ], "last": "Kumar", "suffix": "" }, { "first": "K", "middle": [ "P" ], "last": "Soman", "suffix": "" } ], "year": 2011, "venue": "International Journal of Computer Applications", "volume": "", "issue": "8", "pages": "38--42", "other_ids": {}, "num": null, "urls": [], "raw_text": "Poornima, C., Dhanalakshmi, V., Kumar M. A., & Soman, K. P. (2011). Rule-based Sentence Simplification for English to Tamil Machine Translation System. 
International Journal of Computer Applications (0975 -8887), 25(8), 38-42.", "links": null }, "BIBREF75": { "ref_id": "b75", "title": "Rule-based Reordering and Morphological Processing For English-Malayalam SMT", "authors": [ { "first": "C", "middle": [], "last": "Rahul", "suffix": "" }, { "first": "K", "middle": [], "last": "Dinunath", "suffix": "" }, { "first": "R", "middle": [], "last": "Ravindran", "suffix": "" }, { "first": "K", "middle": [ "P" ], "last": "Soman", "suffix": "" } ], "year": 2009, "venue": "International Conference on Advances in Computing, Control, and Telecommunication Technologies", "volume": "", "issue": "", "pages": "458--460", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rahul, C., Dinunath, K., Ravindran, R., & Soman, K. P. (2009). Rule-based Reordering and Morphological Processing For English-Malayalam SMT. International Conference on Advances in Computing, Control, and Telecommunication Technologies, 458-460.", "links": null }, "BIBREF76": { "ref_id": "b76", "title": "Simple Syntactic and Morphological Processing Can Help English-Hindi SMT", "authors": [ { "first": "A", "middle": [], "last": "Ramanathan", "suffix": "" }, { "first": "P", "middle": [], "last": "Bhattacharyya", "suffix": "" }, { "first": "J", "middle": [], "last": "Hegde", "suffix": "" }, { "first": "R", "middle": [ "M" ], "last": "Shah", "suffix": "" }, { "first": "M", "middle": [], "last": "Sasikumar", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ramanathan, A., Bhattacharyya, P., Hegde, J., Shah, R. M., & Sasikumar, M. (2008). Simple Syntactic and Morphological Processing Can Help English-Hindi SMT. 
In IJCNLP 2008.", "links": null }, "BIBREF77": { "ref_id": "b77", "title": "Machine Translation in India: A Brief Survey", "authors": [ { "first": "M", "middle": [ "D" ], "last": "Rao", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rao, M. D. (2000). Machine Translation in India: A Brief Survey. www.elda.org/en/proj/scalla/SCALLA2001/SCALLA2001Rao.pdf.", "links": null }, "BIBREF78": { "ref_id": "b78", "title": "An Interactive Approach to Development of English-Tamil Machine Translation System on the Web", "authors": [ { "first": "V", "middle": [], "last": "Renganathan", "suffix": "" } ], "year": 2002, "venue": "Tamil Internet", "volume": "", "issue": "", "pages": "68--73", "other_ids": {}, "num": null, "urls": [], "raw_text": "Renganathan, V. (2002). An Interactive Approach to Development of English-Tamil Machine Translation System on the Web. Tamil Internet 2002, California, USA. 68-73. www.infitt.org/ti2002/hubs/ conference/papers.html.", "links": null }, "BIBREF79": { "ref_id": "b79", "title": "Rule-based translation with statistical phrase-based post-editing", "authors": [ { "first": "M", "middle": [], "last": "Simard", "suffix": "" }, { "first": "N", "middle": [], "last": "Ueffing", "suffix": "" }, { "first": "P", "middle": [], "last": "Isabelle", "suffix": "" }, { "first": "R", "middle": [], "last": "Kuhn", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the Second Workshop on SMT", "volume": "", "issue": "", "pages": "203--206", "other_ids": {}, "num": null, "urls": [], "raw_text": "Simard, M., Ueffing, N., Isabelle, P., & Kuhn, R. (2007). Rule-based translation with statistical phrase-based post-editing. 
In Proceedings of the Second Workshop on SMT, 203-206.", "links": null }, "BIBREF80": { "ref_id": "b80", "title": "Manipuri-English Bidirectional SMT Systems using Morphology and Dependency Relations", "authors": [ { "first": "T", "middle": [ "D" ], "last": "Singh", "suffix": "" }, { "first": "S", "middle": [], "last": "Bandyopadhyay", "suffix": "" } ], "year": 2010, "venue": "Proceedings of SSST-4, Fourth Workshop on Syntax and Structure in Statistical Translation", "volume": "", "issue": "", "pages": "83--91", "other_ids": {}, "num": null, "urls": [], "raw_text": "Singh, T. D., & Bandyopadhyay, S. (2010). Manipuri-English Bidirectional SMT Systems using Morphology and Dependency Relations. In Proceedings of SSST-4, Fourth Workshop on Syntax and Structure in Statistical Translation, 83-91, COLING 2010, Beijing.", "links": null }, "BIBREF81": { "ref_id": "b81", "title": "AnglaHindi: An English to Hindi Machine-Aided Translation System", "authors": [ { "first": "R", "middle": [ "M K" ], "last": "Sinha", "suffix": "" }, { "first": "A", "middle": [], "last": "Jain", "suffix": "" } ], "year": 2003, "venue": "MT Summit IX", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sinha, R. M. K. & Jain, A. (2003). AnglaHindi: An English to Hindi Machine-Aided Translation System. 
In MT Summit IX, New Orleans, Louisiana, USA, September, 2003.", "links": null }, "BIBREF82": { "ref_id": "b82", "title": "ANGLABHARTI: a multilingual machine aided translation project on translation from English to Indian languages", "authors": [ { "first": "R", "middle": [ "M K" ], "last": "Sinha", "suffix": "" }, { "first": "K", "middle": [], "last": "Sivaraman", "suffix": "" }, { "first": "A", "middle": [], "last": "Agrawal", "suffix": "" }, { "first": "R", "middle": [], "last": "Jain", "suffix": "" }, { "first": "R", "middle": [], "last": "Srivastava", "suffix": "" }, { "first": "A", "middle": [], "last": "Jain", "suffix": "" } ], "year": 1995, "venue": "IEEE International Conference on: Systems, Man and Cybernetics", "volume": "", "issue": "", "pages": "1609--1614", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sinha, R. M. K., Sivaraman, K., Agrawal, A., Jain, R., Srivastava, R. & Jain, A. (1995). ANGLABHARTI: a multilingual machine aided translation project on translation from English to Indian languages. IEEE International Conference on: Systems, Man and Cybernetics, 1995. Intelligent Systems for the 21st Century, 1609-1614.", "links": null }, "BIBREF83": { "ref_id": "b83", "title": "A Novel Approach for English to South Dravidian Language SMT System", "authors": [ { "first": "P", "middle": [], "last": "Unnikrishnan", "suffix": "" }, { "first": "P", "middle": [ "J" ], "last": "Antony", "suffix": "" }, { "first": "K", "middle": [ "P" ], "last": "Soman", "suffix": "" } ], "year": 2010, "venue": "International Journal on Computer Science and Engineering (IJCSE)", "volume": "", "issue": "08", "pages": "2749--2759", "other_ids": {}, "num": null, "urls": [], "raw_text": "Unnikrishnan, P., Antony, P. J., & Soman, K. P. (2010). A Novel Approach for English to South Dravidian Language SMT System. 
International Journal on Computer Science and Engineering (IJCSE), 02(08), 2749-2759.", "links": null }, "BIBREF84": { "ref_id": "b84", "title": "Vaasaanubaada Automatic Machine Translation Of Bilingual Bengali -Assamese News Texts", "authors": [ { "first": "K", "middle": [], "last": "Vijayanand", "suffix": "" }, { "first": "S", "middle": [], "last": "Choudhury", "suffix": "" }, { "first": "P", "middle": [], "last": "Ratna", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vijayanand, K., Choudhury, S., & Ratna, P. (2002). Vaasaanubaada Automatic Machine Translation Of Bilingual Bengali -Assamese News Texts. Language Engineering Conference, University of Hyderabad, India.", "links": null }, "BIBREF85": { "ref_id": "b85", "title": "Warren Weaver Memorandum", "authors": [ { "first": "W", "middle": [], "last": "Weaver", "suffix": "" } ], "year": 1949, "venue": "MT News International", "volume": "", "issue": "22", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Weaver, W. (1999). Warren Weaver Memorandum, July 1949. MT News International, no. 22, July 1999, 5-6, 15.", "links": null }, "BIBREF86": { "ref_id": "b86", "title": "A SMT Approach to Sinhala-Tamil Language Translation", "authors": [ { "first": "R", "middle": [], "last": "Weerasinghe", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Weerasinghe, R. (2011). A SMT Approach to Sinhala-Tamil Language Translation. citeseerx.ist.psu.edu /viewdoc/summary?doi= 10.1.1.78.7481, 2011.", "links": null }, "BIBREF87": { "ref_id": "b87", "title": "Chinese-English SMT by Parsing", "authors": [ { "first": "Y", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhang, Y. (2006). Chinese-English SMT by Parsing. 
www.cl.cam.ac.uk/~yz360/ mscthesis.pdf.", "links": null }, "BIBREF88": { "ref_id": "b88", "title": "Emotions from text: machine learning for text-based emotion prediction", "authors": [ { "first": "C", "middle": [ "O" ], "last": "Alm", "suffix": "" }, { "first": "D", "middle": [], "last": "Roth", "suffix": "" }, { "first": "R", "middle": [], "last": "Sproat", "suffix": "" } ], "year": 2005, "venue": "HLT-EMNLP", "volume": "", "issue": "", "pages": "579--586", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alm, C. O., Roth, D., & Sproat, R. (2005). Emotions from text: machine learning for text-based emotion prediction. HLT-EMNLP, 579-586.", "links": null }, "BIBREF89": { "ref_id": "b89", "title": "Bengali Verb Subcategorization Frame Acquisition -A Baseline Model", "authors": [ { "first": "S", "middle": [], "last": "Banerjee", "suffix": "" }, { "first": "D", "middle": [], "last": "Das", "suffix": "" }, { "first": "S", "middle": [], "last": "Bandyopadhyay", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "76--83", "other_ids": {}, "num": null, "urls": [], "raw_text": "Banerjee, S., Das, D., & Bandyopadhyay, S. (2009). Bengali Verb Subcategorization Frame Acquisition -A Baseline Model. ACL-IJCNLP-2009, ALR-7 Workshop, 76-83.", "links": null }, "BIBREF90": { "ref_id": "b90", "title": "Classification of Verbs -Towards Developing a Bengali Verb Subcategorization Lexicon. GWC", "authors": [ { "first": "S", "middle": [], "last": "Banerjee", "suffix": "" }, { "first": "D", "middle": [], "last": "Das", "suffix": "" }, { "first": "S", "middle": [], "last": "Bandyopadhyay", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "76--83", "other_ids": {}, "num": null, "urls": [], "raw_text": "Banerjee, S., Das, D., & Bandyopadhyay, S. (2010). Classification of Verbs -Towards Developing a Bengali Verb Subcategorization Lexicon. 
GWC, 76-83.", "links": null }, "BIBREF91": { "ref_id": "b91", "title": "Automatic Extraction of Opinion Propositions and their Holders", "authors": [ { "first": "S", "middle": [], "last": "Bethard", "suffix": "" }, { "first": "H", "middle": [], "last": "Yu", "suffix": "" }, { "first": "A", "middle": [], "last": "Thornton", "suffix": "" }, { "first": "V", "middle": [], "last": "Hatzivassiloglou", "suffix": "" }, { "first": "D", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2004, "venue": "AAAI Spring Symposium on Exploring Attitude and Affect in Text: Theories and Applications", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bethard, S., Yu, H., Thornton, A., Hatzivassiloglou, V., & Jurafsky, D. (2004). Automatic Extraction of Opinion Propositions and their Holders, In AAAI Spring Symposium on Exploring Attitude and Affect in Text: Theories and Applications.", "links": null }, "BIBREF92": { "ref_id": "b92", "title": "Identifying Sources of Opinions with Conditional Random Fields and Extraction Patterns", "authors": [ { "first": "Y", "middle": [], "last": "Choi", "suffix": "" }, { "first": "C", "middle": [], "last": "Cardie", "suffix": "" }, { "first": "E", "middle": [], "last": "Riloff", "suffix": "" }, { "first": "S", "middle": [], "last": "Patwardhan", "suffix": "" } ], "year": 2005, "venue": "Proceedings of HLT/EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Choi, Y., Cardie, C., Riloff, E., & Patwardhan, S. (2005). Identifying Sources of Opinions with Conditional Random Fields and Extraction Patterns. 
In Proceedings of HLT/EMNLP.", "links": null }, "BIBREF93": { "ref_id": "b93", "title": "Word to Sentence Level Emotion Tagging for Bengali Blogs", "authors": [ { "first": "D", "middle": [], "last": "Das", "suffix": "" }, { "first": "S", "middle": [], "last": "Bandyopadhyay", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "149--152", "other_ids": {}, "num": null, "urls": [], "raw_text": "Das, D., & Bandyopadhyay, S. (2009a). Word to Sentence Level Emotion Tagging for Bengali Blogs. ACL-IJCNLP 2009, 149-152.", "links": null }, "BIBREF94": { "ref_id": "b94", "title": "Emotion Tagging -A Comparative Study on Bengali and English Blogs. ICON-09", "authors": [ { "first": "D", "middle": [], "last": "Das", "suffix": "" }, { "first": "S", "middle": [], "last": "Bandyopadhyay", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "177--184", "other_ids": {}, "num": null, "urls": [], "raw_text": "Das, D., & Bandyopadhyay, S. (2009b). Emotion Tagging -A Comparative Study on Bengali and English Blogs. ICON-09. 177-184.", "links": null }, "BIBREF95": { "ref_id": "b95", "title": "Labeling Emotion in Bengali Blog Corpus -A Fine Grained Tagging at Sentence Level", "authors": [ { "first": "D", "middle": [], "last": "Das", "suffix": "" }, { "first": "S", "middle": [], "last": "Bandyopadhyay", "suffix": "" } ], "year": 2010, "venue": "COLING-2010", "volume": "8", "issue": "", "pages": "47--55", "other_ids": {}, "num": null, "urls": [], "raw_text": "Das, D., & Bandyopadhyay, S. (2010a). Labeling Emotion in Bengali Blog Corpus -A Fine Grained Tagging at Sentence Level. 
ALR8, COLING-2010, 47-55.", "links": null }, "BIBREF96": { "ref_id": "b96", "title": "Developing Bengali WordNet Affect for Analyzing Emotion", "authors": [ { "first": "D", "middle": [], "last": "Das", "suffix": "" }, { "first": "S", "middle": [], "last": "Bandyopadhyay", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "35--40", "other_ids": {}, "num": null, "urls": [], "raw_text": "Das, D., & Bandyopadhyay, S. (2010b). Developing Bengali WordNet Affect for Analyzing Emotion. ICCPOL-2010, 35-40.", "links": null }, "BIBREF97": { "ref_id": "b97", "title": "Sentence Level Emotion Tagging on Blog and News Corpora", "authors": [ { "first": "D", "middle": [], "last": "Das", "suffix": "" }, { "first": "S", "middle": [], "last": "Bandyopadhyay", "suffix": "" } ], "year": 2010, "venue": "Journal of Intelligent System", "volume": "19", "issue": "2", "pages": "125--134", "other_ids": {}, "num": null, "urls": [], "raw_text": "Das, D., & Bandyopadhyay, S. (2010c). Sentence Level Emotion Tagging on Blog and News Corpora. Journal of Intelligent System (JIS), 19 (2), 125-134.", "links": null }, "BIBREF98": { "ref_id": "b98", "title": "Emotion Holder for Emotional Verbs -The role of Subject and Syntax", "authors": [ { "first": "D", "middle": [], "last": "Das", "suffix": "" }, { "first": "S", "middle": [], "last": "Bandyopadhyay", "suffix": "" } ], "year": 2010, "venue": "LNCS", "volume": "6008", "issue": "", "pages": "385--393", "other_ids": {}, "num": null, "urls": [], "raw_text": "Das, D., & Bandyopadhyay, S. (2010d). Emotion Holder for Emotional Verbs -The role of Subject and Syntax. In CICLing, A. 
Gelbukh (Ed.), LNCS 6008, 385-393.", "links": null }, "BIBREF99": { "ref_id": "b99", "title": "Emotion Co-referencing -Emotional Expression, Holder, and Topic 97", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emotion Co-referencing -Emotional Expression, Holder, and Topic 97", "links": null }, "BIBREF100": { "ref_id": "b100", "title": "Named Entity Recognition using Appropriate Unlabeled Data, Post-processing and Voting", "authors": [ { "first": "A", "middle": [], "last": "Ekbal", "suffix": "" }, { "first": "S", "middle": [], "last": "Bandyopadhyay", "suffix": "" } ], "year": 2008, "venue": "Informatica Journal of Computing and Informatics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ekbal, A., & Bandyopadhyay, S. (2008). Named Entity Recognition using Appropriate Unlabeled Data, Post-processing and Voting. In Informatica Journal of Computing and Informatics, ACTA Press.", "links": null }, "BIBREF101": { "ref_id": "b101", "title": "Facial expression and emotion", "authors": [ { "first": "P", "middle": [], "last": "Ekman", "suffix": "" } ], "year": 1993, "venue": "American Psychologist", "volume": "48", "issue": "4", "pages": "384--392", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ekman, P. (1993). Facial expression and emotion. American Psychologist, 48(4), 384-392.", "links": null }, "BIBREF102": { "ref_id": "b102", "title": "SENTIWORDNET: A Publicly Available Lexical Resource for Opinion Mining, Language Resource and Evaluation Campaign", "authors": [ { "first": "A", "middle": [], "last": "Esuli", "suffix": "" }, { "first": "F", "middle": [], "last": "Sebastiani", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Esuli, A., & Sebastiani, F. (2006). 
SENTIWORDNET: A Publicly Available Lexical Resource for Opinion Mining, Language Resource and Evaluation Campaign.", "links": null }, "BIBREF103": { "ref_id": "b103", "title": "A low-resources approach to Opinion Analysis: Machine Learning and Simple Approaches", "authors": [ { "first": "D", "middle": [ "K" ], "last": "Evans", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Evans, D. K. (2007). A low-resources approach to Opinion Analysis: Machine Learning and Simple Approaches, NTCIR.", "links": null }, "BIBREF104": { "ref_id": "b104", "title": "Model of Emotional Holder", "authors": [ { "first": "J", "middle": [], "last": "Hu", "suffix": "" }, { "first": "C", "middle": [], "last": "Guan", "suffix": "" }, { "first": "M", "middle": [], "last": "Wang", "suffix": "" }, { "first": "F", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2006, "venue": "PRIMA 2006", "volume": "4088", "issue": "", "pages": "534--539", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hu, J., Guan, C., Wang, M., & Lin, F. (2006). Model of Emotional Holder. In Shi, Z.-Z., Sadananda, R. (eds.) PRIMA 2006. LNCS (LNAI), 4088, 534-539.", "links": null }, "BIBREF105": { "ref_id": "b105", "title": "Text Categorization with Support Vector Machines: Learning with Many Relevant Features", "authors": [ { "first": "T", "middle": [], "last": "Joachims", "suffix": "" } ], "year": 1998, "venue": "European Conference on Machine Learning", "volume": "", "issue": "", "pages": "137--142", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joachims, T. (1998). Text Categorization with Support Vector Machines: Learning with Many Relevant Features. 
In European Conference on Machine Learning, 137-142.", "links": null }, "BIBREF106": { "ref_id": "b106", "title": "Identifying Opinion Holders in Opinion Text from Online Newspapers", "authors": [ { "first": "Y", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Y", "middle": [], "last": "Jung", "suffix": "" }, { "first": "S.-H", "middle": [], "last": "Myaeng", "suffix": "" } ], "year": 2007, "venue": "2007 IEEE International Conference on Granular Computing", "volume": "", "issue": "", "pages": "699--702", "other_ids": { "DOI": [ "10.1109/GrC.2007.45" ] }, "num": null, "urls": [], "raw_text": "Kim, Y., Jung, Y., & Myaeng, S.-H. (2007). Identifying Opinion Holders in Opinion Text from Online Newspapers. In 2007 IEEE International Conference on Granular Computing, 699-702, doi:10.1109/GrC.2007.45.", "links": null }, "BIBREF107": { "ref_id": "b107", "title": "Extracting Opinions, Opinion Holders, and Topics Expressed in Online News Media Text", "authors": [ { "first": "S", "middle": [ "M" ], "last": "Kim", "suffix": "" }, { "first": "E", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kim, S. M., & Hovy, E. (2006). Extracting Opinions, Opinion Holders, and Topics Expressed in Online News Media Text. 
In Proceedings of the Annual Meeting of the Association for Computational Linguistics.", "links": null }, "BIBREF108": { "ref_id": "b108", "title": "Collecting evaluative expressions for opinion extraction", "authors": [ { "first": "N", "middle": [], "last": "Kobayashi", "suffix": "" }, { "first": "K", "middle": [], "last": "Inui", "suffix": "" }, { "first": "Y", "middle": [], "last": "Matsumoto", "suffix": "" }, { "first": "K", "middle": [], "last": "Tateishi", "suffix": "" }, { "first": "T", "middle": [], "last": "Fukushima", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kobayashi, N., Inui, K., Matsumoto, Y., Tateishi, K., & Fukushima, T. (2004). Collecting evaluative expressions for opinion extraction. IJCNLP.", "links": null }, "BIBREF109": { "ref_id": "b109", "title": "VerbNet: A broad-coverage, comprehensive verb lexicon", "authors": [ { "first": "K", "middle": [], "last": "Kipper-Schuler", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kipper-Schuler, K. (2005). VerbNet: A broad-coverage, comprehensive verb lexicon. Ph.D. thesis, Computer and Information Science Dept., University of Pennsylvania.", "links": null }, "BIBREF110": { "ref_id": "b110", "title": "Content analysis: An introduction to its methodology", "authors": [ { "first": "K", "middle": [], "last": "Krippendorff", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Krippendorff, K. (2004). Content analysis: An introduction to its methodology. 
Thousand Oaks, CA: Sage.", "links": null }, "BIBREF111": { "ref_id": "b111", "title": "Rhetorical Structure Theory: Toward a Functional Theory of Text Organization", "authors": [ { "first": "W", "middle": [ "C" ], "last": "Mann", "suffix": "" }, { "first": "S", "middle": [ "A" ], "last": "Thompson", "suffix": "" } ], "year": 1988, "venue": "TEXT", "volume": "8", "issue": "", "pages": "243--281", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mann, W. C., & Thompson, S. A. (1988). Rhetorical Structure Theory: Toward a Functional Theory of Text Organization, TEXT 8, 243-281.", "links": null }, "BIBREF112": { "ref_id": "b112", "title": "Capturing Global Mood Levels using Blog Posts", "authors": [ { "first": "G", "middle": [], "last": "Mishne", "suffix": "" }, { "first": "", "middle": [], "last": "Rijke", "suffix": "" }, { "first": "M", "middle": [], "last": "De", "suffix": "" } ], "year": 2006, "venue": "Proceedings of AAAI, Spring Symposium on Computational Approaches to Analysing Weblogs", "volume": "", "issue": "", "pages": "145--152", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mishne, G. & Rijke, de M. (2006). Capturing Global Mood Levels using Blog Posts. In Proceedings of AAAI, Spring Symposium on Computational Approaches to Analysing Weblogs, 145-152.", "links": null }, "BIBREF113": { "ref_id": "b113", "title": "Narrowing the Social Gap among People Involved in Global Dialog: Automatic Emotion Detection in Blog Posts", "authors": [ { "first": "A", "middle": [], "last": "Neviarouskaya", "suffix": "" }, { "first": "H", "middle": [], "last": "Prendinger", "suffix": "" }, { "first": "M", "middle": [], "last": "Ishizuka", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Neviarouskaya, A., Prendinger, H., & Ishizuka, M. (2007). 
Narrowing the Social Gap among People Involved in Global Dialog: Automatic Emotion Detection in Blog Posts, ICWSM.", "links": null }, "BIBREF114": { "ref_id": "b114", "title": "Improving machine learning approaches to co-reference resolution", "authors": [ { "first": "V", "middle": [], "last": "Ng", "suffix": "" }, { "first": "C", "middle": [], "last": "Cardie", "suffix": "" } ], "year": 2002, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ng, V., & Cardie, C. (2002). Improving machine learning approaches to co-reference resolution. In Proceedings of ACL.", "links": null }, "BIBREF115": { "ref_id": "b115", "title": "Extracting product features and opinions from reviews", "authors": [ { "first": "A", "middle": [], "last": "Popescu", "suffix": "" }, { "first": "O", "middle": [], "last": "Etzioni", "suffix": "" } ], "year": 2005, "venue": "Proceedings of HLT/EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Popescu, A., & Etzioni, O. (2005). Extracting product features and opinions from reviews. In Proceedings of HLT/EMNLP.", "links": null }, "BIBREF116": { "ref_id": "b116", "title": "Machine learning in automated text categorization", "authors": [ { "first": "F", "middle": [], "last": "Sebastiani", "suffix": "" } ], "year": 2002, "venue": "ACM Computing Surveys", "volume": "34", "issue": "1", "pages": "1--47", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastiani, F. (2002). Machine learning in automated text categorization. ACM Computing Surveys, 34(1), 1-47.", "links": null }, "BIBREF117": { "ref_id": "b117", "title": "Opinion Holder Extraction from Author and Authority Viewpoints", "authors": [ { "first": "Y", "middle": [], "last": "Seki", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the SIGIR'07", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Seki, Y. (2007). 
Opinion Holder Extraction from Author and Authority Viewpoints. In Proceedings of the SIGIR'07, ACM 978-1-59593-597-7/07/0007.", "links": null }, "BIBREF119": { "ref_id": "b119", "title": "A machine learning approach to co-reference resolution of noun phrases", "authors": [ { "first": "W", "middle": [], "last": "Soon", "suffix": "" }, { "first": "H", "middle": [], "last": "Ng", "suffix": "" }, { "first": "D", "middle": [], "last": "Lim", "suffix": "" } ], "year": 2001, "venue": "Computational Linguistics", "volume": "27", "issue": "4", "pages": "521--544", "other_ids": {}, "num": null, "urls": [], "raw_text": "Soon, W., Ng, H., & Lim, D. (2001). A machine learning approach to co-reference resolution of noun phrases. Computational Linguistics, 27(4), 521-544.", "links": null }, "BIBREF120": { "ref_id": "b120", "title": "Annotating topics of opinions", "authors": [ { "first": "V", "middle": [], "last": "Stoyanov", "suffix": "" }, { "first": "C", "middle": [], "last": "Cardie", "suffix": "" } ], "year": 2008, "venue": "Proceedings of Language Resource and Evaluation Campaign", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stoyanov, V., & Cardie, C. (2008a). Annotating topics of opinions. In Proceedings of Language Resource and Evaluation Campaign.", "links": null }, "BIBREF121": { "ref_id": "b121", "title": "Topic Identification for Fine-Grained Opinion Analysis", "authors": [ { "first": "V", "middle": [], "last": "Stoyanov", "suffix": "" }, { "first": "C", "middle": [], "last": "Cardie", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "817--824", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stoyanov, V., & Cardie, C. (2008b). Topic Identification for Fine-Grained Opinion Analysis. 
Coling 2008, 817-824.", "links": null }, "BIBREF122": { "ref_id": "b122", "title": "SemEval-2007 Task 14: Affective Text, ACL", "authors": [ { "first": "C", "middle": [], "last": "Strapparava", "suffix": "" }, { "first": "R", "middle": [], "last": "Mihalcea", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Strapparava, C., & Mihalcea, R. (2007). SemEval-2007 Task 14: Affective Text, ACL.", "links": null }, "BIBREF123": { "ref_id": "b123", "title": "Unsupervised Semantic Role Labelling", "authors": [ { "first": "R", "middle": [ "S" ], "last": "Swier", "suffix": "" }, { "first": "S", "middle": [], "last": "Stevenson", "suffix": "" } ], "year": 2004, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Swier, R. S., & Stevenson, S. (2004). Unsupervised Semantic Role Labelling. EMNLP.", "links": null }, "BIBREF124": { "ref_id": "b124", "title": "Annotating expressions of opinions and emotions in language", "authors": [ { "first": "J", "middle": [], "last": "Wiebe", "suffix": "" }, { "first": "T", "middle": [], "last": "Wilson", "suffix": "" }, { "first": "C", "middle": [], "last": "Cardie", "suffix": "" } ], "year": 2005, "venue": "Language Resources and Evaluation", "volume": "1", "issue": "2", "pages": "1--54", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wiebe, J., Wilson, T., & Cardie, C. (2005). Annotating expressions of opinions and emotions in language. 
Language Resources and Evaluation, 1(2), 1-54.", "links": null }, "BIBREF125": { "ref_id": "b125", "title": "Emotion classification Using Web Blog Corpora", "authors": [ { "first": "C", "middle": [], "last": "Yang", "suffix": "" }, { "first": "K", "middle": [ "H" ], "last": "Lin", "suffix": "" }, { "first": ".-Y", "middle": [], "last": "Chen", "suffix": "" }, { "first": "H.-H", "middle": [], "last": "", "suffix": "" } ], "year": 2007, "venue": "ACM International Conference on Web Intelligence", "volume": "", "issue": "", "pages": "275--278", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yang, C., Lin, K. H.-Y., & Chen, H.-H. (2007). Emotion classification Using Web Blog Corpora. In Proceedings of the IEEE, WIC, ACM International Conference on Web Intelligence, 275-278.", "links": null }, "BIBREF126": { "ref_id": "b126", "title": "Writer Meets Reader: Emotion Analysis of Social Media from both the Writer's and Reader's Perspectives", "authors": [ { "first": "C", "middle": [], "last": "Yang", "suffix": "" }, { "first": "K", "middle": [ "H" ], "last": "Lin", "suffix": "" }, { "first": ".-Y", "middle": [], "last": "Chen", "suffix": "" }, { "first": "H", "middle": [ "H" ], "last": "", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology", "volume": "", "issue": "", "pages": "287--290", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yang, C., Lin, K. H.-Y., & Chen, H. H. (2009). Writer Meets Reader: Emotion Analysis of Social Media from both the Writer's and Reader's Perspectives. 
In Proceedings of the IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology, 287-290.", "links": null }, "BIBREF127": { "ref_id": "b127", "title": "Sentiment analyzer: Extracting sentiments about a given topic using natural language processing techniques", "authors": [ { "first": "J", "middle": [], "last": "Yi", "suffix": "" }, { "first": "T", "middle": [], "last": "Nasukawa", "suffix": "" }, { "first": "R", "middle": [], "last": "Bunescu", "suffix": "" }, { "first": "W", "middle": [], "last": "Niblack", "suffix": "" } ], "year": 2003, "venue": "Proceedings of ICDM", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yi, J., Nasukawa, T., Bunescu, R., & Niblack, W. (2003). Sentiment analyzer: Extracting sentiments about a given topic using natural language processing techniques. In Proceedings of ICDM.", "links": null }, "BIBREF128": { "ref_id": "b128", "title": "A preliminary research of Chinese emotion classification model", "authors": [ { "first": "Y", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Z", "middle": [], "last": "Li", "suffix": "" }, { "first": "F", "middle": [], "last": "Ren", "suffix": "" }, { "first": "S", "middle": [], "last": "Kuroiwa", "suffix": "" } ], "year": 2008, "venue": "IJCSNS", "volume": "8", "issue": "11", "pages": "127--132", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhang, Y., Li, Z., Ren, F., & Kuroiwa, S. (2008). A preliminary research of Chinese emotion classification model. IJCSNS, 8(11), 127-132.", "links": null }, "BIBREF129": { "ref_id": "b129", "title": "Activities\uff1a 1. Holding the Republic of China Computational Linguistics Conference (ROCLING) annually. 2. 
Facilitating and promoting academic research, seminars, training, discussions, comparative evaluations and other activities related to computational linguistics", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Activities\uff1a 1. Holding the Republic of China Computational Linguistics Conference (ROCLING) annually. 2. Facilitating and promoting academic research, seminars, training, discussions, comparative evaluations and other activities related to computational linguistics.", "links": null }, "BIBREF130": { "ref_id": "b130", "title": "Collecting information and materials on recent developments in the field of computational linguistics, domestically and internationally", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Collecting information and materials on recent developments in the field of computational linguistics, domestically and internationally.", "links": null }, "BIBREF131": { "ref_id": "b131", "title": "Publishing pertinent journals, proceedings and newsletters", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Publishing pertinent journals, proceedings and newsletters.", "links": null }, "BIBREF132": { "ref_id": "b132", "title": "Setting of the Chinese-language technical terminology and symbols related to computational linguistics", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Setting of the Chinese-language technical terminology and symbols related to computational linguistics.", "links": null }, "BIBREF133": { "ref_id": "b133", "title": "Maintaining contact with international computational linguistics academic organizations", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", 
"pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maintaining contact with international computational linguistics academic organizations.", "links": null }, "BIBREF134": { "ref_id": "b134", "title": "Dealing with various other matters related to the development of computational linguistics", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dealing with various other matters related to the development of computational linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "Time-aligned transcription.", "uris": null, "num": null }, "FIGREF2": { "type_str": "figure", "text": "Word distribution.", "uris": null, "num": null }, "FIGREF3": { "type_str": "figure", "text": "Syllable distribution.", "uris": null, "num": null }, "FIGREF6": { "type_str": "figure", "text": "(a) \u8840\u6db2\u5b78\u6aa2\u9a57(hematology) -\u767d\u8840\u7403\u5206\uf9d0 (b) [\u5df4\uf989\u6700\u7f8e\u7684\u6a4b] \u4e9e\uf98c\u5c71\u5927\u4e09\u4e16\u6a4b Pont Alexandre III (c) \u80f0\u5cf6\u7d20\u6cf5\u7684\uf9f6\u5e8a\u61c9\u7528\u53ca\u8b77\uf9e4\u9032\u5c55 progress on nursing of clinical application of insulin pump (d) \u570b\u5916\u7d44\u7e54\u7f8e\u570b\u8077\u68d2\u5927\uf997\u76df (Major League Baseball\uff0c\u7c21\u7a31: MLB\uff0c\u6216\u5927\uf997 \u76df)", "uris": null, "num": null }, "FIGREF7": { "type_str": "figure", "text": "a CRF model for classifying translations (Section 4.3.4) Outline of the training phase.", "uris": null, "num": null }, "FIGREF8": { "type_str": "figure", "text": "Simplified view of HMM and CRF.", "uris": null, "num": null }, "FIGREF9": { "type_str": "figure", "text": ".", "uris": null, "num": null }, "FIGREF10": { "type_str": "figure", "text": "Examples of tagged snippets for title pairs \"support vector machine\", \"\u652f\u6301\u5411\uf97e\u6a5f\" and \"luminous flux\", \"\u5149\u901a\uf97e\".", "uris": null, "num": 
null }, "FIGREF12": { "type_str": "figure", "text": "Example of two snippets tagged with translation features given the terms \"support vector machine\" and \"luminous flux\".", "uris": null, "num": null }, "FIGREF13": { "type_str": "figure", "text": "The proposal Russian Petr Petrovich Troyanskii patented in September 1933 bears a Major changeovers in MT Systems.", "uris": null, "num": null }, "FIGREF14": { "type_str": "figure", "text": "Figure 2shows the classification of MT in Natural language Processing (NLP). Classification of MT System.", "uris": null, "num": null }, "FIGREF15": { "type_str": "figure", "text": "The main intention of this bifurcation is to develop Machine Aided Translation (MAT) systems for English to twelve Indian regional languages. These include MT from English to Marathi & Konkani (IIT, Mumbai): English to Asamiya and Manipuri (IIT, Guwahati): English to Bangla (CDAC, Kolkata): English to Urdu, Sindhi & Kashmiri (CDAC-GIST group, Pune): English to Malyalam (CDAC, Thiruvananthpuram): English to Punjabi (Thapar Institute of Engineering and Technology-TIET, Patiala) English to Sanskrit (Jawaharlal Nehru University -JNU, New Delhi): and English to Oriya (Utkal University, Bhuvaneshwar).", "uris": null, "num": null }, "FIGREF16": { "type_str": "figure", "text": "six emotions, based on the type of the component word of the emotional expression present in the Bengali WordNet Affect lists.", "uris": null, "num": null }, "FIGREF17": { "type_str": "figure", "text": "NNPC/NN/NNC/PRP> {}], is considered for capturing clues about an emotion holder. The components of the Common_Portion are assembled after the first occurring POS tags of types NNP, NNC, or PRP in the POS tagged sentence until the verb POS, like VBZ or VM, is reached. The remaining components present in the sentence after the verb are appended to the common portion (Common_Portion).", "uris": null, "num": null }, "TABREF0": { "text": "", "content": "
MCDC60 (37F, 23M)1 hourFree conversationStrangers
MTCC58 (33F, 25M)20 minutesTopic-oriented ConversationFriends/ relatives
MMTC52 (28F, 24M)7 minutesMap task dialogueFriends/ relatives
", "num": null, "type_str": "table", "html": null }, "TABREF1": { "text": "", "content": "
CorpusWord tokensWord typesSyllable tokensSyllable types with tonesSyllable types without tones
TMC Corpus405,435 16,683 607,0081,076390
", "num": null, "type_str": "table", "html": null }, "TABREF2": { "text": "", "content": "
Vocabulary sizeTMC corpusTokensTokens per typeVocabulary sizeSinica corpusTokensTokens per type
1,00084.43% 342,3063421,00059.78% 2,935,763 2,936
2,00089.87% 364,3641822,00068.69% 3,373,419 1,687
3,00092.53% 375,1341253,00073.53% 3,610,951 1,204
4,00094.20% 381,919954,00076.77% 3,770,449943
5,00095.34% 386,559775,00079.17% 3,887,992778
6,00096.21% 390,081656,00081.03% 3,979,351663
7,00096.95% 393,058567,00082.55% 4,054,042579
8,00097.44% 395,058498,00083.82% 4,116,688515
9,00097.94% 397,058449,00084.92% 4,170,368463
10,00098.35% 398,7524010,00085.87% 4,217,103422
100%405,43524100%4,767,04886
", "num": null, "type_str": "table", "html": null }, "TABREF3": { "text": "", "content": "
TMC CorpusCoverage Sinica CorpusCoverage
Verb23.30% Noun53.05%
Noun22.05% Verb36.78%
Adverb20.01% Adverb3.01%
Pronoun9.98% Determinative2.60%
Determinative6.42% Foreign words2.28%
Preposition5.19% Adjective1.24%
Conjunction4.66% Conjunction0.35%
Others8.39% Others0.69%
", "num": null, "type_str": "table", "html": null }, "TABREF4": { "text": "", "content": "
Discourse particles34,84249711
Discourse markers16,51691,835
Fillers/feedback words6,3386598
", "num": null, "type_str": "table", "html": null }, "TABREF5": { "text": "", "content": "
\u4e2dNg20660.51 122310.26 \u53eaDa6840.170.14
Word POS \u7684 T \u7684 DE \u5927 VH \u662f SHI \u80fd D \u4e00 Neu \u8457 Di \u5728 P \u5979 Nh \u6709 V_2 \u90a3 Nep \u500b Nf \u4e0a Ncd \u6211 Nh \u4f46 Cbb \uf967 D \uf98e Nf \u9019 Nep \u9084 D \uf9ba Di \u53ef\u4ee5 D \u4ed6 Nh \u6642 Ng \u4e5f D \u6700 Dfa \u5c31 D \u81ea\u5df1 Nh \u4eba Na \u70ba P \u90fd D \uf92d D \uf96f VE \u6240 D \u800c Cbb \u4ed6\u5011 Nh \u6211\u5011 Nh \u5404 NesTMC tokens 1976 15778 1926 13999 1907 13397 1901 7429 1869 7092 1848 6991 1768 6705 1697 6677 1650 6330 1644 5453 1641 5301 1633 5260 1628 4827 1579 4694 1573 4473 1566 4419 1518 4414 1500 4242 1454TMC % 0.49 11580 Sinica tokens 3.89 28582 0.48 11577 3.45 84014 0.47 11125 3.30 58388 0.47 11026 1.83 56769 0.46 10776 1.75 45823 0.46 10740 1.72 41077 0.44 10619 1.65 40332 0.42 10242 1.65 39014 0.41 10127 1.56 33659 0.41 9698 1.34 31873 0.40 9671 1.31 30025 0.40 9565 1.30 29646 0.40 9416 1.19 29211 0.39 9069 1.16 24269 0.39 9026 1.10 20403 0.39 8992 1.09 19625 0.37 8873 1.09 18452 0.37 8818 1.05 18152 0.36 8651Sinica % 0.24 \u8207 Word POS P 0.24 \u6c92\u6709 VJ 6.00 \u4e0a Ng 1.76 \u53ef D 0.23 \u5247 D 1.22 \u70ba VG 0.23 \u53f0\u7063 Nc 1.19 \u6216 Caa 0.23 \u537b D 0.96 \u597d VH 0.23 \u5730 DE 0.86 \u7b49 Cab 0.22 \u4e26 Cbb 0.85 \u53c8 D 0.21 \u4f4d Nf 0.82 \u5c07 D 0.21 \u5f97 DE 0.71 \u5f8c Ng 0.20 \u53bb D 0.67 \u56e0\u70ba Cbb 0.20 \u5462 T 0.63 \u65bc P 0.20 \u5b78\u751f Na 0.62 \u7531 P 0.20 \u8868\u793a VE 0.61 \u5f9e P 0.19 \u5230 P 0.51 \uf901 D 0.19 \u516c\u53f8 Nc 0.43 \u88ab P 0.19 \u5c07 P 0.41 \u624d Da 0.19 \u5982\u679c Cbb 0.39 \u5df2 D 0.18 \u793e\u6703 Na 0.38 \u8005 Na 0.18 \u770b VCTMC tokens 665 651 1339 1337 646 1300 633 1296 630 1273 620 1264 618 1197 615 1161 609 1160 604 1115 593 1030 593 1001 592 989 572 971 569 953 568 877 563 863 563 850 562TMC % 0.16 0.16 0.33 0.33 0.16 0.32 0.16 0.32 0.16 0.31 0.15 0.31 0.15 0.30 0.15 0.29 0.15 0.29 0.15 0.28 0.15 0.25 0.15 0.25 0.15 0.24 0.14 0.24 0.14 0.24 0.14 0.22 0.14 0.21 0.14 0.21 0.14Sinica 
tokensSinica 0.14 % 0.14 0.18 0.14 0.18 0.13 0.18 0.13 0.17 0.13 0.17 0.13 0.17 0.13 0.17 0.13 0.16 0.12 0.16 0.12 0.16 0.12 0.16 0.12 0.15 0.11 0.15 0.11 0.15 0.11 0.15 0.11 0.15 0.11 0.15 0.15 0.11
\u4f60Nh41001.01 172980.36 \u6bcfNes8410.210.15
\uf9baT38820.96 159580.33 \u6b21Nf8400.210.15
\u8981D34350.85 159550.33 \u628aP8370.210.15
\u4e4bDE34120.84 158930.33 \u4e09Neu8340.210.15
\u6703D33980.84 140660.30 \uf9fd\u9ebcNep8320.210.14
\u5c0dP31730.78 139440.29 \u554f\u984cNa8140.200.14
\u53caCaa31240.77 137580.29 \u5176Nep8010.200.14
\u548cCaa29320.72 135850.28 \u8b93VL7820.190.14
\u8207Caa28320.70 134450.28 \u6b64Nep7480.180.14
\u4ee5P22760.56 131720.28 \u505aVC7210.180.14
\u5f88Dfa21890.54 130130.27 \u518dD7160.180.14
\u7a2eNf20880.52 122630.26 \u6240\u4ee5Cbb7080.170.14
", "num": null, "type_str": "table", "html": null }, "TABREF9": { "text": "", "content": "
ChineseEnglish
\u793e\u4ea4\u5de5\u7a0bsocial engineering
\u793e\u7fa4\u7db2\uf937social network
\u793e\u7fa4\u5a92\u9ad4social media
wCount(w) P(w)P(w\u0304)efCount(e,f) P(e,f)
\u793e31.000.00social\u793e31.00
\u7fa420.670.33social\u7fa420.67
\u4ea410.330.67social\u4ea410.33
\u7db210.330.67network\u793e10.33
social31.000.00network\u7fa410.33
media10.330.67network\u4ea400.00
network10.330.67network\u7db210.33
In our case, the \u03c6\u00b2
", "num": null, "type_str": "table", "html": null }, "TABREF10": { "text": "", "content": "
vectorvectormachine
\u54117939,960\uf97e76821,907\u6a5f3,38128,566
971,975,6421221,963,6954911,954,054
", "num": null, "type_str": "table", "html": null }, "TABREF11": { "text": "Example \u03c6\u00b2 scores.", "content": "
supportvectormachineluminousflux
\u63d00.000000.000000.00000\u767c0.004320.00000
\u51fa0.000000.000000.00000\u51490.010286.0E-06
\u76840.000000.000000.00000\u539f0.000000.00000
\u652f0.090750.000000.00000\uf9e40.000000.00000
\u63010.000580.000000.00000\uf9671.4E-060.00000
\u54110.000000.065300.00000\u51490.010286.0E-06
\uf97e0.000000.028800.00000\u901a0.000000.06410
\u6a5f0.000000.000000.09067\uf97e0.000000.00793
To generate features for each token, we calculate the following logarithmic value of \u03c6\u00b2:
translation_feat(f) = log\u2082(argmax_{e \u2208 E} \u03c6\u00b2(e, f)) (9)
", "num": null, "type_str": "table", "html": null }, "TABREF12": { "text": "", "content": "
ChineseChineseEnglishPossible
TransliterationRomanizationNamed EntitySegmentations
", "num": null, "type_str": "table", "html": null }, "TABREF13": { "text": "", "content": "
ChineseChineseEnglish
TransliterationRomanizationSyllables
\u55ac\u5e03\u65afqiao-bu-sijo-b-s
\u74ca\u55acqiong-qiaojon-jo
\u55ac\u745f\u592bqiao-se-fujo-se-ph
", "num": null, "type_str": "table", "html": null }, "TABREF14": { "text": "", "content": "
Rom. Chinese English Tr.Cnt(f,e)P(f|e)
qiaogeo1400.38
jo660.18
joe410.11
bub10900.58
bu3010.16
br1220.07
sis56260.69
es2920.04
st2260.03
", "num": null, "type_str": "table", "html": null }, "TABREF15": { "text": "Learning to Find Translations and Transliterations on the Web based on Conditional Random Fields. ...translational and transliterational features. Finally, we use the labeled data with three kinds of features to train a CRF model.", "content": "
wordTRTLdistance label
\u7b2c0014O
620013O
(62nd)
", "num": null, "type_str": "table", "html": null }, "TABREF17": { "text": "", "content": "
CategoryCount CategoryCount
Pharmacy1,673 Material Science (Polymer)3,422
Bacterial Immunology2,063 Material Science (Ceramics)2,292
Phylogenetic1,756 Agricultural Machinery3,060
Psychopathology1,067 Science Education5,289
Psychology5,741 Industrial Engineering5,400
Physics/Chemistry Equipments17,279 Astronomy6,091
Comparative Anatomy6,013 Music2,922
Education2,198 Food Science and Technology35,666
Sociology2,825 Foreign Names57,054
Human Anatomy5,796 Mineralogy28,032
Pathology7,307 Lab Animal and Comparative Medicine8,220
Sports1,708 Dance10,564
Soil Science1,240 Statistic7,370
Forestry7,954 Meteorology20,061
Fertilizer Science1,155 Animal Husbandry21,466
Hydraulic Engineering4,601 Mining and Metallurgical Engineering13,914
Electronic Engineering7,627 Computer101,389
Agricultural Promotion669Textile Science and Technology22,761
Accounting4,884 Meteorology17,789
Civil Engineering16,745 Endocrinology2,577
Aeronautics and Astronautics23,751 Chemical Engineering22,386
Electrical Engineering20,058 Communications Engineering16,899
Engineering Graphics4,766 Biology (Plants)42,730
Mathematics16,708 Mechanism and Machine Theory2,085
Foundry5,314 Shipbuilding Engineering30,701
Mechanical Engineering35,369Physics22,077
Earth Science30,673Zoology29,586
Geology22,780Marine37,329
Marketing1,667Chemistry (Compound)19,258
Veterinary Medicine24,990 Fish29,730
Nuclear Energy38,462 Economics8,891
Production Automation2,560 Marine Geology31,015
Surveying14,371 Power Engineering69,546
Ecology7,495 Chemistry (Others)25,273
Mechanics10,716 Administration3,743
Materials Science (Metal)7,665 Journalism and Communication4,419
", "num": null, "type_str": "table", "html": null }, "TABREF18": { "text": "Ch : the results reported in the Lin et al. paper for their system targeting Chinese parenthetical translations.", "content": "
Table 10. Automatic evaluation results.
systemcoverageexact matchtop5 exact match
Full (En-Ch)80.4%43.0%56.4%
-TL83.9%27.5%40.2%
-TR81.2%37.4%50.3%
-TL-TR83.2%21.1%32.8%
LIN En-Ch59.6%27.9%not reported
LIN Ch-En70.8%36.4%not reported
LDC (En-Ch)10.8%4.8%N/A
NICT (En-Ch)24.2%32.1%N/A
", "num": null, "type_str": "table", "html": null }, "TABREF19": { "text": "", "content": "
English WikiChinese WikiExtracted
Pope Celestine IV\uf96c\u840a\u65af\u5ef7\u56db\u4e16\ufa00\u840a\u65af\u5ef7\u56db\u4e16A
Huaneng Power International\u83ef\u80fd\u570b\u969b\u83ef\u80fd\u570b\u969b\u96fb\uf98aA
Shangrao\u4e0a\u9952\u5e02\u4e0a\u9952A
Aurora University\u9707\u65e6\u5927\u5b78\u5967\uf90f\uf925\u5927\u5b78A
Fujian\u798f\u5efa\uf96d\u798f\u5efaA
Dream Theater\u5922\u5287\u5834\u5922\u5287\u5834\u5408\u5531\u5718A
Coturnix\u9d89\u5c6c\u9d6a\u9d89A
Waste\u5783\u573e\u5ee2\u7269A
Allyl alcohol\u70ef\u4e19\u9187\u4e19\u70ef\u9187A
Machine\u6a5f\u68b0\u5de5\u5177\u6a5fA
Colony\u6b96\u6c11\u5730\u83cc\uf918B
Collateral\uf918\u65e5\uf970\u795e\u62b5\u62bcB
Ludwig Erhard\uf937\u5fb7\u7dad\u5e0c\uff0e\u827e\u54c8\u5fb7\u827e\u54c8\u5fb7P
John Woo\u5433\u5b87\u68ee\u7d04\u7ff0P
Osman I\u5967\u65af\u66fc\u4e00\u4e16\u5967\u65af\u66fcP
Itumeleng Khune\u4f0a\u5716\u6885\uf9d4\uff0e\u5eab\u5167\u5eab\u5167P
NaphthoquinoneP
Base analog\u9e7c\u57fa\uf9d0\u4f3c\u7269\u9e7c\u57fa\uf9d0P
Chinese Paladin\u4ed9\u528d\u5947\u4fe0\u50b3\u795e\u528dP
Bubble sort\u5192\u6ce1\u6392\u5e8f\u6392\u5e8fP
The Love Suicides at Sonezaki\u66fe\u6839\u5d0e\u60c5\u6b7b\u590f\u76ee\u6f31\u77f3E
Survivor's Law II\uf9d8\u653f\u65b0\u4eba\u738bII\uf90a\u77f3\uf97c\u7de3E
Phichit\u6279\u96c6\u5e9c\uf929\u5bb6\u5ead\u4e3b\u5a66E
Ammonium\u92a8\u904e\uf9ce\u9178\u92a8E
", "num": null, "type_str": "table", "html": null }, "TABREF20": { "text": "", "content": "
given termfreqcandidate
money laundering27\u6d17\u9322M
2\u6d17\u9ed1\u9322A
1\u6d17\u9322\u5ba3\u50b3E
Music and Lyrics18\u6b4c\u60c5\u4ebaP
2K\u6b4c\u60c5\u4ebaM
flyback transformer14\u8b8a\u58d3\u5668P
3\u56de\u6383\u8b8a\u58d3\u5668M
2\u8fd4\u99b3\u5f0f\u8b8a\u58d3\u5668A
2\u8fd4\u99b3\u8b8a\u58d3\u5668A
colony15\u83cc\uf918B
2\u79d1\uf90f\u5c3c\u6d77\u5cf6\u9152\u5e97B
2\u6b96\u6c11\u5730M
Osman I8\u5967\u65af\u66fcP
5\u5967\u65af\u66fc\u4e00\u4e16M
bubble sort20\u6392\u5e8fP
19\u6ce1\u6392\u5e8fA
17\u6c23\u6ce1\u6392\u5e8fM
9\u6ce1\u6cab\u6392\u5e8fA
4\u6ce1\u6ce1\u6392\u5e8fA
", "num": null, "type_str": "table", "html": null }, "TABREF21": { "text": "Vol. 18, No. 1, March 2013, pp. 47-78 47 \u00a9 The Association for Computational Linguistics and Chinese Language Processing", "content": "", "num": null, "type_str": "table", "html": null }, "TABREF23": { "text": "Translation Studies (CALTS), Department of Humanities and Social Sciences, University of Hyderabad. At present, the Language Technology Research Centre (LTRC) at IIIT Hyderabad is developing an English to Hindi MT system using the architecture of the Anusaaraka approach. This Anusaaraka project is being developed under the supervision of Prof. Rajeev Sangal and Prof. G U Rao.", "content": "
", "num": null, "type_str": "table", "html": null }, "TABREF24": { "text": "Anuvadak 5.0 English to Hindi software is a general-purpose tool developed by the private sector company Super Infosoft Pvt Ltd., Delhi, under the supervision of Mrs. Anjali Rowchoudhury (", "content": "
", "num": null, "type_str": "table", "html": null }, "TABREF26": { "text": "). For example, the following Bengali sentence shows the emotional expression, its associated holder, and topic.", "content": "
\u09b0\u09be\u09c7\u09b6\u09a6 \u09ac\u09c7\u09b2\u09c7\u099b\u09a8 \u0986\u09aa\u09a8\u09be\u09b0 \u0995\u09bf\u09ac\u09a4\u09be\u099f\u09be \u09aa\u09dc\u09c7\u09a4 \u09bf\u0997\u09c7\u09df\u09a4\u09be\u09b0 ei
(Rashed) (bolechen) (apnar) (kobitata) (porte) (giye) (tar) (ei)
\u09b8\u09c1 n\u09b0\u09c7\u0995\u09d7\u09a4\u09c1 \u0995\u099f\u09be\u09ae\u09c7\u09a8\u09aa\u09dc\u09bf\u099b\u09c7\u09b2\u09be\u0964
(sundar) (koutukta)(mone)(porchilo).
", "num": null, "type_str": "table", "html": null }, "TABREF27": { "text": "The words tagged as main verb (VM) and belonging to the verb group chunk (VGNF) in the corpus are identified (e.g. \u09ad\u09be\u09c7\u09b2\u09be\u09ac\u09be\u09b8\u09be bhalobasa 'love') as simple verbs from the shallow parsed sentences. In cases of compound or conjunct verbs, patterns like {[XXX] (NN) [YYY] (VM)} are retrieved (e.g. VGNF {[\u0986\u09a8n ananda] (NN) [\u0995\u09b0\u09be kara] (VM)} means enjoy). The light verbs [YYY] tagged with 'VM' generally occur in any inflected form.Different suffixes may be attached to a simple verb or light verb depending on various features, like tense, aspect, and person. An in-house Bengali stemmer with an accuracy of 97.09% used a suffix list to identify the stem forms of the simple and light verbs.", "content": "
Verb Identification:
", "num": null, "type_str": "table", "html": null }, "TABREF29": { "text": "Reduplicated words (\u09b8n \u09b8n sanda sanda [doubt with fear]) and Idioms (\u09a4\u09be\u09c7\u09b8\u09b0 \u0998\u09b0 taser ghar [weakly built], \u0997\u09c3 \u09b9\u09a6\u09be\u09b9 grrihadaho [family disturbance]), which were annotated in the Bengali emotion blog corpus", "content": "
Multiword Expressions:
", "num": null, "type_str": "table", "html": null }, "TABREF30": { "text": "", "content": "
Features
", "num": null, "type_str": "table", "html": null }, "TABREF31": { "text": "The following example contains the emotion holder \u09a8\u09be\u09b8\u09bf\u09b0\u09a8 \u09b8\u09c1 \u09b2\u09a4\u09be\u09a8\u09be (Nasreen Sultana) based on implicit constraints.", "content": "
Holder: < \u09c7\u0997\u09a6\u09c1 \u099a\u09be\u099a\u09be, \u09a8\u09be\u09b8\u09bf\u09b0\u09a8 \u09b8\u09c1 \u09b2\u09a4\u09be\u09a8\u09be >
\u09c7\u0997\u09a6\u09c1 \u099a\u09be\u099a\u09be\u09ac\u09c7\u09b2, \u09a8\u09be \u09c7\u0997\u09be \u09c7\u09ac\u09be\u09a8 , \u0986\u09bf\u09ae\u09a8\u09be\u09b8\u09bf\u09b0\u09a8 \u09b8\u09c1 \u09b2\u09a4\u09be\u09a8\u09be\u09b0
(Gedu ChaCha) (bole) : (na) (go) (bon) , (ami)(Nasreen Sultanar)
\u09a6\u09c1 \u0983\u09c7\u0996\u09b0\u0995\u09a5\u09be\u09c7\u09a4\u09c7\u0995\u0981 \u09c7\u09a6 \u09c7\u09ab\u09bf\u09b2 \u0964
(dookher) (kathate) (kende) (feli)
", "num": null, "type_str": "table", "html": null }, "TABREF32": { "text": "94 Dipankar Das & Sivaji Bandyopadhyay holders (\u09c7\u0997\u09a6\u09c1 \u099a\u09be\u099a\u09be Gedu ChaCha and \u099a\u09be\u09bf\u099a Chachi). \u09c7\u0997\u09a6\u09c1 \u099a\u09be\u099a\u09be\u09b0 \u09a6\u09c1 \u0983\u0996 \u09a5\u09be\u0995\u09be \u09b8\u09c7tto \u099a\u09be\u09bf\u099a \u0986\u09a8n \u0995\u09c7\u09b0 \u09b8\u09ac\u09bei\u09c7\u0995", "content": "
(Gedu ChaChar) (dookkha) (thaka) (satweo) (Chachi) (ananda) (kare) (sabaike)
\u09bf\u09a8\u09c7\u09df \u09a5\u09be\u09c7\u0995 \u0964
(niye) (thake)
", "num": null, "type_str": "table", "html": null }, "TABREF33": { "text": "", "content": "
Cases                                 Krippendorff's α
Before Error Analysis                 0.6332
Case 1                                0.6476
Case 2                                0.6417
Case 3                                0.6498
Case 4                                0.6402
Case 1 + Case 2                       0.6510
Case 1 + Case 3                       0.6533
Case 1 + Case 2 + Case 3              0.6601
Case 1 + Case 2 + Case 4              0.6625
Case 1 + Case 3 + Case 4              0.6678
Case 1 + Case 2 + Case 3 + Case 4     0.6772
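The agreement scores above use Krippendorff's α. A minimal sketch of the nominal-scale computation follows; the function name and the unit-list input format are our own illustration, not the authors' implementation.

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """Nominal-scale Krippendorff's alpha.
    `units`: list of lists, each inner list holding the labels the
    annotators assigned to one unit (missing values simply omitted)."""
    # Build the coincidence matrix: each ordered pair of labels within
    # a unit contributes 1/(m-1), where m is the number of labels there.
    o = Counter()
    for values in units:
        m = len(values)
        if m < 2:
            continue  # units with fewer than 2 labels carry no information
        for c, k in permutations(values, 2):
            o[(c, k)] += 1.0 / (m - 1)
    n_c = Counter()
    for (c, _k), w in o.items():
        n_c[c] += w
    n = sum(n_c.values())
    # Observed vs. expected disagreement (off-diagonal mass).
    d_o = sum(w for (c, k), w in o.items() if c != k)
    d_e = sum(n_c[c] * n_c[k] for c in n_c for k in n_c if c != k) / (n - 1)
    return 1.0 - d_o / d_e
```

Perfect agreement yields α = 1, chance-level agreement about 0, and systematic disagreement negative values, which matches the 0.63-0.68 range reported in the table as substantial but imperfect agreement.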
", "num": null, "type_str": "table", "html": null }, "TABREF34": { "text": "Publications of the Association for Computational Linguistics and Chinese Language Processing Money Order or Check payable to \"The Association for Computation Linguisticsand Chinese Language Processing \" or \"\u4e2d\u83ef\u6c11\u570b\u8a08\u7b97\u8a9e\u8a00\u5b78\u5b78\u6703\"\u2027 E-mail\uff1aaclclp@hp.iis.sinica.edu.tw", "content": "
[Price list and order form for ACLCLP publications; back issues of IJCLCLP: US$20 per copy (overseas); payment by credit card (preferred), check, or postal transfer to account no. 19166251, The Association for Computational Linguistics and Chinese Language Processing; contact: aclclp@hp.iis.sinica.edu.tw, Tel. (02) 2788-3799 ext. 1502]
", "num": null, "type_str": "table", "html": null } } } }