{
"paper_id": "D14-1011",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:54:11.996887Z"
},
"title": "Accurate Word Segmentation and POS Tagging for Japanese Microblogs: Corpus Annotation and Joint Modeling with Lexical Normalization",
"authors": [
{
"first": "Nobuhiro",
"middle": [],
"last": "Kaji",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Tokyo \u2021 National Institute of Informatics",
"location": {}
},
"email": "kaji@tkl.iis.u-tokyo.ac.jp"
},
{
"first": "Masaru",
"middle": [],
"last": "Kitsuregawa",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Tokyo \u2021 National Institute of Informatics",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Microblogs have recently received widespread interest from NLP researchers. However, current tools for Japanese word segmentation and POS tagging still perform poorly on microblog texts. We developed an annotated corpus and proposed a joint model for overcoming this situation. Our annotated corpus of microblog texts enables not only training of accurate statistical models but also quantitative evaluation of their performance. Our joint model with lexical normalization handles the orthographic diversity of microblog texts. We conducted an experiment to demonstrate that the corpus and model substantially contribute to boosting accuracy.",
"pdf_parse": {
"paper_id": "D14-1011",
"_pdf_hash": "",
"abstract": [
{
"text": "Microblogs have recently received widespread interest from NLP researchers. However, current tools for Japanese word segmentation and POS tagging still perform poorly on microblog texts. We developed an annotated corpus and proposed a joint model for overcoming this situation. Our annotated corpus of microblog texts enables not only training of accurate statistical models but also quantitative evaluation of their performance. Our joint model with lexical normalization handles the orthographic diversity of microblog texts. We conducted an experiment to demonstrate that the corpus and model substantially contribute to boosting accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Microblogs, such as Twitter 1 and Weibo 2 , have recently become an important target of NLP technology. Since microblogs offer an instant way of posting textual messages, they have been given increasing attention as valuable sources for such actions as mining opinions (Jiang et al., 2011) and detecting sudden events such as earthquakes (Sakaki et al., 2010) .",
"cite_spans": [
{
"start": 269,
"end": 289,
"text": "(Jiang et al., 2011)",
"ref_id": "BIBREF11"
},
{
"start": 338,
"end": 359,
"text": "(Sakaki et al., 2010)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, many studies have reported that current NLP tools do not perform well on microblog texts (Foster et al., 2011; Gimpel et al., 2011) . In the case of Japanese text processing, the most serious problem is poor accuracy of word segmentation and POS tagging. Since these two tasks are positioned as the fundamental step in the text processing pipeline, their accuracy is vital for all downstream applications.",
"cite_spans": [
{
"start": 98,
"end": 119,
"text": "(Foster et al., 2011;",
"ref_id": "BIBREF3"
},
{
"start": 120,
"end": 140,
"text": "Gimpel et al., 2011)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The main obstacle that makes word segmentation and POS tagging in the microblog domain challenging is the lack of annotated corpora. Because current annotated corpora are from other domains, such as news articles, it is difficult to train models that perform well on microblog texts. Moreover, system performance cannot be evaluated quantitatively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Development of annotated corpus",
"sec_num": "1.1"
},
{
"text": "We remedied this situation by developing an annotated corpus of Japanese microblogs. We collected 1831 sentences from Twitter and manually annotated these sentences with word boundaries, POS tags, and normalized forms of words (cf. Section 1.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Development of annotated corpus",
"sec_num": "1.1"
},
{
"text": "We present, for the first time, a comprehensive empirical study of Japanese word segmentation and POS tagging on microblog texts by using this corpus. Specifically, we investigated how well current models trained on existing corpora perform in the microblog domain. We also explored performance gains achieved by using our corpus for training, and by jointly performing lexical normalization (cf. Section 1.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Development of annotated corpus",
"sec_num": "1.1"
},
{
"text": "Orthographic diversity in microblog texts causes a problem when training a statistical model for word segmentation and POS tagging. Microblog texts frequently contain informal words that are spelled in a non-standard manner, e.g., \"oredi (already)\", \"b4 (before)\", and \"talkin (talking)\" (Han and Baldwin, 2011) . Such words, hereafter referred to as ill-spelled words, are so productive that they considerably increase the vocabulary size. This makes training of statistical models difficult.",
"cite_spans": [
{
"start": 288,
"end": 311,
"text": "(Han and Baldwin, 2011)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Joint modeling with lexical normalization",
"sec_num": "1.2"
},
{
"text": "We address this problem by jointly conducting lexical normalization. Although a wide variety of ill-spelled words are used in microblog texts, many can be normalized into well-spelled equivalents, which conform to standard rules of spelling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint modeling with lexical normalization",
"sec_num": "1.2"
},
{
"text": "A joint model with lexical normalization is able to handle orthographic diversity by exploiting information obtainable from the well-spelled equivalents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint modeling with lexical normalization",
"sec_num": "1.2"
},
{
"text": "The proposed joint model was empirically evaluated on the microblog corpus we developed. Our experiment demonstrated that the proposed model can perform word segmentation and POS tagging substantially better than current state-of-the-art models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint modeling with lexical normalization",
"sec_num": "1.2"
},
{
"text": "Contributions of this paper are the following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary",
"sec_num": "1.3"
},
{
"text": "\u2022 We developed a microblog corpus that enables not only training of accurate models but also quantitative evaluation for word segmentation and POS tagging in the microblog domain. 3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary",
"sec_num": "1.3"
},
{
"text": "\u2022 We propose a joint model with lexical normalization for better handling of orthographic diversity in microblog texts. In particular, we present a new method of training the joint model using a partially annotated corpus (cf. Section 7.4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary",
"sec_num": "1.3"
},
{
"text": "\u2022 We present, for the first time, a comprehensive empirical study of word segmentation and POS tagging for microblogs. The experimental results demonstrated that both the microblog corpus and joint model greatly contribute to training accurate models for word segmentation and POS tagging.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary",
"sec_num": "1.3"
},
{
"text": "The remainder of this paper is organized as follows. Section 2 reviews related work. Section 3 discusses the task of lexical normalization and introduces terminology. Section 4 presents our microblog corpus and results of our corpus analysis. Section 5 presents an overview of our joint model with lexical normalization, and Sections 6 and 7 provide details of the model. Section 8 presents experimental results and discussions, and Section 9 presents concluding remarks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary",
"sec_num": "1.3"
},
{
"text": "Researchers have recently developed various microblog corpora annotated with rich linguistic information. Gimpel et al. (2011) and Foster et al. (2011) annotated English microblog posts with POS tags. Han and Baldwin (2011) released a microblog corpus annotated with normalized forms of words. A Chinese microblog corpus annotated with word boundaries was developed for the SIGHAN bakeoff (Duan et al., 2012) . However, there are no microblog corpora annotated with word boundaries, POS tags, and normalized sentences. 3 Please contact the first author for this corpus.",
"cite_spans": [
{
"start": 106,
"end": 126,
"text": "Gimpel et al. (2011)",
"ref_id": "BIBREF4"
},
{
"start": 131,
"end": 151,
"text": "Foster et al. (2011)",
"ref_id": "BIBREF3"
},
{
"start": 389,
"end": 408,
"text": "(Duan et al., 2012)",
"ref_id": "BIBREF2"
},
{
"start": 519,
"end": 520,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "There has been a surge of interest in lexical normalization with the advent of microblogs (Han and Baldwin, 2011; Liu et al., 2012; Han et al., 2012; Wang and Ng, 2013; Zhang et al., 2013; Ling et al., 2013; Yang and Eisenstein, 2013; Wang et al., 2013) . However, these studies did not address enhancing word segmentation. Wang et al. (2013) proposed a method of joint ill-spelled word recognition and word segmentation. With their method, informal spellings are merely recognized and not normalized. Therefore, they did not investigate how to exploit the information obtainable from well-spelled equivalents to increase word segmentation accuracy.",
"cite_spans": [
{
"start": 90,
"end": 113,
"text": "(Han and Baldwin, 2011;",
"ref_id": "BIBREF5"
},
{
"start": 114,
"end": 131,
"text": "Liu et al., 2012;",
"ref_id": "BIBREF17"
},
{
"start": 132,
"end": 149,
"text": "Han et al., 2012;",
"ref_id": "BIBREF6"
},
{
"start": 150,
"end": 168,
"text": "Wang and Ng, 2013;",
"ref_id": "BIBREF24"
},
{
"start": 169,
"end": 188,
"text": "Zhang et al., 2013;",
"ref_id": "BIBREF28"
},
{
"start": 189,
"end": 207,
"text": "Ling et al., 2013;",
"ref_id": "BIBREF16"
},
{
"start": 208,
"end": 234,
"text": "Yang and Eisenstein, 2013;",
"ref_id": "BIBREF27"
},
{
"start": 235,
"end": 253,
"text": "Wang et al., 2013)",
"ref_id": "BIBREF16"
},
{
"start": 324,
"end": 342,
"text": "Wang et al. (2013)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Some studies also explored integrating the lexical normalization process into word segmentation and POS tagging (Ikeda et al., 2009; Sasano et al., 2013) . A strength of our joint model is that it uses rich character-level and word-level features used in state-of-the-art models of joint word segmentation and POS tagging (Kudo et al., 2004; Neubig et al., 2011; Kaji and Kitsuregawa, 2013) . Thanks to these features, our model performed much better than Sasano et al.'s system, which is the only publicly available system that jointly conducts lexical normalization, in the experiments (see Section 8). Another advantage is that our model can be trained on a partially annotated corpus. Furthermore, we present a comprehensive evaluation in terms of precision and recall on our microblog corpus. Such an evaluation has not been conducted in previous work due to the lack of annotated corpora. 4",
"cite_spans": [
{
"start": 112,
"end": 132,
"text": "(Ikeda et al., 2009;",
"ref_id": "BIBREF9"
},
{
"start": 133,
"end": 153,
"text": "Sasano et al., 2013)",
"ref_id": "BIBREF21"
},
{
"start": 322,
"end": 341,
"text": "(Kudo et al., 2004;",
"ref_id": "BIBREF14"
},
{
"start": 342,
"end": 362,
"text": "Neubig et al., 2011;",
"ref_id": "BIBREF18"
},
{
"start": 363,
"end": 390,
"text": "Kaji and Kitsuregawa, 2013)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "This section explains the task of lexical normalization addressed in this paper. Since lexical normalization is a relatively new research topic, there are no precise definitions of a lexical normalization task that are widely accepted by researchers. Therefore, it is important to clarify our task setting before discussing our joint model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Normalization Task",
"sec_num": "3"
},
{
"text": "Many studies on lexical normalization have pointed out that phonological factors are deeply involved in the process of deriving ill-spelled words. Xia et al. (2006) investigated a Chinese chat corpus and reported that 99.2% of the ill-spelled words were derived by phonetic mapping from well-spelled equivalents. Wang and Ng (2013) analyzed 200 Chinese messages from Weibo and 200 English SMS messages from the NUS SMS corpus (How and Kan, 2005) . Their analysis revealed that most ill-spelled words were derived from well-spelled equivalents based on pronunciation similarity. On top of these investigations, we focused on ill-spelled words that are derived by phonological mapping from well-spelled words by assuming that such ill-spelled words are dominant in Japanese microblogs as well. We also assume that these ill-spelled words can be normalized into well-spelled equivalents on a word-to-word basis, as assumed in a previous study (Han and Baldwin, 2011) . The validity of these two assumptions is empirically assessed in Section 4. Table 1 lists examples of our target ill-spelled words, their well-spelled equivalents, and their phonemes. The ill-spelled word in the first row is formed by changing the continuous two vowels from /oi/ to /ee/. This type of change in pronunciation is often observed in Japanese spoken language. The second row presents contractions. The last vowel character \" \" /u/ of the well-spelled word is dropped. The third row illustrates word lengthening. The ill-spelled word is derived by repeating the vowel character \" \" /i/.",
"cite_spans": [
{
"start": 147,
"end": 164,
"text": "Xia et al. (2006)",
"ref_id": "BIBREF26"
},
{
"start": 313,
"end": 331,
"text": "Wang and Ng (2013)",
"ref_id": "BIBREF24"
},
{
"start": 426,
"end": 445,
"text": "(How and Kan, 2005)",
"ref_id": "BIBREF8"
},
{
"start": 940,
"end": 963,
"text": "(Han and Baldwin, 2011)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 1042,
"end": 1049,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Target ill-spelled words",
"sec_num": "3.1"
},
{
"text": "We now introduce the terminology that will be used throughout the remainder of this paper. The term word surface form (or surface form for short) is used to refer to the word form observed in an actual text, while word normal form (or normal form) refers to the normalized word form. Note that surface forms of well-spelled words are always identical to their normal forms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Terminology",
"sec_num": "3.2"
},
{
"text": "It is possible that the word surface form and normal form have distinct POS tags, although they are identical in most cases. Take the ill-spelled word \" \" /modoro/ as an example (the second row of Table 1 ). According to the JUMAN POS tag set, 5 POS of its surface form is CONTRACTED VERB, while that of its normal form is VERB. 6 To handle such a case, we strictly distinguish between these two POS tags by referring to them as surface POS tags and normal POS tags, respectively. Given these terms, the tasks addressed in this paper can be stated as follows. Word segmentation is a task of segmenting a sentence into a sequence of word surface forms, and POS tagging is a task of providing surface POS tags. The task of joint lexical normalization, word segmentation, and POS tagging is to map a sentence into a sequence of quadruplets: word surface form, surface POS tag, normal form, and normal POS tag.",
"cite_spans": [
{
"start": 329,
"end": 330,
"text": "6",
"ref_id": null
}
],
"ref_spans": [
{
"start": 197,
"end": 204,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Terminology",
"sec_num": "3.2"
},
{
"text": "This section introduces our microblog corpus. We first explain the process of developing the corpus and then present the results of our agreement study and corpus analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Microblog Corpus",
"sec_num": "4"
},
{
"text": "The corpus was developed by manually annotating text messages posted to Twitter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data collection and annotation",
"sec_num": "4.1"
},
{
"text": "The posts to be annotated were collected as follows. 171,386 Japanese posts were collected using the Twitter API 7 on December 6, 2013. Among these, 1000 posts were randomly selected and then manually split into sentences. As a result, we obtained 1831 sentences as a source of the corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data collection and annotation",
"sec_num": "4.1"
},
{
"text": "Two human participants annotated the 1831 sentences with surface forms and surface POS tags. Since much effort has already been made to annotate corpora with this information, the annotation process here follows the guidelines used to develop such corpora in previous studies (Kurohashi and Nagao, 1998; Hashimoto et al., 2011) .",
"cite_spans": [
{
"start": 276,
"end": 303,
"text": "(Kurohashi and Nagao, 1998;",
"ref_id": "BIBREF15"
},
{
"start": 304,
"end": 327,
"text": "Hashimoto et al., 2011)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data collection and annotation",
"sec_num": "4.1"
},
{
"text": "The two participants also annotated ill-spelled words with their normal forms and normal POS tags. Although this paper targets only informal phonological variations (cf. Section 3), other types of ill-spelled words were also annotated to investigate their frequency distribution in microblog texts. Specifically, besides informal phonological variations, spelling errors and Twitter-specific abbreviations were annotated. As a result, 833 ill-spelled words were identified (Table 2). They were all annotated with normal forms and normal POS tags.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data collection and annotation",
"sec_num": "4.1"
},
{
"text": "We investigated the inter-annotator agreement to check the reliability of the annotation. During the annotation process, the two participants collaboratively annotated around 90% of the sentences (specifically, 1647 sentences) with normal forms and normal POS tags, and elaborated an annotation guideline through discussion. They then independently annotated the remaining 184 sentences (1431 words), which were used for the agreement study. Our annotation guideline is shown in the supplementary material.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Agreement study",
"sec_num": "4.2"
},
{
"text": "We first explored the extent to which the two participants agreed in distinguishing between well-spelled words and ill-spelled words. For this task, we observed a Cohen's kappa of 0.96 (almost perfect agreement). These results show that it is easy for humans to distinguish between these two types of words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Agreement study",
"sec_num": "4.2"
},
{
"text": "Next, we investigated whether the two participants could give ill-spelled words the same normal forms and normal POS tags. For this purpose, we regarded the normal forms and normal POS tags annotated by one participant as gold standards and calculated the precision and recall achieved by the other participant. We observed moderate agreement between the two participants: 70% (56/80) precision and 73% (56/76) recall. We manually analyzed the conflicting examples and found that there was more than one acceptable normal form in many of these cases. Therefore, we would like to note that the precision and recall reported above are rather pessimistic estimates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Agreement study",
"sec_num": "4.2"
},
{
"text": "We conducted corpus analysis to confirm the feasibility of our approach. Table 2 illustrates that phonological variations constitute a vast majority of ill-spelled words in Japanese microblog texts. In addition, analysis of the 804 phonological variations showed that 793 of them can be normalized into single words. These results confirm the validity of the two assumptions we made in Section 3.1.",
"cite_spans": [],
"ref_spans": [
{
"start": 73,
"end": 80,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "4.3"
},
{
"text": "We then investigated whether lexical normalization can decrease the number of out-of-vocabulary words. For the 793 ill-spelled words, we counted how many of their surface forms and normal forms were not registered in the JUMAN dictionary. 8 The result shows that 411 (51.8%) of the surface forms but only 74 (9.3%) of the normal forms are not registered in the dictionary. This indicates the effectiveness of lexical normalization for decreasing out-of-vocabulary words.",
"cite_spans": [
{
"start": 239,
"end": 240,
"text": "8",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "4.3"
},
{
"text": "This section gives an overview of our joint model with lexical normalization for accurate word segmentation and POS tagging.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview of Joint Model",
"sec_num": "5"
},
{
"text": "A lattice-based approach has been commonly adopted to perform joint word segmentation and POS tagging (Jiang et al., 2008; Kudo et al., 2004; Kaji and Kitsuregawa, 2013) . In this approach, an input sentence is transformed into a word lattice in which the edges are labeled with surface POS tags (Figure 1 ). Given such a lattice, word segmentation and POS tagging can be performed at the same time by traversing the lattice. A discriminative model is typically used for the traversal.",
"cite_spans": [
{
"start": 102,
"end": 122,
"text": "(Jiang et al., 2008;",
"ref_id": "BIBREF10"
},
{
"start": 123,
"end": 141,
"text": "Kudo et al., 2004;",
"ref_id": "BIBREF14"
},
{
"start": 142,
"end": 169,
"text": "Kaji and Kitsuregawa, 2013)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 296,
"end": 305,
"text": "(Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Lattice-based approach",
"sec_num": "5.1"
},
{
"text": "An advantage of this approach is that, while the lattice can represent an exponentially large number of candidate analyses, it can be quickly traversed using dynamic programming (Kudo et al., 2004; Kaji and Kitsuregawa, 2013) or beam search (Jiang et al., 2008) . In addition, a discriminative model allows the use of rich word-level features to find the correct analysis. (Kudo et al., 2004; Kaji and Kitsuregawa, 2013) . Circle and arrow represent node and edge, respectively. Bold edges represent correct analysis. We propose extending the lattice-based approach to jointly perform lexical normalization, word segmentation, and POS tagging. We transform an input sentence into a word lattice in which the edges are labeled with not only surface POS tags but normal forms and normal POS tags (Figure 2) . By traversing such a lattice, the three tasks can be performed at the same time. This approach can not only exploit rich information obtainable from word normal forms, but also achieve efficiency similar to the original lattice-based approach.",
"cite_spans": [
{
"start": 178,
"end": 197,
"text": "(Kudo et al., 2004;",
"ref_id": "BIBREF14"
},
{
"start": 198,
"end": 225,
"text": "Kaji and Kitsuregawa, 2013)",
"ref_id": "BIBREF13"
},
{
"start": 241,
"end": 261,
"text": "(Jiang et al., 2008)",
"ref_id": "BIBREF10"
},
{
"start": 373,
"end": 392,
"text": "(Kudo et al., 2004;",
"ref_id": "BIBREF14"
},
{
"start": 393,
"end": 420,
"text": "Kaji and Kitsuregawa, 2013)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 794,
"end": 804,
"text": "(Figure 2)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Lattice-based approach",
"sec_num": "5.1"
},
{
"text": "Issues in developing this lattice-based approach are detailed in Sections 6 and 7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Issues",
"sec_num": "5.2"
},
{
"text": "Section 6 describes how to generate a word lattice from an input sentence. This is done using a hybrid approach that combines a statistical model and normalization dictionary. The normalization dictionary is specifically a list of quadruplets (Table 3) . Section 7 describes a discriminative model for the lattice traversal. Our feature design as well as two training methods are presented.",
"cite_spans": [],
"ref_spans": [
{
"start": 243,
"end": 252,
"text": "(Table 3)",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Issues",
"sec_num": "5.2"
},
{
"text": "In this section, we first describe a method of constructing a normalization dictionary and then present a method of generating a word lattice from an input sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Lattice Generation",
"sec_num": "6"
},
{
"text": "Although large-scale normalization dictionaries are difficult to obtain, tag dictionaries, which list pairs of word surface forms and their surface POS tags (Table 4) , are widely available in many languages including Japanese. Therefore, we use an existing tag dictionary to construct the normalization dictionary.",
"cite_spans": [],
"ref_spans": [
{
"start": 157,
"end": 166,
"text": "(Table 4)",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Construction of normalization dictionary",
"sec_num": "6.1"
},
{
"text": "Due to space limitations, we give only a brief overview of our construction method, omitting its details. We note that our method uses hand-crafted rules similar to those used in (Sasano et al., 2013) ; hence, the proposal of this method is not an important contribution. To make our experimental results reproducible, our normalization dictionary, as well as a tool for constructing it, is released as supplementary material.",
"cite_spans": [
{
"start": 179,
"end": 200,
"text": "(Sasano et al., 2013)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Construction of normalization dictionary",
"sec_num": "6.1"
},
{
"text": "Our method of constructing the normalization dictionary takes three steps. The following explains each step using Tables 3 and 4 as running examples.",
"cite_spans": [],
"ref_spans": [
{
"start": 114,
"end": 128,
"text": "Tables 3 and 4",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Construction of normalization dictionary",
"sec_num": "6.1"
},
{
"text": "Step 1 A tag dictionary generally contains a small number of ill-spelled words, although well-spelled words constitute a vast majority. We identify such ill-spelled words by using a manually-tailored list of surface POS tags indicative of informal spelling (e.g., CONTRACTED VERB). For example, entry (c) in Table 4 is identified as an ill-spelled word in this step.",
"cite_spans": [],
"ref_spans": [
{
"start": 308,
"end": 315,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Construction of normalization dictionary",
"sec_num": "6.1"
},
{
"text": "Step 2 The tag dictionary is augmented with normal forms and normal POS tags to construct a small normalization dictionary. For ill-spelled words identified in step 1, the normal forms and normal POS tags are determined by hand-crafted rules. For example, the normal form is derived by appending the vowel character \" \" /u/ to the surface form, if the surface POS tag is CONTRACTED VERB. This rule derives entry (D) in Table 3 from entry (c) in Table 4 . For well-spelled words, on the other hand, the normal forms and normal POS tags are simply set the same as the surface forms and surface POS tags. For example, entries (A), (C), and (E) in Table 3 are generated from entries (a), (b), and (d) in Table 4 , respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 419,
"end": 426,
"text": "Table 3",
"ref_id": "TABREF2"
},
{
"start": 445,
"end": 452,
"text": "Table 4",
"ref_id": "TABREF3"
},
{
"start": 644,
"end": 651,
"text": "Table 3",
"ref_id": "TABREF2"
},
{
"start": 700,
"end": 707,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Construction of normalization dictionary",
"sec_num": "6.1"
},
{
"text": "Step 3 Because the normalization dictionary constructed in step 2 contains only a few ill-spelled words, it is expanded in this step. For this purpose, we use hand-crafted rules to derive ill-spelled words from the entries already registered in the normalization dictionary. Some rules are taken from (Sasano et al., 2013) , while the others are newly tailored. In Table 3 , for example, entry (B) is derived from entry (A) by applying the rule that substitutes \" \" /goi/ with \" \" /gee/. A small problem that arises in step 3 is how to handle lengthened words, such as entry (F) in Table 3. While lengthened words can be easily derived using simple rules (Brody and Diakopoulos, 2011; Sasano et al., 2013) , such rules infinitely increase the number of entries because an unlimited number of lengthened words can be derived by repeating characters. To address this problem, no lengthened words are added to the normalization dictionary in step 3. We instead use rules to skip repetitive characters in an input sentence when performing dictionary matching.",
"cite_spans": [
{
"start": 301,
"end": 322,
"text": "(Sasano et al., 2013)",
"ref_id": "BIBREF21"
},
{
"start": 655,
"end": 684,
"text": "(Brody and Diakopoulos, 2011;",
"ref_id": "BIBREF0"
},
{
"start": 685,
"end": 705,
"text": "Sasano et al., 2013)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 365,
"end": 372,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Construction of normalization dictionary",
"sec_num": "6.1"
},
{
"text": "A word lattice is generated using both a statistical method (Kaji and Kitsuregawa, 2013) and the normalization dictionary.",
"cite_spans": [
{
"start": 60,
"end": 88,
"text": "(Kaji and Kitsuregawa, 2013)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A hybrid approach",
"sec_num": "6.2"
},
{
"text": "We begin by generating a word lattice which encodes only word surface forms and surface POS tags (cf. Figure 1 ) using the statistical method proposed by Kaji and Kitsuregawa (2013) . Interested readers may refer to their paper for details.",
"cite_spans": [
{
"start": 154,
"end": 181,
"text": "Kaji and Kitsuregawa (2013)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 102,
"end": 110,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "A hybrid approach",
"sec_num": "6.2"
},
{
"text": "Each edge in the lattice is then labeled with normal forms and normal POS tags. Note that a single edge can have more than one candidate normal form and normal POS tag. In such a case, new edges are accordingly added to the lattice.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A hybrid approach",
"sec_num": "6.2"
},
{
"text": "The edges are labeled with normal forms and normal POS tags in the following manner. First, every edge is labeled with a normal form and normal POS tag that are identical with the surface form and surface POS tag. This is based on our observation that most words are well-spelled ones. The edge is not provided with further normal forms and normal POS tags, if the normalization dictionary contains a well-spelled word that has the same surface form as the edge. Otherwise, we allow the edge to have all pairs of normal forms and normal POS tags that are obtained by using the normalization dictionary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A hybrid approach",
"sec_num": "6.2"
},
{
"text": "This section explains a discriminative model for traversing the word lattice. The lattice traversal with a discriminative model can formally be written as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discriminative Lattice Traversal",
"sec_num": "7"
},
{
"text": "(w, t, v, s) = arg max (w,t,v,s)\u2208L(x) f (x, w, t, v, s) \u2022 \u03b8.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discriminative Lattice Traversal",
"sec_num": "7"
},
{
"text": "Here, x denotes an input sentence, w, t, v, and s denote a sequence of word surface forms, surface POS tags, normal forms, and normal POS tags, respectively, L(x) represents a set of candidate analyses represented by the word lattice, and f (\u2022) and \u03b8 are feature and weight vectors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discriminative Lattice Traversal",
"sec_num": "7"
},
{
"text": "We now describe features, a decoding method, and two training methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discriminative Lattice Traversal",
"sec_num": "7"
},
{
"text": "We use character-level and word-level features used for word segmentation and POS tagging in (Kaji and Kitsuregawa, 2013) . To take advantage of joint model with lexical normalization, the word-level features are extracted from not only surface forms but also normal forms. See (Kaji and Kitsuregawa, 2013) for the original features.",
"cite_spans": [
{
"start": 93,
"end": 121,
"text": "(Kaji and Kitsuregawa, 2013)",
"ref_id": "BIBREF13"
},
{
"start": 278,
"end": 306,
"text": "(Kaji and Kitsuregawa, 2013)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "7.1"
},
{
"text": "In addition, several new features are introduced in this paper. We use the quadruplets",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "7.1"
},
{
"text": "(w i , t i , v i , s i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "7.1"
},
{
"text": "and pairs of surface and normal POS tags (t i , s i ) as binary features to capture probable mappings between ill-spelled words and their well-spelled equivalents. We use another binary feature indicating whether a quadruplet (w i , t i , v i , s i ) is registered in the normalization dictionary. Also, we use a bigram language model feature, which prevents sentences from being normalized into ungrammatical and/or incomprehensible ones. The language model features are associated with normalized bigrams,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "7.1"
},
{
"text": "(v i\u22121 , s i\u22121 , v i , s i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "7.1"
},
{
"text": ", and take as the values the logarithmic frequency log 10 (f + 1), where f represents the bigram frequency (Kaji and Kitsuregawa, 2011) . Since it is difficult to obtain a precise value of f , it is approximated by the frequency of the surface bigram, (w i\u22121 , t i\u22121 , w i , t i ), calculated from a large raw corpus automatically analyzed using a system of joint word segmentation and POS tagging. See Section 8.1 for the raw corpus and system used in the experiments.",
"cite_spans": [
{
"start": 107,
"end": 135,
"text": "(Kaji and Kitsuregawa, 2011)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "7.1"
},
{
"text": "It is easy to find the best analysis (w, t, v, s) among the candidates represented by the word lattice. Although we use several new features, we can still locate the best analysis by using the same dynamic programming algorithm as in previous studies (Kudo et al., 2004; Kaji and Kitsuregawa, 2013) .",
"cite_spans": [
{
"start": 251,
"end": 270,
"text": "(Kudo et al., 2004;",
"ref_id": "BIBREF14"
},
{
"start": 271,
"end": 298,
"text": "Kaji and Kitsuregawa, 2013)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding",
"sec_num": "7.2"
},
{
"text": "It is straightforward to train the joint model provided with a fully annotated corpus, which is labeled with word surface forms, surface POS tags, normal forms, and normal POS tags.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training on a fully annotated corpus",
"sec_num": "7.3"
},
{
"text": "We use structured perceptron (Collins, 2002) for the training (Algorithm 1). The training begins by initializing \u03b8 as a zero vector (line 1). It then reads the annotated corpus C (line 2-9). Given a training example, (x, w, t, v, s) \u2208 C, the algorithm locates the best analysis, (\u0175,t,v,\u015d) , based on the current weight vector (line 4). If the best analysis differs from the oracle analysis, (w, t, v, s), the weight vector is updated (line 5-7). After going through the annotated corpus m times (m=10 in our experiment), the averaged weight vector is returned (line 10).",
"cite_spans": [
{
"start": 29,
"end": 44,
"text": "(Collins, 2002)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 279,
"end": 288,
"text": "(\u0175,t,v,\u015d)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Training on a fully annotated corpus",
"sec_num": "7.3"
},
{
"text": "Although the training with the perceptron algorithm requires a fully annotated corpus, it is laborintensive to fully annotate sentences. This consid-Algorithm 1 Perceptron training 1: \u03b8 \u2190 0 2: for i = 1 . . . m do 3:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training on a partially annotated corpus",
"sec_num": "7.4"
},
{
"text": "for (x, w, t, v, s) \u2208 C do 4: (\u0175,t,v,\u015d) \u2190 DECODING(x, \u03b8) 5: if (w, t, v, s) = (\u0175,t,v,\u015d) then 6: \u03b8 \u2190 \u03b8 + f (x, w, t, v, s) \u2212 f (x,\u0175,t,v,\u015d) 7:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training on a partially annotated corpus",
"sec_num": "7.4"
},
{
"text": "end if 8: end for 9: end for 10: return AVERAGE(\u03b8) Algorithm 2 Latent perceptron training 1: \u03b8 \u2190 0 2: for i = 1 . . . m do 3:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training on a partially annotated corpus",
"sec_num": "7.4"
},
{
"text": "for (x, w, t) \u2208 C \u2032 do 4:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training on a partially annotated corpus",
"sec_num": "7.4"
},
{
"text": "(\u0175,t,v,\u015d) \u2190 DECODING(x, \u03b8) 5: (w, t,v,s) \u2190 CONSTRAINEDDECODING(x, \u03b8) 6:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training on a partially annotated corpus",
"sec_num": "7.4"
},
{
"text": "if w =\u0175 or t =t then 7:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training on a partially annotated corpus",
"sec_num": "7.4"
},
{
"text": "\u03b8 \u2190 \u03b8 + f (x, w, t,v,s) \u2212 f (x,\u0175,t,v,\u015d) 8:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training on a partially annotated corpus",
"sec_num": "7.4"
},
{
"text": "end if 9: end for 10: end for 11: return AVERAGE(\u03b8) eration motivates us to explore training our model with less supervision. We specifically explore using a corpus annotated with only word boundaries and POS tags.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training on a partially annotated corpus",
"sec_num": "7.4"
},
{
"text": "We use the latent perceptron algorithm (Sun et al., 2013) to train the joint model from such a partially annotated corpus (Algorithm 2). In this scenario, a training example is a sentence x paired with a sequence of word surface forms w and surface POS tags t (c.f., line 3). Similarly to the perceptron algorithm, we locate the best analysis (\u0175,t,v,\u015d) for a given training example, (line 4). We also locate the best analysis, (w, t,v,s), among those having the same surface forms w and surface POS tags t as the training example (line 5). If the surface forms and surface POS tags of the former analysis differ from the annotations of the training example, parameter is updated by regarding the latter analysis as an oracle (line 6-8).",
"cite_spans": [
{
"start": 39,
"end": 57,
"text": "(Sun et al., 2013)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training on a partially annotated corpus",
"sec_num": "7.4"
},
{
"text": "We conducted experiments to investigate how the microblog corpus and joint model contribute to improving accuracy of word segmentation and POS tagging in the microblog domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "8"
},
{
"text": "We constructed the normalization dictionary from the JUMAN dictionary 7.0. 9 While JUMAN dic-tionary contains 750,156 entries, the normalization dictionary contains 112,458,326 entries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setting",
"sec_num": "8.1"
},
{
"text": "Some features taken from the previous study (Kaji and Kitsuregawa, 2013) are induced using a tag dictionary. For this we used two tag dictionaries. One is JUMAN dictionary 7.0 and the other is a tag dictionary constructed by listing surface forms and surface POS tags in the normalization dictionary.",
"cite_spans": [
{
"start": 44,
"end": 72,
"text": "(Kaji and Kitsuregawa, 2013)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setting",
"sec_num": "8.1"
},
{
"text": "To compute the language model features, one billion sentences from Twitter posts were analyzed using MeCab 0.996. 10 We used all bigrams appearing at least 10 times in the auto-analyzed sentences.",
"cite_spans": [
{
"start": 114,
"end": 116,
"text": "10",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setting",
"sec_num": "8.1"
},
{
"text": "We first investigated the performance of models trained on an existing annotated corpus form news texts. For this experiment, our joint model as well as three state-of-the-art models (Kudo et al., 2004) 11 (Neubig et al., 2011) 12 (Kaji and Kitsuregawa, 2013) were trained on Kyoto University Text corpus 4.0 (Kurohashi and Nagao, 1998) .",
"cite_spans": [
{
"start": 183,
"end": 202,
"text": "(Kudo et al., 2004)",
"ref_id": "BIBREF14"
},
{
"start": 206,
"end": 227,
"text": "(Neubig et al., 2011)",
"ref_id": "BIBREF18"
},
{
"start": 231,
"end": 259,
"text": "(Kaji and Kitsuregawa, 2013)",
"ref_id": "BIBREF13"
},
{
"start": 309,
"end": 336,
"text": "(Kurohashi and Nagao, 1998)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results of word segmentation and POS tagging",
"sec_num": "8.2"
},
{
"text": "Since this training corpus is not annotated with normal forms and normal POS tags, our model was trained using the latent perceptron. Table 5 summarizes the word-level F 1 -scores (Kudo et al., 2004) on our microblog corpus. The two columns represent the results for word segmentation (Seg) and joint word segmentation and POS tagging (Seg+Tag), respectively. We also conducted 5-fold crossvalidation on our microblog corpus to evaluate performance improvement when these models are trained on microblog texts (Table 6 ). In addition to the models in Table 5 , results of a rule-based system (Sasano et al., 2013) 13 and our joint model trained using the perceptron algorithm are also presented. Notice that Proposed and Proposed (latent) represent our model trained using perceptron and latent perceptron, respectively.",
"cite_spans": [
{
"start": 181,
"end": 200,
"text": "(Kudo et al., 2004)",
"ref_id": "BIBREF14"
},
{
"start": 593,
"end": 614,
"text": "(Sasano et al., 2013)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 134,
"end": 142,
"text": "Table 5",
"ref_id": "TABREF4"
},
{
"start": 511,
"end": 519,
"text": "(Table 6",
"ref_id": "TABREF5"
},
{
"start": 552,
"end": 559,
"text": "Table 5",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results of word segmentation and POS tagging",
"sec_num": "8.2"
},
{
"text": "From Tables 5 and 6 , as expected, we see that the models trained on news texts performed poorly on microblog texts, while their performance significantly boosted when trained on the microblog texts. This demonstrates the importance of corpus annotation. An exception was Kudo04. Its perfor- mance improved only slightly, even when it was trained on the microblog texts. We believe this is because their model uses dictionary-based rules to prune candidate analyses; thus, it could not perform well in the microblog domain, where out-ofvocabulary words are abundant. Table 6 also illustrates that our joint models achieved F 1 -score better than the state-of-the-art models trained on the microblog texts. This shows that modeling the derivation process of illspelled words makes training easier. We conducted bootstrap resampling (with 1000 samples) to investigate the significance of the improvements achieved with our joint model. The results showed that all improvements over the baselines were statistically significant (p < 0.01). The difference between Proposed and Proposed (latent) were also statistically significant (p < 0.01).",
"cite_spans": [],
"ref_spans": [
{
"start": 5,
"end": 19,
"text": "Tables 5 and 6",
"ref_id": "TABREF4"
},
{
"start": 567,
"end": 574,
"text": "Table 6",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results of word segmentation and POS tagging",
"sec_num": "8.2"
},
{
"text": "The results of Proposed (latent) are interesting. Table 5 illustrates that our joint model performs well even when it is trained on a news corpus that rarely contains ill-spelled words and is not at all annotated with normal forms and normal POS tags. This indicates the robustness of our training method and the importance of modeling word derivation process in the microblog domain. In Table 6 , we observed that Proposed (latent), which uses less supervision, performed better than Proposed. The reason for this will be examined later.",
"cite_spans": [],
"ref_spans": [
{
"start": 50,
"end": 57,
"text": "Table 5",
"ref_id": "TABREF4"
},
{
"start": 388,
"end": 395,
"text": "Table 6",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results of word segmentation and POS tagging",
"sec_num": "8.2"
},
{
"text": "In summary, we can conclude that both the microblog corpus and joint model significantly contribute to training accurate models for word segmentation and POS tagging in the microblog domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results of word segmentation and POS tagging",
"sec_num": "8.2"
},
{
"text": "While the main goal with this study was to enhance word segmentation and POS tagging in the microblog domain, it is interesting to explore how well our joint model can normalize ill-spelled words. Table 7 illustrates precision, recall, and F 1score for the lexical normalization task. To put the results into context, we report on the baseline results of a tagging model proposed by Neubig et al. (2011) . This baseline conducts lexical normalization by regarding it as two independent tagging tasks (i.e., tasks of tagging normal forms and normal POS tags). The result of the baseline model is also obtained using 5-fold crossvalidation. Table 7 illustrates that Proposed performed significantly better than the simple tagging model, Neubig11. This suggests the effectiveness of our joint model. On the other hand, Proposed (latent) performed poorly in this task. From this result, we can argue that Proposed (latent) can achieve superior performance in word segmentation and POS tagging (Table 6 ) because it gave up correctly normalizing ill-spelled words, focusing on word segmentation and POS tagging.",
"cite_spans": [
{
"start": 383,
"end": 403,
"text": "Neubig et al. (2011)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 197,
"end": 204,
"text": "Table 7",
"ref_id": "TABREF6"
},
{
"start": 639,
"end": 646,
"text": "Table 7",
"ref_id": "TABREF6"
},
{
"start": 989,
"end": 997,
"text": "(Table 6",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results of lexical normalization",
"sec_num": "8.3"
},
{
"text": "The experimental results so far suggest the following strategy for training our joint model. If accuracy of word segmentation and POS tagging is the main concern, we can use the latent perceptron. This approach has the advantage of being able to use a partially annotated corpus. On the other hand, if performance of lexical normalization is crucial, we have to use the standard perceptron algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results of lexical normalization",
"sec_num": "8.3"
},
{
"text": "We manually analyzed erroneous outputs and observed several tendencies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error analysis",
"sec_num": "8.4"
},
{
"text": "We found that a word lattice sometimes missed the correct output. Such an error was, for example, observed in a sentence including many ill-spelled words, e.g., ' (be nervous about what other people think!)', where the part ' ' is in ill-spelled words. Improving the lattice generation algorithm is considered necessary to achieve further performance gain.",
"cite_spans": [
{
"start": 161,
"end": 162,
"text": "'",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Error analysis",
"sec_num": "8.4"
},
{
"text": "Even if the correct analysis appears in the word lattice, our model sometimes failed to handle ill-spelled words, incorrectly analyzing them as out-of-vocabulary words. For example, the proposed method treated the phrase ' (snack time)' as a single out-of-vocabulary word, even though the correct analysis was found in the word lattice. More sophisticated features would be required to accurately distinguish between illspelled and out-of-vocabulary words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error analysis",
"sec_num": "8.4"
},
{
"text": "We presented our attempts towards developing an accurate model for word segmentation and POS tagging in the microblog domain. To this end, we, for the first time, developed an annotated corpus of microblogs. We also proposed a joint model with lexical normalization to handle orthographic diversity in the microblog text. Intensive experiments demonstrated that we could successfully improve the performance of word segmentation and POS tagging on microblog texts. We believe this study will have a large practical impact on a various research areas that target microblogs. One limitation of our approach is that it cannot handle certain types of ill-spelled words. For example, the current model cannot handle the cases in which there are no one-to-one-mappings between well-spelled and ill-spelled words. Also, our model cannot handle spelling errors, which are considered relatively frequent in the microblog than news domains. The treatment of these problems would require further research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "9"
},
{
"text": "Another future research is to speed-up our model. Since the joint model with lexical normalization significantly increases the search space, it is much slower than the original lattice-based model for word segmentation and POS tagging.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "9"
},
{
"text": "https://twitter.com 2 https://www.weibo.com",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Very recently, Saito et al. (2014) conducted similar empirical evaluation on microblog corpus. However, they used biased dataset, in which every sentence includes at least one ill-spelled words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://nlp.ist.i. In this paper, we use simplified POS tags for explanation purposes. Remind that these tags are different from the original ones defined in JUMAN POS tag set.7 https://stream.twitter.com/1.1/statuses/sample.json",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://nlp.ist.i.kyoto-u.ac.jp/index.php?JUMAN",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://nlp.ist.i.kyoto-u.ac.jp/EN/index.php?JUMAN",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://code.google.com/p/mecab 11 https://code.google.com/p/mecab 12 http://www.phontron.com/kytea/ 13 http://nlp.ist.i.kyoto-u.ac.jp/index.php?JUMAN",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors would like to thank Naoki Yoshinaga for his help in developing the microblog corpus as well as fruitful discussions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Cooooooooooooooollllllllllllll!!!!!!!!!!!!!! using word lengthening to detect sentiment in microblogs",
"authors": [
{
"first": "Samuel",
"middle": [],
"last": "Brody",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Diakopoulos",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "562--570",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel Brody and Nicholas Diakopoulos. 2011. Cooooooooooooooollllllllllllll!!!!!!!!!!!!!! using word lengthening to detect sentiment in microblogs. In Proceedings of EMNLP, pages 562-570.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins. 2002. Discriminative training meth- ods for hidden Markov models: Theory and exper- iments with perceptron algorithms. In Proceedings of EMNLP, pages 1-8.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The CIPS-SIGHAN CLP 2012 Chinese word segmentation on microblog corpora bakeoff",
"authors": [
{
"first": "Huiming",
"middle": [],
"last": "Duan",
"suffix": ""
},
{
"first": "Zhifang",
"middle": [],
"last": "Sui",
"suffix": ""
},
{
"first": "Ye",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Wenjie",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Second CIPS-SIGHAN Joint Conrerence on Chinese Language Processing",
"volume": "",
"issue": "",
"pages": "35--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huiming Duan, Zhifang Sui, Ye Tian, and Wenjie Li. 2012. The CIPS-SIGHAN CLP 2012 Chinese word segmentation on microblog corpora bakeoff. In Pro- ceedings of the Second CIPS-SIGHAN Joint Conr- erence on Chinese Language Processing, pages 35- 40.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "#hardtoparse: POS tagging and parsing the twitterverse",
"authors": [
{
"first": "Jennifer",
"middle": [],
"last": "Foster",
"suffix": ""
},
{
"first": "Ozlem",
"middle": [],
"last": "Cetinoglu",
"suffix": ""
},
{
"first": "Joachim",
"middle": [],
"last": "Wagner",
"suffix": ""
},
{
"first": "Joseph",
"middle": [
"Le"
],
"last": "Roux",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Hogan",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Deirdre",
"middle": [],
"last": "Hogan",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Van Genabith",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of AAAI Workshop on Analysing Microtext",
"volume": "",
"issue": "",
"pages": "20--25",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jennifer Foster, Ozlem Cetinoglu, Joachim Wagner, Joseph Le Roux, Stephen Hogan, Joakim Nivre, Deirdre Hogan, and Josef van Genabith. 2011. #hardtoparse: POS tagging and parsing the twit- terverse. In Proceedings of AAAI Workshop on Analysing Microtext, pages 20-25.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Part-of-speech tagging for twitter: Annotation, features, and experiments",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Schneider",
"suffix": ""
},
{
"first": "O'",
"middle": [],
"last": "Brendan",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Connor",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Mills",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Eisenstein",
"suffix": ""
},
{
"first": "Dani",
"middle": [],
"last": "Heilman",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Yogatama",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Flanigan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "42--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Gimpel, Nathan Schneider, Brendan O'Connor, Dipanjan Das, Daniel Mills, Jacob Eisenstein, Michael Heilman, Dani Yogatama, Jeffrey Flanigan, and Noah A. Smith. 2011. Part-of-speech tagging for twitter: Annotation, features, and experiments. In Proceedings of ACL, pages 42-47.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Lexical normalization of short text messages: Makin sens a #twitter",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "368--378",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Han and Timothy Baldwin. 2011. Lexical normal- ization of short text messages: Makin sens a #twitter. In Proceedings of ACL, pages 368-378.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Automatically constructing a normalisation dictionary for microblogs",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Cook",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "421--432",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Han, Paul Cook, and Timothy Baldwin. 2012. Au- tomatically constructing a normalisation dictionary for microblogs. In Proceedings of EMNLP-CoNLL, pages 421-432.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Construction of a blog corpus with syntactic, anaphoric, and semantic annotations",
"authors": [
{
"first": "Chikara",
"middle": [],
"last": "Hashimoto",
"suffix": ""
},
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
},
{
"first": "Daisuke",
"middle": [],
"last": "Kawahara",
"suffix": ""
},
{
"first": "Keiji",
"middle": [],
"last": "Shinzato",
"suffix": ""
},
{
"first": "Masaaki",
"middle": [],
"last": "Nagata",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Natural Language Processing",
"volume": "18",
"issue": "2",
"pages": "175--201",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chikara Hashimoto, Sadao Kurohashi, Daisuke Kawa- hara, Keiji Shinzato, and Masaaki Nagata. 2011. Construction of a blog corpus with syntac- tic, anaphoric, and semantic annotations (in Japanese). Journal of Natural Language Process- ing, 18(2):175-201.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Optimizing predictive text entry for short message service on mobile phones",
"authors": [
{
"first": "Yijiu",
"middle": [],
"last": "How",
"suffix": ""
},
{
"first": "Min-Yen",
"middle": [],
"last": "Kan",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of Human Computer Interfaces International",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yijiu How and Min-Yen Kan. 2005. Optimizing pre- dictive text entry for short message service on mo- bile phones. In Proceedings of Human Computer Interfaces International.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Unsupervised text normalization approach for morphological analysis of blog documents",
"authors": [
{
"first": "Kazushi",
"middle": [],
"last": "Ikeda",
"suffix": ""
},
{
"first": "Tadashi",
"middle": [],
"last": "Yanagihara",
"suffix": ""
},
{
"first": "Kazunori",
"middle": [],
"last": "Matsumoto",
"suffix": ""
},
{
"first": "Yasuhiro",
"middle": [],
"last": "Takishima",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of Australasian Joint Conference on Advances in Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "401--411",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kazushi Ikeda, Tadashi Yanagihara, Kazunori Mat- sumoto, and Yasuhiro Takishima. 2009. Unsuper- vised text normalization approach for morphological analysis of blog documents. In Proceedings of Aus- tralasian Joint Conference on Advances in Artificial Intelligence, pages 401-411.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Word lattice reranking for Chinese word segmentation and part-of-speech tagging",
"authors": [
{
"first": "Wenbin",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Haitao",
"middle": [],
"last": "Mi",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of Coling",
"volume": "",
"issue": "",
"pages": "385--392",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenbin Jiang, Haitao Mi, and Qun Liu. 2008. Word lattice reranking for Chinese word segmentation and part-of-speech tagging. In Proceedings of Coling, pages 385-392.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Target-dependent Twitter sentiment classification",
"authors": [
{
"first": "Long",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Mo",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Xiaohua",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Tiejun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "151--160",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Long Jiang, Mo Yu, Ming Zhou, Xiaohua Liu, and Tiejun Zhao. 2011. Target-dependent Twitter sen- timent classification. In Proceedings of ACL, pages 151-160.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Splitting noun compounds via monolingual and bilingual paraphrasing: A study on Japanese Katakana words",
"authors": [
{
"first": "Nobuhiro",
"middle": [],
"last": "Kaji",
"suffix": ""
},
{
"first": "Masaru",
"middle": [],
"last": "Kitsuregawa",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "959--969",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nobuhiro Kaji and Masaru Kitsuregawa. 2011. Split- ting noun compounds via monolingual and bilingual paraphrasing: A study on Japanese Katakana words. In Proceedings of EMNLP, pages 959-969.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Efficient word lattice generation for joint word segmentation and POS tagging in Japanese",
"authors": [
{
"first": "Nobuhiro",
"middle": [],
"last": "Kaji",
"suffix": ""
},
{
"first": "Masaru",
"middle": [],
"last": "Kitsuregawa",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of IJCNLP",
"volume": "",
"issue": "",
"pages": "153--161",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nobuhiro Kaji and Masaru Kitsuregawa. 2013. Effi- cient word lattice generation for joint word segmen- tation and POS tagging in Japanese. In Proceedings of IJCNLP, pages 153-161.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Applying conditional random fields to Japanese morphological analysis",
"authors": [
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "Kaoru",
"middle": [],
"last": "Yamamoto",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "230--237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taku Kudo, Kaoru Yamamoto, and Yuji Matsumoto. 2004. Applying conditional random fields to Japanese morphological analysis. In Proceedings of EMNLP, pages 230-237.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Building a Japanese parsed corpus while improving the parsing system",
"authors": [
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
},
{
"first": "Makoto",
"middle": [],
"last": "Nagao",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of LREC",
"volume": "",
"issue": "",
"pages": "719--724",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sadao Kurohashi and Makoto Nagao. 1998. Building a Japanese parsed corpus while improving the parsing system. In Proceedings of LREC, pages 719-724.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Paraphrasing 4 microblog normalization",
"authors": [
{
"first": "Wang",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"W"
],
"last": "Black",
"suffix": ""
},
{
"first": "Isabel",
"middle": [],
"last": "Trancoso",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "73--84",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wang Ling, Chris Dyer, Alan W Black, and Isabel Trancoso. 2013. Paraphrasing 4 microblog normal- ization. In Proceedings of EMNLP, pages 73-84.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Joint inference of named entity recognition and normalization for tweets",
"authors": [
{
"first": "Xiaohua",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Xiangyang",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Zhongyang",
"middle": [],
"last": "Fu",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "526--535",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaohua Liu, Ming Zhou, Xiangyang Zhou, Zhongyang Fu, and Furu Wei. 2012. Joint inference of named entity recognition and normal- ization for tweets. In Proceedings of ACL, pages 526-535.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Pointwise prediction for robust adaptable Japanese morphological analysis",
"authors": [
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Yousuke",
"middle": [],
"last": "Nakata",
"suffix": ""
},
{
"first": "Shinsuke",
"middle": [],
"last": "Mori",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "529--533",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Graham Neubig, Yousuke Nakata, and Shinsuke Mori. 2011. Pointwise prediction for robust adaptable Japanese morphological analysis. In Proceedings of ACL, pages 529-533.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Morphological analysis for Japanese noisy text based on character-level and word-level normalization",
"authors": [
{
"first": "Itsumi",
"middle": [],
"last": "Saito",
"suffix": ""
},
{
"first": "Kugatsu",
"middle": [],
"last": "Sadamitsu",
"suffix": ""
},
{
"first": "Hisako",
"middle": [],
"last": "Asano",
"suffix": ""
},
{
"first": "Yoshihiro",
"middle": [],
"last": "Matsuo",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of COL-ING",
"volume": "",
"issue": "",
"pages": "1773--1782",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Itsumi Saito, Kugatsu Sadamitsu, Hisako Asano, and Yoshihiro Matsuo. 2014. Morphological analysis for Japanese noisy text based on character-level and word-level normalization. In Proceedings of COL- ING, pages 1773-1782.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Earthquak shakes Twitter users: real-time event detection by social sensors",
"authors": [
{
"first": "Takeshi",
"middle": [],
"last": "Sakaki",
"suffix": ""
},
{
"first": "Makoto",
"middle": [],
"last": "Okazaki",
"suffix": ""
},
{
"first": "Yutaka",
"middle": [],
"last": "Matsuo",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of WWW",
"volume": "",
"issue": "",
"pages": "851--860",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Takeshi Sakaki, Makoto Okazaki, and Yutaka Matsuo. 2010. Earthquak shakes Twitter users: real-time event detection by social sensors. In Proceedings of WWW, pages 851-860.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A simple approach to unknown word processing in Japanese morphological analysis",
"authors": [
{
"first": "Ryohei",
"middle": [],
"last": "Sasano",
"suffix": ""
},
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
},
{
"first": "Manabu",
"middle": [],
"last": "Okumura",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of IJCNLP",
"volume": "",
"issue": "",
"pages": "162--170",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryohei Sasano, Sadao Kurohashi, and Manabu Oku- mura. 2013. A simple approach to unknown word processing in Japanese morphological analysis. In Proceedings of IJCNLP, pages 162-170.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Latent structured perceptrons for large-scale learning with hidden information",
"authors": [
{
"first": "Xu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Takuya",
"middle": [],
"last": "Matsuzaki",
"suffix": ""
},
{
"first": "Wenjie",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2013,
"venue": "IEEE Transactions on Knowledge and Data Engineering",
"volume": "25",
"issue": "9",
"pages": "2063--2075",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xu Sun, Takuya Matsuzaki, and Wenjie Li. 2013. Latent structured perceptrons for large-scale learn- ing with hidden information. IEEE Transactions on Knowledge and Data Engineering, 25(9):2063- 2075.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Mining informal language from Chinese microtext: Joint word recognition and segmentation",
"authors": [
{
"first": "Aobo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Min-Yen",
"middle": [],
"last": "Kan",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "731--741",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aobo Wang and Min-Yen Kan. 2013. Mining informal language from Chinese microtext: Joint word recog- nition and segmentation. In Proceedings of ACL, pages 731-741.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A beam-search decoder for normalization of social media text with application to machine translation",
"authors": [
{
"first": "Pidong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "471--481",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pidong Wang and Hwee Tou Ng. 2013. A beam-search decoder for normalization of social media text with application to machine translation. In Proceedings of NAACL, pages 471-481.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Chinese informal word normalization: an experimental study",
"authors": [
{
"first": "Aobo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Min-Yen",
"middle": [],
"last": "Kan",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Andrade",
"suffix": ""
},
{
"first": "Takashi",
"middle": [],
"last": "Onishi",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Ishikawa",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of IJCNLP",
"volume": "",
"issue": "",
"pages": "127--135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aobo Wang, Min-Yen Kan, Daniel Andrade, Takashi Onishi, and Kai Ishikawa. 2013. Chinese informal word normalization: an experimental study. In Pro- ceedings of IJCNLP, pages 127-135.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "A phonetic-based approach to Chinese chat text normalization",
"authors": [
{
"first": "Yunqing",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Kam-Fai",
"middle": [],
"last": "Wong",
"suffix": ""
},
{
"first": "Wenjie",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "993--1000",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yunqing Xia, Kam-Fai Wong, and Wenjie Li. 2006. A phonetic-based approach to Chinese chat text nor- malization. In Proceedings of ACL, pages 993- 1000.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "A log-linear model for unsupervised text normalization",
"authors": [
{
"first": "Yi",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "61--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yi Yang and Jacob Eisenstein. 2013. A log-linear model for unsupervised text normalization. In Pro- ceedings of EMNLP, pages 61-72.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Adaptive parsercentric text normalization",
"authors": [
{
"first": "Congle",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Tyler",
"middle": [],
"last": "Baldwin",
"suffix": ""
},
{
"first": "Howard",
"middle": [],
"last": "Ho",
"suffix": ""
},
{
"first": "Benny",
"middle": [],
"last": "Kimelfeld",
"suffix": ""
},
{
"first": "Yunyao",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "1159--1168",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Congle Zhang, Tyler Baldwin, Howard Ho, Benny Kimelfeld, and Yunyao Li. 2013. Adaptive parser- centric text normalization. In Proceedings of ACL, pages 1159-1168.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Example lattice",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF1": {
"text": "Lattice used to perform joint task. Normal forms and normal POS tags are shown in parentheses. As indicated by dotted arrows, normalized sentence can be obtained by concatenating normal forms associated with edges in correct analysis.",
"type_str": "figure",
"num": null,
"uris": null
},
"TABREF0": {
"type_str": "table",
"num": null,
"html": null,
"text": "Examples of our target ill-spelled words and their well-spelled equivalents. Phonemes are shown between slashes. English translations are provided in parentheses.",
"content": "<table><tr><td>Ill-spelled word</td><td>Well-spelled equivalent</td></tr><tr><td>/sugee/</td><td>/sugoi/ (great)</td></tr><tr><td>/modoro/</td><td>/modorou/ (going to return)</td></tr><tr><td>/umaiiii/</td><td>/umai/ (yummy)</td></tr></table>"
},
"TABREF1": {
"type_str": "table",
"num": null,
"html": null,
"text": "Frequency distribution over three types of ill-spelled words in corpus.",
"content": "<table><tr><td>Type</td><td>Frequency</td></tr><tr><td colspan=\"2\">Informal phonological variation 804 (92.9%)</td></tr><tr><td>Spelling error</td><td>27 (3.1%)</td></tr><tr><td>Twitter-specific abbreviation</td><td>34 (3.9%)</td></tr><tr><td>Total</td><td>865 (100%)</td></tr></table>"
},
"TABREF2": {
"type_str": "table",
"num": null,
"html": null,
"text": "Normalization dictionary. Columns represent entry ID, surface form, surface POS, normal form, and normal POS, respectively.",
"content": "<table><tr><td>ID Surf.</td><td>Surf. POS</td><td>Norm. Norm. POS</td></tr><tr><td>A</td><td>ADJECTIVE</td><td>ADJECTIVE</td></tr><tr><td>B</td><td>ADJECTIVE</td><td>ADJECTIVE</td></tr><tr><td>C</td><td>VERB</td><td>VERB</td></tr><tr><td>D</td><td>CONTR. VERB</td><td>VERB</td></tr><tr><td>E</td><td>ADJECTIVE</td><td>ADJECTIVE</td></tr><tr><td>F</td><td>ADJECTIVE</td><td>ADJECTIVE</td></tr></table>"
},
"TABREF3": {
"type_str": "table",
"num": null,
"html": null,
"text": "Tag dictionary.",
"content": "<table><tr><td colspan=\"2\">ID Surf. form</td><td>Surf. POS</td></tr><tr><td>a</td><td>(great)</td><td>ADJECTIVE</td></tr><tr><td>b</td><td colspan=\"2\">(going to return) VERB</td></tr><tr><td>c</td><td>(gonna return)</td><td>CONTR. VERB</td></tr><tr><td>d</td><td>(yummy)</td><td>ADJECTIVE</td></tr></table>"
},
"TABREF4": {
"type_str": "table",
"num": null,
"html": null,
"text": "Performance of models trained on the news articles.",
"content": "<table><tr><td/><td colspan=\"2\">Seg Seg+Tag</td></tr><tr><td>Kudo04</td><td>81.8</td><td>71.0</td></tr><tr><td>Neubig11</td><td>80.5</td><td>69.1</td></tr><tr><td>Kaji13</td><td>83.2</td><td>73.1</td></tr><tr><td colspan=\"2\">Proposed (latent) 83.0</td><td>73.9</td></tr></table>"
},
"TABREF5": {
"type_str": "table",
"num": null,
"html": null,
"text": "Results of 5-fold cross-validation on microblog corpus.",
"content": "<table><tr><td/><td colspan=\"2\">Seg Seg+Tag</td></tr><tr><td>Kudo04</td><td>82.7</td><td>71.7</td></tr><tr><td>Neubig11</td><td>88.6</td><td>75.9</td></tr><tr><td>Kaji13</td><td>90.9</td><td>82.1</td></tr><tr><td>Sasano13</td><td>82.7</td><td>73.3</td></tr><tr><td>Proposed</td><td>91.3</td><td>83.2</td></tr><tr><td colspan=\"2\">Proposed (latent) 91.4</td><td>83.7</td></tr></table>"
},
"TABREF6": {
"type_str": "table",
"num": null,
"html": null,
"text": "Results of lexical normalization task in terms of precision, recall, and F 1 -score.",
"content": "<table><tr><td/><td colspan=\"2\">Precision Recall</td><td>F1</td></tr><tr><td>Neubig11</td><td>69.2</td><td colspan=\"2\">35.9 47.3</td></tr><tr><td>Proposed</td><td>77.1</td><td colspan=\"2\">44.6 56.6</td></tr><tr><td>Proposed (latent)</td><td>53.7</td><td colspan=\"2\">24.7 33.9</td></tr></table>"
}
}
}
}