File size: 43,954 Bytes
6fa4bc9 | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 739 740 741 742 743 744 745 746 747 748 749 750 751 752 753 754 755 | {
"paper_id": "P96-1019",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:03:04.978786Z"
},
"title": "An Iterative Algorithm to Build Chinese Language Models",
"authors": [
{
"first": "Xiaoqiang",
"middle": [],
"last": "Luo",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The Johns Hopkins University",
"location": {
"addrLine": "3400 N. Charles St",
"postCode": "MD21218",
"settlement": "Baltimore",
"country": "USA"
}
},
"email": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present an iterative procedure to build a Chinese language model (LM). We segment Chinese text into words based on a word-based Chinese language model. However, the construction of a Chinese LM itself requires word boundaries. To get out of the chicken-and-egg problem, we propose an iterative procedure that alternates two operations: segmenting text into words and building an LM. Starting with an initial segmented corpus and an LM based upon it, we use a Viterbi-liek algorithm to segment another set of data. Then, we build an LM based on the second set and use the resulting LM to segment again the first corpus. The alternating procedure provides a self-organized way for the segmenter to detect automatically unseen words and correct segmentation errors. Our preliminary experiment shows that the alternating procedure not only improves the accuracy of our segmentation, but discovers unseen words surprisingly well. The resulting word-based LM has a perplexity of 188 for a general Chinese corpus.",
"pdf_parse": {
"paper_id": "P96-1019",
"_pdf_hash": "",
"abstract": [
{
"text": "We present an iterative procedure to build a Chinese language model (LM). We segment Chinese text into words based on a word-based Chinese language model. However, the construction of a Chinese LM itself requires word boundaries. To get out of the chicken-and-egg problem, we propose an iterative procedure that alternates two operations: segmenting text into words and building an LM. Starting with an initial segmented corpus and an LM based upon it, we use a Viterbi-liek algorithm to segment another set of data. Then, we build an LM based on the second set and use the resulting LM to segment again the first corpus. The alternating procedure provides a self-organized way for the segmenter to detect automatically unseen words and correct segmentation errors. Our preliminary experiment shows that the alternating procedure not only improves the accuracy of our segmentation, but discovers unseen words surprisingly well. The resulting word-based LM has a perplexity of 188 for a general Chinese corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In statistical speech recognition (Bahl et al., 1983) , it is necessary to build a language model(LM) for assigning probabilities to hypothesized sentences. The LM is usually built by collecting statistics of words over a large set of text data. While doing so is straightforward for English, it is not trivial to collect statistics for Chinese words since word boundaries are not marked in written Chinese text. Chinese is a morphosyllabic language (DeFrancis, 1984) in that almost all Chinese characters represent a single syllable and most Chinese characters are also morphemes. Since a word can be multi-syllabic, it is generally non-trivial to segment a Chinese sentence into words (Wu and Tseng, 1993) . Since segmentation is a fundamental problem in Chinese information processing, there is a large literature to deal with the problem. Recent work includes (Sproat et al., 1994) and (Wang et al., 1992) . In this paper, we adopt a statistical approach to segment Chinese text based on an LM because of its autonomous nature and its capability to handle unseen words.",
"cite_spans": [
{
"start": 34,
"end": 53,
"text": "(Bahl et al., 1983)",
"ref_id": "BIBREF4"
},
{
"start": 450,
"end": 467,
"text": "(DeFrancis, 1984)",
"ref_id": "BIBREF2"
},
{
"start": 687,
"end": 707,
"text": "(Wu and Tseng, 1993)",
"ref_id": "BIBREF1"
},
{
"start": 864,
"end": 885,
"text": "(Sproat et al., 1994)",
"ref_id": "BIBREF0"
},
{
"start": 890,
"end": 909,
"text": "(Wang et al., 1992)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As far as speech recognition is concerned, what is needed is a model to assign a probability to a string of characters. One may argue that we could bypass the segmentation problem by building a characterbased LM. However, we have a strong belief that a word-based LM would be better than a characterbased 1 one. In addition to speech recognition, the use of word based models would have value in information retrieval and other language processing applications.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
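{
"text": "The per-character versus per-word perplexity comparison quoted in the parenthetical above can be checked with a one-line calculation. A minimal Python sketch, illustrative only; the two-character average word length is the figure quoted above:

char_ppl = 46.0            # per-character perplexity of the character trigram
avg_word_len = 2.0         # average Chinese word length in characters
word_equiv_ppl = char_ppl ** avg_word_len
print(word_equiv_ppl)      # 2116.0, versus 188 for the word-based trigram",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},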
{
"text": "If word boundaries are given, all established techniques can be exploited to construct an LM (Jelinek et al., 1992) just as is done for English. Therefore, segmentation is a key issue in building the Chinese LM. In this paper, we propose a segmentation algorithm based on an LM. Since building an LM itself needs word boundaries, this is a chicken-and-egg problem. To get out of this, we propose an iterative procedure that alternates between the segmentation of Chinese text and the construction of the LM. Our preliminary experiments show that the iterative procedure is able to improve the segmentation accuracy and more importantly, it can detect unseen words automatically.",
"cite_spans": [
{
"start": 93,
"end": 115,
"text": "(Jelinek et al., 1992)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In section 2, the Viterbi-like segmentation algorithm based on a LM is described. Then in section section:iter-proc we discuss the alternating procedure of segmentation and building Chinese LMs. We test the segmentation algorithm and the alternating procedure and the results are reported in sec-I A character-based trigram model has a perplexity of 46 per character or 462 per word (a Chinese word has an average length of 2 characters), while a word-based trigram model has a perplexity 188 on the same set of data. While the comparison would be fairer using a 5gram character model, that the word model would have a lower perplexity as long as the coverage is high. tion 4. Finally, the work is summarized in section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section, we assume there is a word-based Chinese LM at our disposal so that we are able to compute the probability of a sentence (with word boundaries). We use a Viterbi-like segmentation algorithm based on the LM to segment texts. Denote a sentence S by C1C~.. \"C,,-1Cn, where each Ci (1 < i < n } is a Chinese character. To segment a sentence into words is to group these characters into words, i.e. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "segmentation based on LM",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "S = C_1 C_2 \\cdots C_{n-1} C_n = (C_1 \\cdots C_{x_1})(C_{x_1+1} \\cdots C_{x_2}) \\cdots (C_{x_{m-1}+1} \\cdots C_{x_m}) = w_1 w_2 \\cdots w_m",
"eq_num": "(1)-(4)"
}
],
"section": "S = C1C2...Cn-1Cn",
"sec_num": null
},
{
"text": "where xk is the index of the last character in k ~h word wk, i,e wk = Cxk_l+:'\"Cxk(k = 1,2,-.-,m), and of course, z0 = 0, z,~ = n. Note that a segmentation of the sentence S can be uniquely represented by an integer sequence z:,.--, zrn, so we will denote a segmentation by its corresponding integer sequence thereafter. Let ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "S = C:C2...C,-:C,",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "m = ~--~logPa(wi[hi) (7) /=1",
"eq_num": "(6)"
}
],
"section": "S = C:C2...C,-:C,",
"sec_num": null
},
{
"text": "where w i = C=~_,+:...C~(j = 1,2,-..,m), and hi is understood as the history words w:...wi-t. In this paper the trigram model (Jelinek et al., 1992) is used and therefore hi = wi-2wi-: Among all possible segmentations, we pick the one g* with the highest score as our result. That is,",
"cite_spans": [
{
"start": 126,
"end": 148,
"text": "(Jelinek et al., 1992)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "S = C:C2...C,-:C,",
"sec_num": null
},
{
"text": "g* = arg g~Ga~S) L(g(S)) (8) = arg max logPg(wl...wm) (9) gea(S)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "S = C:C2...C,-:C,",
"sec_num": null
},
{
"text": "Note the score depends on segmentation g and this is emphasized by the subscript in (9). The optimal segmentation g* can be obtained by dynamic programming. With a slight abuse of notation, let L(k) be the max accumulated score for the first k characters. L(k) is defined for k = 1, 2,..., n with L(1) = 0 and L(g*) = L(n). Given {L(i) : 1 < i < k-l}, L(k) can be computed recursively as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "S = C:C2...C,-:C,",
"sec_num": null
},
{
"text": "L(k)--max [L(i)-t-logP(Ci+:...C~]hi)] (10) :<i_<k-:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "S = C:C2...C,-:C,",
"sec_num": null
},
{
"text": "where hi is the history words ended with the i th character Ci. At the end of the recursion, we need to trace back to find the segmentation points. Therefore, it's necessary to record the segmentation points in (10).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "S = C:C2...C,-:C,",
"sec_num": null
},
{
"text": "Let p(k) be the index of the last character in the preceding word. Then V(k) = arg :<sm.<~x :[L(i ) + log P(Ci+:... Ck ]hi)] 11that is, Cp(k)+: \"\" \u2022 Ck comprises the last word of the optimal segmentation up to the k 'h character.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "S = C:C2...C,-:C,",
"sec_num": null
},
{
"text": "A typical example of a six-character sentence is shown in table 1. Since p(6) = 4, we know the last word in the optimal segmentation is C5C6. Since p(4) = 3, the second last word is C4. So on and so forth. The optimal segmentation for this sentence is The searches in (10) and (11) are in general timeconsuming. Since long words are very rare in Chinese(94% words are with three or less characters (Wu and Tseng, 1993) ), it won't hurt at all to limit the search space in (10) and (11) by putting an upper bound(say, 10) to the length of the exploring word, i.e, impose the constraint i >_ ma\u00a2l, k -d in (10) and (11), where d is the upper bound of Chinese word length. This will speed the dynamic programming significantly for long sentences.",
"cite_spans": [
{
"start": 398,
"end": 418,
"text": "(Wu and Tseng, 1993)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "S = C:C2...C,-:C,",
"sec_num": null
},
{
"text": "It is worth of pointing out that the algorithm in (10) and (11) could pick an unseen word(i.e, a word not included in the vocabulary on which the LM is built on) in the optimal segmentation provided LM assigns proper probabilities to unseen words. This is the beauty of the algorithm that it is able to handle unseen words automatically.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "S = C:C2...C,-:C,",
"sec_num": null
},
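{
"text": "The dynamic programming in (10)-(11) can be sketched directly in code. The following Python fragment is an illustrative reconstruction rather than the authors' implementation: the names segment and log_prob are introduced here, log_prob stands in for whatever word LM is available, the history passed to it is simplified to the single preceding word instead of the trigram history used in the paper, and d is the word-length bound discussed above.

import math

def segment(chars, log_prob, d=10):
    # Viterbi-like segmentation by dynamic programming, following (10)-(11).
    # chars    : the character string C_1 ... C_n
    # log_prob : function(word, history) -> log P(word | history)
    # d        : upper bound on candidate word length
    n = len(chars)
    L = [0.0] + [-math.inf] * n     # L[k]: best score over the first k characters
    p = [0] * (n + 1)               # p[k]: start index of the last word ending at k
    last = [None] * (n + 1)         # last word chosen at k (simplified history)

    for k in range(1, n + 1):
        for i in range(max(0, k - d), k):
            word = ''.join(chars[i:k])
            score = L[i] + log_prob(word, last[i])
            if score > L[k]:
                L[k], p[k], last[k] = score, i, word

    # Trace back the recorded segmentation points p(k).
    words, k = [], n
    while k > 0:
        words.append(''.join(chars[p[k]:k]))
        k = p[k]
    words.reverse()
    return words

Any real system would plug the trigram model in for log_prob; even a crude stub such as lambda w, h: -5.0 (a fixed penalty per word) is enough to exercise the recursion and the trace-back, and a model that reserves probability mass for unseen words lets the segmenter propose them, as noted above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "S = C1C2...Cn-1Cn",
"sec_num": null
},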
{
"text": "Iterative procedure to build LM",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3",
"sec_num": null
},
{
"text": "In the previous section, we assumed there exists a Chinese word LM at our disposal. However, this is not true in reality. In this section, we discuss an iterative procedure that builds LM and automatically appends the unseen words to the current vocabulary. The procedure first splits the data into two parts, set T1 and T2. We start from an initial segmentation of the set T1. This can be done, for instance, by a simple greedy algorithm described in (Sproat et al., 1994) . With the segmented T1, we construct a LMi on it. Then we segment the set T2 by using the LMi and the algorithm described in section 2. At the same time, we keep a counter for each unseen word in optimal segmentations and increment the counter whenever its associated word appears in an optimal segmentation. This gives us a measure to tell whether an unseen word is an accidental character string or a real word not included in our vocabulary. The higher a counter is, the more likely it is a word. After segmenting the set T2, we add to our vocabulary all unseen words with its counter greater than a threshold e. Then we use the augmented vocabulary and construct another LMi+I using the segmented T2. The pattern is clear now: LMi+I is used to segment the set T1 again and the vocabulary is further augmented.",
"cite_spans": [
{
"start": 452,
"end": 473,
"text": "(Sproat et al., 1994)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "3",
"sec_num": null
},
{
"text": "To be more precise, the procedure can be written in pseudo code as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3",
"sec_num": null
},
{
"text": "Step 0: Initially segment the set T1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3",
"sec_num": null
},
{
"text": "Construct an LM LMo with an initial vocabulary V0. set i=1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3",
"sec_num": null
},
{
"text": "Step 1: Let j=i mod 2;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3",
"sec_num": null
},
{
"text": "For each sentence S in the set Tj, do",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3",
"sec_num": null
},
{
"text": "1.1 segment it using LMi-1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3",
"sec_num": null
},
{
"text": "1.2 for each unseen word in the optimal segmentation, increment its counter by the number of times it appears in the optimal segmentation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3",
"sec_num": null
},
{
"text": "Step 2: Let A=the set of unseen words with counter greater than e. set Vi = ~-1 U A.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3",
"sec_num": null
},
{
"text": "Construct another LMi using the segmented set and the vocabulary ~.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3",
"sec_num": null
},
{
"text": "Step 3: i--i+l and goto step 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3",
"sec_num": null
},
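{
"text": "A compact sketch of the loop above, under several assumptions that are not in the paper: build_lm and segment are placeholders for the LM toolkit and the section 2 segmenter, the LM is assumed to expose a log_prob function, e is the counter threshold from Step 2, and the first pass segments T2, as in the prose description.

from collections import Counter

def iterative_lm(t1_initial_seg, halves, vocab, build_lm, segment, e=3, iterations=4):
    # Alternate segmentation and LM construction (Steps 0-3 above).
    # t1_initial_seg : initial segmentation of T1 (e.g. by the greedy algorithm)
    # halves         : {1: sentences of T1, 2: sentences of T2}, unsegmented
    # vocab          : initial vocabulary V_0 (a set of words)
    # build_lm       : function(segmented_sentences, vocab) -> LM with .log_prob
    # segment        : the dynamic-programming segmenter sketched in section 2
    # e              : threshold on the unseen-word counters (Step 2)
    lm = build_lm(t1_initial_seg, vocab)             # Step 0: LM_0 on segmented T1
    for i in range(1, iterations + 1):
        j = 2 if i % 2 == 1 else 1                   # alternate between T2 and T1
        counts, segmented = Counter(), []
        for chars in halves[j]:
            words = segment(chars, lm.log_prob)      # Step 1.1
            segmented.append(words)
            for w in words:                          # Step 1.2: count unseen words
                if w not in vocab:
                    counts[w] += 1
        vocab = vocab | {w for w, c in counts.items() if c > e}   # Step 2: V_i
        lm = build_lm(segmented, vocab)              # the new LM_i
    return lm, vocab                                 # Step 3 is the loop itself

The counters serve only to separate genuine words from accidental character strings; everything else is the same segment-then-rebuild alternation described above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3",
"sec_num": null
},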
{
"text": "Unseen words, most of which are proper nouns, pose a serious problem to Chinese text segmentation. In (Sproat et al., 1994 ) a class based model was proposed to identify personal names. In (Wang et al., 1992) , a title driven method was used to identify personal names. The iterative procedure proposed here provides a self-organized way to detect unseen words, including proper nouns. The advantage is that it needs little human intervention. The procedure provides a chance for us to correct segmenting errors.",
"cite_spans": [
{
"start": 102,
"end": 122,
"text": "(Sproat et al., 1994",
"ref_id": "BIBREF0"
},
{
"start": 189,
"end": 208,
"text": "(Wang et al., 1992)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "3",
"sec_num": null
},
{
"text": "Experiments and Evaluation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4",
"sec_num": null
},
{
"text": "Our first attempt is to see how accurate the segmentation algorithm proposed in section 2 is. To this end, we split the whole data set ~ into two parts, half for building LMs and half reserved for testing. The trigram model used in this experiment is the standard deleted interpolation model described in (Jelinek et al., 1992 ) with a vocabulary of 20K words. Since we lack an objective criterion to measure the accuracy of a segmentation system, we ask three ~The corpus has about 5 million characters and is coarsely pre-segmented. native speakers to segment manually 100 sentences picked randomly from the test set and compare them with segmentations by machine. The result is summed in table 2, where ORG stands for the original segmentation, P1, P2 and P3 for three human subjects, and TRI and UNI stand for the segmentations generated by trigram LM and unigram LM respectively. The number reported here is the arithmetic average of recall and precision, as was used in n_~ (Sproat et al., 1994) , i.e., 1/2(~-~ + n2), where nc is the number of common words in both segmentations, nl and n2 are the number of words in each of the segmentations. We can make a few remarks about the result in table 2. First of all, it is interesting to note that the agreement of segmentations among human subjects is roughly at the same level of that between human subjects and machine. This confirms what reported in (Sproat et al., 1994) . The major disagreement for human subjects comes from compound words, phrases and suffices. Since we don't give any specific instructions to human subjects, one of them tends to group consistently phrases as words because he was implicitly using semantics as his segmentation criterion. For example, he segments thesentence 3 dao4 jial li2 chil dun4 fan4(see table 3) as two words dao4 j\u00b1al l\u00b12(go home) and chil dun4 :fem4(have a meal) because the two \"words\" are clearly two semantic units. The other two subjects and machine segment it as dao4 / jial li2/ chil/ dtm4 / fern4.",
"cite_spans": [
{
"start": 305,
"end": 326,
"text": "(Jelinek et al., 1992",
"ref_id": "BIBREF3"
},
{
"start": 980,
"end": 1001,
"text": "(Sproat et al., 1994)",
"ref_id": "BIBREF0"
},
{
"start": 1407,
"end": 1428,
"text": "(Sproat et al., 1994)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Segmentation Accuracy",
"sec_num": "4.1"
},
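{
"text": "The agreement figure used in Table 2, the arithmetic average of recall and precision from (Sproat et al., 1994), is straightforward to compute once two segmentations of the same sentence are available. A small Python sketch; the span-based comparison is an implementation choice made here so that identical words at different positions are not conflated:

def agreement(seg_a, seg_b):
    # Arithmetic mean of recall and precision between two segmentations
    # of the same character string, each given as a list of words.
    def spans(words):
        out, pos = set(), 0
        for w in words:
            out.add((pos, pos + len(w)))
            pos += len(w)
        return out

    a, b = spans(seg_a), spans(seg_b)
    n_common = len(a & b)
    return 0.5 * (n_common / len(a) + n_common / len(b))

# Toy example: two segmentations of the same 6-character sentence.
print(agreement(['AB', 'C', 'DEF'], ['AB', 'CD', 'EF']))   # 0.5 * (1/3 + 1/3) = 0.333...",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segmentation Accuracy",
"sec_num": "4.1"
},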
{
"text": "Chinese has very limited morphology (Spencer, 1991) in that most grammatical concepts are conveyed by separate words and not by morphological processes. The limited morphology includes some ending morphemes to represent tenses of verbs, and this is another source of disagreement. For example, for the partial sentence zuo4 were2 le, where le functions as labeling the verb zuo4 wa.u2 as \"perfect\" tense, some subjects tend to segment it as two words zuo4 ~an2/ le while the other treat it as one single word.",
"cite_spans": [
{
"start": 36,
"end": 51,
"text": "(Spencer, 1991)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Segmentation Accuracy",
"sec_num": "4.1"
},
{
"text": "Second, the agreement of each of the subjects with either the original, trigram, or unigram segmentation is quite high (see columns 2, 6, and 7 in Table 2 ) and appears to be specific to the subject. 3Here we use Pin Yin followed by its tone to represent a character.",
"cite_spans": [],
"ref_spans": [
{
"start": 147,
"end": 154,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Segmentation Accuracy",
"sec_num": "4.1"
},
{
"text": "Third, it seems puzzling that the trigram LM agrees with the original segmentation better than a unigram model, but gives a worse result when compared with manual segmentations. However, since the LMs are trained using the presegmented data, the trigram model tends to keep the original segmentation because it takes the preceding two words into account while the unigram model is less restricted to deviate from the original segmentation. In other words, if trained with \"cleanly\" segmented data, a trigram model is more likely to produce a better segmentation since it tends to preserve the nature of training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segmentation Accuracy",
"sec_num": "4.1"
},
{
"text": "In addition to the 5 million characters of segmented text, we had unsegmented data from various sources reaching about 13 million characters. We applied our iterative algorithm to that corpus. Table 4 shows the figure of merit of the resulting segmentation of the 100 sentence test set described earlier. After one iteration, the agreement with the original segmentation decreased by 3 percentage points, while the agreement with the human segmentation increased by less than one percentage point. We ran our computation intensive procedure for one iteration only. The results indicate that the impact on segmentation accuracy would be small. However, the new unsegmented corpus is a good source of automatically discovered words. A 20 examples picked randomly from about 1500 unseen words are shown in Table 5 . 16 of them are reasonably good words and are listed with their translated meanings. The problematic words are marked with \"?\".",
"cite_spans": [],
"ref_spans": [
{
"start": 193,
"end": 200,
"text": "Table 4",
"ref_id": "TABREF3"
},
{
"start": 803,
"end": 810,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiment of the iterative procedure",
"sec_num": "4.2"
},
{
"text": "After each segmentation, an interpolated trigram model is built, and an independent test set with 2.5 million characters is segmented and then used to measure the quality of the model. We got a perplexity 188 for a vocabulary of 80K words, and the alternating procedure has little impact on the perplexity. This can be explained by the fact that the change of segmentation is very little ( which is reflected in table reftab:accuracy-iter ) and the addition of unseen words(1.5K) to the vocabulary is also too little to affect the overall perplexity. The merit of the alternating procedure is probably its ability to detect unseen words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Perplexity of the language model",
"sec_num": "4.3"
},
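{
"text": "For reference, the perplexity quoted above is the usual per-word quantity. A minimal sketch of how it would be computed from a segmented test set; log_prob here stands in for the interpolated trigram model, and the name and interface are assumptions, not from the paper:

import math

def perplexity(segmented_sentences, log_prob):
    # Per-word perplexity of a word LM over segmented test text.
    # segmented_sentences : list of word lists
    # log_prob            : function(word, history) -> natural-log probability
    total_logp, total_words = 0.0, 0
    for words in segmented_sentences:
        history = []
        for w in words:
            total_logp += log_prob(w, tuple(history[-2:]))   # trigram history
            history.append(w)
            total_words += 1
    return math.exp(-total_logp / total_words)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Perplexity of the language model",
"sec_num": "4.3"
},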
{
"text": "In this paper, we present an iterative procedure to build Chinese language model(LM). We segment Chinese text into words based on a word-based Chinese language model. However, the construction of a Chinese LM itself requires word boundaries. To get out of the chicken-egg problem, we propose an iterative procedure that alternates two operations: segmenting text into words and building an LM. Starting with an initial segmented corpus and an LM based upon it, we use Viterbi-like algorithm to segment another set of data. Then we build an LM based on the second set and use the LM to segment again the first corpus. The alternating procedure provides a self-organized way for the segmenter to detect automatically unseen words and correct segmentation errors. Our preliminary experiment shows that the alternating procedure not only improves the accuracy of our segmentation, but discovers unseen words surprisingly well. We get a perplexity 188 for a general Chinese corpus with 2.5 million characters 4 6 Andrew Spencer. 1992. Morphological theory : an introduction to word structure in generative grammar pages 38-39. Oxford, UK ; Cambridge, Mass., USA. Basil Blackwell, 1991. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
}
],
"back_matter": [
{
"text": "The first author would like to thank various members of the Human Language technologies Department at the IBM T.J Watson center for their encouragement and helpful advice. Special thanks go to Dr. Martin Franz for providing continuous help in using the IBM language model tools. The authors would also thank the comments and insight of two anonymous reviewers which help improve the final draft.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgment",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A stochastic finite-state word segmentation algorithm for Chinese",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Sproat",
"suffix": ""
},
{
"first": "Chilin",
"middle": [],
"last": "Shih",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Gale",
"suffix": ""
},
{
"first": "Nancy",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of A GL 'Y~",
"volume": "",
"issue": "",
"pages": "66--73",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Sproat, Chilin Shih, William Gale and Nancy Chang. 1994. A stochastic finite-state word segmentation algorithm for Chinese. In Pro- ceedings of A GL 'Y~ , pages 66-73",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Chinese Text Segmentation for Text Retrieval: Achievements and Problems Journal of the American Society for Information Science",
"authors": [
{
"first": "Zimin",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Gwyneth",
"middle": [],
"last": "Tseng",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "44",
"issue": "",
"pages": "532--542",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zimin Wu and Gwyneth Tseng 1993. Chinese Text Segmentation for Text Retrieval: Achievements and Problems Journal of the American Society for Information Science, 44(9):532-542.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The Chinese Language",
"authors": [
{
"first": "John",
"middle": [],
"last": "Defrancis",
"suffix": ""
}
],
"year": 1984,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John DeFrancis. 1984. The Chinese Language. Uni- versity of Hawaii Press, Honolulu.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Principles of Lexical Language Modeling for Speech recognition",
"authors": [
{
"first": "Frederick",
"middle": [],
"last": "Jelinek",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
}
],
"year": 1992,
"venue": "Advances in Speech Signal Processing",
"volume": "",
"issue": "",
"pages": "651--699",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frederick Jelinek, Robert L. Mercer and Salim Roukos. 1992. Principles of Lexical Language Modeling for Speech recognition. In Advances in Speech Signal Processing, pages 651-699, edited by S. Furui and M. M. Sondhi. Marcel Dekker Inc., 1992",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A Maximum Likelihood Approach to Continuous Speech Recognition",
"authors": [
{
"first": "L",
"middle": [],
"last": "Bahl",
"suffix": ""
},
{
"first": "Fred",
"middle": [],
"last": "Jelinek",
"suffix": ""
},
{
"first": "R",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
}
],
"year": 1983,
"venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence",
"volume": "5",
"issue": "",
"pages": "179--190",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L.R Bahl, Fred Jelinek and R.L. Mercer. 1983. A Maximum Likelihood Approach to Continu- ous Speech Recognition. In IEEE Transactions on Pattern Analysis and Machine Intelligence, 1983,5(2):179-190",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Recognizing unregistered names for mandarin word identification",
"authors": [
{
"first": "Liang-Jyh",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Wei-Chuan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Chao-Huang",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of COLING-92",
"volume": "",
"issue": "",
"pages": "1239--1243",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liang-Jyh Wang, Wei-Chuan Li, and Chao-Huang Chang. 1992. Recognizing unregistered names for mandarin word identification. In Proceedings of COLING-92, pages 1239-1243. COLING 4Unfortunately, we could not find a report of Chinese perplexity for comparison in the published literature con- cerning Mandarin speech recognition",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Morphological Theory: An Introduction to Word Structure in Generative Grammar",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Spencer",
"suffix": ""
}
],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "38--39",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Spencer. 1991. Morphological Theory: An Introduction to Word Structure in Generative Grammar, pages 38-39. Basil Blackwell, Oxford, UK; Cambridge, Mass., USA.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"num": null,
"uris": null,
"text": "of all possible segmentations of sentence S. Suppose a word-based LM is given, then for a segmentation g(S) -\" (z:...xm) e G(S), we can assign a score to g(S) by L(g(S)) = logPg(w:'\"Wm)",
"type_str": "figure"
},
"TABREF0": {
"html": null,
"num": null,
"type_str": "table",
"text": "",
"content": "<table><tr><td/><td colspan=\"5\">: A segmentation example</td><td/></tr><tr><td colspan=\"7\">chars I C: C2 C3 C4 C5 C6</td></tr><tr><td colspan=\"2\">k I 1</td><td>2</td><td>3</td><td>4</td><td>5</td><td>6</td></tr><tr><td>p(k)</td><td>0</td><td>1</td><td>1</td><td>3</td><td>3</td><td>4</td></tr></table>"
},
"TABREF1": {
"html": null,
"num": null,
"type_str": "table",
"text": "",
"content": "<table><tr><td/><td/><td colspan=\"3\">: Segmentation Accuracy</td></tr><tr><td/><td>ORG</td><td>P1</td><td>P2</td><td>P3 TRI</td><td>UNI</td></tr><tr><td>ORG</td><td/><td/><td/><td>94.2</td><td>91.2</td></tr><tr><td>P1</td><td>85.9</td><td/><td/><td>85.3</td><td>87.4</td></tr><tr><td>P2</td><td>79.1</td><td>90.9</td><td/><td>80.1</td><td>82.2</td></tr><tr><td>P3</td><td>87.4</td><td colspan=\"2\">85.7 82.2</td><td>85.6</td><td>85.7</td></tr></table>"
},
"TABREF2": {
"html": null,
"num": null,
"type_str": "table",
"text": "",
"content": "<table><tr><td colspan=\"4\">: Segmentation of phrases</td><td/></tr><tr><td colspan=\"5\">Chinese [ dao4 jial li2 chil dun4 fan4</td></tr><tr><td>Meaning I go</td><td>home</td><td>eat</td><td>a</td><td>meal</td></tr></table>"
},
"TABREF3": {
"html": null,
"num": null,
"type_str": "table",
"text": "Segmentation of accuracy after one iteration",
"content": "<table><tr><td colspan=\"2\">.920 .890 .863 .877 .817 .832 ~ TR0 TR1 .850 .849</td></tr><tr><td colspan=\"2\">Table 5: Examples of unseen words</td></tr><tr><td>PinYin</td><td>Meaning</td></tr><tr><td>kui2 er2</td><td>last name of former US vice president</td></tr><tr><td>he2 shi4 lu4 yinl dai4</td><td>cassette of audio tape</td></tr><tr><td>shou2 d~o3</td><td>(abbr)pretect (the) island</td></tr><tr><td>ren4 zhong4</td><td>first name or p~rt of a phrase</td></tr><tr><td>ji4 jian3 zi4 hai4</td><td>(abbr) discipline monitoring ?</td></tr><tr><td>shuangl bao3</td><td>double guarantee</td></tr><tr><td>ji4 dongl</td><td>(abbr) Eastern He Bei province</td></tr><tr><td>zi3 jiaol</td><td>purple glue</td></tr><tr><td>xiaol long2 shi2 1i4 bo4 h~i3 du4 shanl</td><td>personal name ? ?</td></tr><tr><td>shangl ban4</td><td>(abbr) commercial oriented</td></tr><tr><td>liu6 ha, J4</td><td>six (types of) harms</td></tr><tr><td>sa4 he4 le4</td><td>t r,xnslat ed no, me</td></tr><tr><td>ku~i4 xun4</td><td>fast news</td></tr><tr><td>cheng4 jing3</td><td>train cop</td></tr><tr><td>hu~ng2 du2 ba3 lian2</td><td>yellow poison ?</td></tr><tr><td>he2 dao3</td><td>a (biological) jargon</td></tr></table>"
}
}
}
} |