{
    "paper_id": "P01-1041",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T09:30:03.226642Z"
    },
    "title": "Japanese Named Entity Recognition based on a Simple Rule Generator and Decision Tree Learning",
    "authors": [
        {
            "first": "Hideki",
            "middle": [],
            "last": "Isozaki",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "NTT Communication Science Laboratories",
                "location": {
                    "addrLine": "2-4 Hikaridai, Seika-cho, Souraku-gun",
                    "postCode": "619-0237",
                    "settlement": "Kyoto",
                    "country": "Japan"
                }
            },
            "email": "isozaki@cslab.kecl.ntt.co.jp"
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "Named entity (NE) recognition is a task in which proper nouns and numerical information in a document are detected and classified into categories such as person, organization, location, and date. NE recognition plays an essential role in information extraction systems and question answering systems. It is well known that hand-crafted systems with a large set of heuristic rules are difficult to maintain, and corpus-based statistical approaches are expected to be more robust and require less human intervention. Several statistical approaches have been reported in the literature. In a recent Japanese NE workshop, a maximum entropy (ME) system outperformed decision tree systems and most hand-crafted systems. Here, we propose an alternative method based on a simple rule generator and decision tree learning. Our experiments show that its performance is comparable to the ME approach. We also found that it can be trained more efficiently with a large set of training data and that it improves readability.",
    "pdf_parse": {
        "paper_id": "P01-1041",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "Named entity (NE) recognition is a task in which proper nouns and numerical information in a document are detected and classified into categories such as person, organization, location, and date. NE recognition plays an essential role in information extraction systems and question answering systems. It is well known that hand-crafted systems with a large set of heuristic rules are difficult to maintain, and corpus-based statistical approaches are expected to be more robust and require less human intervention. Several statistical approaches have been reported in the literature. In a recent Japanese NE workshop, a maximum entropy (ME) system outperformed decision tree systems and most hand-crafted systems. Here, we propose an alternative method based on a simple rule generator and decision tree learning. Our experiments show that its performance is comparable to the ME approach. We also found that it can be trained more efficiently with a large set of training data and that it improves readability.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "Named entity (NE) recognition is a task in which proper nouns and numerical information in a document are detected and classified into categories such as person, organization, location, and date. NE recognition plays an essential role in information extraction systems (see MUC documents (1996) ) and question answering systems (see TREC-QA documents, http://trec.nist.gov/). When you want to know the location of the Taj Mahal, traditional IR techniques direct you to relevant documents but do not directly answer your question. NE recognition is essential for finding possible answers from documents. Although it is easy to build an NE recognition system with mediocre performance, it is difficult to make it reliable because of the large number of ambiguous cases. For instance, we cannot determine whether \"Washington\" is a person's name or a location's name without the necessary context.",
                "cite_spans": [
                    {
                        "start": 288,
                        "end": 294,
                        "text": "(1996)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "There are two major approaches to building NE recognition systems. The first approach employs hand-crafted rules. It is well known that handcrafted systems are difficult to maintain because it is not easy to predict the effect of a small change in a rule. The second approach employs a statistical method, which is expected to be more robust and to require less human intervention. Several statistical methods have been reported in the literature (Bikel et al., 1999; Borthwick, 1999; Sekine et al., 1998; .",
                "cite_spans": [
                    {
                        "start": 447,
                        "end": 467,
                        "text": "(Bikel et al., 1999;",
                        "ref_id": "BIBREF2"
                    },
                    {
                        "start": 468,
                        "end": 484,
                        "text": "Borthwick, 1999;",
                        "ref_id": "BIBREF3"
                    },
                    {
                        "start": 485,
                        "end": 505,
                        "text": "Sekine et al., 1998;",
                        "ref_id": "BIBREF15"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "IREX (Information Retrieval and Extraction Exercise, (Sekine and Eriguchi, 2000; IRE, 1999) ) was held in 1999, and fifteen systems participated in the formal run of the Japanese NE exercise. In the formal run, participants were requested to tag two data sets (GENERAL and ARREST), and their scores were compared in terms of F-measure, i.e., the harmonic mean of 'recall' and 'precision' defined as follows. recall = x/(the number of correct NEs) precision = x/(the number of NEs extracted by the system) where x is the number of NEs correctly extracted and classified by the system. GENERAL was the larger test set, and its best system was a hand-crafted one that attained F=83.86%. The second best system (F=80.05%) was also hand-crafted but enhanced with transformation-based error-driven learning. The third best system (F=77.37%) was Borthwick's ME system enhanced with hand-crafted rules and dictionaries (1999) . Thus, the best three systems used quite different approaches.",
                "cite_spans": [
                    {
                        "start": 53,
                        "end": 80,
                        "text": "(Sekine and Eriguchi, 2000;",
                        "ref_id": "BIBREF14"
                    },
                    {
                        "start": 81,
                        "end": 91,
                        "text": "IRE, 1999)",
                        "ref_id": null
                    },
                    {
                        "start": 911,
                        "end": 917,
                        "text": "(1999)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "In this paper, we propose an alternative approach based on a simple rule generator and decision tree learning (RG+DT). Our experiments show that its performance is comparable to the ME method, and we found that it can be trained more efficiently with a large set of training data. By adding in-house data, the proposed system's performance was improved by several points, while a standard ME toolkit crashed.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "When we try to extract NEs in Japanese, we encounter several problems that are not serious in English. It is relatively easy to detect English NEs because of capitalization. In Japanese, there is no such useful hint. Proper nouns and common nouns look very similar. In English, it is also easy to tokenize a sentence because of inter-word spacing. In Japanese, inter-word spacing is rarely used. We can use an off-the-shelf morphological analyzer for tokenization, but its word boundaries may differ from the corresponding NE boundaries in the training data. For instance, a morphological analyzer may divide a four-character expression OO-SAKA-SHI-NAI into two words OO-SAKA (= Osaka) and SHI-NAI (= in the city), but the training data would be tagged as <LOCATION>OO-SAKA-SHI</LOCATION>NAI (= in <LOCATION>Osaka City</LOCATION>). Moreover, unknown words are often divided excessively or incorrectly because an analyzer tries to interpret a sentence as a sequence of known words. Throughout this paper, the typewriter-style font is used for Japanese, and hyphens indicate character boundaries. Different types of characters are used in Japanese: hiragana, katakana, kanji, symbols, numbers, and letters of the Roman alphabet. We use 17 character types for words, e.g., single-kanji, all-kanji, all-katakana, all-uppercase, float (for floating point numbers), small-integer (up to 4 digits).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Our RG+DT system (Fig. 1) generates a recognition rule from each NE in the training data. Then, the rule is refined by decision tree learning. By applying the refined recognition rules to a new document, we get NE candidates. Then, nonoverlapping candidates are selected by a kind of longest match method.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 17,
                        "end": 25,
                        "text": "(Fig. 1)",
                        "ref_id": "FIGREF0"
                    }
                ],
                "eq_spans": [],
                "section": "Methodology",
                "sec_num": "2"
            },
            {
                "text": "In our method, each tokenized NE is converted to a recognition rule that is essentially a sequence of part-of-speech (POS) tags in the NE. For instance, OO-SAKA-GIN-KOU (= Osaka Bank) is tokenized into two words: OO-SAKA:all-kanji:location-name (= Osaka) and GIN-KOU:all-kanji:common-noun (= Bank), where location-name and common-noun are POS tags. In this case, we get the following recognition rule. Here, '*' matches anything. *:*:location-name, *:*:common-noun -> ORGANIZATION However, this rule is not very good. For instance, OO-SAKA-WAN (= Osaka Bay) follows this pattern, but it is a location's name. GIN-KOU and WAN strongly imply ORGANIZATION and LOCATION, respectively. Thus, the last word of an NE is often a head that is more useful than other words for the classification. Therefore, we register the last word into a suffix dictionary for each non-numerical NE class (i.e., ORGANIZATION, PERSON, LOCATION, and ARTIFACT) in order to accept only reliable candidates. If the last word appears in two or more different NEs, we call it a reliable NE suffix. We register only reliable ones. In the above examples, the last words were common nouns. However, the last word can also be a proper noun. For instance, we will get the following rule from <ORGANIZATION>OO-SAKA-TO-YO-TA</ORGANIZATION> (= Osaka Toyota) because Japanese POS taggers know that TO-YO-TA is an organization name (a kind of proper noun). *:*:location-name, *:*:org-name -> ORGANIZATION,0,0",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Generation of recognition rules",
                "sec_num": "2.1"
            },
            {
                "text": "Since Yokohama Honda and Kyoto Sony also follow this pattern, the second element *:*:org-name should not be restricted to the words in the training data. Therefore, we do not restrict proper nouns by a suffix dictionary, and we do not restrict numbers either.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Generation of recognition rules",
                "sec_num": "2.1"
            },
            {
                "text": "In addition, the first or last word of an NE may contain an NE boundary as we described before (SHI</LOCATION>NAI). In this case, we can get OO-SAKA-SHI by removing no character of the first word OO-SAKA and one character of the last word SHI-NAI. Accordingly, this modification can be represented by two integers: 0,1.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Generation of recognition rules",
                "sec_num": "2.1"
            },
            {
                "text": "Furthermore, one-word NEs are different from other NEs in the following respects.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Generation of recognition rules",
                "sec_num": "2.1"
            },
            {
                "text": "The word is usually a proper noun, an unknown word, or a number; otherwise, it is an exceptional case.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Generation of recognition rules",
                "sec_num": "2.1"
            },
            {
                "text": "The character type of a one-word NE gives a useful hint for its classification. For instance, all-uppercase words (e.g., IOC) are often classified as ORGANIZATION.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Generation of recognition rules",
                "sec_num": "2.1"
            },
            {
                "text": "Since unknown words are often proper nouns, we assume they are tagged as misc-proper-noun.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Generation of recognition rules",
                "sec_num": "2.1"
            },
            {
                "text": "If the training data contains <ORGANIZATION>I-O-C</ORGANIZATION> and I-O-C (= IOC) is an unknown word, we will get I-O-C:alluppercase:misc-proper-noun.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Generation of recognition rules",
                "sec_num": "2.1"
            },
            {
                "text": "By considering these facts, we modify the above rule generation. That is, we replace every word in an NE and its character type by '*' to get the left-hand side of the corresponding recognition rule except the following cases.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Generation of recognition rules",
                "sec_num": "2.1"
            },
            {
                "text": "A word that contains an NE boundary If the first or last word of the NE contains an NE boundary (e.g., SHI</LOCATION>NAI), the word is not replaced by '*'. The number of characters to be deleted is also recorded in the right-hand side of the recognition rule.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Generation of recognition rules",
                "sec_num": "2.1"
            },
            {
                "text": "One-word NE The following exceptions are applied to one-word NEs. If the word is a proper noun or a number, its character type is not replaced by '*'. Otherwise, the word is not replaced by '*'.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Generation of recognition rules",
                "sec_num": "2.1"
            },
            {
                "text": "The last word of a longer NE The following exceptions are applied to the last word of a non-numerical NE that is composed of two or more words when the word is neither a proper noun nor a number. If the last word is a reliable NE suffix (i.e., it appears in two or more different NEs in the class), its information (i.e., the last word, its character type, and its POS tag) is registered into a suffix dictionary for the NE class. The last word of the recognition rule must be an element of the suffix dictionary. Unreliable NE suffixes are not replaced by '*'. Suffixes of numerical NEs (i.e., DATE, TIME, MONEY, PERCENT) are not replaced, either. Now, we obtain the following recognition rules from the above examples.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Generation of recognition rules",
                "sec_num": "2.1"
            },
            {
                "text": "*:all-uppercase:misc-proper-noun -> ORGANIZATION,0,0. *:*:location-name, SHI-NAI:*:common-noun -> LOCATION,0,1. *:*:location-name, *:*:common-noun -> ORGANIZATION,0,0.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Generation of recognition rules",
                "sec_num": "2.1"
            },
            {
                "text": "The first rule extracts CNN as an organization. The second rule extracts YOKO-HAMA-SHI (= Yokohama City) from YOKO-HAMA-SHI-NAI (= in Yokohama City). The third rule extracts YOKO-HAMA-GIN-KOU (= Yokohama Bank) as an organization. Note that, in this rule, the second element (*:*:common-noun) is constrained by the suffix dictionary for ORGANIZATION because it is neither a proper noun nor a number. Hence, the rule does not match YOKO-HAMA-WAN (= Yokohama Bay). If the suffix dictionary also happens to have KOU-KOU:all-kanji:common-noun (= senior high school), the rule also matches YOKO-HAMA-KOU-KOU (= Yokohama Senior High School).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Generation of recognition rules",
                "sec_num": "2.1"
            },
            {
                "text": "IREX introduced <ARTIFACT> for product names, prizes, pacts, books, and fine arts, among other nouns. Titles of books and fine arts are often long and have atypical word patterns. However, they are often delimited by a pair of symbols that correspond to quotation marks in English. Some atypical organization names are also delimited by these symbols. In order to extract such a long NE, we concatenate all words within a pair of such symbols into one word. We employ the first and last word of the quoted words as extra features. In addition, we do not regard the quotation symbols as adjacent words because they are constant and lack semantic meaning.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Generation of recognition rules",
                "sec_num": "2.1"
            },
            {
                "text": "When a large amount of training data is given, thousands of recognition rules are generated. For efficiency, we compile these recognition rules by using a hash table that converts a hash key into a list of relevant rules that have to be examined. We make this hash table as follows. If the left-hand side of a rule contains only one element, the element is used as a hash key and its rule identifier is appended to the corresponding rule list. If the left-hand side contains two or more elements, the first two elements are concatenated and used as a hash key and its rule identifier is appended to the corresponding rule list. After this compilation, we can efficiently apply all of the rules to a new document. By taking the first two elements into consideration, we can reduce the number of rules that need to be examined.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Generation of recognition rules",
                "sec_num": "2.1"
            },
            {
                "text": "Some recognition rules are not reliable. For instance, we get the following rule when a person's name is incorrectly tagged as a location's name by a POS tagger.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Refinement of recognition rules",
                "sec_num": "2.2"
            },
            {
                "text": "*:all-kanji:location-name -> PERSON,0,0 Therefore, we have to consider a way to refine the recognition rules. By applying each recognition rule to the untagged training data, we can obtain NE candidates for the rule. By comparing the candidates with the given answer for the training data, we can classify them into positive examples and negative examples for the recognition rule. Consequently, we can apply decision tree learning to classify these examples correctly. We represent each example by a list of features: the words in the NE, the preceding words, the succeeding words, their character types, and their POS tags, together with a boolean value that indicates whether it is a positive example. If we consider one preceding word and two succeeding words, the feature list for a two-word named entity covers five words in total. If a feature value appears less than three times in the examples, it is replaced by a dummy constant. We also replace numbers by dummy constants because most numerical NEs follow typical patterns, and their specific values are often useless for NE recognition.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Refinement of recognition rules",
                "sec_num": "2.2"
            },
            {
                "text": "Here, we discuss handling short NEs. For example, NO-O-BE-RU-SHOU-SEN-KOU-I-IN-KAI (= the Nobel Prize Selection Committee) is an organization's name that contains a person's name NO-O-BE-RU (= Nobel) and an artifact name NO-O-BE-RU-SHOU (= Nobel Prize), but <PERSON>NO-O-BE-RU</PERSON> and <ARTIFACT>NO-O-BE-RU-SHOU</ARTIFACT> are incorrect in this case. If the training data contain NO-O-BE-RU as both positive and negative examples of a person's name, the decision tree learner will be confused. These shorter candidates are rejected only because there is a longer named entity and overlapping tags are not allowed. We do not have to change our knowledge that Nobel is a person's name. Therefore, we remove such negative examples caused by longer NEs. Consequently, the decision tree may fail to reject <PERSON>NO-O-BE-RU</PERSON>, but it will disappear in the final output because we use a longest match method for arbitration.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Refinement of recognition rules",
                "sec_num": "2.2"
            },
            {
                "text": "For readability, we translate each decision tree into a set of production rules by c4.5rules (Quinlan, 1993) . Throughout this paper, we call them dt-rules (Fig. 1 ) in order to distinguish them from recognition rules. Thus, each recognition rule is enhanced by a set of dt-rules. The dt-rules remove unlikely candidates.",
                "cite_spans": [
                    {
                        "start": 93,
                        "end": 108,
                        "text": "(Quinlan, 1993)",
                        "ref_id": "BIBREF11"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 156,
                        "end": 163,
                        "text": "(Fig. 1",
                        "ref_id": "FIGREF0"
                    }
                ],
                "eq_spans": [],
                "section": "Refinement of recognition rules",
                "sec_num": "2.2"
            },
            {
                "text": "Once the refined rules are generated, we can apply them to a new document. This obtains a large number of NE candidates (Fig. 1) . Since overlapping tags are not allowed, we use a kind of leftto-right longest match method. First, we compare their starting points and select the earliest ones. If two or more candidates start at the same point, their ending points are compared and the longest candidate is selected. Therefore, the candidates overlapping the selected candidate are removed from the candidate set. This procedure is repeated until the candidate set becomes empty.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 120,
                        "end": 128,
                        "text": "(Fig. 1)",
                        "ref_id": "FIGREF0"
                    }
                ],
                "eq_spans": [],
                "section": "Arbitration of candidates",
                "sec_num": "2.3"
            },
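The left-to-right longest match procedure above can be sketched in a few lines (a minimal sketch; the (start, end, class) candidate tuples are illustrative, not the paper's implementation):

```python
def arbitrate(candidates):
    """Left-to-right longest match: repeatedly pick the candidate with the
    earliest start (ties broken by the latest end), then drop all candidates
    overlapping the selected span, until the pool is empty."""
    selected = []
    pool = sorted(candidates, key=lambda c: (c[0], -c[1]))  # (start, -end)
    while pool:
        best = pool.pop(0)
        selected.append(best)
        # keep only candidates that end before or start after the selected span
        pool = [c for c in pool if c[1] <= best[0] or c[0] >= best[1]]
    return selected

cands = [(0, 2, "PERSON"), (0, 5, "ORGANIZATION"),
         (3, 6, "LOCATION"), (6, 8, "DATE")]
print(arbitrate(cands))  # [(0, 5, 'ORGANIZATION'), (6, 8, 'DATE')]
```

The overlapping PERSON and LOCATION candidates are discarded because the ORGANIZATION candidate starts no later and extends further.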
            {
                "text": "The rank of a candidate starting at the ( th word boundary and ending at the ) -th word boundary can be represented by a pair 0 1 ( 2 \u00a6 3 4 ) 6 5",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Arbitration of candidates",
                "sec_num": "2.3"
            },
            {
                "text": ". The beginning of a sentence is the zeroth word boundary, and the first word ends at the first word boundary, etc. Then, the selected candidate should have the minimum rank according to the lexicographical ordering of 0 1 ( 7 \u00a6 8 3 4 ) 6 5",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Arbitration of candidates",
                "sec_num": "2.3"
            },
            {
                "text": ". When a candidate starts or ends within a word (e.g., SHI-NAI), we assume that the entire word is a member of the candidate for the definition of 0 1 ( 2 \u00a6 3 4 ) 6 5 . According to this ordering, two candidates can have the same rank. One of them might assert that a certain word is an organization's name and another candidate might assert that it is a person's name. In order to apply the most frequently used rule, we extend this ordering by 0 1 ( 2 \u00a6 3 4 ) 6 \u00a6 8 3 @ 9 B A 5",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Arbitration of candidates",
                "sec_num": "2.3"
            },
            {
                "text": ", where 9 C A is the number of positive examples for the rule D .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Arbitration of candidates",
                "sec_num": "2.3"
            },
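The extended lexicographical ordering over <i, -j, -n_R> amounts to a sort key: prefer the earliest start, then the longest span, then the rule with the most positive training examples. A minimal sketch (the candidate tuple layout is an assumption for illustration):

```python
def rank(cand):
    """Rank <i, -j, -n_R>: earlier start wins, then longer match,
    then the rule seen more often as a positive example in training.
    The smallest tuple under lexicographic comparison is preferred."""
    start, end, tag, n_pos = cand
    return (start, -end, -n_pos)

# Two candidates with identical spans but different rules:
a = (4, 7, "ORGANIZATION", 120)  # rule with 120 positive examples
b = (4, 7, "PERSON", 35)         # rule with 35 positive examples
print(min([a, b], key=rank))  # (4, 7, 'ORGANIZATION', 120)
```

Because Python compares tuples lexicographically, this key implements exactly the ordering described above: the n_R component only matters when both the start and end points tie.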
            {
                "text": "In order to compare our method with the ME approach, we also implement an ME system based on Ristad's toolkit (1997). Borthwick's (1999) and Uchimoto's (2000) We use the following features for each word in the training data: the word itself, \u00a1 preceding words, \u00a2 succeeding words, their character types, and their POS tags. By following Uchimoto, we disregard words that appear fewer than five times and other features that appear fewer than three times.",
                "cite_spans": [
                    {
                        "start": 118,
                        "end": 136,
                        "text": "Borthwick's (1999)",
                        "ref_id": "BIBREF3"
                    },
                    {
                        "start": 141,
                        "end": 158,
                        "text": "Uchimoto's (2000)",
                        "ref_id": "BIBREF16"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Maximum entropy system",
                "sec_num": "2.4"
            },
            {
                "text": "Then, the ME-based classifier gives a probability for each class to each word in a new sentence. Finally, the Viterbi algorithm (see textbooks, e.g., (Allen, 1995) ) enhanced with consistency checking (e.g., PERS ON-EN D should follow PER SON-BEGI N or PERS ON-M IDDLE) determines the best combination for the entire sentence.",
                "cite_spans": [
                    {
                        "start": 150,
                        "end": 163,
                        "text": "(Allen, 1995)",
                        "ref_id": "BIBREF0"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Maximum entropy system",
                "sec_num": "2.4"
            },
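A minimal sketch of such a consistency-checked Viterbi search follows. The class inventory, transition constraints, and probabilities are illustrative; a real system would use the ME classifier's per-word distributions over the full tag set:

```python
import math

def valid(prev, cur):
    """Consistency check: PERSON-MIDDLE/PERSON-END must follow
    PERSON-BEGIN or PERSON-MIDDLE, and those must be continued."""
    if cur in ("PERSON-MIDDLE", "PERSON-END"):
        return prev in ("PERSON-BEGIN", "PERSON-MIDDLE")
    if prev in ("PERSON-BEGIN", "PERSON-MIDDLE"):
        return cur in ("PERSON-MIDDLE", "PERSON-END")
    return True

def viterbi(probs):
    """probs[t][c]: probability of class c for word t.
    Returns the most probable class sequence among consistent ones."""
    # Sentence start behaves like OTHER for the consistency check.
    paths = {c: (math.log(p), [c])
             for c, p in probs[0].items() if valid("OTHER", c)}
    for dist in probs[1:]:
        new_paths = {}
        for cur, p in dist.items():
            options = [(score + math.log(p), seq + [cur])
                       for prev, (score, seq) in paths.items() if valid(prev, cur)]
            if options:
                new_paths[cur] = max(options)
        paths = new_paths
    return max(paths.values())[1]

probs = [
    {"OTHER": 0.1, "PERSON-BEGIN": 0.6, "PERSON-MIDDLE": 0.1,
     "PERSON-END": 0.1, "PERSON-SINGLE": 0.1},
    {"OTHER": 0.3, "PERSON-BEGIN": 0.1, "PERSON-MIDDLE": 0.1,
     "PERSON-END": 0.4, "PERSON-SINGLE": 0.1},
    {"OTHER": 0.7, "PERSON-BEGIN": 0.1, "PERSON-MIDDLE": 0.05,
     "PERSON-END": 0.05, "PERSON-SINGLE": 0.1},
]
print(viterbi(probs))  # ['PERSON-BEGIN', 'PERSON-END', 'OTHER']
```

Note that an inconsistent but locally likely sequence such as PERSON-END at the first word is never considered, because valid() prunes it from the search.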
            {
                "text": "We generate the word boundary rewriting rules as follows. First, the NE boundaries inside a word are assumed to be at the nearest word boundary outside the named entity. Hence, SHI</LOCATION>NAI is rewritten as SHI-NAI</LOCATION>. Accordingly, SHI-NAI is classified as LOC ATION -END. The original NE boundary is recorded for the pair SHI-NAI/ LOCATION -END, If SHI-NAI/LOCATION-END is found in the output of the Viterbi algorithm, it is rewritten as SHI</LOCATION>NAI. Since rewriting rules from rare cases can be harmful, we employ a rewriting rule only when the rule correctly works for more than 50% of the word/class pairs in the training data.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Maximum entropy system",
                "sec_num": "2.4"
            },
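The 50% filter on rewriting rules can be sketched as follows (the observation format, a (word, class, rewrite_worked) triple, is an assumption for illustration):

```python
from collections import Counter

def learn_rewriting_rules(observations):
    """observations: (word, class, rewrite_worked) triples from training data.
    Keep a rewriting rule only if it is correct for more than 50% of the
    occurrences of its word/class pair."""
    hits, total = Counter(), Counter()
    for word, cls, worked in observations:
        total[(word, cls)] += 1
        hits[(word, cls)] += int(worked)
    return {pair for pair in total if hits[pair] / total[pair] > 0.5}

obs = [("SHI-NAI", "LOCATION-END", True),
       ("SHI-NAI", "LOCATION-END", True),
       ("SHI-NAI", "LOCATION-END", False),
       ("X", "DATE-END", False)]
print(learn_rewriting_rules(obs))  # {('SHI-NAI', 'LOCATION-END')}
```

The SHI-NAI rule survives (correct 2 out of 3 times), while the rule derived from a single failing case is discarded as potentially harmful.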
            {
                "text": "Now, we compare our method with the ME system. We used the standard IREX training data (CRL NE 1.4 MB and NERT 30 KB) and the formal run test data (GENERAL and AR-REST). When human annotators were not sure, they used <OPTIONAL POSSIBILITY=...> where POSSIBILITY is a list of possible NE classes. We also used 7.4 MB of in-house NE data that did not contain optional tags. All of the training data (all = CRL NE+NERT+in-house) were based on the Mainichi Newspaper's 1994 and 1995 CD-ROMs. Table 1 shows the details. We removed an optional tag when its possibility list contains NONE, which means this part is accepted without a tag. Otherwise, we selected the majority class in the list. As a result, 56 NEs were added to CRL NE.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 488,
                        "end": 495,
                        "text": "Table 1",
                        "ref_id": "TABREF3"
                    }
                ],
                "eq_spans": [],
                "section": "Results",
                "sec_num": "3"
            },
            {
                "text": "For tokenization, we used chasen 2.2.1 (http:// chasen. aist-nara. ac. jp/). It has about 90 POS tags and large proper noun dictionaries (persons = 32,167, organizations = 16,610, locations = 67,296, miscellaneous proper nouns = 26,106). (Large dictionaries sometimes make the extraction of NEs difficult. If OO-SAKA-GIN-KOU is registered as a single word, GIN-KOU is not extracted as an organization suffix from this example.) We tuned chasen's parameters for NE recognition. In order to avoid the excessive division of unknown words (see Introduction), we reduced the cost for unknown words (30000 R 7000). We also changed its setting so that an unknown word are classified as a misc-proper-noun.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Results",
                "sec_num": "3"
            },
            {
                "text": "Then, we compared the above methods in terms of the averaged F-measures by 5-fold crossvalidation of CRL NE data. by removing bad templates with fewer positive examples than negative ones.) Thus, the two methods returned similar results. However, we cannot expect good performance for other documents because CRL NE is limited to January, 1995. Figure 2 compares these systems by using the formal run data. We cannot show the ME results for the large training data because Ristad's toolkit crashes even on a 2 GB memory machine. According to this graph, the RG+DT system's scores are comparable to those of the ME system. When all the training data was used, RG+DT's F-measure for GENERAL was 87.43%. We also examined RG+DT's variants. When we replaced character types of one-word NEs by '*', the score dropped to 86.79%. When we did not replace any character type by '*' at all, the score was 86.63%. RG+DT/n in the figure is a variant that also applies suffix dictionary to numerical NE classes.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 345,
                        "end": 353,
                        "text": "Figure 2",
                        "ref_id": "FIGREF2"
                    }
                ],
                "eq_spans": [],
                "section": "Results",
                "sec_num": "3"
            },
            {
                "text": "When we used tokenized CRL NE for training, the RG+DT system's training time was about 3 minutes on a Pentium III 866 MHz 256 MB memory Linux machine. This performance is much faster than that of the ME system, which takes a few hours; this difference cannot be explained by the fact that the ME system is implemented on a slower machine. When we used all of the training data, the training time was less than one hour and the processing time of tokenized GENERAL (79 KB before tokenization) was about 14 seconds.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Results",
                "sec_num": "3"
            },
            {
                "text": "Before the experiments, we did not expect that the RG+DT system would perform very well because the number of possible combinations of POS tags increases exponentially with respect to the num- When we compare the RG+DT method with other statistical methods, its advantage is its readability and independence of generated rules. When using cascaded rules, a small change in a rule can damage another rule's functionality. On the other hand, the recognition rules of our system are not cascaded (Fig. 1) . Therefore, rewriting a recognition rule does not influence the performance of other rules at all. Moreover, dt-rules are usually very simple. When all of the training data were used, most of the RG+DT's recognition rules had a simple additional constraint that always accepts (65%) or rejects (16%) candidates. This result also implies the usefulness of our rule generator. Only 2% of the recognition rules have 10 or more dt-rules. For instance, the following recognition rule has dozens of dt-rules. *:all-katakana:misc-proper-noun -> PERSON,0,0.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 493,
                        "end": 501,
                        "text": "(Fig. 1)",
                        "ref_id": "FIGREF0"
                    }
                ],
                "eq_spans": [],
                "section": "Discussion",
                "sec_num": "4"
            },
            {
                "text": "However, they are easy to understand as follows. We can explain this tendency as follows. Short NEs like 'Washington' are often ambiguous, but longer NEs like 'Washington State University' are less ambiguous. Thus, short recognition rules often have dozens of dt-rules, whereas long rules have simple constraints.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Discussion",
                "sec_num": "4"
            },
            {
                "text": "Some NE systems use decision tree learning to classify a word. Sekine's system (1998) is similar to the above ME systems, but C4.5 (Quinlan, 1993 ) is used instead. A similar system participated in IREX, but failed to show good performance. Borthwick (1999) explained the reason for this tendency. When he added lexical questions (e.g., whether the current word is ( or not) to Sekine's system, C4.5 crashed with CRL NE. Accordingly, the decision tree systems did not directly use words as features. Instead, they used a word's memberships in their word lists. Cowie (1995) interprets a decision tree deterministically and uses heuristic rewriting rules to get consistent results. Baluja's system (2000) simply determines whether a word is in an NE or not and does not classify it. On the other hand, Paliouras (2000) uses decision tree learning for classification of a noun phrase by assuming that named entities are noun phrases. Gallippi (1996) employs hundreds of hand-crafted templates as features for decision tree learning. Brill's rule generation method (Brill, 2000) is not used for NE tasks, but it might be useful.",
                "cite_spans": [
                    {
                        "start": 131,
                        "end": 145,
                        "text": "(Quinlan, 1993",
                        "ref_id": "BIBREF11"
                    },
                    {
                        "start": 241,
                        "end": 257,
                        "text": "Borthwick (1999)",
                        "ref_id": "BIBREF3"
                    },
                    {
                        "start": 561,
                        "end": 573,
                        "text": "Cowie (1995)",
                        "ref_id": "BIBREF6"
                    },
                    {
                        "start": 681,
                        "end": 703,
                        "text": "Baluja's system (2000)",
                        "ref_id": null
                    },
                    {
                        "start": 801,
                        "end": 817,
                        "text": "Paliouras (2000)",
                        "ref_id": "BIBREF10"
                    },
                    {
                        "start": 932,
                        "end": 947,
                        "text": "Gallippi (1996)",
                        "ref_id": "BIBREF7"
                    },
                    {
                        "start": 1062,
                        "end": 1075,
                        "text": "(Brill, 2000)",
                        "ref_id": "BIBREF4"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Discussion",
                "sec_num": "4"
            },
            {
                "text": "Recently, unsupervised or minimally supervised models have been proposed (Collins and Singer, 2000; ).",
                "cite_spans": [
                    {
                        "start": 73,
                        "end": 99,
                        "text": "(Collins and Singer, 2000;",
                        "ref_id": "BIBREF5"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Discussion",
                "sec_num": "4"
            },
            {
                "text": "Collins' system is not a full NE system and Utsuro's score is not very good yet, but they represent interesting directions.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Discussion",
                "sec_num": "4"
            },
            {
                "text": "As far as we can tell, Japanese NE recognition technology has not yet matured. Conventional decision tree systems have not shown good performance. The maximum entropy method is competitive, but adding more training data causes problems. In this paper, we presented an alternative method based on decision tree learning and longest match. According to our experiments, this method's performance is comparable to that of the maximum entropy system, and it can be trained more efficiently. We hope our method can be applicable to other languages.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusions",
                "sec_num": "5"
            }
        ],
        "back_matter": [
            {
                "text": "I would like to thank Yutaka Sasaki, Kiyotaka Uchimoto, Tsuneaki Kato, Eisaku Maeda, Shigeru Katagiri, Kenichiro Ishii, and anonymous reviewers.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Acknowledgement",
                "sec_num": null
            }
        ],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Natural Language Understanding 2nd",
                "authors": [
                    {
                        "first": "James",
                        "middle": [],
                        "last": "Allen",
                        "suffix": ""
                    }
                ],
                "year": 1995,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "James Allen. 1995. Natural Language Understanding 2nd. Ed. Benjamin Cummings.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "Applying Machine Learning for High Performance Named-Entity Extraction",
                "authors": [
                    {
                        "first": "Shumeet",
                        "middle": [],
                        "last": "Baluja",
                        "suffix": ""
                    },
                    {
                        "first": "Vibhu",
                        "middle": [],
                        "last": "Mittal",
                        "suffix": ""
                    },
                    {
                        "first": "Rahul",
                        "middle": [],
                        "last": "Sukthankar",
                        "suffix": ""
                    }
                ],
                "year": 2000,
                "venue": "Computational Intelligence",
                "volume": "16",
                "issue": "4",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Shumeet Baluja, Vibhu Mittal, and Rahul Sukthankar. 2000. Applying Machine Learning for High Perfor- mance Named-Entity Extraction. Computational Intelligence, 16(4).",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "An algorithm that learns what's in a name",
                "authors": [
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Daniel",
                        "suffix": ""
                    },
                    {
                        "first": "Richard",
                        "middle": [],
                        "last": "Bikel",
                        "suffix": ""
                    },
                    {
                        "first": "Ralph",
                        "middle": [
                            "M"
                        ],
                        "last": "Schwartz",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Weischedel",
                        "suffix": ""
                    }
                ],
                "year": 1999,
                "venue": "Machine Learning",
                "volume": "34",
                "issue": "",
                "pages": "211--231",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Daniel M. Bikel, Richard Schwartz, and Ralph M. Weischedel. 1999. An algorithm that learns what's in a name. Machine Learning, 34(1-3):211-231.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "A Maximum Entropy Approach to Named Entity Recognition",
                "authors": [
                    {
                        "first": "Andrew",
                        "middle": [],
                        "last": "Borthwick",
                        "suffix": ""
                    }
                ],
                "year": 1999,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Andrew Borthwick. 1999. A Maximum Entropy Ap- proach to Named Entity Recognition. Ph.D. thesis, New York University.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "Pattern-based disambiguation for natural language processing",
                "authors": [
                    {
                        "first": "Eric",
                        "middle": [],
                        "last": "Brill",
                        "suffix": ""
                    }
                ],
                "year": 2000,
                "venue": "Proceedings of EMNLP/VLC-2000",
                "volume": "",
                "issue": "",
                "pages": "1--8",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Eric Brill. 2000. Pattern-based disambiguation for natural language processing. In Proceedings of EMNLP/VLC-2000, pages 1-8.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Unsupervised models for named entity classification",
                "authors": [
                    {
                        "first": "Michael",
                        "middle": [],
                        "last": "Collins",
                        "suffix": ""
                    },
                    {
                        "first": "Yoram",
                        "middle": [],
                        "last": "Singer",
                        "suffix": ""
                    }
                ],
                "year": 2000,
                "venue": "Proceedings of EMNLP/VLC",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Michael Collins and Yoram Singer. 2000. Unsuper- vised models for named entity classification. In Proceedings of EMNLP/VLC.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "CRL/NMSU description of the CRL/NMSU system used for MUC-6",
                "authors": [
                    {
                        "first": "Jim",
                        "middle": [],
                        "last": "Cowie",
                        "suffix": ""
                    }
                ],
                "year": 1995,
                "venue": "Proceedings of the Sixth Message Understanding Conference",
                "volume": "",
                "issue": "",
                "pages": "157--166",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Jim Cowie. 1995. CRL/NMSU description of the CRL/NMSU system used for MUC-6. In Proceed- ings of the Sixth Message Understanding Confer- ence, pages 157-166. Morgan Kaufmann.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "Learning to recognize names accross lanugages",
                "authors": [
                    {
                        "first": "Anthony",
                        "middle": [
                            "F"
                        ],
                        "last": "Gallippi",
                        "suffix": ""
                    }
                ],
                "year": 1996,
                "venue": "Proceedings of the International Conference on Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "424--429",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Anthony F. Gallippi. 1996. Learning to recognize names accross lanugages. In Proceedings of the In- ternational Conference on Computational Linguis- tics, pages 424-429.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "Proceedings of the IREX Workshop",
                "authors": [
                    {
                        "first": "",
                        "middle": [],
                        "last": "Irex Comittee",
                        "suffix": ""
                    }
                ],
                "year": 1999,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "IREX Comittee. 1999. Proceedings of the IREX Workshop (in Japanese).",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "Proceedings of the Sixth Message Understanding Conference",
                "authors": [
                    {
                        "first": "",
                        "middle": [],
                        "last": "Muc-6",
                        "suffix": ""
                    }
                ],
                "year": 1996,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "MUC-6. 1996. Proceedings of the Sixth Message Un- derstanding Conference. Morgan Kaufmann.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "Learning decision trees for named-entity recognition and classification",
                "authors": [
                    {
                        "first": "Georgios",
                        "middle": [],
                        "last": "Paliouras",
                        "suffix": ""
                    },
                    {
                        "first": "Vangelis",
                        "middle": [],
                        "last": "Karkaletsis",
                        "suffix": ""
                    },
                    {
                        "first": "Georgios",
                        "middle": [],
                        "last": "Petasis",
                        "suffix": ""
                    },
                    {
                        "first": "Constantine",
                        "middle": [
                            "D"
                        ],
                        "last": "Spyropoulos",
                        "suffix": ""
                    }
                ],
                "year": 2000,
                "venue": "ECAI Workshop on Machine Learning for Information Extraction",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Georgios Paliouras, Vangelis Karkaletsis, Georgios Petasis, and Constantine D. Spyropoulos. 2000. Learning decision trees for named-entity recogni- tion and classification. In ECAI Workshop on Ma- chine Learning for Information Extraction.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "C4.5: Programs for Machine Learning",
                "authors": [
                    {
                        "first": "J",
                        "middle": [],
                        "last": "",
                        "suffix": ""
                    },
                    {
                        "first": "Ross",
                        "middle": [],
                        "last": "Quinlan",
                        "suffix": ""
                    }
                ],
                "year": 1993,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "J. Ross Quinlan. 1993. C4.5: Programs for Machine Learning. Morgan Kaufmann Publishers.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "Maximum entropy modeling toolkit, release 1.5 Beta",
                "authors": [
                    {
                        "first": "Eric",
                        "middle": [],
                        "last": "Sven Ristad",
                        "suffix": ""
                    }
                ],
                "year": 1997,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Eric Sven Ristad, 1997. Maximum entropy modeling toolkit, release 1.5 Beta. ftp:// ftp. cs. princeton. edu/ pub/ packages/ memt, January.",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "Named entity chunking techniques in supervised learning for Japanese named entity recognition",
                "authors": [
                    {
                        "first": "Manabu",
                        "middle": [],
                        "last": "Sassano",
                        "suffix": ""
                    },
                    {
                        "first": "Takehito",
                        "middle": [],
                        "last": "Utsuro",
                        "suffix": ""
                    }
                ],
                "year": 2000,
                "venue": "Proceedings of the International Conference on Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "705--711",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Manabu Sassano and Takehito Utsuro. 2000. Named entity chunking techniques in supervised learning for Japanese named entity recognition. In Proceedings of the International Conference on Computational Linguistics, pages 705-711.",
                "links": null
            },
            "BIBREF14": {
                "ref_id": "b14",
                "title": "Japanese named entity extraction evaluation - analysis of results",
                "authors": [
                    {
                        "first": "Satoshi",
                        "middle": [],
                        "last": "Sekine",
                        "suffix": ""
                    },
                    {
                        "first": "Yoshio",
                        "middle": [],
                        "last": "Eriguchi",
                        "suffix": ""
                    }
                ],
                "year": 2000,
                "venue": "Proceedings of 18th International Conference on Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "1106--1110",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Satoshi Sekine and Yoshio Eriguchi. 2000. Japanese named entity extraction evaluation - analysis of results -. In Proceedings of 18th International Conference on Computational Linguistics, pages 1106-1110.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "A decision tree method for finding and classifying names in Japanese texts",
                "authors": [
                    {
                        "first": "Satoshi",
                        "middle": [],
                        "last": "Sekine",
                        "suffix": ""
                    },
                    {
                        "first": "Ralph",
                        "middle": [],
                        "last": "Grishman",
                        "suffix": ""
                    },
                    {
                        "first": "Hiroyuki",
                        "middle": [],
                        "last": "Shinnou",
                        "suffix": ""
                    }
                ],
                "year": 1998,
                "venue": "Proceedings of the Sixth Workshop on Very Large Corpora",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Satoshi Sekine, Ralph Grishman, and Hiroyuki Shin- nou. 1998. A decision tree method for finding and classifying names in Japanese texts. In Proceedings of the Sixth Workshop on Very Large Corpora.",
                "links": null
            },
            "BIBREF16": {
                "ref_id": "b16",
                "title": "Named entity extraction based on a maximum entropy model and transformation rules",
                "authors": [
                    {
                        "first": "Kiyotaka",
                        "middle": [],
                        "last": "Uchimoto",
                        "suffix": ""
                    },
                    {
                        "first": "Qing",
                        "middle": [],
                        "last": "Ma",
                        "suffix": ""
                    },
                    {
                        "first": "Masaki",
                        "middle": [],
                        "last": "Murata",
                        "suffix": ""
                    },
                    {
                        "first": "Hiromi",
                        "middle": [],
                        "last": "Ozaku",
                        "suffix": ""
                    },
                    {
                        "first": "Masao",
                        "middle": [],
                        "last": "Utiyama",
                        "suffix": ""
                    },
                    {
                        "first": "Hitoshi",
                        "middle": [],
                        "last": "Isahara",
                        "suffix": ""
                    }
                ],
                "year": 2000,
                "venue": "Journal of Natural Language Processing",
                "volume": "7",
                "issue": "2",
                "pages": "63--90",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Kiyotaka Uchimoto, Qing Ma, Masaki Murata, Hiromi Ozaku, Masao Utiyama, and Hitoshi Isahara. 2000. Named entity extraction based on a maximum entropy model and transformation rules (in Japanese). Journal of Natural Language Processing, 7(2):63-90.",
                "links": null
            },
            "BIBREF17": {
                "ref_id": "b17",
                "title": "Minimally supervised Japanese named entity recognition: Resources and evaluation",
                "authors": [
                    {
                        "first": "Takehito",
                        "middle": [],
                        "last": "Utsuro",
                        "suffix": ""
                    },
                    {
                        "first": "Manabu",
                        "middle": [],
                        "last": "Sassano",
                        "suffix": ""
                    }
                ],
                "year": 2000,
                "venue": "Proceedings of the Second International Conference on Language Resources and Evaluation",
                "volume": "",
                "issue": "",
                "pages": "1229--1236",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Takehito Utsuro and Manabu Sassano. 2000. Minimally supervised Japanese named entity recognition: Resources and evaluation. In Proceedings of the Second International Conference on Language Resources and Evaluation, pages 1229-1236.",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF0": {
                "text": "Rough sketch of RG+DT system",
                "type_str": "figure",
                "uris": null,
                "num": null
            },
            "FIGREF2": {
                "text": "Comparison of RG+DT systems and Max. Ent. system. ...number of words in an NE. However, the above results are encouraging. Its performance is comparable to the ME system. Why did it work so well? First, the percentage of long NEs is negligible: 91% of the NEs in the training data have at most three words. Second, the POS tags frequently used in NEs are limited.",
                "type_str": "figure",
                "uris": null,
                "num": null
            },
            "TABREF1": {
                "content": "<table><tr><td>{ORGANIZATION, PERSON, LOCATION, ARTIFACT, DATE, TIME, MONEY, PERCENT} \u00d7 {BEGIN, MIDDLE, END, SINGLE} \u222a {OTHER}. For instance, the words in \"President &lt;PERSON&gt; George Herbert Walker Bush &lt;/PERSON&gt;\" are classified as follows: President = OTHER, George = PERSON-BEGIN, Herbert = PERSON-MIDDLE, Walker = PERSON-MIDDLE, Bush = PERSON-END.</td></tr></table>",
                "num": null,
                "text": "ME systems are quite similar but differ in details. They regarded Japanese NE recognition as a classification problem of a word. The first word of a person name is classified as PERSON-BEGIN. The last word is classified as PERSON-END. Other words in the person's name (if any) are classified as PERSON-MIDDLE. If the person's name is composed of only one word, it is classified as PERSON-SINGLE. Similar labels are given to all other classes such as LOCATION. Non-NE words are classified as OTHER. Thus, every word is classified into 33 classes, i.e.,",
                "type_str": "table",
                "html": null
            },
            "TABREF3": {
                "content": "<table><tr><td colspan=\"2\">: Data used for comparison</td></tr><tr><td>attained 81.18% for</td><td>0 \u1e84 \" \u00a6 \" 5</td></tr></table>",
                "num": null,
                "text": "",
                "type_str": "table",
                "html": null
            }
        }
    }
}