{
    "paper_id": "P19-1014",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T08:21:59.365807Z"
    },
    "title": "A Joint Named-Entity Recognizer for Heterogeneous Tag-sets Using a Tag Hierarchy",
    "authors": [
        {
            "first": "Genady",
            "middle": [],
            "last": "Beryozkin",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "Google Research Tel Aviv",
                "location": {
                    "country": "Israel"
                }
            },
            "email": "genady@google.com"
        },
        {
            "first": "Yoel",
            "middle": [],
            "last": "Drori",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "Google Research Tel Aviv",
                "location": {
                    "country": "Israel"
                }
            },
            "email": ""
        },
        {
            "first": "Oren",
            "middle": [],
            "last": "Gilon",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "Google Research Tel Aviv",
                "location": {
                    "country": "Israel"
                }
            },
            "email": "ogilon@google.com"
        },
        {
            "first": "Tzvika",
            "middle": [],
            "last": "Hartman",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "Google Research Tel Aviv",
                "location": {
                    "country": "Israel"
                }
            },
            "email": "tzvika@google.com"
        },
        {
            "first": "Idan",
            "middle": [],
            "last": "Szpektor",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "Google Research Tel Aviv",
                "location": {
                    "country": "Israel"
                }
            },
            "email": "szpektor@google.com"
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "We study a variant of domain adaptation for named-entity recognition where multiple, heterogeneously tagged training sets are available. Furthermore, the test tag-set is not identical to any individual training tag-set. Yet, the relations between all tags are provided in a tag hierarchy, covering the test tags as a combination of training tags. This setting occurs when various datasets are created using different annotation schemes. This is also the case of extending a tag-set with a new tag by annotating only the new tag in a new dataset. We propose to use the given tag hierarchy to jointly learn a neural network that shares its tagging layer among all tag-sets. We compare this model to combining independent models and to a model based on the multitasking approach. Our experiments show the benefit of the tag-hierarchy model, especially when facing non-trivial consolidation of tag-sets.",
    "pdf_parse": {
        "paper_id": "P19-1014",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "We study a variant of domain adaptation for named-entity recognition where multiple, heterogeneously tagged training sets are available. Furthermore, the test tag-set is not identical to any individual training tag-set. Yet, the relations between all tags are provided in a tag hierarchy, covering the test tags as a combination of training tags. This setting occurs when various datasets are created using different annotation schemes. This is also the case of extending a tag-set with a new tag by annotating only the new tag in a new dataset. We propose to use the given tag hierarchy to jointly learn a neural network that shares its tagging layer among all tag-sets. We compare this model to combining independent models and to a model based on the multitasking approach. Our experiments show the benefit of the tag-hierarchy model, especially when facing non-trivial consolidation of tag-sets.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "Named Entity Recognition (NER) has seen significant progress in the last couple of years with the application of Neural Networks to the task. Such models achieve state-of-the-art performance with little or no manual feature engineering (Collobert et al., 2011; Huang et al., 2015; Lample et al., 2016; Ma and Hovy, 2016; Dernoncourt et al., 2017) . Following this success, more complex NER setups are approached with neural models, among them domain adaptation (Qu et al., 2016; He and Sun, 2017; Dong et al., 2017) .",
                "cite_spans": [
                    {
                        "start": 236,
                        "end": 260,
                        "text": "(Collobert et al., 2011;",
                        "ref_id": "BIBREF7"
                    },
                    {
                        "start": 261,
                        "end": 280,
                        "text": "Huang et al., 2015;",
                        "ref_id": "BIBREF16"
                    },
                    {
                        "start": 281,
                        "end": 301,
                        "text": "Lample et al., 2016;",
                        "ref_id": "BIBREF19"
                    },
                    {
                        "start": 302,
                        "end": 320,
                        "text": "Ma and Hovy, 2016;",
                        "ref_id": "BIBREF25"
                    },
                    {
                        "start": 321,
                        "end": 346,
                        "text": "Dernoncourt et al., 2017)",
                        "ref_id": "BIBREF8"
                    },
                    {
                        "start": 461,
                        "end": 478,
                        "text": "(Qu et al., 2016;",
                        "ref_id": "BIBREF29"
                    },
                    {
                        "start": 479,
                        "end": 496,
                        "text": "He and Sun, 2017;",
                        "ref_id": "BIBREF14"
                    },
                    {
                        "start": 497,
                        "end": 515,
                        "text": "Dong et al., 2017)",
                        "ref_id": "BIBREF9"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "In this work we study one type of domain adaptation for NER, denoted here heterogeneous tagsets. In this variant, samples from the test set are not available at training time. Furthermore, the test tag-set differs from each training tag-set. However every test tag can be represented either as a single training tag or as a combination of several training tags. This information is given in the form of a hypernym hierarchy over all tags, training and test (see Fig. 1 ).",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 462,
                        "end": 468,
                        "text": "Fig. 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "This setting arises when different schemes are used for annotating multiple datasets for the same task. This often occurs in the medical domain, where healthcare providers use customized tagsets to create their own private test sets (Shickel et al., 2017; Lee et al., 2018) . Another scenario is selective annotation, as in the case of extending an existing tag-set, e.g. {'Name', 'Location'}, with another tag, e.g. 'Date'. To save annotation effort, new training data is labeled only with the new tag. This case of disjoint tag-sets is also discussed in the work of Greenberg et al. (2018) . A similar case is extending a training-set with new examples in which only rare tags are annotated. In domains where training data is scarce, out-of-domain datasets annotated with infrequent tags may be very valuable.",
                "cite_spans": [
                    {
                        "start": 233,
                        "end": 255,
                        "text": "(Shickel et al., 2017;",
                        "ref_id": "BIBREF32"
                    },
                    {
                        "start": 256,
                        "end": 273,
                        "text": "Lee et al., 2018)",
                        "ref_id": "BIBREF20"
                    },
                    {
                        "start": 568,
                        "end": 591,
                        "text": "Greenberg et al. (2018)",
                        "ref_id": "BIBREF12"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "A naive approach concatenates all trainingsets, ignoring the differences between the tagging schemes in each example. A different approach would be to learn to tag with multiple training tagsets. Then, in a post-processing step, the predictions from the different tag-sets need to be consolidated into a single test tag sequence, resolving tagging differences along the way. We study two such models. The first model learns an independent NER model for each training tag-set. The second model applies the multitasking (MTL) (Collobert et al., 2011; Ruder, 2017) paradigm, in which a shared latent representation of the input text is fed into separate tagging layers.",
                "cite_spans": [
                    {
                        "start": 524,
                        "end": 548,
                        "text": "(Collobert et al., 2011;",
                        "ref_id": "BIBREF7"
                    },
                    {
                        "start": 549,
                        "end": 561,
                        "text": "Ruder, 2017)",
                        "ref_id": "BIBREF30"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "The above models require heuristic postprocessing to consolidate the different predicted tag sequences. To overcome this limitation, we propose a model that incorporates the given tag hierarchy within the neural NER model. Specifically, this model learns to predict a tag sequence only over the fine-grained tags in the hierarchy.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Tag-set 1 (T 1 ): Name, Street, City, Hospital, Age>90 Tag-set 2 (T 2 ): First Name, Last Name, Address, Age Tag-set 3 (T 3 ) Figure 1 : A tag hierarchy for three tag-sets.",
                "cite_spans": [
                    {
                        "start": 24,
                        "end": 31,
                        "text": "Street,",
                        "ref_id": null
                    },
                    {
                        "start": 32,
                        "end": 37,
                        "text": "City,",
                        "ref_id": null
                    },
                    {
                        "start": 38,
                        "end": 47,
                        "text": "Hospital,",
                        "ref_id": null
                    },
                    {
                        "start": 48,
                        "end": 54,
                        "text": "Age>90",
                        "ref_id": null
                    }
                ],
                "ref_spans": [
                    {
                        "start": 109,
                        "end": 125,
                        "text": "Tag-set 3 (T 3 )",
                        "ref_id": "TABREF5"
                    },
                    {
                        "start": 126,
                        "end": 134,
                        "text": "Figure 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "At training time, gradients on each dataset-specific labeled examples are propagated as gradients on plausible fine-grained tags. At inference time the model predicts a single sequence of fine-grained tags, which are then mapped to the test tag-set by traversing the tag hierarchy. Importantly, all tagging decisions are performed in the model without the need for a post-processing consolidation step.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "We conducted two experiments. The first evaluated the extension of a tag-set with a new tag via selective annotation of a new dataset with only the extending tag, using datasets from the medical and news domains. In the second experiment we integrated two full tag-sets from the medical domain with their training data while evaluating on a third test tag-set. The results show that the model which incorporates the tag-hierarchy is more robust compared to a combination of independent models or MTL, and typically outperforms them. This is especially evident when many tagging collisions need to be settled at post-processing. In these cases, the performance gap in favor of the tag-hierarchy model is large.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "The goal in the heterogeneous tag-sets domain adaptation task is to learn an NER model M that given an input token sequence",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Task Definition",
                "sec_num": "2.1"
            },
            {
                "text": "x = {x i } n 1 infers a tag sequence y = {y i } n 1 = M (x) over a test tag-set T s , \u2200 i y i \u2208T s . To learn the model, K train- ing datasets {DS r k } K k=1",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Task Definition",
                "sec_num": "2.1"
            },
            {
                "text": "are provided, each labeled with its own tag-set T r k . Superscripts 's' and 'r' stand for 'test' and 'training', respectively. In this task, no training tag-set is identical to the test tagset T s by itself. However, all tags in T s can be covered by combining the training tag-sets {T r k } K k=1 . This information is provided in the form of a directed acyclic graph (DAG) representing hyper- nymy relations between all training and test tags. Fig. 1 illustrates such a hierarchy.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 447,
                        "end": 453,
                        "text": "Fig. 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Task Definition",
                "sec_num": "2.1"
            },
            {
                "text": "As mentioned above, an example scenario is selective annotation, in which an original tag-set is extended with a new tag t, each with its own training data, and the test tag-set is their union. But, some setups require combinations other than a simple union, e.g. covering the test tag 'Address' with the finer training tags 'Street' and 'City', each from a different tag-set.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Task Definition",
                "sec_num": "2.1"
            },
            {
                "text": "This task is different from inductive domain adaptation (Pan and Yang, 2010; Ruder, 2017) , in which the tag-sets are different but the tasks differ as well (e.g. NER and parsing), with no need to map the outcomes to a single tag-set at test time.",
                "cite_spans": [
                    {
                        "start": 56,
                        "end": 76,
                        "text": "(Pan and Yang, 2010;",
                        "ref_id": "BIBREF26"
                    },
                    {
                        "start": 77,
                        "end": 89,
                        "text": "Ruder, 2017)",
                        "ref_id": "BIBREF30"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Task Definition",
                "sec_num": "2.1"
            },
            {
                "text": "As the underlying architecture shared by all models in this paper, we follow the neural network proposed by Lample et al. (2016) , which achieved state-of-the-art results on NER. In this model, depicted in Fig. 2 , each input token x i is represented as a combination of: (a) a one-hot vector x w i , mapping the input to a fixed word vocabulary, and (b) a sequence of one-hot vectors {x c i,j } n i j=1 , representing the input word's character sequence.",
                "cite_spans": [
                    {
                        "start": 108,
                        "end": 128,
                        "text": "Lample et al. (2016)",
                        "ref_id": "BIBREF19"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 206,
                        "end": 212,
                        "text": "Fig. 2",
                        "ref_id": "FIGREF0"
                    }
                ],
                "eq_spans": [],
                "section": "Neural network for NER",
                "sec_num": "2.2"
            },
            {
                "text": "Each input token x i is first embedded in latent space by applying both a word-embedding matrix, we i = E x w i , and a character-based embedding layer ce i = CharBiRNN({x c i,j }) (Ling et al., 2015) . This output of this step is e i = ce i \u2295 we i , where \u2295 stands for vector concatenation. Then, the embedding vector sequence {e i } n is re-encoded in context using a bidirectional RNN layer {r i } n 1 = BiRNN({e i } n 1 ) (Schuster and Paliwal, 1997) . The sequence {r i } n 1 constitutes the latent representation of the input text.",
                "cite_spans": [
                    {
                        "start": 181,
                        "end": 200,
                        "text": "(Ling et al., 2015)",
                        "ref_id": "BIBREF22"
                    },
                    {
                        "start": 426,
                        "end": 454,
                        "text": "(Schuster and Paliwal, 1997)",
                        "ref_id": "BIBREF31"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Neural network for NER",
                "sec_num": "2.2"
            },
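The concatenation step e_i = ce_i ⊕ we_i can be sketched as follows; the CharBiRNN and embedding lookup are stood in for by plain arrays, and all names and shapes are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def embed_tokens(word_ids, char_reprs, E):
    """e_i = ce_i (+) we_i: concatenate a character-based representation
    with a word embedding looked up from matrix E (an illustrative
    stand-in for the CharBiRNN + lookup layers of the base model).

    word_ids:   list of n word indices into the vocabulary.
    char_reprs: (n, c) array of character-level encodings ce_i.
    E:          (V, w) word-embedding matrix.
    Returns an (n, c + w) array of token embeddings e_i.
    """
    we = E[np.asarray(word_ids)]                       # (n, w) word embeddings
    return np.concatenate([char_reprs, we], axis=1)    # (n, c + w)
```

The resulting sequence would then be fed to the BiRNN encoder to produce {r_i}.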
            {
                "text": "Finally, each re-encoded vector r i is projected to tag space for the target tag-set T ,",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Neural network for NER",
                "sec_num": "2.2"
            },
            {
                "text": "t i = P r i , where |t i | = |T |. The sequence {t i } n",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Neural network for NER",
                "sec_num": "2.2"
            },
            {
                "text": "1 is then taken as input to a CRF layer (Lafferty et al., 2001 ), which maintains a global tag transition matrix. At inference time, the model output is y = M (x), the most probable CRF tag sequence for input x.",
                "cite_spans": [
                    {
                        "start": 40,
                        "end": 62,
                        "text": "(Lafferty et al., 2001",
                        "ref_id": "BIBREF18"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Neural network for NER",
                "sec_num": "2.2"
            },
            {
                "text": "One way to learn a model for the heterogeneous tag-sets setting is to train a base NER (Sec. 2.2) on the concatenation of all training-sets, predicting tags from the union of all training tag-sets. In our experiments, this model under performed, due to the fact that it treats each training example as fully tagged despite being tagged only with the tags belonging to the training-set from which the example is taken (see Sec. 6).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Models for Multiple Tagging Layers",
                "sec_num": "3"
            },
            {
                "text": "We next present two models that instead learn to tag each training tag-set separately. In the first model the outputs from independent base models, each trained on a different tag-set, are merged. The second model utilizes the the multitasking approach to train separate tagging layers that share a single text representation layer.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Models for Multiple Tagging Layers",
                "sec_num": "3"
            },
            {
                "text": "In this model, we train a separate NER model for each training set, resulting in K models {M k } K k=1 . At test time, each model predicts a sequence y k = M k (x) over the corresponding tag-set T r k . The sequences {y k } K k=1 are consolidated into a single sequence y s over the test tag-set T s .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Combining independent models",
                "sec_num": "3.1"
            },
            {
                "text": "We perform this consolidation in a postprocessing step. First, each predicted tag y k,i is mapped to the test tag-set as y s k,i . We employ the provided tag hierarchy for this mapping by traversing it starting from y k,i until a test tag is reached. Then, for every token x i , we consider the test tags predicted at position i by the different models M (x i ) = {y s k,i |y s k,i = 'Other'}. Cases where M (x i ) contains more than one tag are called collisions. Models must consolidate collisions, selecting a single predicted tag for x i .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Combining independent models",
                "sec_num": "3.1"
            },
            {
                "text": "We introduce three different consolidation methods. The first is to randomly select a tag from M (x i ). The second chooses the tag that originates from the tag sequence y k with the highest CRF probability score. The third computes the marginal CRF tag probability for each tag and selects the one with the highest probability.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Combining independent models",
                "sec_num": "3.1"
            },
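The three consolidation heuristics can be sketched as follows; the function name and the per-model (tag, sequence score, marginal probability) interface are illustrative assumptions, not the paper's code.

```python
import random

def consolidate(position_preds, method="marginal"):
    """Pick one test tag for a token from colliding model predictions.

    position_preds: list of (tag, seq_score, marginal_prob) tuples, one per
    independent model that predicted a non-'Other' tag at this position
    (a hypothetical interface; the paper does not specify one).
    """
    if not position_preds:
        return "Other"                    # no model tagged this token
    if method == "random":                # heuristic 1: arbitrary choice
        return random.choice(position_preds)[0]
    if method == "seq_score":             # heuristic 2: best whole-sequence CRF score
        return max(position_preds, key=lambda p: p[1])[0]
    if method == "marginal":              # heuristic 3: best per-token CRF marginal
        return max(position_preds, key=lambda p: p[2])[0]
    raise ValueError(method)
```

For example, under "seq_score" the tag from the model whose entire predicted sequence scored highest wins, even if another model was locally more confident.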
            {
                "text": "Lately, several works explored using multitasking (MTL) for inductive transfer learning within a neural architecture (Collobert and Weston, 2008; Chen et al., 2016; Peng and Dredze, 2017) . Such algorithms jointly train a single model to solve different NLP tasks, such as NER, sentiment analysis and text classification. The various tasks share the same text representation layer in the model but maintain a separate tagging layer per task.",
                "cite_spans": [
                    {
                        "start": 117,
                        "end": 145,
                        "text": "(Collobert and Weston, 2008;",
                        "ref_id": "BIBREF6"
                    },
                    {
                        "start": 146,
                        "end": 164,
                        "text": "Chen et al., 2016;",
                        "ref_id": "BIBREF4"
                    },
                    {
                        "start": 165,
                        "end": 187,
                        "text": "Peng and Dredze, 2017)",
                        "ref_id": "BIBREF27"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Multitasking for heterogeneous tag-sets",
                "sec_num": "3.2"
            },
            {
                "text": "We adapt multitasking to heterogeneous tagsets by considering each training dataset, which has a different tag-set T r k , as a separate NER task. Thus, a single model is trained, in which the latent text representation {r i } n 1 (see Sec. 2.2) is shared between NER tasks. As mentioned above, the tagging layers (projection and CRF) are kept separate for each tag-set. Fig. 3 illustrates this architecture.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 371,
                        "end": 377,
                        "text": "Fig. 3",
                        "ref_id": "FIGREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Multitasking for heterogeneous tag-sets",
                "sec_num": "3.2"
            },
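The shared-encoder, per-tag-set-head structure can be sketched as follows; the per-tag-set CRF layers are omitted, and all names and shapes are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def mtl_tag_scores(shared_repr, head_projections):
    """Apply one tagging head per training tag-set to the shared
    text representation {r_i} (CRF layers omitted for brevity).

    shared_repr:      (n, h) array, the shared BiRNN output for n tokens.
    head_projections: dict mapping tag-set name -> (h, |T_k|) matrix.
    Returns a dict of (n, |T_k|) tag-score matrices, one per tag-set.
    """
    return {name: shared_repr @ P for name, P in head_projections.items()}

# toy usage: 4 tokens, hidden size 8, two tag-sets of different sizes
r = np.random.randn(4, 8)
heads = {"T1": np.random.randn(8, 6), "T2": np.random.randn(8, 5)}
scores = mtl_tag_scores(r, heads)
```

Gradients from each task's loss would flow through its own head and into the shared encoder, which is what ties the tasks together.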
            {
                "text": "We emphasize that the output of the MTL model still consists of {y k } K k=1 different tag sequence predictions. They are consolidated into a final single sequence y s using the same post-processing step described in Sec. 3.1.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Multitasking for heterogeneous tag-sets",
                "sec_num": "3.2"
            },
            {
                "text": "The models introduced in Sec. 3.1 and 3.2 learn to predict a tag sequence for each training tagset separately and they do not share parameters between tagging layers. In addition, they require Figure 4 : The tag hierarchy in Fig. 1 for three tag-sets after closure extension. Green nodes and edges were automatically added in this process. Fine-grained tags are surrounded by a dotted box.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 193,
                        "end": 201,
                        "text": "Figure 4",
                        "ref_id": null
                    },
                    {
                        "start": 225,
                        "end": 231,
                        "text": "Fig. 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Tag Hierarchy Model",
                "sec_num": "4"
            },
            {
                "text": "Tag-set 1 (T 1 ): Name, Street, City, Hospital, Age>90, T1-Other Tag-set 2 (T 2 ): First Name, Last Name, Address, Age, T 2 -Other Tag-set 3 (T 3 ): Name, Location,",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Tag Hierarchy Model",
                "sec_num": "4"
            },
            {
                "text": "a post-processing step, outside of the model, for merging the tag sequences inferred for the different tag-sets. A simple concatenation of all training data is also not enough to accommodate the differences between the tag-sets within the model (see Sec. 3). Moreover, none of these models utilizes the relations between tags, which are provided as input in the form of a tag hierarchy.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 250,
                        "end": 254,
                        "text": "Sec.",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Tag Hierarchy Model",
                "sec_num": "4"
            },
            {
                "text": "In this section, we propose a model that addresses these limitations. This model utilizes the given tag hierarchy at training time to learn a single, shared tagging layer that predicts only finegrained tags. The hierarchy is then used during inference to map fine-grained tags onto a target tag-set. Consequently, all tagging decisions are made in the model, without the need for a postprocessing step.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Tag Hierarchy Model",
                "sec_num": "4"
            },
            {
                "text": "In the input hierarchy DAG, each node represents some semantic role of words in sentences, (e.g. 'Name'). A directed edge c \u2192 d implies that c is a hyponym of d, meaning c captures a subset of the semantics of d. Examples include 'LastName' \u2192 'Name', and 'Street' \u2192 'Location' in Fig. 1 . We denote the set of all tags that capture some subset of semantics of d by If a node d has no hyponyms (Sem(d) = {d}), it represents some fine-grained tag semantics. We denote the set of all fine-grained tags by T F G . We also denote all fine-grained tags that are hyponyms of d by",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 280,
                        "end": 286,
                        "text": "Fig. 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Notations",
                "sec_num": "4.1"
            },
            {
                "text": "Sem(d) = {d} \u222a {c|c R \u2212 \u2192 d}, where R \u2212 \u2192 in- dicates that",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Notations",
                "sec_num": "4.1"
            },
            {
                "text": "Fine(d) = T F G \u2229 Sem(d), e.g.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Notations",
                "sec_num": "4.1"
            },
            {
                "text": "Fine(N ame) = {LastN ame, F irstN ame}. As mentioned above, our hierarchical model predicts tag sequences only from T F G and then maps them onto a target tag-set.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Notations",
                "sec_num": "4.1"
            },
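Concretely, Sem(d) is d plus everything that can reach d in the hypernymy DAG, and Fine(d) keeps only the leaves. A minimal sketch over a toy fragment of Fig. 1; the dict-based graph encoding and names are assumptions for illustration.

```python
def descendants(d, children):
    """Sem(d): d plus every tag with a directed hyponym path to d.
    children maps a tag to its direct hyponyms (the inverted view of
    the c -> d hypernymy edges in the paper)."""
    out, stack = {d}, [d]
    while stack:
        for c in children.get(stack.pop(), ()):
            if c not in out:
                out.add(c)
                stack.append(c)
    return out

def fine(d, children):
    """Fine(d) = T_FG intersected with Sem(d); fine-grained tags are
    exactly the tags with no hyponyms of their own."""
    return {t for t in descendants(d, children) if not children.get(t)}

# toy fragment of Fig. 1 (illustrative, not the full hierarchy)
children = {"Name": ["First Name", "Last Name"],
            "Address": ["Street", "City"]}
```

With this fragment, fine("Name", children) recovers the example Fine(Name) = {First Name, Last Name}.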
            {
                "text": "For each tag d we would like the semantics captured by the union of semantics of all tags in Fine(d) to be exactly the semantics of d, making sure we will not miss any aspect of d when predicting only over T F G . Yet, this semantics-equality property does not hold in general. One such example in Fig. 4 is 'Age> 90'\u2192'Age', because there may be age mentions below 90 annotated in T 2 's dataset.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 298,
                        "end": 304,
                        "text": "Fig. 4",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Hierarchy extension with 'Other' tags",
                "sec_num": "4.2"
            },
            {
                "text": "To fix the semantics-equality above, we use the notion of the 'Other' tag in NER, which has the semantics of \"all the rest\". Specifically, for every d / \u2208 T F G , a fine-grained tag 'd-Other' \u2208 T F G and an edge 'd-Other'\u2192'd' are automatically added to the graph, hence 'd-Other'\u2208 Fine(d). For instance, 'Age-Other'\u2192'Age'. These new tags represent the aspects of d not captured by the other tags in Fine(d).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Hierarchy extension with 'Other' tags",
                "sec_num": "4.2"
            },
            {
                "text": "Next a tag 'T i -Other' is automatically added to each tag-set T i , explicitly representing the \"all the rest\" semantics of T i . The labels for 'T i -Other' are induced automatically from unlabeled tokens in the original DS r i dataset. To make sure that the semantics-equality property above also holds for 'T i -Other', a fine-grained tag 'FG-Other' is also added, which captures the \"all the rest\" semantics at the fine-grained level. Then, each 'T i -Other' is connected to all fine-grained tags that do not capture some semantics of the tags in T i , defining:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Hierarchy extension with 'Other' tags",
                "sec_num": "4.2"
            },
            {
                "text": "Fine(T i -Other) = T F G \\ d\u2208T i {T i -Other} Sem(d)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Hierarchy extension with 'Other' tags",
                "sec_num": "4.2"
            },
            {
                "text": "This mapping is important at training time, where 'T i -Other' labels are used as distant supervision over their related fine-grained tags (Sec. 4.3). Fig.  4 depicts our hierarchy example after this step. We emphasize that all extensions in this step are done automatically as part of the model's algorithm.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 151,
                        "end": 158,
                        "text": "Fig.  4",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Hierarchy extension with 'Other' tags",
                "sec_num": "4.2"
            },
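The defining equation for Fine(T_i-Other) is a plain set difference once Sem(d) is precomputed; a minimal sketch with illustrative names and toy values (not the paper's full hierarchy).

```python
def fine_other(tagset, sem, all_fine):
    """Fine(T_i-Other) = T_FG minus the union of Sem(d) over the original
    tags d in T_i (i.e. excluding T_i-Other itself).

    tagset:   iterable of tag names in T_i, including 'T_i-Other'.
    sem:      dict tag -> set, the precomputed Sem(d) for each tag.
    all_fine: the set of all fine-grained tags T_FG.
    """
    covered = set()
    for d in tagset:
        if not d.endswith("-Other"):   # skip the added T_i-Other tag
            covered |= sem[d]
    return all_fine - covered

# toy fragment of Fig. 4 (illustrative values)
sem = {"Age": {"Age", "Age>90", "Age-Other"}}
t_fg = {"Age>90", "Age-Other", "First Name", "Last Name", "FG-Other"}
rest = fine_other(["Age", "T2-Other"], sem, t_fg)  # fine tags T2 says nothing about
```

Here `rest` contains exactly the fine-grained tags whose mentions an unlabeled T_2 token could plausibly carry.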
            {
                "text": "One outcome of the extension step is that the set of fine-grained tags T F G covers all distinct finegrained semantics across all tag-sets. In the following, we train a single NER model (Sec. 2.2) that predicts sequences of tags from the T F G tagset. As there is only one tagging layer, model parameters are shared across all training examples.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "NER model with tag hierarchy",
                "sec_num": "4.3"
            },
            {
                "text": "At inference time, this model predicts the most likely fine-grained tag sequence y f g for the input",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "NER model with tag hierarchy",
                "sec_num": "4.3"
            },
            {
                "text": "x. As the model outputs only a single sequence, post-processing consolidation is not needed. The tag hierarchy is used to map each predicated finegrained tag y f g i to a tag in a test tag-set T s by traversing the out-edges of y f g i until a tag in T s is reached. This procedure is also used in the baseline models (see Sec. 3.1) for mapping their predictions onto the test tag-set. However, unlike the baselines, which end with multiple candidate predictions in the test tag-set and need to consolidate between them, here, only a single fine-grained tag sequence is mapped, so no further consolidation is needed.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "NER model with tag hierarchy",
                "sec_num": "4.3"
            },
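The traversal from a fine-grained tag up to the test tag-set can be sketched as follows; the BFS over multiple hypernyms and all names are illustrative assumptions, since the paper only describes following out-edges until a test tag is reached.

```python
def map_to_testset(fg_tag, parents, test_tagset):
    """Follow hypernym out-edges from a fine-grained tag until a tag in
    the test tag-set is reached; parents maps tag -> list of hypernyms."""
    frontier, seen = [fg_tag], {fg_tag}
    while frontier:                 # BFS, in case a tag has several hypernyms
        nxt = []
        for t in frontier:
            if t in test_tagset:
                return t
            for p in parents.get(t, ()):
                if p not in seen:
                    seen.add(p)
                    nxt.append(p)
        frontier = nxt
    return "Other"                  # no test tag covers this fine-grained tag

# toy fragment of Fig. 1 (illustrative)
parents = {"Street": ["Address"], "City": ["Address"]}
```

For example, a predicted 'Street' maps to 'Address' when 'Address' is in the test tag-set, and to 'Other' when no hypernym of 'Street' is.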
            {
                "text": "At training time, each example x that belongs to some training dataset DS r i is labeled with a gold-standard tag sequence y where the tags are taken only from the corresponding tag-set T r i . This means that tags {y i } are not necessarily finegrained tags, so there is no direct supervision for predicting fine-grained tag sequences. However, each gold label y i provides distant supervision over its related fine-grained tags, Fine(y i ). It indicates that one of them is the correct fine-grained label without explicitly stating which one, so we consider all possibilities in a probabilistic manner.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "NER model with tag hierarchy",
                "sec_num": "4.3"
            },
            {
                "text": "Henceforth, we say that a fine-grained tag se- We denote all fine-grained tag sequences that agree with y by AgreeWith(y).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "NER model with tag hierarchy",
                "sec_num": "4.3"
            },
            {
                "text": "quence y f g agrees with y if \u2200 i y f g i \u2208 Fine(y i ), i.e. y f",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "NER model with tag hierarchy",
                "sec_num": "4.3"
            },
            {
                "text": "Using this definition, the tag-hierarchy model is trained with the loss function:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "NER model with tag hierarchy",
                "sec_num": "4.3"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "loss(y) = \u2212log( Z y Z ) (1) Z y = y f g \u2208AgreeWith(y) \u03c6(y f g )",
                        "eq_num": "(2)"
                    }
                ],
                "section": "NER model with tag hierarchy",
                "sec_num": "4.3"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "Z = y f g \u03c6(y f g )",
                        "eq_num": "(3)"
                    }
                ],
                "section": "NER model with tag hierarchy",
                "sec_num": "4.3"
            },
            {
                "text": "where \u03c6(y) stands for the model's score for sequence y, viewed as unnormalized probability. Z is the standard CRF partition function over all possible fine-grained tag sequences. Z y , on the other hand, accumulates scores only of fine-grained tag sequences that agree with y. Thus, this loss function aims at increasing the summed probability of all fine-grained sequences agreeing with y. Both Z y and Z can be computed efficiently using the Forward-Backward algorithm (Lafferty et al., 2001) . We note that we also considered finding the most likely tag sequence over a test tag-set at inference time by summing the probabilities of all finegrained tag sequences that agree with each candidate sequence y: max y y f g \u2208AgreeWith(y) \u03c6(y f g ). However, this problem is NP-hard (Lyngs\u00f8 and Pedersen, 2002) . We plan to explore other alternatives in future work.",
                "cite_spans": [
                    {
                        "start": 471,
                        "end": 494,
                        "text": "(Lafferty et al., 2001)",
                        "ref_id": "BIBREF18"
                    },
                    {
                        "start": 779,
                        "end": 806,
                        "text": "(Lyngs\u00f8 and Pedersen, 2002)",
                        "ref_id": "BIBREF24"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "NER model with tag hierarchy",
                "sec_num": "4.3"
            },
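Both Z and Z_y factor over positions, so each is computable with the forward algorithm; Z_y simply restricts position i to the allowed set Fine(y_i). A numpy/scipy sketch in log space; the function names, score-matrix shapes, and use of a boolean mask are assumptions about one reasonable realization, not the authors' code.

```python
import numpy as np
from scipy.special import logsumexp

def log_partition(emit, trans, allowed=None):
    """Forward algorithm in log space.

    emit:    (n, T) per-token tag scores; trans: (T, T) transition scores,
             trans[p, q] = score of moving from tag p to tag q.
    allowed: optional (n, T) boolean mask restricting each position to its
             plausible fine-grained tags Fine(y_i); None means no restriction.
    Returns log Z (unmasked) or log Z_y (masked).
    """
    gate = np.zeros_like(emit) if allowed is None \
        else np.where(allowed, 0.0, -np.inf)
    alpha = emit[0] + gate[0]
    for i in range(1, emit.shape[0]):
        # sum over the previous tag for every current tag
        alpha = logsumexp(alpha[:, None] + trans, axis=0) + emit[i] + gate[i]
    return logsumexp(alpha)

def hierarchy_loss(emit, trans, agree_mask):
    """loss(y) = -log(Z_y / Z), with Z_y masked to AgreeWith(y)."""
    return log_partition(emit, trans) - log_partition(emit, trans, agree_mask)
```

When the mask allows every tag at every position, Z_y = Z and the loss is zero, which is a handy sanity check for an implementation.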
            {
                "text": "To test the tag-hierarchy model under heterogeneous tag-set scenarios, we conducted experiments using datasets from two domains. We next describe these datasets as well as implementation details for the tested models. Sec. 6 then details the experiments and their results.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experimental Settings",
                "sec_num": "5"
            },
            {
                "text": "Five datasets from two domains, medical and news, were used in our experiments. Table 1 summarizes their main statistics.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 80,
                        "end": 87,
                        "text": "Table 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Datasets",
                "sec_num": "5.1"
            },
            {
                "text": "For the medical domain we used the datasets I2B2-2006 (denoted I2B2'06) (Uzuner et al., 2007) , I2B2-2014 (denoted I2B2'14) (Stubbs and Uzuner, 2015) and the PhysioNet golden set (denoted Physio) (Goldberger et al., 2000) . These datasets are all annotated for the NER task of deidentification (a.k.a text anonymization) (Dernoncourt et al., 2017). Still, as seen in Table 1 , each dataset is annotated with a different tag-set. Both I2B2'06 and I2B2'14 include train and test sets, while Physio contains only a test set.",
                "cite_spans": [
                    {
                        "start": 72,
                        "end": 93,
                        "text": "(Uzuner et al., 2007)",
                        "ref_id": "BIBREF38"
                    },
                    {
                        "start": 124,
                        "end": 149,
                        "text": "(Stubbs and Uzuner, 2015)",
                        "ref_id": "BIBREF36"
                    },
                    {
                        "start": 196,
                        "end": 221,
                        "text": "(Goldberger et al., 2000)",
                        "ref_id": "BIBREF10"
                    },
                    {
                        "start": 321,
                        "end": 347,
                        "text": "(Dernoncourt et al., 2017)",
                        "ref_id": "BIBREF8"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 367,
                        "end": 374,
                        "text": "Table 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Datasets",
                "sec_num": "5.1"
            },
            {
                "text": "For the news domain we used the English part of CONLL-2003 (denoted Conll) (Tjong Kim Sang and De Meulder, 2003) and OntoNotes-v5 (denoted Onto) (Weischedel et al., 2013) , both with train and test sets. We note that I2B2'14, Conll and Onto also contain a dev-set, which is used for hyper-param tuning (see below).",
                "cite_spans": [
                    {
                        "start": 75,
                        "end": 112,
                        "text": "(Tjong Kim Sang and De Meulder, 2003)",
                        "ref_id": "BIBREF37"
                    },
                    {
                        "start": 145,
                        "end": 170,
                        "text": "(Weischedel et al., 2013)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Datasets",
                "sec_num": "5.1"
            },
            {
                "text": "In all experiments, each example is a full document. Each document is split into tokens on whitespaces and punctuation. A tag-hierarchy covering the 57 tags from all five datasets was given as input to all models in all experiments. We constructed this hierarchy manually. The only non-trivial tag was 'Location', which in I2B2'14 is split into finer tags ('City', 'Street' etc.) and also includes hospital mentions in Conll and Onto. We resolved these relations similarly to the graph in Figure 1 .",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 489,
                        "end": 497,
                        "text": "Figure 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Datasets",
                "sec_num": "5.1"
            },
            {
                "text": "Four models were compared in our experiments:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Compared Models",
                "sec_num": "5.2"
            },
            {
                "text": "M Concat A single NER model on the concatenation of datasets and tag-sets (Sec. 3).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Compared Models",
                "sec_num": "5.2"
            },
            {
                "text": "M Indep Combining predictions of independent NER models, one per tag-set (Sec. 3.1).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Compared Models",
                "sec_num": "5.2"
            },
            {
                "text": "M MTL Multitasking over training tag-sets (Sec. 3.2).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Compared Models",
                "sec_num": "5.2"
            },
            {
                "text": "M Hier A tag hierarchy employed within a single base model (Sec. 4).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Compared Models",
                "sec_num": "5.2"
            },
            {
                "text": "All models are based on the neural network described in Sec. 2.2. We tuned the hyper-params in the base model to achieve state-of-the-art results for a single NER model on Conll and I2B2'14 when trained and tested on the same dataset (Strubell et al., 2017; Dernoncourt et al., 2017) (see Table 2 ). This maintains a constant baseline, and also accounts for the fact that I2B2'06 does not have a standard dev-set.",
                "cite_spans": [
                    {
                        "start": 234,
                        "end": 257,
                        "text": "(Strubell et al., 2017;",
                        "ref_id": "BIBREF35"
                    },
                    {
                        "start": 258,
                        "end": 283,
                        "text": "Dernoncourt et al., 2017)",
                        "ref_id": "BIBREF8"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 289,
                        "end": 296,
                        "text": "Table 2",
                        "ref_id": "TABREF4"
                    }
                ],
                "eq_spans": [],
                "section": "Compared Models",
                "sec_num": "5.2"
            },
            {
                "text": "We tuned hyper-params over the dev-sets of Conll and I2B2'14. For character-based embedding we used a single bidirectional LSTM (Hochreiter and Schmidhuber, 1997) with hidden state size of 25. For word embeddings we used pre-trained GloVe embeddings 1 (Pennington et al., 2014) , without further training. For token recoding we used a two-level stacked bidirectional LSTM (Graves et al., 2013) with both output and hidden state of size 100.",
                "cite_spans": [
                    {
                        "start": 128,
                        "end": 162,
                        "text": "(Hochreiter and Schmidhuber, 1997)",
                        "ref_id": "BIBREF15"
                    },
                    {
                        "start": 252,
                        "end": 277,
                        "text": "(Pennington et al., 2014)",
                        "ref_id": "BIBREF28"
                    },
                    {
                        "start": 372,
                        "end": 393,
                        "text": "(Graves et al., 2013)",
                        "ref_id": "BIBREF11"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Compared Models",
                "sec_num": "5.2"
            },
            {
                "text": "Once these hyper-params were set, no further tuning was performed in our experiments; all models for heterogeneous tag-sets were thus tested under the above fixed hyper-param set. In each experiment, each model was trained until convergence on the respective training set.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Compared Models",
                "sec_num": "5.2"
            },
            {
                "text": "We performed two experiments. The first addresses selective annotation, in which an existing tag-set is extended with a new tag by annotating a new dataset only with the new tag. The second experiment tests the ability of each model to integrate two full tag-sets.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experiments and Results",
                "sec_num": "6"
            },
            {
                "text": "In all experiments we assess model performance via micro-averaged tag F1, in accordance with CoNLL evaluation (Tjong Kim Sang and De Meulder, 2003) . Statistical significance was computed using the Wilcoxon two-sided signed ranks test at p = 0.01 (Wilcoxon, 1945) . We next detail each experiment and its results.",
                "cite_spans": [
                    {
                        "start": 110,
                        "end": 147,
                        "text": "(Tjong Kim Sang and De Meulder, 2003)",
                        "ref_id": "BIBREF37"
                    },
                    {
                        "start": 247,
                        "end": 263,
                        "text": "(Wilcoxon, 1945)",
                        "ref_id": "BIBREF40"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experiments and Results",
                "sec_num": "6"
            },
            {
                "text": "In all our experiments, we found the performance of the different consolidation methods (Sec. 3.1) to be on par. One reason that using model scores does not beat random selection may be the overconfidence of the tagging models: their prediction probabilities are close to 0 or 1. We report figures for random selection as representative of all consolidation methods.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experiments and Results",
                "sec_num": "6"
            },
            {
                "text": "In this experiment, we considered the 4 most frequent tags that occur in at least two of our datasets: 'Name', 'Date', 'Location' and 'Hospital' (Table 3 summarizes their statistics). For each frequent tag t and an ordered pair of datasets in which t occurs, we constructed new training sets by removing t from the first training set (termed base dataset) and removing all tags but t from the second training set (termed extending dataset). For example, for the triplet of {'Name', I2B2'14, I2B2'06}, we constructed a version of I2B2'14 without 'Name' annotations and a version of I2B2'06 containing only annotations for 'Name'. This process yielded 32 such triplets. For every triplet, we trained all tested models on the two modified training sets and tested them on the test-set of the base dataset (I2B2'14 in the example above). No test-set was altered; each contains all tags of its base tag-set, including t.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 145,
                        "end": 153,
                        "text": "(Table 3",
                        "ref_id": "TABREF5"
                    }
                ],
                "eq_spans": [],
                "section": "Tag-set extension experiment",
                "sec_num": "6.1"
            },
            {
                "text": "M Concat performed poorly in this experiment. For example, on the dataset extending I2B2'14 with 'Name' from I2B2'06, M Concat tagged only one 'Name' out of over 4000 'Name' mentions in the test set. Given this, we do not provide further details of the results of M Concat in this experiment.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Tag-set extension experiment",
                "sec_num": "6.1"
            },
            {
                "text": "For the three models tested, this experiment yields 96 results. The main results of this experiment are shown in Table 4 . Surprisingly, M Indep outperformed M MTL in more tests than vice versa, adding to prior observations that multitasking can hurt performance instead of improving it (Bingel and S\u00f8gaard, 2017; Alonso and Plank, 2017; Bjerva, 2017) . However, applying a shared tagging layer on top of a shared text representation boosts the model's capability and stability. Indeed, overall, M Hier outperforms the other models in most tests, and in the rest it is similar to the best performing model.",
                "cite_spans": [
                    {
                        "start": 287,
                        "end": 313,
                        "text": "(Bingel and S\u00f8gaard, 2017;",
                        "ref_id": "BIBREF2"
                    },
                    {
                        "start": 314,
                        "end": 337,
                        "text": "Alonso and Plank, 2017;",
                        "ref_id": "BIBREF0"
                    },
                    {
                        "start": 338,
                        "end": 351,
                        "text": "Bjerva, 2017)",
                        "ref_id": "BIBREF3"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 113,
                        "end": 120,
                        "text": "Table 4",
                        "ref_id": "TABREF7"
                    }
                ],
                "eq_spans": [],
                "section": "Tag-set extension experiment",
                "sec_num": "6.1"
            },
            {
                "text": "Analyzing the results, we noticed that the gap between model performance increases when more collisions are encountered for M MTL and M Indep at post-processing time (see Sec. 3.1). The amount of collisions may be viewed as a predictor for the baselines' difficulty to handle a specific heterogeneous tag-sets setting. Table 5 presents the tests in which more than 100 collisions were detected for either M Indep or M MTL , constituting 66% of all test triplets (detailed results for all 96 tests are given in the Appendix). In these tests, M Hier is a clear winner, outperforming the compared models in all but two comparisons, often by a significant margin. Finally, we compared the models trained with selective annotation to an \"upper-bound\" of training and testing a single NER model on the same dataset with all tags annotated (Table 2) . As expected, performance is usually lower with selective annotation. However, the drop intensifies when the base and extending datasets are from different domains: medical and news. In these cases, we observed that M Hier is more robust: its performance drop, relative to combining datasets from the same domain, is the smallest in almost all such combinations. Table 6 provides some illustrative examples.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 319,
                        "end": 326,
                        "text": "Table 5",
                        "ref_id": "TABREF9"
                    },
                    {
                        "start": 833,
                        "end": 842,
                        "text": "(Table 2)",
                        "ref_id": "TABREF4"
                    }
                ],
                "eq_spans": [],
                "section": "Tag-set extension experiment",
                "sec_num": "6.1"
            },
            {
                "text": "A scenario distinct from selective annotation is the integration of full tag-sets. On one hand, more training data is available for similar tags. On the other hand, more tags need to be consolidated among the tag-sets. To test this scenario, we trained the tested model types on the training sets of I2B2'06 and I2B2'14, which have different tag-sets. The models were evaluated both on the test sets of these datasets and on Physio, an unseen test-set that requires the combination of the two training tag-sets for full coverage of its tag-set. We also compared the models to single models trained on each of the training sets alone. Table 7 displays the results.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 634,
                        "end": 641,
                        "text": "Table 7",
                        "ref_id": "TABREF12"
                    }
                ],
                "eq_spans": [],
                "section": "Full tag-set integration experiment",
                "sec_num": "6.2"
            },
            {
                "text": "As expected, each single model does well on the test-set accompanying its training-set but underperforms on the other test-sets, since the tag-set it was trained on does not cover the other test-sets' tag-sets well.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Full tag-set integration experiment",
                "sec_num": "6.2"
            },
            {
                "text": "When compared with the best-performing single model, M Concat shows reduced results on all three test-sets. This can be attributed to reduced performance on tags that are semantically different between datasets (e.g. 'Date'), while performance on similar tags (e.g. 'Name') does not drop.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Full tag-set integration experiment",
                "sec_num": "6.2"
            },
            {
                "text": "Combining the two training sets using either M Indep or M MTL leads to a substantial performance drop in 5 out of 6 test-sets compared to the best-performing single model. This is strongly correlated with the number of collisions encountered (see Table 7 ). Indeed, the only competitive result, M MTL tested on Physio, had fewer than 100 collisions. This demonstrates the non-triviality of real-world tag-set integration, and the difficulty of resolving tagging decisions across tag-sets.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 247,
                        "end": 254,
                        "text": "Table 7",
                        "ref_id": "TABREF12"
                    }
                ],
                "eq_spans": [],
                "section": "Full tag-set integration experiment",
                "sec_num": "6.2"
            },
            {
                "text": "By contrast, M Hier has no performance drop compared to the single models trained and tested on the same dataset. Moreover, it is the best performing model on the unseen Physio test-set, with 6% relative improvement in F1 over the best single model. This experiment highlights the robustness of the tag hierarchy approach when applied to this heterogeneous tag-set scenario. Collobert et al. (2011) introduced the first competitive NN-based NER that required little or no feature engineering. Huang et al. (2015) combined LSTM with CRF, showing performance similar to non-NN models. Lample et al. (2016) extended this model with character-based embeddings in addition to word embedding, achieving state-of-the-art results. Similar architectures, such as combinations of convolutional networks as replacements of RNNs were shown to outperform previous NER models (Ma and Hovy, 2016; Chiu and Nichols, 2016; Strubell et al., 2017) . Dernoncourt et al. (2017) and Liu et al. (2017) showed that the LSTM-CRF model achieves state-of-the-art results also for de-identification in the medical domain. Lee et al. (2018) demonstrated how performance drops significantly when the LSTM-CRF model is tested under transfer learning within the same domain in this task. Collobert and Weston (2008) introduced MTL for NN, and other works followed, showing it helps in various NLP tasks (Chen et al., 2016; Peng and Dredze, 2017) . S\u00f8gaard and Goldberg (2016) and Hashimoto et al. (2017) argue that cascading architectures can improve MTL performance. Several works have explored conditions for successful application of MTL (Bingel and S\u00f8gaard, 2017; Bjerva, 2017; Alonso and Plank, 2017) .",
                "cite_spans": [
                    {
                        "start": 375,
                        "end": 398,
                        "text": "Collobert et al. (2011)",
                        "ref_id": "BIBREF7"
                    },
                    {
                        "start": 493,
                        "end": 512,
                        "text": "Huang et al. (2015)",
                        "ref_id": "BIBREF16"
                    },
                    {
                        "start": 583,
                        "end": 603,
                        "text": "Lample et al. (2016)",
                        "ref_id": "BIBREF19"
                    },
                    {
                        "start": 862,
                        "end": 881,
                        "text": "(Ma and Hovy, 2016;",
                        "ref_id": "BIBREF25"
                    },
                    {
                        "start": 882,
                        "end": 905,
                        "text": "Chiu and Nichols, 2016;",
                        "ref_id": "BIBREF5"
                    },
                    {
                        "start": 906,
                        "end": 928,
                        "text": "Strubell et al., 2017)",
                        "ref_id": "BIBREF35"
                    },
                    {
                        "start": 931,
                        "end": 956,
                        "text": "Dernoncourt et al. (2017)",
                        "ref_id": "BIBREF8"
                    },
                    {
                        "start": 961,
                        "end": 978,
                        "text": "Liu et al. (2017)",
                        "ref_id": "BIBREF23"
                    },
                    {
                        "start": 1094,
                        "end": 1111,
                        "text": "Lee et al. (2018)",
                        "ref_id": "BIBREF20"
                    },
                    {
                        "start": 1256,
                        "end": 1283,
                        "text": "Collobert and Weston (2008)",
                        "ref_id": "BIBREF6"
                    },
                    {
                        "start": 1371,
                        "end": 1390,
                        "text": "(Chen et al., 2016;",
                        "ref_id": "BIBREF4"
                    },
                    {
                        "start": 1391,
                        "end": 1413,
                        "text": "Peng and Dredze, 2017)",
                        "ref_id": "BIBREF27"
                    },
                    {
                        "start": 1416,
                        "end": 1443,
                        "text": "S\u00f8gaard and Goldberg (2016)",
                        "ref_id": "BIBREF34"
                    },
                    {
                        "start": 1448,
                        "end": 1471,
                        "text": "Hashimoto et al. (2017)",
                        "ref_id": "BIBREF13"
                    },
                    {
                        "start": 1609,
                        "end": 1635,
                        "text": "(Bingel and S\u00f8gaard, 2017;",
                        "ref_id": "BIBREF2"
                    },
                    {
                        "start": 1636,
                        "end": 1649,
                        "text": "Bjerva, 2017;",
                        "ref_id": "BIBREF3"
                    },
                    {
                        "start": 1650,
                        "end": 1673,
                        "text": "Alonso and Plank, 2017)",
                        "ref_id": "BIBREF0"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Full tag-set integration experiment",
                "sec_num": "6.2"
            },
            {
                "text": "Few works attempt to share information across datasets at the tagging level. Greenberg et al. (2018) proposed a single CRF model for tagging with heterogeneous tag-sets but without a hierarchy. They show the utility of this method for in-domain datasets with a balanced tag distribution. Our model can be viewed as an extension of theirs for tag hierarchies. Augenstein et al. (2018) use tag embeddings in MTL to further propagate information between tasks. Li et al. (2017) propose to use a tag-set made of cross-product of two different POS tag-sets and train a model for it. Given the explosion in tag-set size, they introduce automatic pruning of cross-product tags. Kim et al. (2015) and Qu et al. (2016) automatically learn correlations between tag-sets, given training data for both tag-sets. They rely on similar contexts for related source and target tags, such as 'professor' and 'student'.",
                "cite_spans": [
                    {
                        "start": 77,
                        "end": 100,
                        "text": "Greenberg et al. (2018)",
                        "ref_id": "BIBREF12"
                    },
                    {
                        "start": 359,
                        "end": 383,
                        "text": "Augenstein et al. (2018)",
                        "ref_id": "BIBREF1"
                    },
                    {
                        "start": 458,
                        "end": 474,
                        "text": "Li et al. (2017)",
                        "ref_id": "BIBREF21"
                    },
                    {
                        "start": 671,
                        "end": 688,
                        "text": "Kim et al. (2015)",
                        "ref_id": "BIBREF17"
                    },
                    {
                        "start": 693,
                        "end": 709,
                        "text": "Qu et al. (2016)",
                        "ref_id": "BIBREF29"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "7"
            },
            {
                "text": "Our tag-hierarchy model was inspired by recent work on hierarchical multi-label classification (Silla and Freitas, 2011; Zhang and Zhou, 2014) , and can be viewed as an extension of this direction to sequence tagging.",
                "cite_spans": [
                    {
                        "start": 95,
                        "end": 120,
                        "text": "(Silla and Freitas, 2011;",
                        "ref_id": "BIBREF33"
                    },
                    {
                        "start": 121,
                        "end": 142,
                        "text": "Zhang and Zhou, 2014)",
                        "ref_id": "BIBREF41"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "7"
            },
            {
                "text": "We proposed a tag-hierarchy model for the heterogeneous tag-sets NER setting, which does not require a consolidation post-processing stage. In our experiments, the proposed model consistently outperformed the baselines in difficult tagging cases and showed robustness when a single trained model was applied to varied test sets.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusions",
                "sec_num": "8"
            },
            {
                "text": "When integrating datasets from the news and medical domains, we found the blending task to be difficult. In future work, we would like to improve this integration, in order to gain from training on examples from different domains for tags like 'Name' and 'Location'.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusions",
                "sec_num": "8"
            }
        ],
        "back_matter": [
            {
                "text": "The authors would like to thank Yossi Matias, Katherine Chou, Greg Corrado, Avinatan Hassidim, Rony Amira, Itay Laish and Amit Markel for their help in creating this work.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Acknowledgments",
                "sec_num": null
            }
        ],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "When is multitask learning effective? semantic sequence prediction under varying data conditions",
                "authors": [
                    {
                        "first": "Hector",
                        "middle": [],
                        "last": "Martinez",
                        "suffix": ""
                    },
                    {
                        "first": "Alonso",
                        "middle": [],
                        "last": "",
                        "suffix": ""
                    },
                    {
                        "first": "Barbara",
                        "middle": [],
                        "last": "Plank",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "EACL 2017-15th Conference of the European Chapter of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "1--10",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Hector Martinez Alonso and Barbara Plank. 2017. When is multitask learning effective? semantic se- quence prediction under varying data conditions. In EACL 2017-15th Conference of the European Chap- ter of the Association for Computational Linguistics, pages 1-10.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "Multi-task learning of pairwise sequence classification tasks over disparate label spaces",
                "authors": [
                    {
                        "first": "Isabelle",
                        "middle": [],
                        "last": "Augenstein",
                        "suffix": ""
                    },
                    {
                        "first": "Sebastian",
                        "middle": [],
                        "last": "Ruder",
                        "suffix": ""
                    },
                    {
                        "first": "Anders",
                        "middle": [],
                        "last": "S\u00f8gaard",
                        "suffix": ""
                    }
                ],
                "year": 2018,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {
                    "arXiv": [
                        "arXiv:1802.09913v2"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Isabelle Augenstein, Sebastian Ruder, and Anders S\u00f8gaard. 2018. Multi-task learning of pairwise sequence classification tasks over disparate label spaces. arXiv:1802.09913v2.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "Identifying beneficial task relations for multi-task learning in deep neural networks. In ACL",
                "authors": [
                    {
                        "first": "Joachim",
                        "middle": [],
                        "last": "Bingel",
                        "suffix": ""
                    },
                    {
                        "first": "Anders",
                        "middle": [],
                        "last": "S\u00f8gaard",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Joachim Bingel and Anders S\u00f8gaard. 2017. Identify- ing beneficial task relations for multi-task learning in deep neural networks. In ACL.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "Will my auxiliary tagging task help? estimating auxiliary tasks effectivity in multi-task learning",
                "authors": [
                    {
                        "first": "Johannes",
                        "middle": [],
                        "last": "Bjerva",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "Proceedings of the 21st Nordic Conference on Computational Linguistics",
                "volume": "131",
                "issue": "",
                "pages": "216--220",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Johannes Bjerva. 2017. Will my auxiliary tagging task help? estimating auxiliary tasks effectivity in multi-task learning. In Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017, Gothenburg, Sweden, 131, pages 216-220. Link\u00f6ping University Elec- tronic Press.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "Neural network for heterogeneous annotations",
                "authors": [
                    {
                        "first": "Hongshen",
                        "middle": [],
                        "last": "Chen",
                        "suffix": ""
                    },
                    {
                        "first": "Yue",
                        "middle": [],
                        "last": "Zhang",
                        "suffix": ""
                    },
                    {
                        "first": "Qun",
                        "middle": [],
                        "last": "Liu",
                        "suffix": ""
                    }
                ],
                "year": 2016,
                "venue": "EMNLP",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Hongshen Chen, Yue Zhang, and Qun Liu. 2016. Neural network for heterogeneous annotations. In EMNLP.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Named entity recognition with bidirectional lstm-cnns",
                "authors": [
                    {
                        "first": "Jason",
                        "middle": [],
                        "last": "Chiu",
                        "suffix": ""
                    },
                    {
                        "first": "Eric",
                        "middle": [],
                        "last": "Nichols",
                        "suffix": ""
                    }
                ],
                "year": 2016,
                "venue": "TACL",
                "volume": "4",
                "issue": "1",
                "pages": "357--370",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Jason Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional lstm-cnns. TACL, 4(1):357-370.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "A unified architecture for natural language processing: Deep neural networks with multitask learning",
                "authors": [
                    {
                        "first": "Ronan",
                        "middle": [],
                        "last": "Collobert",
                        "suffix": ""
                    },
                    {
                        "first": "Jason",
                        "middle": [],
                        "last": "Weston",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "ICML",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In ICML.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "Natural language processing (almost) from scratch",
                "authors": [
                    {
                        "first": "Ronan",
                        "middle": [],
                        "last": "Collobert",
                        "suffix": ""
                    },
                    {
                        "first": "Jason",
                        "middle": [],
                        "last": "Weston",
                        "suffix": ""
                    },
                    {
                        "first": "L\u00e9on",
                        "middle": [],
                        "last": "Bottou",
                        "suffix": ""
                    },
                    {
                        "first": "Michael",
                        "middle": [],
                        "last": "Karlen",
                        "suffix": ""
                    },
                    {
                        "first": "Koray",
                        "middle": [],
                        "last": "Kavukcuoglu",
                        "suffix": ""
                    },
                    {
                        "first": "Pavel",
                        "middle": [],
                        "last": "Kuksa",
                        "suffix": ""
                    }
                ],
                "year": 2011,
                "venue": "JMLR",
                "volume": "12",
                "issue": "",
                "pages": "2493--2537",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. JMLR, 12(Aug):2493-2537.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "De-identification of patient notes with recurrent neural networks",
                "authors": [
                    {
                        "first": "Franck",
                        "middle": [],
                        "last": "Dernoncourt",
                        "suffix": ""
                    },
                    {
                        "first": "Ji",
                        "middle": [
                            "Young"
                        ],
                        "last": "Lee",
                        "suffix": ""
                    },
                    {
                        "first": "Ozlem",
                        "middle": [],
                        "last": "Uzuner",
                        "suffix": ""
                    },
                    {
                        "first": "Peter",
                        "middle": [],
                        "last": "Szolovits",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "J. Am Med Inform Assoc",
                "volume": "24",
                "issue": "3",
                "pages": "596--606",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Franck Dernoncourt, Ji Young Lee, Ozlem Uzuner, and Peter Szolovits. 2017. De-identification of patient notes with recurrent neural networks. J. Am Med Inform Assoc, 24(3):596-606.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "Multichannel lstm-crf for named entity recognition in chinese social media",
                "authors": [
                    {
                        "first": "Chuanhai",
                        "middle": [],
                        "last": "Dong",
                        "suffix": ""
                    },
                    {
                        "first": "Huijia",
                        "middle": [],
                        "last": "Wu",
                        "suffix": ""
                    },
                    {
                        "first": "Jiajun",
                        "middle": [],
                        "last": "Zhang",
                        "suffix": ""
                    },
                    {
                        "first": "Chengqing",
                        "middle": [],
                        "last": "Zong",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "CCL/NLP-NABD",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Chuanhai Dong, Huijia Wu, Jiajun Zhang, and Chengqing Zong. 2017. Multichannel lstm-crf for named entity recognition in chinese social media. In CCL/NLP-NABD. Springer.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "Physiobank, physiotoolkit, and physionet",
                "authors": [
                    {
                        "first": "Ary",
                        "middle": [
                            "L"
                        ],
                        "last": "Goldberger",
                        "suffix": ""
                    },
                    {
                        "first": "Luis",
                        "middle": [
                            "A N"
                        ],
                        "last": "Amaral",
                        "suffix": ""
                    },
                    {
                        "first": "Leon",
                        "middle": [],
                        "last": "Glass",
                        "suffix": ""
                    },
                    {
                        "first": "Jeffrey",
                        "middle": [
                            "M"
                        ],
                        "last": "Hausdorff",
                        "suffix": ""
                    },
                    {
                        "first": "Plamen",
                        "middle": [
                            "Ch"
                        ],
                        "last": "Ivanov",
                        "suffix": ""
                    },
                    {
                        "first": "Roger",
                        "middle": [
                            "G"
                        ],
                        "last": "Mark",
                        "suffix": ""
                    },
                    {
                        "first": "Joseph",
                        "middle": [
                            "E"
                        ],
                        "last": "Mietus",
                        "suffix": ""
                    },
                    {
                        "first": "George",
                        "middle": [
                            "B"
                        ],
                        "last": "Moody",
                        "suffix": ""
                    },
                    {
                        "first": "Chung-Kang",
                        "middle": [],
                        "last": "Peng",
                        "suffix": ""
                    },
                    {
                        "first": "H",
                        "middle": [
                            "Eugene"
                        ],
                        "last": "Stanley",
                        "suffix": ""
                    }
                ],
                "year": 2000,
                "venue": "",
                "volume": "101",
                "issue": "",
                "pages": "215--220",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Ary L Goldberger, Luis AN Amaral, Leon Glass, Jef- frey M Hausdorff, Plamen Ch Ivanov, Roger G Mark, Joseph E Mietus, George B Moody, Chung- Kang Peng, and H Eugene Stanley. 2000. Phys- iobank, physiotoolkit, and physionet. Circulation, 101(23):215-220.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "Speech recognition with deep recurrent neural networks",
                "authors": [
                    {
                        "first": "Alex",
                        "middle": [],
                        "last": "Graves",
                        "suffix": ""
                    },
                    {
                        "first": "Mohamed",
                        "middle": [],
                        "last": "Abdel-Rahman",
                        "suffix": ""
                    },
                    {
                        "first": "Geoffrey",
                        "middle": [],
                        "last": "Hinton",
                        "suffix": ""
                    }
                ],
                "year": 2013,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. 2013. Speech recognition with deep recur- rent neural networks. In ICASSP.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "Marginal likelihood training of bilstm-crf for biomedical named entity recognition from disjoint label sets",
                "authors": [
                    {
                        "first": "Nathan",
                        "middle": [],
                        "last": "Greenberg",
                        "suffix": ""
                    },
                    {
                        "first": "Trapit",
                        "middle": [],
                        "last": "Bansal",
                        "suffix": ""
                    },
                    {
                        "first": "Patrick",
                        "middle": [],
                        "last": "Verga",
                        "suffix": ""
                    },
                    {
                        "first": "Andrew",
                        "middle": [],
                        "last": "Mccallum",
                        "suffix": ""
                    }
                ],
                "year": 2018,
                "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
                "volume": "",
                "issue": "",
                "pages": "2824--2829",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Nathan Greenberg, Trapit Bansal, Patrick Verga, and Andrew McCallum. 2018. Marginal likelihood training of bilstm-crf for biomedical named entity recognition from disjoint label sets. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2824-2829.",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "A joint many-task model: Growing a neural network for multiple nlp tasks",
                "authors": [
                    {
                        "first": "Kazuma",
                        "middle": [],
                        "last": "Hashimoto",
                        "suffix": ""
                    },
                    {
                        "first": "Yoshimasa",
                        "middle": [],
                        "last": "Tsuruoka",
                        "suffix": ""
                    },
                    {
                        "first": "Richard",
                        "middle": [],
                        "last": "Socher",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "EMNLP",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Kazuma Hashimoto, Yoshimasa Tsuruoka, Richard Socher, et al. 2017. A joint many-task model: Grow- ing a neural network for multiple nlp tasks. In EMNLP.",
                "links": null
            },
            "BIBREF14": {
                "ref_id": "b14",
                "title": "A unified model for cross-domain and semi-supervised named entity recognition in chinese social media",
                "authors": [
                    {
                        "first": "Hangfeng",
                        "middle": [],
                        "last": "He",
                        "suffix": ""
                    },
                    {
                        "first": "Xu",
                        "middle": [],
                        "last": "Sun",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "AAAI",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Hangfeng He and Xu Sun. 2017. A unified model for cross-domain and semi-supervised named entity recognition in chinese social media. In AAAI.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "Long short-term memory",
                "authors": [
                    {
                        "first": "Sepp",
                        "middle": [],
                        "last": "Hochreiter",
                        "suffix": ""
                    },
                    {
                        "first": "J\u00fcrgen",
                        "middle": [],
                        "last": "Schmidhuber",
                        "suffix": ""
                    }
                ],
                "year": 1997,
                "venue": "Neural computation",
                "volume": "9",
                "issue": "8",
                "pages": "1735--1780",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
                "links": null
            },
            "BIBREF16": {
                "ref_id": "b16",
                "title": "Bidirectional lstm-crf models for sequence tagging",
                "authors": [
                    {
                        "first": "Zhiheng",
                        "middle": [],
                        "last": "Huang",
                        "suffix": ""
                    },
                    {
                        "first": "Wei",
                        "middle": [],
                        "last": "Xu",
                        "suffix": ""
                    },
                    {
                        "first": "Kai",
                        "middle": [],
                        "last": "Yu",
                        "suffix": ""
                    }
                ],
                "year": 2015,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {
                    "arXiv": [
                        "arXiv:1508.01991"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidi- rectional lstm-crf models for sequence tagging. arXiv:1508.01991.",
                "links": null
            },
            "BIBREF17": {
                "ref_id": "b17",
                "title": "New transfer learning techniques for disparate label sets",
                "authors": [
                    {
                        "first": "Young-Bum",
                        "middle": [],
                        "last": "Kim",
                        "suffix": ""
                    },
                    {
                        "first": "Karl",
                        "middle": [],
                        "last": "Stratos",
                        "suffix": ""
                    },
                    {
                        "first": "Ruhi",
                        "middle": [],
                        "last": "Sarikaya",
                        "suffix": ""
                    },
                    {
                        "first": "Minwoo",
                        "middle": [],
                        "last": "Jeong",
                        "suffix": ""
                    }
                ],
                "year": 2015,
                "venue": "ACL",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Young-Bum Kim, Karl Stratos, Ruhi Sarikaya, and Minwoo Jeong. 2015. New transfer learning tech- niques for disparate label sets. In ACL.",
                "links": null
            },
            "BIBREF18": {
                "ref_id": "b18",
                "title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data",
                "authors": [
                    {
                        "first": "John",
                        "middle": [
                            "D"
                        ],
                        "last": "Lafferty",
                        "suffix": ""
                    },
                    {
                        "first": "Andrew",
                        "middle": [],
                        "last": "Mccallum",
                        "suffix": ""
                    },
                    {
                        "first": "Fernando",
                        "middle": [
                            "C N"
                        ],
                        "last": "Pereira",
                        "suffix": ""
                    }
                ],
                "year": 2001,
                "venue": "ICML",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling se- quence data. In ICML.",
                "links": null
            },
            "BIBREF19": {
                "ref_id": "b19",
                "title": "Neural architectures for named entity recognition",
                "authors": [
                    {
                        "first": "Guillaume",
                        "middle": [],
                        "last": "Lample",
                        "suffix": ""
                    },
                    {
                        "first": "Miguel",
                        "middle": [],
                        "last": "Ballesteros",
                        "suffix": ""
                    },
                    {
                        "first": "Sandeep",
                        "middle": [],
                        "last": "Subramanian",
                        "suffix": ""
                    },
                    {
                        "first": "Kazuya",
                        "middle": [],
                        "last": "Kawakami",
                        "suffix": ""
                    },
                    {
                        "first": "Chris",
                        "middle": [],
                        "last": "Dyer",
                        "suffix": ""
                    }
                ],
                "year": 2016,
                "venue": "ACL",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Guillaume Lample, Miguel Ballesteros, Sandeep Sub- ramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In ACL.",
                "links": null
            },
            "BIBREF20": {
                "ref_id": "b20",
                "title": "Transfer learning for named-entity recognition with neural networks",
                "authors": [
                    {
                        "first": "Ji",
                        "middle": [
                            "Young"
                        ],
                        "last": "Lee",
                        "suffix": ""
                    },
                    {
                        "first": "Franck",
                        "middle": [],
                        "last": "Dernoncourt",
                        "suffix": ""
                    },
                    {
                        "first": "Peter",
                        "middle": [],
                        "last": "Szolovits",
                        "suffix": ""
                    }
                ],
                "year": 2018,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Ji Young Lee, Franck Dernoncourt, and Peter Szolovits. 2018. Transfer learning for named-entity recogni- tion with neural networks.",
                "links": null
            },
            "BIBREF21": {
                "ref_id": "b21",
                "title": "Coupled pos tagging on heterogeneous annotations",
                "authors": [
                    {
                        "first": "Zhenghua",
                        "middle": [],
                        "last": "Li",
                        "suffix": ""
                    },
                    {
                        "first": "Jiayuan",
                        "middle": [],
                        "last": "Chao",
                        "suffix": ""
                    },
                    {
                        "first": "Min",
                        "middle": [],
                        "last": "Zhang",
                        "suffix": ""
                    },
                    {
                        "first": "Wenliang",
                        "middle": [],
                        "last": "Chen",
                        "suffix": ""
                    },
                    {
                        "first": "Meishan",
                        "middle": [],
                        "last": "Zhang",
                        "suffix": ""
                    },
                    {
                        "first": "Guohong",
                        "middle": [],
                        "last": "Fu",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "TASLP",
                "volume": "25",
                "issue": "3",
                "pages": "557--571",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Zhenghua Li, Jiayuan Chao, Min Zhang, Wenliang Chen, Meishan Zhang, Guohong Fu, Zhenghua Li, Jiayuan Chao, Min Zhang, Wenliang Chen, et al. 2017. Coupled pos tagging on heterogeneous an- notations. TASLP, 25(3):557-571.",
                "links": null
            },
            "BIBREF22": {
                "ref_id": "b22",
                "title": "Finding function in form: Compositional character models for open vocabulary word representation",
                "authors": [
                    {
                        "first": "Wang",
                        "middle": [],
                        "last": "Ling",
                        "suffix": ""
                    },
                    {
                        "first": "Chris",
                        "middle": [],
                        "last": "Dyer",
                        "suffix": ""
                    },
                    {
                        "first": "Alan",
                        "middle": [
                            "W"
                        ],
                        "last": "Black",
                        "suffix": ""
                    },
                    {
                        "first": "Isabel",
                        "middle": [],
                        "last": "Trancoso",
                        "suffix": ""
                    },
                    {
                        "first": "Ramon",
                        "middle": [],
                        "last": "Fernandez",
                        "suffix": ""
                    },
                    {
                        "first": "Silvio",
                        "middle": [],
                        "last": "Amir",
                        "suffix": ""
                    },
                    {
                        "first": "Luis",
                        "middle": [],
                        "last": "Marujo",
                        "suffix": ""
                    },
                    {
                        "first": "Tiago",
                        "middle": [],
                        "last": "Luis",
                        "suffix": ""
                    }
                ],
                "year": 2015,
                "venue": "EMNLP",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Wang Ling, Chris Dyer, Alan W Black, Isabel Trancoso, Ramon Fernandez, Silvio Amir, Luis Marujo, and Tiago Luis. 2015. Finding function in form: Compositional character models for open vocabulary word representation. In EMNLP.",
                "links": null
            },
            "BIBREF23": {
                "ref_id": "b23",
                "title": "De-identification of clinical notes via recurrent neural network and conditional random field",
                "authors": [
                    {
                        "first": "Zengjian",
                        "middle": [],
                        "last": "Liu",
                        "suffix": ""
                    },
                    {
                        "first": "Buzhou",
                        "middle": [],
                        "last": "Tang",
                        "suffix": ""
                    },
                    {
                        "first": "Xiaolong",
                        "middle": [],
                        "last": "Wang",
                        "suffix": ""
                    },
                    {
                        "first": "Qingcai",
                        "middle": [],
                        "last": "Chen",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "J. Biomed. Inf",
                "volume": "75",
                "issue": "",
                "pages": "34--42",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Zengjian Liu, Buzhou Tang, Xiaolong Wang, and Qingcai Chen. 2017. De-identification of clinical notes via recurrent neural network and conditional random field. J. Biomed. Inf., 75:34-42.",
                "links": null
            },
            "BIBREF24": {
                "ref_id": "b24",
                "title": "The consensus string problem and the complexity of comparing hidden markov models",
                "authors": [
                    {
                        "first": "Rune",
                        "middle": [
                            "B"
                        ],
                        "last": "Lyngs\u00f8",
                        "suffix": ""
                    },
                    {
                        "first": "Christian",
                        "middle": [
                            "N",
                            "S"
                        ],
                        "last": "Pedersen",
                        "suffix": ""
                    }
                ],
                "year": 2002,
                "venue": "Journal of Computer and System Sciences",
                "volume": "65",
                "issue": "3",
                "pages": "545--569",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Rune B Lyngs\u00f8 and Christian NS Pedersen. 2002. The consensus string problem and the complexity of comparing hidden markov models. Journal of Computer and System Sciences, 65(3):545-569.",
                "links": null
            },
            "BIBREF25": {
                "ref_id": "b25",
                "title": "End-to-end sequence labeling via bi-directional lstm-cnns-crf",
                "authors": [
                    {
                        "first": "Xuezhe",
                        "middle": [],
                        "last": "Ma",
                        "suffix": ""
                    },
                    {
                        "first": "Eduard",
                        "middle": [],
                        "last": "Hovy",
                        "suffix": ""
                    }
                ],
                "year": 2016,
                "venue": "ACL",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional lstm-cnns-crf. In ACL.",
                "links": null
            },
            "BIBREF26": {
                "ref_id": "b26",
                "title": "A survey on transfer learning",
                "authors": [
                    {
                        "first": "Sinno Jialin",
                        "middle": [],
                        "last": "Pan",
                        "suffix": ""
                    },
                    {
                        "first": "Qiang",
                        "middle": [],
                        "last": "Yang",
                        "suffix": ""
                    }
                ],
                "year": 2010,
                "venue": "IEEE Transactions on knowledge and data engineering",
                "volume": "22",
                "issue": "10",
                "pages": "1345--1359",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Sinno Jialin Pan and Qiang Yang. 2010. A survey on transfer learning. IEEE Transactions on knowledge and data engineering, 22(10):1345-1359.",
                "links": null
            },
            "BIBREF27": {
                "ref_id": "b27",
                "title": "Multi-task domain adaptation for sequence tagging",
                "authors": [
                    {
                        "first": "Nanyun",
                        "middle": [],
                        "last": "Peng",
                        "suffix": ""
                    },
                    {
                        "first": "Mark",
                        "middle": [],
                        "last": "Dredze",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "Proceedings of the 2nd Workshop on Representation Learning for NLP",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Nanyun Peng and Mark Dredze. 2017. Multi-task domain adaptation for sequence tagging. In Proceedings of the 2nd Workshop on Representation Learning for NLP.",
                "links": null
            },
            "BIBREF28": {
                "ref_id": "b28",
                "title": "Glove: Global vectors for word representation",
                "authors": [
                    {
                        "first": "Jeffrey",
                        "middle": [],
                        "last": "Pennington",
                        "suffix": ""
                    },
                    {
                        "first": "Richard",
                        "middle": [],
                        "last": "Socher",
                        "suffix": ""
                    },
                    {
                        "first": "Christopher",
                        "middle": [],
                        "last": "Manning",
                        "suffix": ""
                    }
                ],
                "year": 2014,
                "venue": "EMNLP",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In EMNLP.",
                "links": null
            },
            "BIBREF29": {
                "ref_id": "b29",
                "title": "Named entity recognition for novel types by transfer learning",
                "authors": [
                    {
                        "first": "Lizhen",
                        "middle": [],
                        "last": "Qu",
                        "suffix": ""
                    },
                    {
                        "first": "Gabriela",
                        "middle": [],
                        "last": "Ferraro",
                        "suffix": ""
                    },
                    {
                        "first": "Liyuan",
                        "middle": [],
                        "last": "Zhou",
                        "suffix": ""
                    },
                    {
                        "first": "Weiwei",
                        "middle": [],
                        "last": "Hou",
                        "suffix": ""
                    },
                    {
                        "first": "Timothy",
                        "middle": [],
                        "last": "Baldwin",
                        "suffix": ""
                    }
                ],
                "year": 2016,
                "venue": "EMNLP",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Lizhen Qu, Gabriela Ferraro, Liyuan Zhou, Weiwei Hou, and Timothy Baldwin. 2016. Named entity recognition for novel types by transfer learning. In EMNLP.",
                "links": null
            },
            "BIBREF30": {
                "ref_id": "b30",
                "title": "An overview of multi-task learning in deep neural networks",
                "authors": [
                    {
                        "first": "Sebastian",
                        "middle": [],
                        "last": "Ruder",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {
                    "arXiv": [
                        "arXiv:1706.05098"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Sebastian Ruder. 2017. An overview of multi-task learning in deep neural networks. arXiv:1706.05098.",
                "links": null
            },
            "BIBREF31": {
                "ref_id": "b31",
                "title": "Bidirectional recurrent neural networks",
                "authors": [
                    {
                        "first": "Mike",
                        "middle": [],
                        "last": "Schuster",
                        "suffix": ""
                    },
                    {
                        "first": "Kuldip",
                        "middle": [
                            "K"
                        ],
                        "last": "Paliwal",
                        "suffix": ""
                    }
                ],
                "year": 1997,
                "venue": "IEEE Transactions on Signal Processing",
                "volume": "45",
                "issue": "11",
                "pages": "2673--2681",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Mike Schuster and Kuldip K Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673-2681.",
                "links": null
            },
            "BIBREF32": {
                "ref_id": "b32",
                "title": "Deep ehr: A survey of recent advances in deep learning techniques for electronic health record (ehr) analysis",
                "authors": [
                    {
                        "first": "Benjamin",
                        "middle": [],
                        "last": "Shickel",
                        "suffix": ""
                    },
                    {
                        "first": "Patrick",
                        "middle": [
                            "James"
                        ],
                        "last": "Tighe",
                        "suffix": ""
                    },
                    {
                        "first": "Azra",
                        "middle": [],
                        "last": "Bihorac",
                        "suffix": ""
                    },
                    {
                        "first": "Parisa",
                        "middle": [],
                        "last": "Rashidi",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "IEEE Journal of Biomedical and Health Informatics",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Benjamin Shickel, Patrick James Tighe, Azra Bihorac, and Parisa Rashidi. 2017. Deep ehr: A survey of recent advances in deep learning techniques for electronic health record (ehr) analysis. IEEE Journal of Biomedical and Health Informatics.",
                "links": null
            },
            "BIBREF33": {
                "ref_id": "b33",
                "title": "A survey of hierarchical classification across different application domains",
                "authors": [
                    {
                        "first": "Carlos",
                        "middle": [
                            "N"
                        ],
                        "last": "Silla",
                        "suffix": ""
                    },
                    {
                        "first": "Alex",
                        "middle": [
                            "A"
                        ],
                        "last": "Freitas",
                        "suffix": ""
                    }
                ],
                "year": 2011,
                "venue": "Data Mining and Knowledge Discovery",
                "volume": "22",
                "issue": "1-2",
                "pages": "31--72",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Carlos N Silla and Alex A Freitas. 2011. A survey of hierarchical classification across different application domains. Data Mining and Knowledge Discovery, 22(1-2):31-72.",
                "links": null
            },
            "BIBREF34": {
                "ref_id": "b34",
                "title": "Deep multi-task learning with low level tasks supervised at lower layers",
                "authors": [
                    {
                        "first": "Anders",
                        "middle": [],
                        "last": "S\u00f8gaard",
                        "suffix": ""
                    },
                    {
                        "first": "Yoav",
                        "middle": [],
                        "last": "Goldberg",
                        "suffix": ""
                    }
                ],
                "year": 2016,
                "venue": "ACL",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Anders S\u00f8gaard and Yoav Goldberg. 2016. Deep multi-task learning with low level tasks supervised at lower layers. In ACL.",
                "links": null
            },
            "BIBREF35": {
                "ref_id": "b35",
                "title": "Fast and accurate entity recognition with iterated dilated convolutions",
                "authors": [
                    {
                        "first": "Emma",
                        "middle": [],
                        "last": "Strubell",
                        "suffix": ""
                    },
                    {
                        "first": "Patrick",
                        "middle": [],
                        "last": "Verga",
                        "suffix": ""
                    },
                    {
                        "first": "David",
                        "middle": [],
                        "last": "Belanger",
                        "suffix": ""
                    },
                    {
                        "first": "Andrew",
                        "middle": [],
                        "last": "Mccallum",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "EMNLP",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Emma Strubell, Patrick Verga, David Belanger, and Andrew McCallum. 2017. Fast and accurate entity recognition with iterated dilated convolutions. In EMNLP.",
                "links": null
            },
            "BIBREF36": {
                "ref_id": "b36",
                "title": "Annotating longitudinal clinical narratives for de-identification: The 2014 i2b2/uthealth corpus",
                "authors": [
                    {
                        "first": "Amber",
                        "middle": [],
                        "last": "Stubbs",
                        "suffix": ""
                    },
                    {
                        "first": "\u00d6zlem",
                        "middle": [],
                        "last": "Uzuner",
                        "suffix": ""
                    }
                ],
                "year": 2015,
                "venue": "J. Biomed. Inf",
                "volume": "58",
                "issue": "",
                "pages": "20--29",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Amber Stubbs and \u00d6zlem Uzuner. 2015. Annotating longitudinal clinical narratives for de-identification: The 2014 i2b2/uthealth corpus. J. Biomed. Inf., 58:20-29.",
                "links": null
            },
            "BIBREF37": {
                "ref_id": "b37",
                "title": "Introduction to the conll-2003 shared task: Language-independent named entity recognition",
                "authors": [
                    {
                        "first": "Erik",
                        "middle": [
                            "F"
                        ],
                        "last": "Tjong Kim Sang",
                        "suffix": ""
                    },
                    {
                        "first": "Fien",
                        "middle": [],
                        "last": "De Meulder",
                        "suffix": ""
                    }
                ],
                "year": 2003,
                "venue": "NAACL",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Erik F Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Language-independent named entity recognition. In NAACL.",
                "links": null
            },
            "BIBREF38": {
                "ref_id": "b38",
                "title": "Evaluating the state-of-the-art in automatic deidentification",
                "authors": [
                    {
                        "first": "Ozlem",
                        "middle": [],
                        "last": "Uzuner",
                        "suffix": ""
                    },
                    {
                        "first": "Yuan",
                        "middle": [],
                        "last": "Luo",
                        "suffix": ""
                    },
                    {
                        "first": "Peter",
                        "middle": [],
                        "last": "Szolovits",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "J. Am Med Inform Assoc",
                "volume": "14",
                "issue": "5",
                "pages": "550--563",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Ozlem Uzuner, Yuan Luo, and Peter Szolovits. 2007. Evaluating the state-of-the-art in automatic de-identification. J. Am Med Inform Assoc, 14(5):550-563.",
                "links": null
            },
            "BIBREF40": {
                "ref_id": "b40",
                "title": "Individual comparisons by ranking methods",
                "authors": [
                    {
                        "first": "Frank",
                        "middle": [],
                        "last": "Wilcoxon",
                        "suffix": ""
                    }
                ],
                "year": 1945,
                "venue": "Biometrics bulletin",
                "volume": "1",
                "issue": "6",
                "pages": "80--83",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Frank Wilcoxon. 1945. Individual comparisons by ranking methods. Biometrics bulletin, 1(6):80-83.",
                "links": null
            },
            "BIBREF41": {
                "ref_id": "b41",
                "title": "A review on multi-label learning algorithms",
                "authors": [
                    {
                        "first": "Min-Ling",
                        "middle": [],
                        "last": "Zhang",
                        "suffix": ""
                    },
                    {
                        "first": "Zhi-Hua",
                        "middle": [],
                        "last": "Zhou",
                        "suffix": ""
                    }
                ],
                "year": 2014,
                "venue": "IEEE transactions on knowledge and data engineering",
                "volume": "26",
                "issue": "8",
                "pages": "1819--1837",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Min-Ling Zhang and Zhi-Hua Zhou. 2014. A review on multi-label learning algorithms. IEEE transactions on knowledge and data engineering, 26(8):1819-1837.",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF0": {
                "type_str": "figure",
                "num": null,
                "uris": null,
                "text": "Neural architecture for NER."
            },
            "FIGREF1": {
                "type_str": "figure",
                "num": null,
                "uris": null,
                "text": "NER multitasking architecture for 3 tag-sets."
            },
            "FIGREF2": {
                "type_str": "figure",
                "num": null,
                "uris": null,
                "text": "there is a directed path from c to d in the graph. For example, Sem(N ame) = {N ame, LastN ame, F irstN ame}."
            },
            "FIGREF3": {
                "type_str": "figure",
                "num": null,
                "uris": null,
                "text": "g is a plausible interpretation for y at the fine-grained tag level. For example, following Fig. 4, sequences ['Hospital', 'City'] and ['Street', 'City'] agree with ['Location', 'Location'], unlike ['City', 'Last Name']."
            },
            "TABREF3": {
                "type_str": "table",
                "html": null,
                "num": null,
                "content": "<table><tr><td/><td>I2B2'06</td><td>I2B2'14</td><td>Conll</td><td>Onto</td></tr><tr><td>Micro avg. F1</td><td>0.894</td><td>0.960</td><td>0.926</td><td>0.896</td></tr></table>",
                "text": "nlp.stanford.edu/data/glove.6B.zip"
            },
            "TABREF4": {
                "type_str": "table",
                "html": null,
                "num": null,
                "content": "<table><tr><td/><td colspan=\"4\">Tag Frequency in training / test (%)</td></tr><tr><td/><td>I2B2'06</td><td>I2B2'14</td><td>Conll</td><td>Onto</td></tr><tr><td>Name</td><td>1.4 / 1.3</td><td>1.0 / 1.0</td><td>4.3 / 4.9</td><td>3.1 / 2.9</td></tr><tr><td>Date</td><td>1.7 / 1.5</td><td>2.4 / 2.5</td><td>0 / 0</td><td>2.7 / 3.1</td></tr><tr><td>Location</td><td>0.1 / 0.1</td><td>0.2 / 0.3</td><td>3.2 / 3.4</td><td>2.7 / 3.2</td></tr><tr><td>Hospital</td><td>0.6 / 0.7</td><td>0.3 / 0.3</td><td>0 / 0</td><td>0 / 0</td></tr></table>",
                "text": "F1 for training and testing a single base NER model on the same dataset."
            },
            "TABREF5": {
                "type_str": "table",
                "html": null,
                "num": null,
                "content": "<table/>",
                "text": "Occurrence statistics for tags used in the tagset extension experiment, reported as % out of all tokens in the training and test sets of each dataset."
            },
            "TABREF7": {
                "type_str": "table",
                "html": null,
                "num": null,
                "content": "<table/>",
                "text": "F1 in the tag-set extension experiment, averaged over extending datasets for every base dataset."
            },
            "TABREF9": {
                "type_str": "table",
                "html": null,
                "num": null,
                "content": "<table><tr><td colspan=\"6\">: F1 for tag-set extensions with more than 100</td></tr><tr><td colspan=\"6\">collisions. Blank entries indicate fewer than 100 colli-</td></tr><tr><td colspan=\"6\">sions. (*) indicates all results that are statistically sig-</td></tr><tr><td colspan=\"4\">nificantly better than others in that row.</td><td/><td/></tr><tr><td>F1</td><td/><td/><td colspan=\"2\">Model</td><td/></tr><tr><td>Tag</td><td>Base</td><td>Extending</td><td>Hier</td><td>Indep</td><td>MTL</td></tr><tr><td>Location</td><td>I2B2'14</td><td>I2B2'06 Onto</td><td>0.953 0.954</td><td>0.919 0.899</td><td>0.919 0.887</td></tr><tr><td>Name</td><td>Conll</td><td>I2B2'06 Onto</td><td>0.846 0.895</td><td>0.827 0.888</td><td>0.809 0.890</td></tr></table>",
                "text": ""
            },
            "TABREF10": {
                "type_str": "table",
                "html": null,
                "num": null,
                "content": "<table><tr><td>: Examples for performance differences when</td></tr><tr><td>base datasets are extended with an in-domain dataset</td></tr><tr><td>compared to an out-of-domain dataset.</td></tr></table>",
                "text": ""
            },
            "TABREF12": {
                "type_str": "table",
                "html": null,
                "num": null,
                "content": "<table><tr><td>: F1 for combining I2B2'06 and I2B2'14. The</td></tr><tr><td>top two models were trained only on a single dataset.</td></tr><tr><td>The lower table part holds the number of collisions at</td></tr><tr><td>post-processing. (*) indicates results that are statisti-</td></tr><tr><td>cally significantly better than others in that column.</td></tr></table>",
                "text": ""
            },
            "TABREF13": {
                "type_str": "table",
                "html": null,
                "num": null,
                "content": "<table><tr><td>F1</td><td/><td/><td/><td>Model</td><td/></tr><tr><td>Tag</td><td>Base</td><td>Extending</td><td>Hier</td><td>Indep</td><td>MTL</td></tr><tr><td>Date</td><td>I2B2'14</td><td>I2B2'06</td><td>0.899</td><td>0.904</td><td>0.903</td></tr><tr><td/><td/><td>Onto</td><td>0.713</td><td>0.686</td><td>0.671</td></tr><tr><td/><td>I2B2'06</td><td>I2B2'14</td><td>0.871</td><td>0.840</td><td>0.875</td></tr><tr><td/><td/><td>Onto</td><td>0.641</td><td>0.681</td><td>0.698</td></tr><tr><td/><td>Onto</td><td>I2B2'14</td><td>0.837</td><td>0.830</td><td>0.831</td></tr><tr><td/><td/><td>I2B2'06</td><td>0.834</td><td>0.826</td><td>0.807</td></tr><tr><td>Hospital</td><td>I2B2'14</td><td>I2B2'06</td><td>0.931</td><td>0.941</td><td>0.918</td></tr><tr><td/><td>I2B2'06</td><td>I2B2'14</td><td>0.867</td><td>0.866</td><td>0.853</td></tr><tr><td>Location</td><td>Conll</td><td>I2B2'14</td><td>0.818</td><td>0.783</td><td>0.812</td></tr><tr><td/><td/><td>I2B2'06</td><td>0.748</td><td>0.739</td><td>0.730</td></tr><tr><td/><td/><td>Onto</td><td>0.836</td><td>0.830</td><td>0.836</td></tr><tr><td/><td>I2B2'14</td><td>Conll</td><td>0.954</td><td>0.899</td><td>0.887</td></tr><tr><td/><td/><td>I2B2'06</td><td>0.953</td><td>0.919</td><td>0.919</td></tr><tr><td/><td/><td>Onto</td><td>0.951</td><td>0.921</td><td>0.907</td></tr><tr><td/><td>I2B2'06</td><td>Conll</td><td>0.876</td><td>0.816</td><td>0.760</td></tr><tr><td/><td/><td>I2B2'14</td><td>0.886</td><td>0.883</td><td>0.888</td></tr><tr><td/><td/><td>Onto</td><td>0.869</td><td>0.847</td><td>0.812</td></tr><tr><td/><td>Onto</td><td>Conll</td><td>0.747</td><td>0.701</td><td>0.703</td></tr><tr><td/><td/><td>I2B2'14</td><td>0.793</td><td>0.691</td><td>0.707</td></tr><tr><td/><td/><td>I2B2'06</td><td>0.814</td><td>0.691</td><td>0.666</td></tr><tr><td>Name</td><td>Conll</td><td>I2B2'14</td><td>0.855</td><td>0.771</td><td>0.690</td></tr><tr><td/><td/><td>I2B2'06</td><td>0.827</td><td>0.666</td><td>0.631</td></tr><tr><td/><td/><td>Onto</td><td>0.860</td><td>0.841</td><td>0.867</td></tr><tr><td/><td>I2B2'14</td><td>Conll</td><td>0.900</td><td>0.863</td><td>0.890</td></tr><tr><td/><td/><td>I2B2'06</td><td>0.943</td><td>0.893</td><td>0.927</td></tr><tr><td/><td/><td>Onto</td><td>0.911</td><td>0.882</td><td>0.891</td></tr><tr><td/><td>I2B2'06</td><td>Conll</td><td>0.662</td><td>0.679</td><td>0.653</td></tr><tr><td/><td/><td>I2B2'14</td><td>0.834</td><td>0.824</td><td>0.808</td></tr><tr><td/><td/><td>Onto</td><td>0.726</td><td>0.726</td><td>0.727</td></tr><tr><td/><td>Onto</td><td>Conll</td><td>0.895</td><td>0.888</td><td>0.890</td></tr><tr><td/><td/><td>I2B2'14</td><td>0.892</td><td>0.872</td><td>0.886</td></tr><tr><td/><td/><td>I2B2'06</td><td>0.846</td><td>0.827</td><td>0.809</td></tr></table>",
                "text": "Full experiment results for Section 6.1"
            }
        }
    }
}