{
    "paper_id": "U07-1004",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T03:08:54.974167Z"
    },
    "title": "Entailment due to Syntactically Encoded Semantic Relationships",
    "authors": [
        {
            "first": "Elena",
            "middle": [],
            "last": "Akhmatova",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "Macquarie University Sydney",
                "location": {
                    "country": "Australia"
                }
            },
            "email": ""
        },
        {
            "first": "Mark",
            "middle": [],
            "last": "Dras",
            "suffix": "",
            "affiliation": {
                "laboratory": "Centre for Language Technology Macquarie University Sydney",
                "institution": "",
                "location": {
                    "country": "Australia"
                }
            },
            "email": "madras@ics.mq.edu.au"
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "The majority of the state-of-the-art approaches to recognizing textual entailment focus on defining a generic approach to RTE. A generic approach never works well for every single entailment pair: there are entailment pairs that are recognized poorly by all the generic systems. Automatic identification of such entailment pairs and applying to them an RTE algorithm that is specific to them could thus increase an overall performance of an entailment engine (that in this case will combine a generic RTE algorithm with a number of RTE algorithms for the problematic entailment pairs). We identify one subtype of entailment pairs and develop a two-part probabilistic model for their classification into true and false entailments and evaluate it relative both to a baseline and to the RTE systems. We show that the model performs better than the baseline and the average of the systems from the RTE2 on both the balanced and unbalanced datasets we have created for evaluation.",
    "pdf_parse": {
        "paper_id": "U07-1004",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "The majority of the state-of-the-art approaches to recognizing textual entailment focus on defining a generic approach to RTE. A generic approach never works well for every single entailment pair: there are entailment pairs that are recognized poorly by all the generic systems. Automatic identification of such entailment pairs and applying to them an RTE algorithm that is specific to them could thus increase an overall performance of an entailment engine (that in this case will combine a generic RTE algorithm with a number of RTE algorithms for the problematic entailment pairs). We identify one subtype of entailment pairs and develop a two-part probabilistic model for their classification into true and false entailments and evaluate it relative both to a baseline and to the RTE systems. We show that the model performs better than the baseline and the average of the systems from the RTE2 on both the balanced and unbalanced datasets we have created for evaluation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "Recognizing Textual Entailment (RTE) is a task where, given two text snippets, the goal is to determine whether the meaning of one text snippet can be inferred from the meaning of the other . The first of the text snippets in such a pair is referred to as the text and the other one as the hypothesis. The pair of text and hypothesis is called a text-hypothesis pair or entailment pair, with the two names considered to be synonymous. The text is usually much longer than the hypothesis. It can be represented by one or more coherent sentences, while the hypothesis is usually one short sen-tence. It is the meaning of the hypothesis that might or might not be entailed from the text. Thus, given a text-hypothesis pair, we recognize the relation between the meanings of the text and the hypothesis in the pair as a true entailment if the meaning of the hypothesis is entailed from the meaning of the text. Otherwise, we recognize the relation between the meanings of the texts as a false entailment.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "There are several datasets for RTE. They contain text-hypothesis pairs marked yes if there is a relation of true entailment in a pair and no otherwise. These datasets are manually created annually for the RTE Challenges 1 and are freely available.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Most state-of-the-art approaches to RTE seek a generic approach to the task and do not differentiate between text-hypothesis pairs. However, a possible alternative is to consider subclasses of entailment pairs and build models to handle these specialties. An instance of this idea is proposed in Vanderwende and Dolan (2005) , where the complete set of entailment pairs is divided in two: those whose categorization could be accurately predicted based solely on syntactic cues and those where it is not the case. Their subsequent work (Vanderwende et al., 2006 ) presents an RTE system based on this work.",
                "cite_spans": [
                    {
                        "start": 296,
                        "end": 324,
                        "text": "Vanderwende and Dolan (2005)",
                        "ref_id": "BIBREF11"
                    },
                    {
                        "start": 535,
                        "end": 560,
                        "text": "(Vanderwende et al., 2006",
                        "ref_id": "BIBREF12"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "The broader context of our work is to investigate different ways of subclassifying entailment pairs. In this framework, a generic system would have additional special components that take care of the special subclasses of entailment pairs. Such a component is involved when a pair of its subclass is recognized. Note that we do not envisage classifying all the entailment pairs to give a partitioning of the space, a probably infeasible task. We suggest dividing into classes the entailment pairs that are problematic for all the state-of-the-art generic systems and develop separate RTE algorithms for these par-ticular classes. The broad question that we aim to answer is whether this will improve the overall performance of the RTE engine.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "In this paper we are looking at one subtype of entailment pairs where a semantic relation expressed in the hypothesis is implicitly represented by a syntactic construction in the text. There are several reasons to work with this type of entailment pairs. First, it proves possible to recognize them well automatically and distinguish them from other entailment pairs using machine learning. Second, narrowing down the entailment pairs to this subset allows us to draw an analogy with, and develop an algorithm related to, the work by Lapata (2001) that finds the implicit relation between attributes to a head noun in the noun group. That together with a conditional probability model in a parallel with SMT will be taken as the basis of an algorithm for classification of entailment pairs of the chosen type. We evaluate the approach on the RTE2 annotated dataset.",
                "cite_spans": [
                    {
                        "start": 534,
                        "end": 547,
                        "text": "Lapata (2001)",
                        "ref_id": "BIBREF6"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "The layout of the paper follows the general flow of the research. Section 2 defines the chosen type of entailment pairs. Section 3 describes an automatic classifier which distinguishes the desired type of the entailment pairs. Section 4 describes an algorithm for recognizing true and false entailments for the entailments of the chosen type, and gives some experimental results comparing our algorithm against a number of baselines. Section 5 presents the evaluation results and section 6 concludes the work.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "We looked through the RTE2 test set and partitioned the set into several groups of entailments. Though the entailment pairs are different, for every word in the hypothesis there is often a word in the text from which it is entailed. It is not always so and we focus on the entailment pairs where this is not the case.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Entailment types",
                "sec_num": "2"
            },
            {
                "text": "The entailment relationship we are focusing on is named an Entailment due to Syntactically Encoded Semantic Relationships (ESESR), as a specific syntactic construction in the text encodes a semantic relationship between its elements that is explicitly shown in the hypothesis.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Syntactically encoded semantics",
                "sec_num": "2.1"
            },
            {
                "text": "Being more precise, the text-hypothesis pairs of interest have the following characteristics:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Syntactically encoded semantics",
                "sec_num": "2.1"
            },
            {
                "text": "1. The hypothesis is a simple sentence. That is a sentence that consists of a subject, a predicate, and an object, and has no subordinate clauses.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Syntactically encoded semantics",
                "sec_num": "2.1"
            },
            {
                "text": "2. Both subject and object of the hypothesis (or their morphological variants) are found in the text.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Syntactically encoded semantics",
                "sec_num": "2.1"
            },
            {
                "text": "3. The predicate of the hypothesis has no match with anything in the text that is linked to the matches of the subject and the object of the hypothesis.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Syntactically encoded semantics",
                "sec_num": "2.1"
            },
            {
                "text": "4. The matches of the subject and the object in the hypothesis can be linked to each other in the text by any syntactic relationship except depending from the same verb or a derivative of it.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Syntactically encoded semantics",
                "sec_num": "2.1"
            },
            {
                "text": "Thus, the predicate of the hypothesis is the semantic relationship between its subject and object that is not explicitly defined in the text but is implicitly presented in the syntactic relationship between the matches of the subject and object of the hypothesis in the text.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Syntactically encoded semantics",
                "sec_num": "2.1"
            },
            {
                "text": "The most frequent syntactic relationships between the matches of the subject and the object of the hypothesis in the text in the RTE2 dataset are apposition, 2 a noun group and its prepositional attachment, and attributive relation within a noun group.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Syntactically encoded semantics",
                "sec_num": "2.1"
            },
            {
                "text": "Consider the examples of the entailments of the described type:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Syntactically encoded semantics",
                "sec_num": "2.1"
            },
            {
                "text": "(1) Text: From Les Combes, in the Italian Alps, yesterday, where the Pope is on vacation, the Vatican's Press Office Director, Joaquin Navarro Valls, responded with a written statement to the accusations made by the Israeli government against Benedict XVI.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Syntactically encoded semantics",
                "sec_num": "2.1"
            },
            {
                "text": "Hypothesis: Les Combes is located in the Italian Alps.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Syntactically encoded semantics",
                "sec_num": "2.1"
            },
            {
                "text": "The location Les Combes is in the relation of apposition to the Italian Alps. This syntactic relation implicitly encodes the semantic relation represented by the words is located in between the noun groups.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Syntactically encoded semantics",
                "sec_num": "2.1"
            },
            {
                "text": "(2) Text: Lt. Jim Bowell of the Butler Township Fire Department said the 4:45 a.m. accident set fire to about 100 yards of woods.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Syntactically encoded semantics",
                "sec_num": "2.1"
            },
            {
                "text": "Hypothesis: Jim Bowell is engaged by the Butler Township Fire Department.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Syntactically encoded semantics",
                "sec_num": "2.1"
            },
            {
                "text": "Lt. Jim Bowell is connected syntactically to the Butler Township Fire Department via a preposition. That implicitly encodes a relation between the person and organization, to be engaged by.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Syntactically encoded semantics",
                "sec_num": "2.1"
            },
            {
                "text": "(3) Text: Japan's Kyodo news agency said the US could be ready to set up a liaison office-the lowest level of diplomatic representation-in Pyongyang if it abandons its nuclear program.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Syntactically encoded semantics",
                "sec_num": "2.1"
            },
            {
                "text": "Hypothesis: Kyodo news agency is based in Japan.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Syntactically encoded semantics",
                "sec_num": "2.1"
            },
            {
                "text": "The attributive relationship between Kyodo news agency and Japan suggests but does not state explicitly the relationship is based in between them. The Kyodo news agency is based in Japan is entailed from the attributive relationships between the nouns.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Syntactically encoded semantics",
                "sec_num": "2.1"
            },
            {
                "text": "The fact that most entailment engines rely on high word overlap, longest common substring and other features 3 implies an assumption that there must be a word in the text for every word in the hypothesis. That in its turn suggests the ESESR entailment pairs may not be recognized well. The RTE2 results confirm that. The mean recognition of the entailments of this subtype is 61.9% among all the 41 system submissions. This places the type we have defined around the middle: difficult enough to be a challenge, but not so difficult as to be infeasible. The agreement on the recognition of the true entailments is around 86%, and the false entailments are recognized correctly with an accuracy of less than 25%. The features mentioned above tend to guess the true entailment as the matches of the subject and the object of the hypothesis in the text give a good score for word overlap, longest common substring and dependency tree matches. The",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Recognition of the entailment types by RTE2 Challenge participants",
                "sec_num": "2.2"
            },
            {
                "text": "In this section we want to verify that entailment pairs of the ESESR subtype can be recognized. To do this we construct a machine learner. It marks entailment pairs as true if they are of the ESESR type and false otherwise.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Classification",
                "sec_num": "3"
            },
            {
                "text": "To extract the features we build first the word-toword alignment between the words of the text and hypothesis, based on WordNet. 4 The features for the machine learner are based on the syntactic and semantic relationships between the aligned parts of the text and the hypothesis. We build two sets of features: ones that tell that the entailment is of a given type, and ones that tell that the entailment is not of the given type.",
                "cite_spans": [
                    {
                        "start": 129,
                        "end": 130,
                        "text": "4",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Classification",
                "sec_num": "3"
            },
            {
                "text": "for: The syntactic features that are in favour of the ESESR type are the existence of a particular syntactic relationship between the matches of the subject and the object of the hypothesis in text, namely apposition, being within the same noun group, representing a noun group and its prepositional attachment or the combination of the above. against: The syntactic features that indicate that the entailment pair is not of the ES-ESR type show that the aligned parts of the hypothesis in the text are connected in the text by a predicate or represent the predicate themselves.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The syntactic features:",
                "sec_num": null
            },
            {
                "text": "The semantic features: For the semantic description of the text and the hypothesis we are inter- 4 Two words are aligned if there is a path bewteen them in WordNet of length \u2264 3. The Cartesian product of the set of the words of the text and the set of the words of the hypothesis yields a set of the candidate word pairs. We used WordNet 2.0 and the C++ API provided by the WordNet developers to look for the paths between the words. We consider the path travel#v#1 -walk#v#1 as a path of length 2, where walk#v#1 is a hyponym of travel#v#1, teakettle#n#1 -kettle#n#1 -pot#n#1 is a path of length 3. There can be any WordNet relationships between the nodes in the path except antonyms. ested in the number of the aligned words, predicates and named entities.",
                "cite_spans": [
                    {
                        "start": 97,
                        "end": 98,
                        "text": "4",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The syntactic features:",
                "sec_num": null
            },
            {
                "text": "We have 16 features all together. For a more detailed description of the features please refer to Akhmatova and Dras (2007) .",
                "cite_spans": [
                    {
                        "start": 98,
                        "end": 123,
                        "text": "Akhmatova and Dras (2007)",
                        "ref_id": "BIBREF0"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The syntactic features:",
                "sec_num": null
            },
            {
                "text": "The RTE2 test set consists of 800 entailment pairs. Only approximately one tenth of those pairs are ESESR entailments. To build the classifier we have duplicated all the ESESR entailment pairs several times to make the distribution of the entailment pairs equal. (We indeed took care later for the crossvalidation that the examples on which we test are not in the training set in this case.) The reason for this is that we are interested in true positives to apply to them an algorithm in section 4. Having only a small proportion of the set being of the ESESR type leads the machine learner to underweight these in the attempt to maximize the overall accuracy and gives a low TP, true positive, rate, which is the one we are interested in. We ran the J48 classifier on the dataset with the one-leaf-out cross validation test mode using the WEKA ML API (Witten and Frank, 1999) . The overall accuracy is 75% (see table 1).",
                "cite_spans": [
                    {
                        "start": 853,
                        "end": 877,
                        "text": "(Witten and Frank, 1999)",
                        "ref_id": "BIBREF13"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The syntactic features:",
                "sec_num": null
            },
            {
                "text": "The problem of assigning a value of true or false can be thought of probabilistically, evaluating the conditional probability of the hypothesis h given the text t, P (h|t). We can rewrite this using Bayes Rule as P (h|t) = P (t|h) \u00d7 P (h)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model",
                "sec_num": "4"
            },
            {
                "text": "P (t)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model",
                "sec_num": "4"
            },
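Numerically, the decomposition behaves as follows; a toy sketch with invented probabilities (none of these numbers are from the paper):

```python
def posterior(p_t_given_h, p_h, p_t):
    """Bayes' rule: P(h|t) = P(t|h) * P(h) / P(t)."""
    return p_t_given_h * p_h / p_t

# With P(t) fixed for a given text, ranking hypotheses reduces to
# comparing P(t|h) * P(h); P(t) only rescales the scores, which is why
# it can be folded into a decision threshold.
score_work  = posterior(0.6, 1e-4, 1e-3)  # prior favours "X works for Y"
score_party = posterior(0.6, 1e-6, 1e-3)  # "X threw a party in Y" is rarer
```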
            {
                "text": "An analogy with Statistical MT can be drawn here. As in SMT 5 we divide the calculation of P (h|t) into two parts, each of which we are able to estimate. One difference is that in SMT we find the argmax of this function to find the best target sentence for the source sentence. This allows us to ignore the denominator. In entailment we must find a threshold that will divide the true entailment pairs from false, so P (t) will constitute at least a scaling factor. It is true that P (t) may be different for each text, so whether the common threshold can be found is not obvious. However the related work of on defining probabilistic textual entailment shows that such a threshold is possible. In this paper we regard it as an empirical question; we discuss it further in Section 4.3.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model",
                "sec_num": "4"
            },
            {
                "text": "In SMT P (t|h) is generally referred to as the translation probability and P (h) as the language model; but P (h) is more generally speaking just a prior distribution, the knowledge available in the absence of the more detailed information. In the context of this work, when we know nothing about the extra semantic or syntactic relationships between the words of the text and the hypothesis, the estimation of the probability of the hypothesis sentence is a prior probability of the entailment relation in a pair.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model",
                "sec_num": "4"
            },
            {
                "text": "For example, if the text sentence contains Samuel L. Husk, executive director of the Council of Great City Schools, . . . (see example (4)) then it is more likely in the absence of other knowledge to entail that Samuel L. Husk works for the Council of Great City Schools, than that Samuel L. Husk threw a party in the Council of Great City Schools. Thus, our expectation is that the former sentence is a more probable sentence in the language than the latter, and that it can be supported by corpus statistics.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model",
                "sec_num": "4"
            },
            {
                "text": "To calculate a prior probability of the entailment relation, P (h), we adapt the work of Lapata (2001) . She was interested in disambiguation of a relationship between an adjective and a noun inside a noun group. Using corpus statistics it was estimated that the adjective fast and a noun planes in a noun group fast planes are much more probable to be in a relationship represented by the word to fly (the planes that fly fast) than in relationships to break or to land (the planes that break fast or the planes that land fast). Similar to that, we want to estimate that, if it is not stated otherwise, the most probable relationship between a person Samuel L. Husk and a company the Council of Great City Schools is to work for.",
                "cite_spans": [
                    {
                        "start": 89,
                        "end": 102,
                        "text": "Lapata (2001)",
                        "ref_id": "BIBREF6"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model: part I",
                "sec_num": "4.1"
            },
            {
                "text": "Thus, similar to Lapata (2001) , we calculate the probability of the hypothesis sentence as a probability of a triple consisting of a subject of the hypothesis sentence, NE 1 , its predicate, R, and a direct or indirect object, NE 2 , that is the probability P (NE 1 , R, NE 2 ). We had to take named entities instead of the actual subject and object, as firstly, subject and object very often belong to the set of ",
                "cite_spans": [
                    {
                        "start": 17,
                        "end": 30,
                        "text": "Lapata (2001)",
                        "ref_id": "BIBREF6"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model: part I",
                "sec_num": "4.1"
            },
            {
                "text": "P (h) := P (NE 1 , R, NE 2 ) = P (NE 1 |R, N E 2 ) \u00d7 P (R, N E 2 ) = P (NE 1 |R, N E 2 ) \u00d7 P (R) \u00d7 P (NE 2 |R).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model: part I",
                "sec_num": "4.1"
            },
            {
                "text": "We will make an approximation assuming that NE 1 is independent of NE 2",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model: part I",
                "sec_num": "4.1"
            },
            {
                "text": "P (NE 1 |R, N E 2 ) \u2248 P (NE 1 |R), thus P (h) = P (NE 1 |R) \u00d7 P (R) \u00d7 P (NE 2 |R).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model: part I",
                "sec_num": "4.1"
            },
            {
                "text": "We estimate the individual probabilities by corpus frequency counts (C(x) represents the counts of x)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model: part I",
                "sec_num": "4.1"
            },
            {
                "text": "P (h) = C(NE1, R) C(R) \u00d7 C(R) n i=1 (C(Ri)) \u00d7 C(NE2, R) C(R) = C(NE1, R) \u00d7 C(NE2, R) C(R) \u00d7 n i=1 (C(Ri)) .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model: part I",
                "sec_num": "4.1"
            },
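The frequency-count estimate of P(h) can be sketched directly from a list of (NE 1 , R, NE 2 ) triples; the triples below are invented for illustration:

```python
from collections import Counter

# Invented (NE1, R, NE2) triples standing in for those harvested from
# the parsed Wikipedia corpus.
triples = [
    ("Person", "work", "Organization"),
    ("Person", "work", "Organization"),
    ("Person", "work", "Location"),
    ("Person", "represent", "Location"),
]

c_ne1_r = Counter((ne1, r) for ne1, r, _ in triples)  # C(NE1, R)
c_ne2_r = Counter((ne2, r) for _, r, ne2 in triples)  # C(NE2, R)
c_r     = Counter(r for _, r, _ in triples)           # C(R)
total   = sum(c_r.values())                           # sum over i of C(R_i)

def p_h(ne1, r, ne2):
    """P(h) = C(NE1,R) * C(NE2,R) / (C(R) * sum_i C(R_i))."""
    return c_ne1_r[(ne1, r)] * c_ne2_r[(ne2, r)] / (c_r[r] * total)
```

On these toy counts the "work" relation comes out more probable than "represent", mirroring the ranking behaviour described for Table 2.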
            {
                "text": "These probabilities have been calculated pairwise for Location, Person, JobTitle and Organization. The corpus was the first 500,000 sentence of the Wikipedia XML corpus (Denoyer and Gallinari, 2006) parsed using the Minipar parser (Lin, 1998) and Annie plug-ing of the GATE development environment (Cunningham et al., 1996) . Table 2 shows a selection of the relations found in the RTE2 dataset. So, for example, Person work(s) in Location (at rank 93, with a \u2212 log 2 (P (h)) of 10.25) is much more frequent than Person represent(s) Location (at rank 775, with a \u2212 log 2 (P (h)) of 13.60).",
                "cite_spans": [
                    {
                        "start": 169,
                        "end": 198,
                        "text": "(Denoyer and Gallinari, 2006)",
                        "ref_id": "BIBREF4"
                    },
                    {
                        "start": 231,
                        "end": 242,
                        "text": "(Lin, 1998)",
                        "ref_id": "BIBREF7"
                    },
                    {
                        "start": 298,
                        "end": 323,
                        "text": "(Cunningham et al., 1996)",
                        "ref_id": "BIBREF2"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 326,
                        "end": 333,
                        "text": "Table 2",
                        "ref_id": "TABREF2"
                    }
                ],
                "eq_spans": [],
                "section": "Model: part I",
                "sec_num": "4.1"
            },
            {
                "text": "Whereas P (h) is a prior probability looking only at the relationship between subject and object in the hypothesis, P (t|h) looks at the aspects of the text that might suggest the entailment relationship. Consider example 4 below. There is no syntactic relationship between India's Meteorological Department and Indonesia, suggesting the hypothesis is not a valid entailment. Our approach to estimating P (t|h), then, is to decide whether particular relatioships in the text hold. To do this we built a classifier with various classes of features.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model: part II",
                "sec_num": "4.2"
            },
            {
                "text": "Features 1 and 2 syntactic structure of the text sentence: presence or absence of the syntactic connection between the aligned elements; type of the syntactic relationship, if present.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model: part II",
                "sec_num": "4.2"
            },
            {
                "text": "Features 3 -6 alignment: number of non-aligned words between the aligned noun groups, number of the non-aligned head elements of the aligned noun groups.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model: part II",
                "sec_num": "4.2"
            },
            {
                "text": "Features 7 and 8 syntactic structure of the aligned noun groups.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model: part II",
                "sec_num": "4.2"
            },
            {
                "text": "We have already briefly mentioned above the importance of the syntactic dependencies between the matches of the subject and object of the hypothesis in the text. The alignment features capture the fact that if there are too many missed words in the aligned noun groups then the hypothesis might have aquired different meaning from the one expressed in the text. Non-aligned head elements of the noun groups greatly increase the possibility of the meaning altering.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Feature 9 paraphrases.",
                "sec_num": null
            },
            {
                "text": "In determining the existence of syntactic relationships within the text, we use the Link Grammar Parser (Sleator and Temperley, 1991) . To give an example for the features 7 and 8, the link G, for example, connects proper noun words together. For example, MIT and Press in the MIT Press Bookstore, see example (6), as well as Iraq and War (see example (7)), will be connected by the link G. We would say that the hypothesis is closer to the text if from the noun groups MIT Press Bookstore and the Iraq War hero the whole parts MIT Press and Iraq War were present in the hypothesis, rather than just MIT or Iraq, for example. If it is not the case and one can see only the first parts of the MIT Press and Iraq War components of the text sentence, then we say that the G link is 'broken'. A broken G link reduces the probability of the true entailment between the text and the hypothesis.",
                "cite_spans": [
                    {
                        "start": 104,
                        "end": 133,
                        "text": "(Sleator and Temperley, 1991)",
                        "ref_id": "BIBREF10"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Feature 9 paraphrases.",
                "sec_num": null
            },
            {
                "text": "MIT Press Bookstore G Iraq War hero.n",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Feature 9 paraphrases.",
                "sec_num": null
            },
            {
                "text": "In the examples (6) and (7) the G relation in the noun groups was broken. The MIT Press was substituted with The MIT, Iraq War with Iraq. That led to the hypotheses that the meaning of the text is not entailed correctly.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "G",
                "sec_num": null
            },
            {
                "text": "(6) Text: The MIT Press Bookstore stocks most of the books and journals published by The MIT Press as well as the best of other publishers books in related fields.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "G",
                "sec_num": null
            },
            {
                "text": "Hypothesis: The MIT is a book store. Feature 9 is the number of paraphrased phrases.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "G",
                "sec_num": null
            },
            {
                "text": "(9) Text:Mahmoud al-Zahar , a Hamas leader in Gaza, said so explicitly, dismissing Mr. Abba's arguments: History has proven that the rockets have been in the Palestinian interest.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "G",
                "sec_num": null
            },
            {
                "text": "Hypothesis:Mahmoud al-Zahar is a member of Hamas.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "G",
                "sec_num": null
            },
            {
                "text": "Leader and member are not synonyms, but they will be found to be paraphrases of each other by the algorithm proposed in Bannard and Callison-Burch (2005) . To acquire the paraphrases we used the PhraseBuilder 6 on English and Dutch corpuses of Europarl.",
                "cite_spans": [
                    {
                        "start": 120,
                        "end": 153,
                        "text": "Bannard and Callison-Burch (2005)",
                        "ref_id": "BIBREF1"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "G",
                "sec_num": null
            },
            {
                "text": "We have selected the k-nearest neighbours method, which has quite a transparent method of calculating the probability for an instance to belong to a particular class (Mitchell, 1997) . We used WEKA API k-nearest neighbours method implementation for our work.",
                "cite_spans": [
                    {
                        "start": 166,
                        "end": 182,
                        "text": "(Mitchell, 1997)",
                        "ref_id": "BIBREF8"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Deriving a Probability",
                "sec_num": "4.2.1"
            },
            {
                "text": "We then derive a probability from our classifier. In classification, classified instances will fall at varying distances from the boundaries which define the class spaces. This can correspond, for example, to the certainty of classification, and various classification methods have a derived probability of classification. In our case, with classes being true entailment and false entailment, we can use this as an estimate of P (t|h).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Deriving a Probability",
                "sec_num": "4.2.1"
            },
            {
                "text": "The accuracy of the machine learner built on these features with k = 5 is not high, 54%, on the one-leaf-out approach. We are interested here though in the probabilities of belonging to a particular class rather than in the classification. P (true|instance) = 0.49 is the same for us here as P (true|instance) = 0.51. That means that the algorithm is not actually sure to which class the instance belongs. That the P (true|instance) is greater than, say, 80% would be an important clue in the class prediction.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Deriving a Probability",
                "sec_num": "4.2.1"
            },
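A simplified stand-in for the WEKA k-nearest-neighbours probability (not the actual WEKA implementation): estimate P(true|instance) as the share of 'true' labels among the k nearest training points. The two-dimensional feature vectors here are hypothetical stand-ins for the nine features above.

```python
import math

def knn_probability(instance, training, k=5):
    """P(true | instance): share of 'true' labels among the k training
    points closest to the instance (Euclidean distance)."""
    neighbours = sorted(training, key=lambda ex: math.dist(instance, ex[0]))[:k]
    return sum(1 for _, label in neighbours if label) / k

# Hypothetical 2-feature vectors (e.g. non-aligned words, broken G links)
# with true/false entailment labels.
training = [((0, 0), True), ((0, 1), True), ((1, 0), True),
            ((3, 3), False), ((3, 4), False), ((4, 3), False)]
p_true = knn_probability((0.5, 0.5), training, k=5)
```

A value near 0.5 signals an unsure classification, as discussed above, while a value near 0 or 1 is a strong clue.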
            {
                "text": "For calculating our P (h|t), as defind at the start of the Section 4, we have estimates of P (h) and P (t|h). We will assume that P (t) is a constant for all entailment pairs and acts as a normalizing factor. (This may not be true, but we treat it here as an empirical question.)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Combining part I and part II",
                "sec_num": "4.3"
            },
            {
                "text": "We want then to find a threshold H for P (h|t), such that where P (h|t) \u2265 H the entailment pair is true, and false otherwise. The threshold H then incorporates the normalizing factor P (t).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Combining part I and part II",
                "sec_num": "4.3"
            },
            {
                "text": "We have created a balanced corpus of the true and false examples of the ESESR entailment pairs from the RTE2 dataset. Then, as the one-leaf-out approach suggests, for every instance (that is, for every entailment pair) we created a separate dataset not containing it to build the k-nearest neighbours classifier. The probability of the instance being a true entailment on this classifier is the outcome of the baseline unbalanced dataset performance balanced dataset performance 41 submissions mean 61.9% 50% best performing on ESESR system 86% 73% secondbest system 74% 55% default \"yes\" 78% 50% Table 3 : Baselines and their performance on the balanced and unbalanced datasets classification process, see the section 4.2.1. Then this probability is combined with the probability of the hypothesis P (h), described in the section 4.1. This process is repeated for every entailment pair. Thus, as a result, every entailment pair is associated with a value of the probability P (h|t).",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 597,
                        "end": 604,
                        "text": "Table 3",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Combining part I and part II",
                "sec_num": "4.3"
            },
            {
                "text": "One possibility to find a good value of such an H is to carry out a search over possible values on a development set. As an alternative we used a machine learner again, a decision tree, with the single feature being the combined probability. The top node of the decision tree is the best split of data. Due to the fact that the probabilities P (h) are quite small numbers, we used as a feature for the decision tree also the product of the logarithms base two of the probabilities. Even though this is not strictly derivable from our model, it is still a ranking and we get a good threshold. The threshold H = 3.41 fits the training set best of all.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Combining part I and part II",
                "sec_num": "4.3"
            },
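The first option mentioned, a direct search for H over a development set, can be sketched as follows (scores and gold labels are invented):

```python
def best_threshold(scored_pairs):
    """Try each observed score as the threshold H; predict true iff
    score >= H; keep the H with the highest accuracy."""
    best_h, best_acc = None, -1.0
    for cand, _ in scored_pairs:
        acc = sum((s >= cand) == label
                  for s, label in scored_pairs) / len(scored_pairs)
        if acc > best_acc:
            best_h, best_acc = cand, acc
    return best_h, best_acc

# Toy development set: (combined probability score, gold entailment label).
dev = [(5.1, True), (4.2, True), (3.9, True), (3.0, False), (2.2, False)]
h, acc = best_threshold(dev)
```

Restricting candidates to observed scores is enough here, since accuracy only changes at those points; the decision-tree alternative in the text finds the same kind of best split at its top node.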
            {
                "text": "We compare the results of the approach on two datasets, an unbalanced dataset consisting of all the ESESR entailments from the RTE2 corpus; and a balanced dataset, the set of 50000 random balanced subsets of the unbalanced dataset containing all the false entailments and the same number of randomly chosen true entailments (refer to section 2.2).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation",
                "sec_num": "5"
            },
            {
                "text": "We take four baselines as a comparison for our approach:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation",
                "sec_num": "5"
            },
            {
                "text": "1. the mean of the accuracy of all the 41 submissions to the RTE2 Challenge;",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation",
                "sec_num": "5"
            },
            {
                "text": "2. the best performing on the ESESR entailment pairs system;",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation",
                "sec_num": "5"
            },
            {
                "text": "3. the second best system on ESESR entailment pairs; and 4. the default algorithm that gives \"yes\" to all the entailments, due to the fact that the majority of the ESESR entailment pairs in the RTE2 test set are true entailments",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation",
                "sec_num": "5"
            },
            {
                "text": "Refer to the Table 3 to find the evaluation of the performance with respect to the baselines. We are particularly interested in the balanced dataset, as we do not know the proportion of the true and false entailments of a given type in an arbitrary context.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 13,
                        "end": 20,
                        "text": "Table 3",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Evaluation",
                "sec_num": "5"
            },
            {
                "text": "Our system gets 80% accuracy on the unbalanced dataset and 59% accuracy on the balanced dataset. That means that our method performs noticeably better than the average of the methods from RTE2 Challenge and the \"yes\" to all baseline on both datasets. It scores about 18% higher than the average and 2% higher than the \"yes\" to all algorithm on the unbalanced dataset; and 9% higher than these two algorithms on the balanced dataset. Further, our results are higher for all but the best system in the Challenge for this subtype.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation",
                "sec_num": "5"
            },
            {
                "text": "In the current work we have identified a subtype of entailment pairs; presented a machine learner that distinguishes the subtype among the entailment pairs; and presented a probabilistic model that evaluates the conditional probability of the hypothesis given the text. We then evaluated the algorithm against a baseline and two other systems. The result is that the algorithm performs significantly better than the baseline (from 9% up to 18% better) and all but the best system in the Challenge for the type of entailment pairs we are interested in.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The conclusions and future work",
                "sec_num": "6"
            },
            {
                "text": "We plan to address other subtypes similar to ES-ESR entailment groups thus contributing more to the recognizing specific types of entailments.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The conclusions and future work",
                "sec_num": "6"
            },
            {
                "text": "http://www.pascal-network.org/Challenges/RTE/ Proceedings of the Australasian Language Technology Workshop 2007, pages 4-12",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "We use the definition ofQuirk et al. (1985) here.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "See, for example, system descriptions in the proceedings of the RTE1 and RTE2 Challenges at http://www.cs.biu.ac.il/\u02dcglikmao/rte05 and http://ir-srv.cs.biu.ac.il:64080/RTE2/proceedings/ respectively. false entailment is not found as the predicate of the hypothesis, important in this case, is not taken into account by these generic features.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "See, for example, \"A Statistical MT Tutorial Workbook,\" unpublished, August 1999 at http://www.isi.edu/\u02dcknight/.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "we have used the PhraseBuilder by Simon Zwarts http://www.ics.mq.edu.au/\u02dcszwarts/Downloads.php",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            }
        ],
        "back_matter": [],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Syntactically encoded semantic relationships type of entailment pairs",
                "authors": [
                    {
                        "first": "Elena",
                        "middle": [],
                        "last": "Akhmatova",
                        "suffix": ""
                    },
                    {
                        "first": "Mark",
                        "middle": [],
                        "last": "Dras",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Elena Akhmatova and Mark Dras. 2007. Syn- tactically encoded semantic relationships type of entailment pairs. Available from http://www.ics.mq.edu.au/\u02dcelena/pub.html.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "Paraphrasing with bilingual parallel corpora",
                "authors": [
                    {
                        "first": "Colin",
                        "middle": [
                            "J"
                        ],
                        "last": "Bannard",
                        "suffix": ""
                    },
                    {
                        "first": "Chris",
                        "middle": [],
                        "last": "Callison-Burch",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "ACL",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Colin J. Bannard and Chris Callison-Burch. 2005. Para- phrasing with bilingual parallel corpora. In ACL.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "Gate-a general architecture for text engineering",
                "authors": [
                    {
                        "first": "Hamish",
                        "middle": [],
                        "last": "Cunningham",
                        "suffix": ""
                    },
                    {
                        "first": "Yorick",
                        "middle": [],
                        "last": "Wilks",
                        "suffix": ""
                    },
                    {
                        "first": "Robert",
                        "middle": [
                            "J"
                        ],
                        "last": "Gaizauskas",
                        "suffix": ""
                    }
                ],
                "year": 1996,
                "venue": "COLING",
                "volume": "",
                "issue": "",
                "pages": "1057--1060",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Hamish Cunningham, Yorick Wilks, and Robert J. Gaizauskas. 1996. Gate-a general architecture for text engineering. In COLING, pages 1057-1060.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "The pascal recognising textual entailment challenge",
                "authors": [
                    {
                        "first": "Oren",
                        "middle": [],
                        "last": "Ido Dagan",
                        "suffix": ""
                    },
                    {
                        "first": "Bernardo",
                        "middle": [],
                        "last": "Glickman",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Magnini",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "MLCW",
                "volume": "",
                "issue": "",
                "pages": "177--190",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The pascal recognising textual entailment chal- lenge. In MLCW, pages 177-190.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "The wikipedia xml corpus. SIGIR Forum",
                "authors": [
                    {
                        "first": "Ludovic",
                        "middle": [],
                        "last": "Denoyer",
                        "suffix": ""
                    },
                    {
                        "first": "Patrick",
                        "middle": [],
                        "last": "Gallinari",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "",
                "volume": "40",
                "issue": "",
                "pages": "64--69",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Ludovic Denoyer and Patrick Gallinari. 2006. The wikipedia xml corpus. SIGIR Forum, 40(1):64-69.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "A lexical alignment model for probabilistic textual entailment",
                "authors": [
                    {
                        "first": "Oren",
                        "middle": [],
                        "last": "Glickman",
                        "suffix": ""
                    },
                    {
                        "first": "Ido",
                        "middle": [],
                        "last": "Dagan",
                        "suffix": ""
                    },
                    {
                        "first": "Moshe",
                        "middle": [],
                        "last": "Koppel",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "MLCW",
                "volume": "",
                "issue": "",
                "pages": "287--298",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Oren Glickman, Ido Dagan, and Moshe Koppel. 2005. A lexical alignment model for probabilistic textual en- tailment. In MLCW, pages 287-298.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "A corpus-based account of regular polysemy: The case of context-sensitive adjectives",
                "authors": [
                    {
                        "first": "Maria",
                        "middle": [],
                        "last": "Lapata",
                        "suffix": ""
                    }
                ],
                "year": 2001,
                "venue": "NAACL",
                "volume": "",
                "issue": "",
                "pages": "63--70",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Maria Lapata. 2001. A corpus-based account of regular polysemy: The case of context-sensitive adjectives. In NAACL, pages 63-70.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "Dependency-based evaluation of minipar",
                "authors": [
                    {
                        "first": "Dekang",
                        "middle": [],
                        "last": "Lin",
                        "suffix": ""
                    }
                ],
                "year": 1998,
                "venue": "Workshop on the Evaluation of Parsing Systems",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Dekang Lin. 1998. Dependency-based evaluation of minipar. In Workshop on the Evaluation of Parsing Systems.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "Machine Learning",
                "authors": [
                    {
                        "first": "Tom",
                        "middle": [],
                        "last": "Mitchell",
                        "suffix": ""
                    }
                ],
                "year": 1997,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Tom Mitchell. 1997. Machine Learning. McGraw Hill.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "A grammar of contemporary English",
                "authors": [
                    {
                        "first": "Randolth",
                        "middle": [],
                        "last": "Quirk",
                        "suffix": ""
                    },
                    {
                        "first": "Sidney",
                        "middle": [],
                        "last": "Greenbaum",
                        "suffix": ""
                    }
                ],
                "year": 1985,
                "venue": "Geoffrey Leech",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Randolth Quirk, Sidney Greenbaum, Geoffrey Leech, and Jan Svartvik. 1985. A grammar of contemporary English. Longman, Singapore.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "Parsing english with a link grammar",
                "authors": [
                    {
                        "first": "Daniel",
                        "middle": [],
                        "last": "Sleator",
                        "suffix": ""
                    },
                    {
                        "first": "Davy",
                        "middle": [],
                        "last": "Temperley",
                        "suffix": ""
                    }
                ],
                "year": 1991,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Daniel Sleator and Davy Temperley. 1991. Pars- ing english with a link grammar. Available at http://www.link.cs.cmu.edu/link/papers/index.html.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "What syntax can contribute in the entailment task",
                "authors": [
                    {
                        "first": "Lucy",
                        "middle": [],
                        "last": "Vanderwende",
                        "suffix": ""
                    },
                    {
                        "first": "William",
                        "middle": [
                            "B"
                        ],
                        "last": "Dolan",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "MLCW",
                "volume": "",
                "issue": "",
                "pages": "205--216",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Lucy Vanderwende and William B. Dolan. 2005. What syntax can contribute in the entailment task. In MLCW, pages 205-216.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "Microsoft research at rte-2: Syntactic contributions in the entailment task: an implementation",
                "authors": [
                    {
                        "first": "Lucy",
                        "middle": [],
                        "last": "Vanderwende",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "2nd PASCAL Challenges Workshop on Recognizing Textual Entailment",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Lucy Vanderwende, Arul Menezes, and Rion Snow. 2006. Microsoft research at rte-2: Syntactic contribu- tions in the entailment task: an implementation. In 2nd PASCAL Challenges Workshop on Recognizing Tex- tual Entailment.",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations",
                "authors": [
                    {
                        "first": "H",
                        "middle": [],
                        "last": "Ian",
                        "suffix": ""
                    },
                    {
                        "first": "Eibe",
                        "middle": [],
                        "last": "Witten",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Frank",
                        "suffix": ""
                    }
                ],
                "year": 1999,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Ian H. Witten and Eibe Frank. 1999. Data Mining: Prac- tical Machine Learning Tools and Techniques with Java Implementations. Morgan Kaufmann.",
                "links": null
            }
        },
        "ref_entries": {
            "TABREF2": {
                "html": null,
                "num": null,
                "type_str": "table",
                "text": "Some relations extracted from the first 500,000 sentences of the Wikipedia XML corpus. The three columns give the relation, its rank in a sorted list, and the value \u2212 log 2 (P (h)) respectively. (4) Text: \"Relative size and the power of the purse are certainly key factors,\" says Samuel L. Husk, executive director of the Council of Great City Schools.Hypothesis: Samuel L. Husk works for the Council of Great City Schools.",
                "content": "<table><tr><td>There is a direct syntactic connection between</td></tr><tr><td>Samuel L. Husk and executive director of the Coun-</td></tr><tr><td>cil of Great City Schools. By contrast, consider ex-</td></tr><tr><td>ample 5.</td></tr><tr><td>(5) Text: Both aftershocks had their epicentre</td></tr><tr><td>around the Nicobar island group in the south</td></tr><tr><td>of archipelago that lies close to Indonesia,</td></tr><tr><td>India's Meteorological Department said.</td></tr><tr><td>Hypothesis: India's Meteorological</td></tr><tr><td>Department operates from Indonesia.</td></tr></table>"
            },
            "TABREF3": {
                "html": null,
                "num": null,
                "type_str": "table",
                "text": "Text: The State Department is making the unusual offer of giving expedited visas to the Cuban sons of Iraq War hero Sgt. Carlos Lazo, so they can visit him in the United States, people familiar with the case said Friday. Iraq War hero.n Sgt. Carlos Lazo GN MX connects modifying phrases with commas to preceding nouns. Thus, Sgt. Carlos Lazo is connected to the Iraq War hero in Iraq War hero Sgt. Carlos Lazo by the GN link. It is the same for the Maricopa County Superior Court Judge and Lindsay Ellis in the Maricopa County Superior Court Judge Lindsay Ellis, see example (8). In case the Iraq War hero and Sgt. Carlos Lazo were in the sentence in the relation of apposition, for example, Sgt. Carlos Lazo, an Iraq War hero, they would be connected by an MX link. That makes GN and MX links to be equivalent for us here. The parts connected by the links GN and MX are substitutable, Sgt. Carlos Lazo is a hero, Lindsay Ellis is a judge. Thus, if the head nouns in Maricopa County Superior Court Judge and Iraq War hero are not aligned the hypothesis still might be true.",
                "content": "<table><tr><td>(8) Text: Maricopa County Superior Court Judge</td></tr><tr><td>Lindsay Ellis also ordered Miss Bickel to pay</td></tr><tr><td>$5,000 in restitution to Miss Tomazin's family</td></tr><tr><td>and to perform 40 hours per week of</td></tr><tr><td>community service indefinitely.</td></tr><tr><td>Hypothesis: Lindsay Ellis occupies a post at</td></tr><tr><td>the Superior Court.</td></tr><tr><td>Hypothesis: Sgt. Carlos Lazo worked in Iraq.</td></tr><tr><td>The link GN connects a proper noun to a preced-</td></tr><tr><td>ing common noun which introduces it.</td></tr></table>"
            }
        }
    }
}