File size: 115,950 Bytes
6fa4bc9
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
106
107
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
142
143
144
145
146
147
148
149
150
151
152
153
154
155
156
157
158
159
160
161
162
163
164
165
166
167
168
169
170
171
172
173
174
175
176
177
178
179
180
181
182
183
184
185
186
187
188
189
190
191
192
193
194
195
196
197
198
199
200
201
202
203
204
205
206
207
208
209
210
211
212
213
214
215
216
217
218
219
220
221
222
223
224
225
226
227
228
229
230
231
232
233
234
235
236
237
238
239
240
241
242
243
244
245
246
247
248
249
250
251
252
253
254
255
256
257
258
259
260
261
262
263
264
265
266
267
268
269
270
271
272
273
274
275
276
277
278
279
280
281
282
283
284
285
286
287
288
289
290
291
292
293
294
295
296
297
298
299
300
301
302
303
304
305
306
307
308
309
310
311
312
313
314
315
316
317
318
319
320
321
322
323
324
325
326
327
328
329
330
331
332
333
334
335
336
337
338
339
340
341
342
343
344
345
346
347
348
349
350
351
352
353
354
355
356
357
358
359
360
361
362
363
364
365
366
367
368
369
370
371
372
373
374
375
376
377
378
379
380
381
382
383
384
385
386
387
388
389
390
391
392
393
394
395
396
397
398
399
400
401
402
403
404
405
406
407
408
409
410
411
412
413
414
415
416
417
418
419
420
421
422
423
424
425
426
427
428
429
430
431
432
433
434
435
436
437
438
439
440
441
442
443
444
445
446
447
448
449
450
451
452
453
454
455
456
457
458
459
460
461
462
463
464
465
466
467
468
469
470
471
472
473
474
475
476
477
478
479
480
481
482
483
484
485
486
487
488
489
490
491
492
493
494
495
496
497
498
499
500
501
502
503
504
505
506
507
508
509
510
511
512
513
514
515
516
517
518
519
520
521
522
523
524
525
526
527
528
529
530
531
532
533
534
535
536
537
538
539
540
541
542
543
544
545
546
547
548
549
550
551
552
553
554
555
556
557
558
559
560
561
562
563
564
565
566
567
568
569
570
571
572
573
574
575
576
577
578
579
580
581
582
583
584
585
586
587
588
589
590
591
592
593
594
595
596
597
598
599
600
601
602
603
604
605
606
607
608
609
610
611
612
613
614
615
616
617
618
619
620
621
622
623
624
625
626
627
628
629
630
631
632
633
634
635
636
637
638
639
640
641
642
643
644
645
646
647
648
649
650
651
652
653
654
655
656
657
658
659
660
661
662
663
664
665
666
667
668
669
670
671
672
673
674
675
676
677
678
679
680
681
682
683
684
685
686
687
688
689
690
691
692
693
694
695
696
697
698
699
700
701
702
703
704
705
706
707
708
709
710
711
712
713
714
715
716
717
718
719
720
721
722
723
724
725
726
727
728
729
730
731
732
733
734
735
736
737
738
739
740
741
742
743
744
745
746
747
748
749
750
751
752
753
754
755
756
757
758
759
760
761
762
763
764
765
766
767
768
769
770
771
772
773
774
775
776
777
778
779
780
781
782
783
784
785
786
787
788
789
790
791
792
793
794
795
796
797
798
799
800
801
802
803
804
805
806
807
808
809
810
811
812
813
814
815
816
817
818
819
820
821
822
823
824
825
826
827
828
829
830
831
832
833
834
835
836
837
838
839
840
841
842
843
844
845
846
847
848
849
850
851
852
853
854
855
856
857
858
859
860
861
862
863
864
865
866
867
868
869
870
871
872
873
874
875
876
877
878
879
880
881
882
883
884
885
886
887
888
889
890
891
892
893
894
895
896
897
898
899
900
901
902
903
904
905
906
907
908
909
910
911
912
913
914
915
916
917
918
919
920
921
922
923
924
925
926
927
928
929
930
931
932
933
934
935
936
937
938
939
940
941
942
943
944
945
946
947
948
949
950
951
952
953
954
955
956
957
958
959
960
961
962
963
964
965
966
967
968
969
970
971
972
973
974
975
976
977
978
979
980
981
982
983
984
985
986
987
988
989
990
991
992
993
994
995
996
997
998
999
1000
1001
1002
1003
1004
1005
1006
1007
1008
1009
1010
1011
1012
1013
1014
1015
1016
1017
1018
1019
1020
1021
1022
1023
1024
1025
1026
1027
1028
1029
1030
1031
1032
1033
1034
1035
1036
1037
1038
1039
1040
1041
1042
1043
1044
1045
1046
1047
1048
1049
1050
1051
1052
1053
1054
1055
1056
1057
1058
1059
1060
1061
1062
1063
1064
1065
1066
1067
1068
1069
1070
1071
1072
1073
1074
1075
1076
1077
1078
1079
1080
1081
1082
1083
1084
1085
1086
1087
1088
1089
1090
1091
1092
1093
1094
1095
1096
1097
1098
1099
1100
1101
1102
1103
1104
1105
1106
1107
1108
1109
1110
1111
1112
1113
1114
1115
1116
1117
1118
1119
1120
1121
1122
1123
1124
1125
1126
1127
1128
1129
1130
1131
1132
1133
1134
1135
1136
1137
1138
1139
1140
1141
1142
1143
1144
1145
1146
1147
1148
1149
1150
1151
1152
1153
1154
1155
1156
1157
1158
1159
1160
1161
1162
1163
1164
1165
1166
1167
1168
1169
1170
1171
1172
1173
1174
1175
1176
1177
1178
1179
1180
1181
1182
1183
1184
1185
1186
1187
1188
1189
1190
1191
1192
1193
1194
1195
1196
1197
1198
1199
1200
1201
1202
1203
1204
1205
1206
1207
1208
1209
1210
1211
1212
1213
1214
1215
1216
1217
1218
1219
1220
1221
1222
1223
1224
1225
1226
1227
1228
1229
1230
1231
1232
1233
1234
1235
1236
1237
1238
1239
1240
1241
1242
1243
1244
1245
1246
1247
1248
1249
1250
1251
1252
1253
1254
1255
1256
1257
1258
1259
1260
1261
1262
1263
1264
1265
1266
1267
1268
1269
1270
1271
1272
1273
1274
1275
1276
1277
1278
1279
1280
1281
1282
1283
1284
1285
1286
1287
1288
1289
1290
1291
1292
1293
1294
1295
1296
1297
1298
1299
1300
1301
1302
1303
1304
1305
1306
1307
1308
1309
1310
1311
1312
1313
1314
1315
1316
1317
1318
1319
1320
1321
1322
1323
1324
1325
1326
1327
1328
1329
1330
1331
1332
1333
1334
1335
1336
1337
1338
1339
1340
1341
1342
1343
1344
1345
1346
1347
1348
1349
1350
1351
1352
1353
1354
1355
1356
1357
1358
1359
1360
1361
1362
1363
1364
1365
1366
1367
1368
1369
1370
1371
1372
1373
1374
1375
1376
1377
1378
1379
1380
1381
1382
1383
1384
1385
1386
1387
1388
1389
1390
1391
1392
1393
1394
1395
1396
1397
1398
1399
1400
1401
1402
1403
1404
1405
1406
1407
1408
1409
1410
1411
1412
1413
1414
1415
1416
1417
1418
1419
1420
1421
1422
1423
1424
1425
1426
1427
1428
1429
1430
1431
1432
1433
1434
1435
1436
1437
1438
1439
1440
1441
1442
1443
1444
1445
1446
1447
1448
1449
1450
1451
1452
1453
1454
1455
1456
1457
1458
1459
1460
1461
1462
1463
1464
1465
1466
1467
1468
1469
1470
1471
1472
1473
1474
1475
1476
1477
1478
1479
1480
1481
1482
1483
1484
1485
1486
1487
1488
1489
1490
1491
1492
1493
1494
1495
1496
1497
1498
1499
1500
1501
1502
1503
1504
1505
1506
1507
1508
1509
1510
1511
1512
1513
1514
1515
1516
1517
1518
1519
1520
1521
1522
1523
1524
1525
1526
1527
1528
1529
1530
1531
1532
1533
1534
1535
1536
1537
1538
1539
1540
1541
1542
1543
1544
1545
1546
1547
1548
1549
1550
1551
1552
1553
1554
1555
1556
1557
1558
1559
1560
1561
1562
1563
1564
1565
1566
1567
1568
1569
1570
1571
1572
1573
1574
1575
1576
1577
1578
1579
1580
1581
1582
1583
1584
1585
1586
1587
1588
1589
1590
1591
1592
1593
1594
1595
1596
1597
1598
1599
1600
1601
1602
1603
1604
1605
1606
1607
1608
1609
1610
1611
1612
1613
1614
1615
1616
1617
1618
1619
1620
1621
1622
1623
1624
1625
1626
1627
1628
1629
1630
1631
1632
1633
1634
1635
1636
1637
1638
1639
1640
1641
1642
1643
1644
1645
1646
1647
1648
1649
1650
1651
1652
1653
1654
1655
1656
1657
1658
1659
1660
1661
1662
1663
1664
1665
1666
1667
1668
1669
1670
1671
1672
1673
1674
1675
1676
1677
1678
1679
1680
1681
1682
1683
1684
1685
1686
1687
1688
1689
1690
1691
1692
1693
1694
1695
1696
1697
1698
1699
1700
1701
1702
1703
1704
1705
1706
1707
1708
1709
1710
1711
1712
1713
1714
1715
1716
1717
1718
1719
1720
1721
1722
1723
1724
1725
1726
1727
1728
1729
1730
1731
1732
1733
1734
1735
1736
1737
1738
1739
1740
1741
1742
1743
1744
1745
1746
1747
1748
1749
1750
1751
1752
1753
1754
1755
1756
1757
1758
1759
1760
1761
1762
1763
1764
1765
1766
1767
1768
1769
1770
1771
1772
1773
1774
1775
1776
1777
1778
1779
1780
1781
1782
1783
1784
1785
1786
1787
1788
1789
1790
1791
1792
1793
1794
1795
1796
1797
1798
1799
1800
1801
1802
1803
1804
1805
1806
1807
1808
1809
1810
1811
1812
1813
1814
1815
1816
1817
1818
1819
1820
1821
1822
1823
1824
1825
1826
1827
1828
1829
1830
1831
1832
1833
1834
1835
1836
1837
1838
1839
1840
1841
1842
1843
1844
1845
1846
1847
1848
1849
1850
1851
1852
1853
1854
1855
1856
1857
1858
1859
1860
1861
1862
1863
1864
1865
1866
1867
1868
1869
1870
1871
1872
1873
1874
1875
1876
1877
1878
1879
1880
1881
1882
1883
1884
1885
1886
1887
1888
1889
1890
1891
1892
1893
1894
1895
1896
1897
1898
1899
1900
1901
1902
1903
1904
1905
1906
1907
1908
1909
1910
1911
1912
1913
1914
1915
1916
1917
1918
1919
1920
1921
1922
1923
1924
1925
1926
1927
1928
1929
1930
1931
1932
1933
1934
1935
1936
1937
1938
1939
1940
1941
1942
1943
1944
1945
1946
1947
1948
1949
1950
1951
1952
1953
1954
1955
1956
1957
1958
1959
1960
1961
1962
1963
1964
1965
1966
1967
1968
1969
1970
1971
1972
1973
1974
1975
1976
1977
1978
1979
1980
1981
1982
1983
1984
1985
1986
1987
1988
1989
1990
1991
1992
1993
1994
1995
1996
1997
1998
1999
2000
2001
2002
2003
2004
2005
2006
2007
2008
2009
2010
2011
2012
2013
2014
2015
2016
2017
2018
2019
2020
2021
2022
2023
2024
2025
2026
2027
2028
2029
2030
2031
2032
2033
2034
2035
2036
2037
2038
2039
2040
2041
2042
2043
2044
2045
2046
2047
2048
2049
2050
2051
2052
2053
2054
2055
2056
2057
2058
2059
2060
2061
2062
2063
2064
2065
2066
2067
2068
2069
2070
2071
2072
2073
2074
2075
2076
2077
2078
2079
2080
2081
2082
2083
2084
2085
2086
2087
2088
2089
2090
2091
2092
2093
2094
2095
2096
2097
2098
2099
2100
2101
2102
2103
2104
2105
2106
2107
2108
2109
2110
2111
2112
2113
2114
2115
2116
2117
2118
2119
2120
2121
2122
2123
2124
2125
2126
2127
2128
2129
2130
2131
2132
2133
2134
2135
2136
2137
2138
2139
2140
2141
2142
2143
2144
2145
2146
2147
2148
2149
2150
2151
2152
2153
2154
2155
2156
2157
2158
2159
2160
2161
2162
2163
2164
2165
2166
2167
2168
2169
2170
2171
2172
2173
2174
2175
2176
2177
2178
2179
2180
2181
2182
2183
2184
2185
2186
2187
2188
2189
2190
2191
2192
2193
2194
2195
2196
2197
2198
2199
2200
2201
2202
2203
2204
2205
2206
2207
2208
2209
2210
2211
2212
2213
2214
2215
2216
2217
2218
2219
2220
2221
2222
2223
2224
2225
2226
2227
2228
2229
2230
2231
2232
2233
2234
2235
2236
2237
2238
2239
2240
2241
2242
2243
2244
2245
2246
2247
2248
2249
2250
2251
2252
2253
2254
2255
2256
2257
2258
2259
2260
2261
2262
2263
2264
2265
2266
2267
2268
2269
2270
2271
2272
2273
2274
2275
2276
2277
2278
2279
2280
2281
2282
2283
2284
2285
2286
2287
2288
2289
2290
2291
2292
2293
2294
2295
2296
2297
2298
2299
2300
2301
2302
2303
2304
2305
2306
2307
{
    "paper_id": "P15-1007",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T09:08:09.887561Z"
    },
    "title": "MultiGranCNN: An Architecture for General Matching of Text Chunks on Multiple Levels of Granularity",
    "authors": [
        {
            "first": "Wenpeng",
            "middle": [],
            "last": "Yin",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "University of Munich",
                "location": {
                    "country": "Germany"
                }
            },
            "email": "wenpeng@cis.uni-muenchen.de"
        },
        {
            "first": "Hinrich",
            "middle": [],
            "last": "Sch\u00fctze",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "University of Munich",
                "location": {
                    "country": "Germany"
                }
            },
            "email": ""
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "We present MultiGranCNN, a general deep learning architecture for matching text chunks. MultiGranCNN supports multigranular comparability of representations: shorter sequences in one chunk can be directly compared to longer sequences in the other chunk. Multi-GranCNN also contains a flexible and modularized match feature component that is easily adaptable to different types of chunk matching. We demonstrate stateof-the-art performance of MultiGranCNN on clause coherence and paraphrase identification tasks.",
    "pdf_parse": {
        "paper_id": "P15-1007",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "We present MultiGranCNN, a general deep learning architecture for matching text chunks. MultiGranCNN supports multigranular comparability of representations: shorter sequences in one chunk can be directly compared to longer sequences in the other chunk. Multi-GranCNN also contains a flexible and modularized match feature component that is easily adaptable to different types of chunk matching. We demonstrate stateof-the-art performance of MultiGranCNN on clause coherence and paraphrase identification tasks.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "Many natural language processing (NLP) tasks can be posed as classifying the relationship between two TEXTCHUNKS (cf. , Bordes et al. (2014b) ) where a TEXTCHUNK can be a sentence, a clause, a paragraph or any other sequence of words that forms a unit.",
                "cite_spans": [
                    {
                        "start": 120,
                        "end": 141,
                        "text": "Bordes et al. (2014b)",
                        "ref_id": "BIBREF4"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Paraphrasing (Figure 1 , top) is one task that we address in this paper and that can be formalized as classifying a TEXTCHUNK relation. The two classes correspond to the sentences being (e.g., the pair <p, q + >) or not being (e.g., the pair <p, q \u2212 >) paraphrases of each other. Another task we look at is clause coherence (Figure 1 , bottom). Here the two TEXTCHUNK relation classes correspond to the second clause being (e.g., the pair <x, y + >) or not being (e.g., the pair <x, y \u2212 >) a discourse-coherent continuation of the first clause. Other tasks that can be formalized as TEXTCHUNK relations are question answering (QA) (is the second chunk an answer to the first?), textual inference (does the first chunk imply the second?) and machine translation (are the two chunks translations of each other?). p PDC will also almost certainly fan the flames of speculation about Longhorn's release.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 13,
                        "end": 22,
                        "text": "(Figure 1",
                        "ref_id": "FIGREF0"
                    },
                    {
                        "start": 324,
                        "end": 333,
                        "text": "(Figure 1",
                        "ref_id": "FIGREF0"
                    }
                ],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "q + PDC will also almost certainly reignite speculation about release dates of Microsoft 's new products. q \u2212 PDC is indifferent to the release of Longhorn.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "x The dollar suffered its worst one-day loss in a month, y + falling to 1.7717 marks . . . from 1.7925 marks yesterday. y \u2212 up from 112.78 yen in late New York trading yesterday. In this paper, we present MultiGranCNN, a general architecture for TEXTCHUNK relation classification. MultiGranCNN can be applied to a broad range of different TEXTCHUNK relations. This is a challenge because natural language has a complex structure -both sequential and hierarchicaland because this structure is usually not parallel in the two chunks that must be matched, further increasing the difficulty of the task. A successful detection algorithm therefore needs to capture not only the internal structure of TEXTCHUNKS, but also the rich pattern of their interactions.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "MultiGranCNN is based on two innovations that are critical for successful TEXTCHUNK relation classification. First, the architecture is designed to ensure multigranular comparability. For general matching, we need the ability to match short sequences in one chunk with long sequences in the other chunk. For example, what is expressed by a single word in one chunk (\"reignite\" in q + in the figure) may be expressed by a sequence of several words in its paraphrase (\"fan the flames of\" in p). To meet this objective, we learn representations for words, phrases and the entire sentence that are all mutually comparable; in particular, these representations all have the same dimensionality and live in the same space.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Most prior work (e.g., Blacoe and Lapata (2012; Hu et al. (2014) ) has neglected the need for multigranular comparability and performed matching within fixed levels only, e.g., only words were matched with words or only sentences with sentences. For a general solution to the problem of matching, we instead need the ability to match a unit on a lower level of granularity in one chunk with a unit on a higher level of granularity in the other chunk. Unlike (Socher et al., 2011) , our model does not rely on parsing and it can more exhaustively search the hypothesis space of possible matchings, including matchings that correspond to conflicting segmentations of the input chunks (see Section 5).",
                "cite_spans": [
                    {
                        "start": 23,
                        "end": 47,
                        "text": "Blacoe and Lapata (2012;",
                        "ref_id": "BIBREF1"
                    },
                    {
                        "start": 48,
                        "end": 64,
                        "text": "Hu et al. (2014)",
                        "ref_id": "BIBREF11"
                    },
                    {
                        "start": 458,
                        "end": 479,
                        "text": "(Socher et al., 2011)",
                        "ref_id": "BIBREF25"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Our second contribution is that MultiGranCNN contains a flexible and modularized match feature component. This component computes the basic features that measure how well phrases of the two chunks match. We investigate three different match feature models that demonstrate that a wide variety of different match feature models can be implemented. The match feature models can be swapped in and out of MultiGranCNN, depending on the characteristics of the task to be solved.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Prior work that has addressed matching tasks has usually focused on a single task like QA (Bordes et al., 2014a; Yu et al., 2014) or paraphrasing (Socher et al., 2011; Madnani et al., 2012; Ji and Eisenstein, 2013) . The ARC architectures proposed by Hu et al. (2014) are intended to be more general, but seem to be somewhat limited in their flexibility to model different matching relations; e.g., they do not perform well for paraphrasing.",
                "cite_spans": [
                    {
                        "start": 90,
                        "end": 112,
                        "text": "(Bordes et al., 2014a;",
                        "ref_id": "BIBREF3"
                    },
                    {
                        "start": 113,
                        "end": 129,
                        "text": "Yu et al., 2014)",
                        "ref_id": "BIBREF28"
                    },
                    {
                        "start": 146,
                        "end": 167,
                        "text": "(Socher et al., 2011;",
                        "ref_id": "BIBREF25"
                    },
                    {
                        "start": 168,
                        "end": 189,
                        "text": "Madnani et al., 2012;",
                        "ref_id": "BIBREF20"
                    },
                    {
                        "start": 190,
                        "end": 214,
                        "text": "Ji and Eisenstein, 2013)",
                        "ref_id": "BIBREF13"
                    },
                    {
                        "start": 251,
                        "end": 267,
                        "text": "Hu et al. (2014)",
                        "ref_id": "BIBREF11"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Different match feature models may also be required by factors other than the characteristics of the task. If the amount of labeled training data is small, then we may prefer a match feature model with few parameters that is robust against overfitting. If there is lots of training data, then a richer match feature model may be the right choice. This motivates the need for an architecture like MultiGranCNN that allows selection of the taskappropriate match feature model from a range of different models and its seamless integration into the architecture.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "In remaining parts, Section 2 introduces some related work; Section 3 gives an overview of the proposed MultiGranCNN; Section 4 shows how to learn representations for generalized phrases (gphrases); Section 5 describes the three matching models: DIRECTSIM, INDIRECTSIM and CON-CAT; Section 6 describes the two 2D pooling methods: grid-based pooling and phrase-based pooling; Section 7 describes the match feature CNN; Section 8 summarizes the architecture of MultiGran CNN; and Section 9 presents experiments; finally, Section 10 concludes.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Paraphrase identification (PI) is a typical task of sentence matching and it has been frequently studied (Qiu et al., 2006; Blacoe and Lapata, 2012; Madnani et al., 2012; Ji and Eisenstein, 2013) . Socher et al. (2011) utilized parsing to model the hierarchical structure of sentences and uses unfolding recursive autoencoders to learn representations for single words and phrases acting as nonleaf nodes in the tree. The main difference to MultiGranCNN is that we stack multiple convolution layers to model flexible phrases and learn representations for them, and aim to address more general sentence correspondence. Bach et al. (2014) claimed that elementary discourse units obtained by segmenting sentences play an important role in paraphrasing. Their conclusion also endorses (Socher et al., 2011) 's and our work, for both take interactions between component phrases into account.",
                "cite_spans": [
                    {
                        "start": 105,
                        "end": 123,
                        "text": "(Qiu et al., 2006;",
                        "ref_id": "BIBREF24"
                    },
                    {
                        "start": 124,
                        "end": 148,
                        "text": "Blacoe and Lapata, 2012;",
                        "ref_id": "BIBREF1"
                    },
                    {
                        "start": 149,
                        "end": 170,
                        "text": "Madnani et al., 2012;",
                        "ref_id": "BIBREF20"
                    },
                    {
                        "start": 171,
                        "end": 195,
                        "text": "Ji and Eisenstein, 2013)",
                        "ref_id": "BIBREF13"
                    },
                    {
                        "start": 198,
                        "end": 218,
                        "text": "Socher et al. (2011)",
                        "ref_id": "BIBREF25"
                    },
                    {
                        "start": 781,
                        "end": 802,
                        "text": "(Socher et al., 2011)",
                        "ref_id": "BIBREF25"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "2"
            },
            {
                "text": "QA is another representative sentence matching problem. Yu et al. (2014) modeled sentence representations in a simplified CNN, finally finding the match score by projecting question and answer candidates into the same space. Other relevant QA work includes (Bordes et al., 2014c; Bordes et al., 2014a; Yang et al., 2014; Iyyer et al., 2014) For more general matching, Chopra et al. (2005) and Liu (2013) used a Siamese architecture of shared-weight neural networks (NNs) to model two objects simultaneously, matching their representations and then learning a specific type of sentence relation. We adopt parts of their architecture, but we model phrase representations as well as sentence representations.",
                "cite_spans": [
                    {
                        "start": 56,
                        "end": 72,
                        "text": "Yu et al. (2014)",
                        "ref_id": "BIBREF28"
                    },
                    {
                        "start": 257,
                        "end": 279,
                        "text": "(Bordes et al., 2014c;",
                        "ref_id": "BIBREF5"
                    },
                    {
                        "start": 280,
                        "end": 301,
                        "text": "Bordes et al., 2014a;",
                        "ref_id": "BIBREF3"
                    },
                    {
                        "start": 302,
                        "end": 320,
                        "text": "Yang et al., 2014;",
                        "ref_id": "BIBREF27"
                    },
                    {
                        "start": 321,
                        "end": 340,
                        "text": "Iyyer et al., 2014)",
                        "ref_id": "BIBREF12"
                    },
                    {
                        "start": 368,
                        "end": 388,
                        "text": "Chopra et al. (2005)",
                        "ref_id": "BIBREF7"
                    },
                    {
                        "start": 393,
                        "end": 403,
                        "text": "Liu (2013)",
                        "ref_id": "BIBREF18"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "2"
            },
            {
                "text": "Li and Xu (2012) gave a comprehensive introduction to query-document matching and argued that query and document match at different levels: term, phrase, word sense, topic, structure etc. This also applies to sentence matching. Lu and Li (2013) addressed matching of short texts. Interactions between the two texts were obtained via LDA (Blei et al., 2003) and were then the basis for computing a matching score. Compared to MultiGranCNN, drawbacks of this approach are that LDA parameters are not optimized for the specific task and that the interactions are formed on the level of single words only. Gao et al. (2014) modeled interestingness between two documents with deep NNs. They mapped source-target document pairs to feature vectors in a latent space in such a way that the distance between the source document and its corresponding interesting target in that space was minimized. Interestingness is more like topic relevance, based mainly on the aggregated meaning of keywords, as opposed to more structural relationships as is the case for paraphrasing and clause coherence.",
                "cite_spans": [
                    {
                        "start": 228,
                        "end": 244,
                        "text": "Lu and Li (2013)",
                        "ref_id": "BIBREF19"
                    },
                    {
                        "start": 333,
                        "end": 356,
                        "text": "LDA (Blei et al., 2003)",
                        "ref_id": null
                    },
                    {
                        "start": 602,
                        "end": 619,
                        "text": "Gao et al. (2014)",
                        "ref_id": "BIBREF10"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "2"
            },
            {
                "text": "We briefly discussed (Hu et al., 2014) 's ARC in Section 1. MultiGranCNN is partially inspired by ARC, but introduces multigranular comparability (thus enabling crosslevel matching) and supports a wider range of match feature models.",
                "cite_spans": [
                    {
                        "start": 21,
                        "end": 38,
                        "text": "(Hu et al., 2014)",
                        "ref_id": "BIBREF11"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "2"
            },
            {
                "text": "Our unsupervised learning component (Section 4, last paragraph) resembles word2vec CBOW (Mikolov et al., 2013) , but learns representations of TEXTCHUNKS as well as words. It also resembles PV-DM (Le and Mikolov, 2014), but our TEXTCHUNK representation is derived using a hierarchical architecture based on convolution and pooling.",
                "cite_spans": [
                    {
                        "start": 88,
                        "end": 110,
                        "text": "(Mikolov et al., 2013)",
                        "ref_id": "BIBREF21"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "2"
            },
            {
                "text": "We use convolution-plus-pooling in two different components of MultiGranCNN. The first component, the generalized phrase CNN (gpCNN), will be introduced in Section 4. This component learns representations for generalized phrases (gphrases) where a generalized phrase is a general term for subsequences of all granularities: words, short phrases, long phrases and the sentence itself. The gpCNN architecture has L layers of convolution, corresponding (for L = 2) to words, short phrases, long phrases and the sentence. We test different values of L in our experiments. We train gpCNN on large data in an unsupervised manner and then fine-tune it on labeled training data.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Overview of MultiGranCNN",
                "sec_num": "3"
            },
            {
                "text": "Using a Siamese configuration, two copies of gpCNN, one for each of the two input TEXTCHUNKS, are the input to the match feature model, presented in Section 5. This model produces s 1 \u00d7 s 2 matching features, one for each pair of g-phrases in the two chunks, where s 1 , s 2 are the number of g-phrases in the two chunks, respectively.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Overview of MultiGranCNN",
                "sec_num": "3"
            },
            {
                "text": "The s 1 \u00d7s 2 match feature matrix is first reduced to a fixed size by dynamic 2D pooling. The re-sulting fixed size matrix is then the input to the second convolution-plus-pooling component, the match feature CNN (mfCNN) whose output is fed to a multilayer perceptron (MLP) that produces the final match score. Section 6 will give details.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Overview of MultiGranCNN",
                "sec_num": "3"
            },
            {
                "text": "We use convolution-plus-pooling for both word sequences and match features because we want to compute increasingly abstract features at multiple levels of granularity. To ensure that g-phrases are mutually comparable when computing the s 1 \u00d7 s 2 match feature matrix, we impose the constraint that all g-phrase representations live in the same space and have the same dimensionality. ",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Overview of MultiGranCNN",
                "sec_num": "3"
            },
            {
                "text": "We use several stacked blocks, i.e., convolutionplus-pooling layers, to extract increasingly abstract features of the TEXTCHUNK. The input to the first block are the words of the TEXTCHUNK, represented by CW (Collobert and Weston, 2008) embeddings. Given a TEXTCHUNK of length |S|, let vector c i \u2208 R wd be the concatenated embeddings of words v i\u2212w+1 , . . . , v i where w = 5 is the filter width, d = 50 is the dimensionality of CW embeddings and 0 < i < |S| + w. Embeddings for words v i , i < 1 and i > |S|, are set to zero. We then generate the representation",
                "cite_spans": [
                    {
                        "start": 208,
                        "end": 236,
                        "text": "(Collobert and Weston, 2008)",
                        "ref_id": "BIBREF8"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "gpCNN: Learning Representations for g-Phrases",
                "sec_num": "4"
            },
            {
                "text": "p i \u2208 R d of the g-phrase v i\u2212w+1 , .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "gpCNN: Learning Representations for g-Phrases",
                "sec_num": "4"
            },
            {
                "text": ". . , v i using the convolution matrix W l \u2208 R d\u00d7wd :",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "gpCNN: Learning Representations for g-Phrases",
                "sec_num": "4"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "p i = tanh(W l c i + b l )",
                        "eq_num": "(1)"
                    }
                ],
                "section": "gpCNN: Learning Representations for g-Phrases",
                "sec_num": "4"
            },
            {
                "text": "where block index l = 1, bias b l \u2208 R d . We use wide convolution (i.e., we apply the convolution matrix W l to words v i , i < 1 and i > |S|) because this makes sure that each word v i , 1 \u2264 i \u2264 |S|, can be detected by all weights of W l -as opposed to only the rightmost (resp. leftmost) weights for initial (resp. final) words in narrow convolution. The configuration of convolution layers in following blocks (l > 1) is exactly the same except that the input vectors c i are not words, but the output of pooling from the previous layer of convolution -as we will explain presently. The configuration is the same (e.g., all W l \u2208 R d\u00d7wd ) because, by design, all g-phrase representations have the same dimensionality d. This also ensures that each g-phrase representation can be directly compared with each other g-phrase representation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "gpCNN: Learning Representations for g-Phrases",
                "sec_num": "4"
            },
            {
                "text": "We use dynamic k-max pooling to extract the k l top values from each dimension after convolution in the l th block and the k L top values in the final block. We set",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "gpCNN: Learning Representations for g-Phrases",
                "sec_num": "4"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "k l = max(\u03b1, L \u2212 l L |S| )",
                        "eq_num": "(2)"
                    }
                ],
                "section": "gpCNN: Learning Representations for g-Phrases",
                "sec_num": "4"
            },
            {
                "text": "where",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "gpCNN: Learning Representations for g-Phrases",
                "sec_num": "4"
            },
            {
                "text": "l = 1, \u2022 \u2022 \u2022 , L",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "gpCNN: Learning Representations for g-Phrases",
                "sec_num": "4"
            },
            {
                "text": "is the block index, and \u03b1 = 4 is a constant (cf. Kalchbrenner et al. (2014) ) that makes sure a reasonable minimum number of values is passed on to the next layer. We set k L = 1 (not 4, cf. Kalchbrenner et al. (2014) ) because our design dictates that all g-phrase representations, including the representation of the TEXTCHUNK itself, have the same dimensionality. Example: for L = 4, |S| = 20, the k i are [15, 10, 5, 1]. Dynamic k-max pooling keeps the most important features and allows us to stack multiple blocks to extract hiearchical features: units on consecutive layers correspond to larger and larger parts of the TEXTCHUNK thanks to the subset selection property of pooling. For many tasks, labeled data for training gpCNN is limited. We therefore employ unsupervised training to initialize gpCNN as shown in Figure 2 . Similar to CBOW (Mikolov et al., 2013) , we predict a sampled middle word v i from the average of seven vectors: the TEXTCHUNK representation (the final output of gpCNN) and the three words to the left and to the right of v i . We use noise-contrastive estimation (Mnih and Teh, 2012) for training: 10 noise words are sampled for each true example. Figure 3 : General illustration of match feature model. In this example, both S 1 and S 2 have 10 gphrases, so the match feature matrixF \u2208 R s 1 \u00d7s 2 has size 10 \u00d7 10.",
                "cite_spans": [
                    {
                        "start": 49,
                        "end": 75,
                        "text": "Kalchbrenner et al. (2014)",
                        "ref_id": "BIBREF14"
                    },
                    {
                        "start": 191,
                        "end": 217,
                        "text": "Kalchbrenner et al. (2014)",
                        "ref_id": "BIBREF14"
                    },
                    {
                        "start": 849,
                        "end": 871,
                        "text": "(Mikolov et al., 2013)",
                        "ref_id": "BIBREF21"
                    },
                    {
                        "start": 1097,
                        "end": 1117,
                        "text": "(Mnih and Teh, 2012)",
                        "ref_id": "BIBREF22"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 822,
                        "end": 830,
                        "text": "Figure 2",
                        "ref_id": "FIGREF1"
                    },
                    {
                        "start": 1182,
                        "end": 1190,
                        "text": "Figure 3",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "gpCNN: Learning Representations for g-Phrases",
                "sec_num": "4"
            },
            {
                "text": "Let g 1 , . . . , g s k be an enumeration of the s k gphrases of TEXTCHUNK S k . Let S k \u2208 R s k \u00d7d be the matrix, constructed by concatenating the four matrices of unigram, short phrase, long phrase and sentence representations shown in Figure 2 that contain the learned representations from Section 4 for these s k g-phrases; i.e., row S ki is the learned representation of g i .",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 238,
                        "end": 246,
                        "text": "Figure 2",
                        "ref_id": "FIGREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Match Feature Models",
                "sec_num": "5"
            },
            {
                "text": "The basic design of a match feature model is that we produce an s 1 \u00d7 s 2 matrixF for a pair of TEXTCHUNKS S 1 and S 2 , shown in Figure 3 . F i,j is a score that assesses the relationship between g-phrase g i of S 1 and g-phrase g j of S 2 with respect to the TEXTCHUNK relation of interest (paraphrasing, clause coherence etc). This scoreF i,j is computed based on the vector representations S 1i and S 2j of the two g-phrases. 1 We experiment with three different feature models to compute the match scoreF i,j because we would like our architecture to address a wide variety of different TEXTCHUNK relations. We can model a TEXTCHUNK relation like paraphrasing as \"for each meaning element in one sentence, there must be a similar meaning element in the other sentence\"; thus, a good candidate for the match scoreF i,j is simply vector similarity. In contrast, similarity is a less promising match score for clause coherence; for clause coherence, we want a score that models how good a continuation one g-phrase is for the other. These considerations motivate us to define three different match feature models that we will introduce now.",
                "cite_spans": [
                    {
                        "start": 430,
                        "end": 431,
                        "text": "1",
                        "ref_id": null
                    }
                ],
                "ref_spans": [
                    {
                        "start": 130,
                        "end": 138,
                        "text": "Figure 3",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Match Feature Models",
                "sec_num": "5"
            },
            {
                "text": "The first match feature model is DIRECTSIM. This model computes the match score of two gphrases as their similarity using a radial basis function kernel:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Match Feature Models",
                "sec_num": "5"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "F i,j = exp( \u2212||S 1i \u2212 S 2j || 2 2\u03b2 )",
                        "eq_num": "(3)"
                    }
                ],
                "section": "Match Feature Models",
                "sec_num": "5"
            },
            {
                "text": "where we set \u03b2 = 2 (cf. Wu et al. (2013) ). DIRECTSIM is an appropriate feature model for TEXTCHUNK relations like paraphrasing because in that case direct similarity features are helpful in assessing meaning equivalence.",
                "cite_spans": [
                    {
                        "start": 24,
                        "end": 40,
                        "text": "Wu et al. (2013)",
                        "ref_id": "BIBREF26"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Match Feature Models",
                "sec_num": "5"
            },
            {
                "text": "The second match feature model is INDIRECT-SIM. Instead of computing the similarity directly as we do for DIRECTSIM, we first transform the representation of the g-phrase in one TEXTCHUNK using a transformation matrix M \u2208 R d\u00d7d , then compute the match score by inner product and sigmoid activation:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Match Feature Models",
                "sec_num": "5"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "F i,j = \u03c3(S 1i MS T 2j + b),",
                        "eq_num": "(4)"
                    }
                ],
                "section": "Match Feature Models",
                "sec_num": "5"
            },
            {
                "text": "Our motivation is that for a TEXTCHUNK relation like clause coherence, the two TEXTCHUNKS need not have any direct similarity. However, if we map the representations of TEXTCHUNK S 1 into an appropriate space then we can hope that similarity between these transformed representations of S 1 and the representations of TEXTCHUNK S 2 do yield useful features. We will see that this hope is borne out by our experiments.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Match Feature Models",
                "sec_num": "5"
            },
            {
                "text": "The third match feature model is CONCAT. This is a general model that can learn any weighted combination of the values of the two vectors:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Match Feature Models",
                "sec_num": "5"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "F i,j = \u03c3(w T e i,j + b)",
                        "eq_num": "(5)"
                    }
                ],
                "section": "Match Feature Models",
                "sec_num": "5"
            },
            {
                "text": "where e i,j \u2208 R 2d is the concatenation of S 1i and S 2j . We can learn different combination weights w to solve different types of TEXTCHUNK matching.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Match Feature Models",
                "sec_num": "5"
            },
            {
                "text": "We call this match feature model CONCAT because we implement it by concatenating g-phrase vectors to form a tensor as shown in Figure 4 .",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 127,
                        "end": 135,
                        "text": "Figure 4",
                        "ref_id": "FIGREF2"
                    }
                ],
                "eq_spans": [],
                "section": "Match Feature Models",
                "sec_num": "5"
            },
            {
                "text": "The match feature models implement multigranular comparability: they match all units in one TEXTCHUNK with all units in the other TEXTCHUNK. This is necessary because a general solution to matching must match a low-level unit like \"reignite\" to a higher-level unit like \"fan the flames of\" (Figure 1 ). Unlike (Socher et al., 2011) , our model does not rely on parsing; therefore, it can more exhaustively search the hypothesis space of possible matchings: mfCNN covers a wide variety of different, possibly overlapping units, not just those of a single parse tree.",
                "cite_spans": [
                    {
                        "start": 310,
                        "end": 331,
                        "text": "(Socher et al., 2011)",
                        "ref_id": "BIBREF25"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 290,
                        "end": 299,
                        "text": "(Figure 1",
                        "ref_id": "FIGREF0"
                    }
                ],
                "eq_spans": [],
                "section": "Match Feature Models",
                "sec_num": "5"
            },
            {
                "text": "The match feature models generate an s 1 \u00d7 s 2 matrix. Since it has variable size, we apply two different dynamic 2D pooling methods, grid-based pooling and phrase-focused pooling, to transform it to a fixed size matrix.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Dynamic 2D Pooling",
                "sec_num": "6"
            },
            {
                "text": "We need to mapF \u2208 R s 1 \u00d7s 2 into a matrix F of fixed size s * \u00d7 s * where s * is a parameter. Gridbased pooling dividesF into s * \u00d7 s * nonoverlapping (dynamic) pools and copies the maximum value in each dynamic pool to F. This method is similar to (Socher et al., 2011) , but preserves locality better.",
                "cite_spans": [
                    {
                        "start": 250,
                        "end": 271,
                        "text": "(Socher et al., 2011)",
                        "ref_id": "BIBREF25"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Grid-based pooling",
                "sec_num": "6.1"
            },
            {
                "text": "F can be split into equal regions only if both s 1 and s 2 are divisible by s * . Otherwise, for s 1 > s * and if s 1 mod s * = b, the dynamic pools in the first s * \u2212 b splits each have s 1 s * rows while the remaining b splits each have s 1 s * + 1 rows. In Figure 5 , a s 1 \u00d7 s 2 = 4 \u00d7 5 matrix (left) is split into s * \u00d7 s * = 3 \u00d7 3 dynamic pools (middle): each row is split into [1, 1, 2] and each column is split into [1, 2, 2].",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 260,
                        "end": 268,
                        "text": "Figure 5",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Grid-based pooling",
                "sec_num": "6.1"
            },
            {
                "text": "If s 1 < s * , we first repeat all rows in batch style with size s 1 until no fewer than s * rows remain. Then the first s * rows are kept and split into s * dynamic pools. The same principle applies to the partitioning of columns. In Figure 5 (right) , the areas with dashed lines and dotted lines are repeated parts for rows and columns, respectively; each cell is its own dynamic pool.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 235,
                        "end": 251,
                        "text": "Figure 5 (right)",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Grid-based pooling",
                "sec_num": "6.1"
            },
            {
                "text": "In the match feature matrixF \u2208 R s 1 \u00d7s 2 , row i (resp. column j) contains all feature values for gphrase g i of S 1 (resp. g j of S 2 ). Phrase-focused pooling attempts to pick the largest match features Figure 5 : Partition methods in grid-based pooling. Original matrix with size 4 \u00d7 5 is mapped into matrix with size 3 \u00d7 3 and matrix with size 6 \u00d7 7, respectively. Each dynamic pool is distinguished by a border of empty white space around it.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 206,
                        "end": 214,
                        "text": "Figure 5",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Phrase-focused pooling",
                "sec_num": "6.2"
            },
            {
                "text": "for a g-phrase g on the assumption that they are the best basis for assessing the relation of g with other g-phrases. To implement this, we sort the values of each row i (resp. each column j) in decreasing order giving us a matrixF r \u2208 R s 1 \u00d7s 2 with sorted rows (resp.F c \u2208 R s 1 \u00d7s 2 with sorted columns). Then we concatenate the columns ofF r (resp. the rows ofF c ) resulting in list",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Phrase-focused pooling",
                "sec_num": "6.2"
            },
            {
                "text": "F r = {f r 1 , . . . , f r s 1 s 2 } (resp. F c = {f c 1 , . . . , f c s 1 s 2 }) where each f r (f c )",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Phrase-focused pooling",
                "sec_num": "6.2"
            },
            {
                "text": "is an element ofF r (F c ). These two lists are merged into a list F by interleaving them so that members from F r and F c alternate. F is then used to fill the rows of F from top to bottom with each row being filled from left to right. 2",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Phrase-focused pooling",
                "sec_num": "6.2"
            },
            {
                "text": "The output of dynamic 2D pooling is further processed by the match feature CNN (mfCNN) as depicted in Figure 6 . mfCNN extracts increasingly abstract interaction features from lower-level interaction features, using several layers of 2D wide convolution and fixed-size 2D pooling.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 102,
                        "end": 110,
                        "text": "Figure 6",
                        "ref_id": "FIGREF3"
                    }
                ],
                "eq_spans": [],
                "section": "mfCNN: Match feature CNN",
                "sec_num": "7"
            },
            {
                "text": "We call the combination of a 2D wide convolution layer and a fixed-size 2D pooling layer a block, denoted by index b (b = 1, 2 . . .) . In general, let tensor T b \u2208 R c b \u00d7s b \u00d7s b denote the feature maps in block b; block b has c b feature maps, each of size s b \u00d7 s b (T 1 = F \u2208 R 1\u00d7s * \u00d7s * ). Let W b \u2208 R c b+1 \u00d7c b \u00d7f b \u00d7f b be the filter weights of 2D wide convolution in block b, f b \u00d7f b is then the size of sliding convolution regions. Then the convolution is performed as element-wise multiplication 2 IfF has fewer cells than F, then we simply repeat the filling procedure to fill all cells.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 117,
                        "end": 133,
                        "text": "(b = 1, 2 . . .)",
                        "ref_id": "FIGREF0"
                    }
                ],
                "eq_spans": [],
                "section": "mfCNN: Match feature CNN",
                "sec_num": "7"
            },
            {
                "text": "between W b and T b as follows:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "mfCNN: Match feature CNN",
                "sec_num": "7"
            },
            {
                "text": "T b+1 m,i\u22121,j\u22121 = \u03c3( W b m,:,:,: T b :,i\u2212f b :i,j\u2212f b :j +b b m ) (6) where 0\u2264m<c b+1 , 1 \u2264 i, j < s b +f b , b b \u2208 R c b+1 .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "mfCNN: Match feature CNN",
                "sec_num": "7"
            },
            {
                "text": "Subsequently, fixed-size 2D pooling selects dominant features from k b \u00d7 k b non-overlapping windows ofT b+1 to form a tensor as input of block b + 1:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "mfCNN: Match feature CNN",
                "sec_num": "7"
            },
            {
                "text": "T b+1 m,i,j = max(T b+1 m,ik b :(i+1)k b ,jk b :(j+1)k b ) (7)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "mfCNN: Match feature CNN",
                "sec_num": "7"
            },
            {
                "text": "where Hu et al. (2014) used narrow convolution which would limit the number of blocks. 2D wide convolution in this work enables to stack multiple blocks of convolution and pooling to extract higher-level interaction features. We will study the influence of the number of blocks on performance below.",
                "cite_spans": [
                    {
                        "start": 6,
                        "end": 22,
                        "text": "Hu et al. (2014)",
                        "ref_id": "BIBREF11"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "mfCNN: Match feature CNN",
                "sec_num": "7"
            },
            {
                "text": "0 \u2264 i, j < s b +f b \u22121 k b .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "mfCNN: Match feature CNN",
                "sec_num": "7"
            },
            {
                "text": "For the experiments, we set",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "mfCNN: Match feature CNN",
                "sec_num": "7"
            },
            {
                "text": "s * = 40, c b = 50, f b = 5, k b = 2 (b = 1, 2, \u2022 \u2022 \u2022).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "mfCNN: Match feature CNN",
                "sec_num": "7"
            },
            {
                "text": "We can now describe the overall architecture of MultiGranCNN. First, using a Siamese configuration, two copies of gpCNN, one for each of the two input TEXTCHUNKS, produce g-phrase representations on different levels of abstraction (Figure 2 ). Then one of the three match feature models (DIRECTSIM, CONCAT or INDIRECTSIM) produces an s 1 \u00d7 s 2 match feature matrix, each cell of which assesses the match of a pair of gphrases from the two chunks. This match feature matrix is reduced to a fixed size matrix by dynamic 2D pooling (Section 6). As shown in Figure 6 , the resulting fixed size matrix is the input for mfCNN, which extracts interaction features of increasing complexity from the basic interaction features computed by the match feature model. Finally, the output of the last block of mfCNN is the input to an MLP that computes the match score.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 231,
                        "end": 240,
                        "text": "(Figure 2",
                        "ref_id": "FIGREF1"
                    },
                    {
                        "start": 554,
                        "end": 562,
                        "text": "Figure 6",
                        "ref_id": "FIGREF3"
                    }
                ],
                "eq_spans": [],
                "section": "MultiGranCNN",
                "sec_num": "8"
            },
            {
                "text": "MultiGranCNN bears resemblance to previous work on clause and sentence matching (e.g., Hu et al. (2014 ), Socher et al. (2011 ), but it is more general and more flexible. It learns representations of g-phrases, i.e., representations of parts of the TEXTCHUNK at multiple granularities, not just for a single level such as the sentence as ARC-I does (Hu et al., 2014) . MultiGranCNN explores the space of interactions between the two chunks more exhaustively by considering interactions between every unit in one chunk with every other unit in the other chunk, at all levels of granularity. Finally, MultiGranCNN supports a number of different match feature models; the corresponding module can be instantiated in a way that ensures that match features are best suited to support accurate decisions on the TEXTCHUNK relation task that needs to be addressed.",
                "cite_spans": [
                    {
                        "start": 87,
                        "end": 102,
                        "text": "Hu et al. (2014",
                        "ref_id": "BIBREF11"
                    },
                    {
                        "start": 103,
                        "end": 125,
                        "text": "), Socher et al. (2011",
                        "ref_id": "BIBREF25"
                    },
                    {
                        "start": 349,
                        "end": 366,
                        "text": "(Hu et al., 2014)",
                        "ref_id": "BIBREF11"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "MultiGranCNN",
                "sec_num": "8"
            },
            {
                "text": "Suppose the triple (x, y + , y \u2212 ) is given and x matches y + better than y \u2212 . Then our objective is the minimization of the following ranking loss:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Training",
                "sec_num": "9.1"
            },
            {
                "text": "l(x, y + , y \u2212 ) = max(0, 1 + s(x, y \u2212 ) \u2212 s(x, y + ))",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Training",
                "sec_num": "9.1"
            },
            {
                "text": "where s(x, y) is the predicted match score for (x, y). We use stochastic gradient descent with Adagrad (Duchi et al., 2011) , L 2 regularization and minibatch training.",
                "cite_spans": [
                    {
                        "start": 103,
                        "end": 123,
                        "text": "(Duchi et al., 2011)",
                        "ref_id": "BIBREF9"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Training",
                "sec_num": "9.1"
            },
            {
                "text": "We set initial learning rate to 0.05, batch size to 70, L 2 weight to 5 \u2022 10 \u22124 .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Training",
                "sec_num": "9.1"
            },
            {
                "text": "Recall that we employ unsupervised pretraining of representations for g-phrases. We can either freeze these representations in subsequent supervised training; or we can fine-tune them. We study the performance of both regimes.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Training",
                "sec_num": "9.1"
            },
            {
                "text": "As introduced by Hu et al. (2014), the clause coherence task determines for a pair (x, y) of clauses if the sentence \"xy\" is a coherent sentence. We construct a clause coherence dataset as follows (the set used by Hu et al. (2014) is not yet available). We consider all sentences from English Gigaword (Parker et al., 2009) that consist of two comma-separated clauses x and y, with each clause having between five and 30 words. For each y, we choose four clauses y . . . y randomly from the 1000 second clauses that have the highest similarity to y, where similarity is cosine similarity of TF-IDF vectors of the clauses; restricting the alternatives to similar clauses ensures that the task is hard. The clause coherence task then is to select y from the set y, y , . . . , y as the correct continuation of x. We create 21 million examples, each consisting of a first clause x and five second clauses. This set is divided into a training set of 19 million and development and test sets of one million each. An example from the training set is given in Figure 1 .",
                "cite_spans": [
                    {
                        "start": 214,
                        "end": 230,
                        "text": "Hu et al. (2014)",
                        "ref_id": "BIBREF11"
                    },
                    {
                        "start": 302,
                        "end": 323,
                        "text": "(Parker et al., 2009)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [
                    {
                        "start": 1053,
                        "end": 1061,
                        "text": "Figure 1",
                        "ref_id": "FIGREF0"
                    }
                ],
                "eq_spans": [],
                "section": "Clause Coherence Task",
                "sec_num": "9.2"
            },
            {
                "text": "Then, we study the performance variance of different MultiGranCNN setups from three perspectives: a) layers of CNN in both unsupervised (gpCNN) and supervised (mfCNN) training phases; b) different approaches for clause relation feature modeling; c) dynamic pooling methods for generating same-sized feature matrices. Figure 7 (top table) shows that (Hu et al., 2014) 's parameters are good choices for our setup as well. We get best result when both gpCNN and mfCNN have three blocks of convolution and pooling. This suggests that multiple layers of convolution succeed in extracting high-level features that are beneficial for clause coherence. Figure 7 (2nd table) shows that INDIRECTSIM and CONCAT have comparable performance and both outperform DIRECTSIM. DIRECTSIM is expected to perform poorly because the contents in the two clauses usually have little or no overlapping meaning. In contrast, we can imagine that INDIRECTSIM first transforms the first clause x into a counterpart and then matches this counterpart with the second clause y. In CONCAT, each of s 1 \u00d7s 2 pairs of g-phrases is concatentated and supervised training can then learn an unrestricted function to assess the importance of this pair for clause coherence (cf. Eq. 5). Again, this is clearly a more promising TEXTCHUNK relation model for clause coherence than one that relies on DIRECT-SIM. (3rd table) demonstrates that finetuning g-phrase representations gives better performance than freezing them. Also, grid-based and phrase-focused pooling outperform dynamic pooling (Socher et al., 2011) (4th table) . Phrasefocused pooling performs best. Table 1 compares MultiGranCNN to ARC-I and ARC-II, the architectures proposed by Hu et al. (2014) . We also test the five baseline systems from their paper: DeepMatch, WordEmbed, SEN-MLP, SENNA+MLP, URAE+MLP. For Multi-GranCNN, we use the best dev set settings: number of convolution layers in gpCNN and mfCNN is 3; INDIRECTSIM; phrase-focused pooling. Table 1 shows that MultiGranCNN outperforms all other approaches on clause coherence test set.",
                "cite_spans": [
                    {
                        "start": 349,
                        "end": 366,
                        "text": "(Hu et al., 2014)",
                        "ref_id": "BIBREF11"
                    },
                    {
                        "start": 1551,
                        "end": 1572,
                        "text": "(Socher et al., 2011)",
                        "ref_id": "BIBREF25"
                    },
                    {
                        "start": 1705,
                        "end": 1721,
                        "text": "Hu et al. (2014)",
                        "ref_id": "BIBREF11"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 317,
                        "end": 325,
                        "text": "Figure 7",
                        "ref_id": "FIGREF4"
                    },
                    {
                        "start": 646,
                        "end": 654,
                        "text": "Figure 7",
                        "ref_id": "FIGREF4"
                    },
                    {
                        "start": 655,
                        "end": 666,
                        "text": "(2nd table)",
                        "ref_id": null
                    },
                    {
                        "start": 1369,
                        "end": 1380,
                        "text": "(3rd table)",
                        "ref_id": null
                    },
                    {
                        "start": 1573,
                        "end": 1584,
                        "text": "(4th table)",
                        "ref_id": null
                    },
                    {
                        "start": 1624,
                        "end": 1631,
                        "text": "Table 1",
                        "ref_id": "TABREF1"
                    },
                    {
                        "start": 1977,
                        "end": 1984,
                        "text": "Table 1",
                        "ref_id": "TABREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Clause Coherence Task",
                "sec_num": "9.2"
            },
            {
                "text": "We evaluate paraphrase identification (PI) on the PAN corpus (http://bit.ly/mt-para, (Madnani et al., 2012) ), consisting of training and test sets of 10,000 and 3000 sentence pairs, respectively. Sentences are about 40 words long on average.",
                "cite_spans": [
                    {
                        "start": 85,
                        "end": 107,
                        "text": "(Madnani et al., 2012)",
                        "ref_id": "BIBREF20"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Paraphrase Identification Task",
                "sec_num": "9.3"
            },
            {
                "text": "Since PI is a binary classification task, we replace the MLP with a logistic regression layer. As phrase-focused pooling was proven to be optimal, we directly use phrase-focused pooling in PI task without comparison, assuming that the choice of dynamic pooling is task independent.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Paraphrase Identification Task",
                "sec_num": "9.3"
            },
            {
                "text": "For parameter selection, we split the PAN training set into a core training set (core) of size 9000 and a development set (dev) of size 1000. We then train models on core and select parameters based on best performance on dev. The best results on dev are obtained for the following parameters: freezing g-phrase representations, DIRECT-SIM, two convolution layers in gpCNN, no convolution layers in mfCNN. We use these parameter settings to train a model on the entire training set and report performance in Table 2 .",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 508,
                        "end": 515,
                        "text": "Table 2",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Paraphrase Identification Task",
                "sec_num": "9.3"
            },
            {
                "text": "We compare MultiGranCNN to ARC-I/II (Hu et al., 2014) , and two previous papers reporting performance on PAN. Madnani et al. (2012) 94.9 94.7 Table 2 : Results on PAN. \"8MT\" = 8 MT metrics Table 2 shows that MultiGranCNN in combination with MT metrics obtains state-of-the-art performance on PAN. Freezing weights learned in unsupervised training ( Figure 2 ) performs better than fine-tuning them; also, Table 3 shows that the best result is achieved if no convolution is used in mfCNN. Thus, the best configuration for paraphrase identification is to \"forward\" fixed-size interaction matrices as input to the logistic regression, without any intermediate convolution layers.",
                "cite_spans": [
                    {
                        "start": 36,
                        "end": 53,
                        "text": "(Hu et al., 2014)",
                        "ref_id": "BIBREF11"
                    },
                    {
                        "start": 110,
                        "end": 131,
                        "text": "Madnani et al. (2012)",
                        "ref_id": "BIBREF20"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 142,
                        "end": 149,
                        "text": "Table 2",
                        "ref_id": null
                    },
                    {
                        "start": 189,
                        "end": 196,
                        "text": "Table 2",
                        "ref_id": null
                    },
                    {
                        "start": 349,
                        "end": 357,
                        "text": "Figure 2",
                        "ref_id": "FIGREF1"
                    },
                    {
                        "start": 405,
                        "end": 412,
                        "text": "Table 3",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Paraphrase Identification Task",
                "sec_num": "9.3"
            },
            {
                "text": "Freezing weights learned in unsupervised training and no convolution layers in mfCNN both protect against overfitting. Complex deep neural networks are in particular danger of overfitting when training sets are small as in the case of PAN (cf. Hu et al. (2014) ). In contrast, fine-tuning weights and several convolution layers were the optimal setup for clause coherence. For clause coherence, we have a much larger training set and therefore can successfully train a much larger number of parameters. Table 3 shows that CONCAT performs badly for PI while DIRECTSIM and INDIRECTSIM perform well. We can conceptualize PI as the task of determining if each meaning element in S 1 has a similar meaning element in S 2 . The s 1 \u00d7 s 2 DIRECT-SIM feature model directly models this task and the s 1 \u00d7s 2 INDIRECTSIM feature model also models it, but learning a transformation of g-phrase representations before applying similarity. In contrast, CONCAT can learn arbitrary relations between parts of the two sentences, a model that seems to be too unconstrained for PI if insufficient training resources are available.",
                "cite_spans": [
                    {
                        "start": 244,
                        "end": 260,
                        "text": "Hu et al. (2014)",
                        "ref_id": "BIBREF11"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 503,
                        "end": 510,
                        "text": "Table 3",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Paraphrase Identification Task",
                "sec_num": "9.3"
            },
            {
                "text": "In contrast, for the clause coherence task, concatentation worked well and DIRECTSIM worked poorly and we provided an explanation based on the specific properties of clause coherence (see discussion of Figure 7) . We conclude from these results that it is dependent on the task what the best feature model is for matching two linguistic objects. Interestingly, INDIRECTSIM performs well on both tasks. This suggests that INDIRECTSIM is a general feature model for matching, applicable to tasks with very different properties.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 202,
                        "end": 211,
                        "text": "Figure 7)",
                        "ref_id": "FIGREF4"
                    }
                ],
                "eq_spans": [],
                "section": "Paraphrase Identification Task",
                "sec_num": "9.3"
            },
            {
                "text": "In this paper, we present MultiGranCNN, a general deep learning architecture for classifying the relation between two TEXTCHUNKS. Multi-GranCNN supports multigranular comparability of representations: shorter sequences in one TEXTCHUNK can be directly compared to longer sequences in the other TEXTCHUNK. Multi-GranCNN also contains a flexible and modularized match feature component that is easily adaptable to different TEXTCHUNK relations. We demonstrated state-of-the-art performance of MultiGranCNN on paraphrase identification and clause coherence tasks.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "10"
            },
            {
                "text": "In response to a reviewer question, recall that si is the total number of g-phrases of Si, so there is only one s1 \u00d7 s2 matrix, not several on different levels of granularity.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            }
        ],
        "back_matter": [
            {
                "text": "Thanks to CIS members and anonymous reviewers for constructive comments. This work was supported by Baidu (through a Baidu scholarship awarded to Wenpeng Yin) and by Deutsche Forschungsgemeinschaft (grant DFG SCHU 2246/8-2, SPP 1335).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Acknowledgments",
                "sec_num": null
            }
        ],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Exploiting discourse information to identify paraphrases. Expert Systems with Applications",
                "authors": [
                    {
                        "first": "",
                        "middle": [],
                        "last": "Ngo Xuan",
                        "suffix": ""
                    },
                    {
                        "first": "Nguyen",
                        "middle": [],
                        "last": "Bach",
                        "suffix": ""
                    },
                    {
                        "first": "Akira",
                        "middle": [],
                        "last": "Le Minh",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Shimazu",
                        "suffix": ""
                    }
                ],
                "year": 2014,
                "venue": "",
                "volume": "41",
                "issue": "",
                "pages": "2832--2841",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Ngo Xuan Bach, Nguyen Le Minh, and Akira Shi- mazu. 2014. Exploiting discourse information to identify paraphrases. Expert Systems with Applica- tions, 41(6):2832-2841.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "A comparison of vector-based representations for semantic composition",
                "authors": [
                    {
                        "first": "William",
                        "middle": [],
                        "last": "Blacoe",
                        "suffix": ""
                    },
                    {
                        "first": "Mirella",
                        "middle": [],
                        "last": "Lapata",
                        "suffix": ""
                    }
                ],
                "year": 2012,
                "venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
                "volume": "",
                "issue": "",
                "pages": "546--556",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "William Blacoe and Mirella Lapata. 2012. A com- parison of vector-based representations for semantic composition. In Proceedings of the 2012 Joint Con- ference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 546-556. Association for Compu- tational Linguistics.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "Question answering with subgraph embeddings",
                "authors": [
                    {
                        "first": "Antoine",
                        "middle": [],
                        "last": "Bordes",
                        "suffix": ""
                    },
                    {
                        "first": "Sumit",
                        "middle": [],
                        "last": "Chopra",
                        "suffix": ""
                    },
                    {
                        "first": "Jason",
                        "middle": [],
                        "last": "Weston",
                        "suffix": ""
                    }
                ],
                "year": 2014,
                "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Antoine Bordes, Sumit Chopra, and Jason Weston. 2014a. Question answering with subgraph embed- dings. Proceedings of the 2014 Conference on Em- pirical Methods in Natural Language Processing.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "A semantic matching energy function for learning with multi-relational data",
                "authors": [
                    {
                        "first": "Antoine",
                        "middle": [],
                        "last": "Bordes",
                        "suffix": ""
                    },
                    {
                        "first": "Xavier",
                        "middle": [],
                        "last": "Glorot",
                        "suffix": ""
                    },
                    {
                        "first": "Jason",
                        "middle": [],
                        "last": "Weston",
                        "suffix": ""
                    },
                    {
                        "first": "Yoshua",
                        "middle": [],
                        "last": "Bengio",
                        "suffix": ""
                    }
                ],
                "year": 2014,
                "venue": "Machine Learning",
                "volume": "94",
                "issue": "",
                "pages": "233--259",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Antoine Bordes, Xavier Glorot, Jason Weston, and Yoshua Bengio. 2014b. A semantic matching en- ergy function for learning with multi-relational data. Machine Learning, 94(2):233-259.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Open question answering with weakly supervised embedding models",
                "authors": [
                    {
                        "first": "Antoine",
                        "middle": [],
                        "last": "Bordes",
                        "suffix": ""
                    },
                    {
                        "first": "Jason",
                        "middle": [],
                        "last": "Weston",
                        "suffix": ""
                    },
                    {
                        "first": "Nicolas",
                        "middle": [],
                        "last": "Usunier",
                        "suffix": ""
                    }
                ],
                "year": 2014,
                "venue": "Proceedings",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Antoine Bordes, Jason Weston, and Nicolas Usunier. 2014c. Open question answering with weakly su- pervised embedding models. Proceedings of 2014",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases",
                "authors": [],
                "year": null,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "Learning a similarity metric discriminatively, with application to face verification",
                "authors": [
                    {
                        "first": "Sumit",
                        "middle": [],
                        "last": "Chopra",
                        "suffix": ""
                    },
                    {
                        "first": "Raia",
                        "middle": [],
                        "last": "Hadsell",
                        "suffix": ""
                    },
                    {
                        "first": "Yann",
                        "middle": [],
                        "last": "Lecun",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "Computer Vision and Pattern Recognition",
                "volume": "1",
                "issue": "",
                "pages": "539--546",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Sumit Chopra, Raia Hadsell, and Yann LeCun. 2005. Learning a similarity metric discriminatively, with application to face verification. In Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, volume 1, pages 539-546. IEEE.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "A unified architecture for natural language processing: Deep neural networks with multitask learning",
                "authors": [
                    {
                        "first": "Ronan",
                        "middle": [],
                        "last": "Collobert",
                        "suffix": ""
                    },
                    {
                        "first": "Jason",
                        "middle": [],
                        "last": "Weston",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "Proceedings of the 25th international conference on Machine learning",
                "volume": "",
                "issue": "",
                "pages": "160--167",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Pro- ceedings of the 25th international conference on Machine learning, pages 160-167. ACM.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "Adaptive subgradient methods for online learning and stochastic optimization",
                "authors": [
                    {
                        "first": "John",
                        "middle": [],
                        "last": "Duchi",
                        "suffix": ""
                    },
                    {
                        "first": "Elad",
                        "middle": [],
                        "last": "Hazan",
                        "suffix": ""
                    },
                    {
                        "first": "Yoram",
                        "middle": [],
                        "last": "Singer",
                        "suffix": ""
                    }
                ],
                "year": 2011,
                "venue": "The Journal of Machine Learning Research",
                "volume": "12",
                "issue": "",
                "pages": "2121--2159",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Ma- chine Learning Research, 12:2121-2159.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "Modeling interestingness with deep neural networks",
                "authors": [
                    {
                        "first": "Jianfeng",
                        "middle": [],
                        "last": "Gao",
                        "suffix": ""
                    },
                    {
                        "first": "Patrick",
                        "middle": [],
                        "last": "Pantel",
                        "suffix": ""
                    },
                    {
                        "first": "Michael",
                        "middle": [],
                        "last": "Gamon",
                        "suffix": ""
                    },
                    {
                        "first": "Xiaodong",
                        "middle": [],
                        "last": "He",
                        "suffix": ""
                    },
                    {
                        "first": "Li",
                        "middle": [],
                        "last": "Deng",
                        "suffix": ""
                    },
                    {
                        "first": "Yelong",
                        "middle": [],
                        "last": "Shen",
                        "suffix": ""
                    }
                ],
                "year": 2014,
                "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Jianfeng Gao, Patrick Pantel, Michael Gamon, Xi- aodong He, Li Deng, and Yelong Shen. 2014. Mod- eling interestingness with deep neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "Convolutional neural network architectures for matching natural language sentences",
                "authors": [
                    {
                        "first": "Baotian",
                        "middle": [],
                        "last": "Hu",
                        "suffix": ""
                    },
                    {
                        "first": "Zhengdong",
                        "middle": [],
                        "last": "Lu",
                        "suffix": ""
                    },
                    {
                        "first": "Hang",
                        "middle": [],
                        "last": "Li",
                        "suffix": ""
                    },
                    {
                        "first": "Qingcai",
                        "middle": [],
                        "last": "Chen",
                        "suffix": ""
                    }
                ],
                "year": 2014,
                "venue": "Advances in Neural Information Processing Systems",
                "volume": "",
                "issue": "",
                "pages": "2042--2050",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Baotian Hu, Zhengdong Lu, Hang Li, and Qingcai Chen. 2014. Convolutional neural network archi- tectures for matching natural language sentences. In Advances in Neural Information Processing Sys- tems, pages 2042-2050.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "A neural network for factoid question answering over paragraphs",
                "authors": [
                    {
                        "first": "Mohit",
                        "middle": [],
                        "last": "Iyyer",
                        "suffix": ""
                    },
                    {
                        "first": "Jordan",
                        "middle": [],
                        "last": "Boyd-Graber",
                        "suffix": ""
                    },
                    {
                        "first": "Leonardo",
                        "middle": [],
                        "last": "Claudino",
                        "suffix": ""
                    },
                    {
                        "first": "Richard",
                        "middle": [],
                        "last": "Socher",
                        "suffix": ""
                    },
                    {
                        "first": "Hal",
                        "middle": [],
                        "last": "Daum\u00e9",
                        "suffix": ""
                    },
                    {
                        "first": "Iii",
                        "middle": [],
                        "last": "",
                        "suffix": ""
                    }
                ],
                "year": 2014,
                "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing",
                "volume": "",
                "issue": "",
                "pages": "633--644",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Mohit Iyyer, Jordan Boyd-Graber, Leonardo Claudino, Richard Socher, and Hal Daum\u00e9 III. 2014. A neural network for factoid question answering over para- graphs. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Process- ing, pages 633-644.",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "Discriminative improvements to distributional sentence similarity",
                "authors": [
                    {
                        "first": "Yangfeng",
                        "middle": [],
                        "last": "Ji",
                        "suffix": ""
                    },
                    {
                        "first": "Jacob",
                        "middle": [],
                        "last": "Eisenstein",
                        "suffix": ""
                    }
                ],
                "year": 2013,
                "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
                "volume": "",
                "issue": "",
                "pages": "891--896",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Yangfeng Ji and Jacob Eisenstein. 2013. Discrimi- native improvements to distributional sentence sim- ilarity. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Process- ing, pages 891-896.",
                "links": null
            },
            "BIBREF14": {
                "ref_id": "b14",
                "title": "A convolutional neural network for modelling sentences",
                "authors": [
                    {
                        "first": "Nal",
                        "middle": [],
                        "last": "Kalchbrenner",
                        "suffix": ""
                    },
                    {
                        "first": "Edward",
                        "middle": [],
                        "last": "Grefenstette",
                        "suffix": ""
                    },
                    {
                        "first": "Phil",
                        "middle": [],
                        "last": "Blunsom",
                        "suffix": ""
                    }
                ],
                "year": 2014,
                "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Nal Kalchbrenner, Edward Grefenstette, and Phil Blun- som. 2014. A convolutional neural network for modelling sentences. In Proceedings of the 52nd Annual Meeting of the Association for Computa- tional Linguistics. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "Distributed representations of sentences and documents",
                "authors": [
                    {
                        "first": "V",
                        "middle": [],
                        "last": "Quoc",
                        "suffix": ""
                    },
                    {
                        "first": "Tomas",
                        "middle": [],
                        "last": "Le",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Mikolov",
                        "suffix": ""
                    }
                ],
                "year": 2014,
                "venue": "Proceedings of The 31st International Conference on Machine Learning",
                "volume": "",
                "issue": "",
                "pages": "1188--1196",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Quoc V Le and Tomas Mikolov. 2014. Distributed rep- resentations of sentences and documents. Proceed- ings of The 31st International Conference on Ma- chine Learning, pages 1188-1196.",
                "links": null
            },
            "BIBREF16": {
                "ref_id": "b16",
                "title": "Beyond bag-of-words: machine learning for query-document matching in web search",
                "authors": [
                    {
                        "first": "Hang",
                        "middle": [],
                        "last": "Li",
                        "suffix": ""
                    }
                ],
                "year": 2012,
                "venue": "Proceedings of the 35th international ACM SIGIR conference on Research and development in information retrieval",
                "volume": "",
                "issue": "",
                "pages": "1177--1177",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Hang Li and Jun Xu. 2012. Beyond bag-of-words: machine learning for query-document matching in web search. In Proceedings of the 35th international ACM SIGIR conference on Research and develop- ment in information retrieval, pages 1177-1177. ACM.",
                "links": null
            },
            "BIBREF17": {
                "ref_id": "b17",
                "title": "Har: Hub, authority and relevance scores in multirelational data for query search",
                "authors": [
                    {
                        "first": "Xutao",
                        "middle": [],
                        "last": "Li",
                        "suffix": ""
                    },
                    {
                        "first": "K",
                        "middle": [],
                        "last": "Michael",
                        "suffix": ""
                    },
                    {
                        "first": "Yunming",
                        "middle": [],
                        "last": "Ng",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Ye",
                        "suffix": ""
                    }
                ],
                "year": 2012,
                "venue": "Proceedings of the 12th SIAM International Conference on Data Mining",
                "volume": "",
                "issue": "",
                "pages": "141--152",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Xutao Li, Michael K Ng, and Yunming Ye. 2012. Har: Hub, authority and relevance scores in multi- relational data for query search. In Proceedings of the 12th SIAM International Conference on Data Mining, pages 141-152. SIAM.",
                "links": null
            },
            "BIBREF18": {
                "ref_id": "b18",
                "title": "Probabilistic Siamese Network for Learning Representations",
                "authors": [
                    {
                        "first": "Chen",
                        "middle": [],
                        "last": "Liu",
                        "suffix": ""
                    }
                ],
                "year": 2013,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Chen Liu. 2013. Probabilistic Siamese Network for Learning Representations. Ph.D. thesis, University of Toronto.",
                "links": null
            },
            "BIBREF19": {
                "ref_id": "b19",
                "title": "A deep architecture for matching short texts",
                "authors": [
                    {
                        "first": "Zhengdong",
                        "middle": [],
                        "last": "Lu",
                        "suffix": ""
                    },
                    {
                        "first": "Hang",
                        "middle": [],
                        "last": "Li",
                        "suffix": ""
                    }
                ],
                "year": 2013,
                "venue": "Advances in Neural Information Processing Systems",
                "volume": "",
                "issue": "",
                "pages": "1367--1375",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Zhengdong Lu and Hang Li. 2013. A deep architec- ture for matching short texts. In Advances in Neural Information Processing Systems, pages 1367-1375.",
                "links": null
            },
            "BIBREF20": {
                "ref_id": "b20",
                "title": "Re-examining machine translation metrics for paraphrase identification",
                "authors": [
                    {
                        "first": "Nitin",
                        "middle": [],
                        "last": "Madnani",
                        "suffix": ""
                    },
                    {
                        "first": "Joel",
                        "middle": [],
                        "last": "Tetreault",
                        "suffix": ""
                    },
                    {
                        "first": "Martin",
                        "middle": [],
                        "last": "Chodorow",
                        "suffix": ""
                    }
                ],
                "year": 2012,
                "venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
                "volume": "",
                "issue": "",
                "pages": "182--190",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Nitin Madnani, Joel Tetreault, and Martin Chodorow. 2012. Re-examining machine translation metrics for paraphrase identification. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, pages 182-190. Asso- ciation for Computational Linguistics.",
                "links": null
            },
            "BIBREF21": {
                "ref_id": "b21",
                "title": "Distributed representations of words and phrases and their compositionality",
                "authors": [
                    {
                        "first": "Tomas",
                        "middle": [],
                        "last": "Mikolov",
                        "suffix": ""
                    },
                    {
                        "first": "Ilya",
                        "middle": [],
                        "last": "Sutskever",
                        "suffix": ""
                    },
                    {
                        "first": "Kai",
                        "middle": [],
                        "last": "Chen",
                        "suffix": ""
                    },
                    {
                        "first": "Greg",
                        "middle": [
                            "S"
                        ],
                        "last": "Corrado",
                        "suffix": ""
                    },
                    {
                        "first": "Jeff",
                        "middle": [],
                        "last": "Dean",
                        "suffix": ""
                    }
                ],
                "year": 2013,
                "venue": "Advances in Neural Information Processing Systems",
                "volume": "",
                "issue": "",
                "pages": "3111--3119",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in Neural Information Processing Systems, pages 3111-3119.",
                "links": null
            },
            "BIBREF22": {
                "ref_id": "b22",
                "title": "A fast and simple algorithm for training neural probabilistic language models",
                "authors": [
                    {
                        "first": "Andriy",
                        "middle": [],
                        "last": "Mnih",
                        "suffix": ""
                    },
                    {
                        "first": "Yee Whye",
                        "middle": [],
                        "last": "Teh",
                        "suffix": ""
                    }
                ],
                "year": 2012,
                "venue": "Proceedings of the 29th International Conference on Machine Learning",
                "volume": "",
                "issue": "",
                "pages": "1751--1758",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Andriy Mnih and Yee Whye Teh. 2012. A fast and simple algorithm for training neural probabilistic language models. In Proceedings of the 29th In- ternational Conference on Machine Learning, pages 1751-1758.",
                "links": null
            },
            "BIBREF23": {
                "ref_id": "b23",
                "title": "Linguistic Data Consortium",
                "authors": [
                    {
                        "first": "Robert",
                        "middle": [],
                        "last": "Parker",
                        "suffix": ""
                    }
                ],
                "year": null,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Robert Parker, Linguistic Data Consortium, et al. 2009. English gigaword fourth edition. Linguistic Data Consortium.",
                "links": null
            },
            "BIBREF24": {
                "ref_id": "b24",
                "title": "Paraphrase recognition via dissimilarity significance classification",
                "authors": [
                    {
                        "first": "Long",
                        "middle": [],
                        "last": "Qiu",
                        "suffix": ""
                    },
                    {
                        "first": "Min-Yen",
                        "middle": [],
                        "last": "Kan",
                        "suffix": ""
                    },
                    {
                        "first": "Tat-Seng",
                        "middle": [],
                        "last": "Chua",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing",
                "volume": "",
                "issue": "",
                "pages": "18--26",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Long Qiu, Min-Yen Kan, and Tat-Seng Chua. 2006. Paraphrase recognition via dissimilarity significance classification. In Proceedings of the 2006 Confer- ence on Empirical Methods in Natural Language Processing, pages 18-26. Association for Compu- tational Linguistics.",
                "links": null
            },
            "BIBREF25": {
                "ref_id": "b25",
                "title": "Dynamic pooling and unfolding recursive autoencoders for paraphrase detection",
                "authors": [
                    {
                        "first": "Richard",
                        "middle": [],
                        "last": "Socher",
                        "suffix": ""
                    },
                    {
                        "first": "H",
                        "middle": [],
                        "last": "Eric",
                        "suffix": ""
                    },
                    {
                        "first": "Jeffrey",
                        "middle": [],
                        "last": "Huang",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Pennin",
                        "suffix": ""
                    },
                    {
                        "first": "D",
                        "middle": [],
                        "last": "Christopher",
                        "suffix": ""
                    },
                    {
                        "first": "Andrew Y",
                        "middle": [],
                        "last": "Manning",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Ng",
                        "suffix": ""
                    }
                ],
                "year": 2011,
                "venue": "Advances in Neural Information Processing Systems",
                "volume": "",
                "issue": "",
                "pages": "801--809",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Richard Socher, Eric H Huang, Jeffrey Pennin, Christo- pher D Manning, and Andrew Y Ng. 2011. Dy- namic pooling and unfolding recursive autoencoders for paraphrase detection. In Advances in Neural In- formation Processing Systems, pages 801-809.",
                "links": null
            },
            "BIBREF26": {
                "ref_id": "b26",
                "title": "Online multimodal deep similarity learning with application to image retrieval",
                "authors": [
                    {
                        "first": "Pengcheng",
                        "middle": [],
                        "last": "Wu",
                        "suffix": ""
                    },
                    {
                        "first": "C",
                        "middle": [
                            "H"
                        ],
                        "last": "Steven",
                        "suffix": ""
                    },
                    {
                        "first": "Hao",
                        "middle": [],
                        "last": "Hoi",
                        "suffix": ""
                    },
                    {
                        "first": "Peilin",
                        "middle": [],
                        "last": "Xia",
                        "suffix": ""
                    },
                    {
                        "first": "Dayong",
                        "middle": [],
                        "last": "Zhao",
                        "suffix": ""
                    },
                    {
                        "first": "Chunyan",
                        "middle": [],
                        "last": "Wang",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Miao",
                        "suffix": ""
                    }
                ],
                "year": 2013,
                "venue": "Proceedings of the 21st ACM international conference on Multimedia",
                "volume": "",
                "issue": "",
                "pages": "153--162",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Pengcheng Wu, Steven CH Hoi, Hao Xia, Peilin Zhao, Dayong Wang, and Chunyan Miao. 2013. Online multimodal deep similarity learning with application to image retrieval. In Proceedings of the 21st ACM international conference on Multimedia, pages 153- 162. ACM.",
                "links": null
            },
            "BIBREF27": {
                "ref_id": "b27",
                "title": "Joint relational embeddings for knowledge-based question answering",
                "authors": [
                    {
                        "first": "Min-Chul",
                        "middle": [],
                        "last": "Yang",
                        "suffix": ""
                    },
                    {
                        "first": "Nan",
                        "middle": [],
                        "last": "Duan",
                        "suffix": ""
                    },
                    {
                        "first": "Ming",
                        "middle": [],
                        "last": "Zhou",
                        "suffix": ""
                    },
                    {
                        "first": "Hae-Chang",
                        "middle": [],
                        "last": "Rim",
                        "suffix": ""
                    }
                ],
                "year": 2014,
                "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing",
                "volume": "",
                "issue": "",
                "pages": "645--650",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Min-Chul Yang, Nan Duan, Ming Zhou, and Hae- Chang Rim. 2014. Joint relational embeddings for knowledge-based question answering. In Proceed- ings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 645-650.",
                "links": null
            },
            "BIBREF28": {
                "ref_id": "b28",
                "title": "Deep learning for answer sentence selection",
                "authors": [
                    {
                        "first": "Lei",
                        "middle": [],
                        "last": "Yu",
                        "suffix": ""
                    },
                    {
                        "first": "Karl",
                        "middle": [
                            "Moritz"
                        ],
                        "last": "Hermann",
                        "suffix": ""
                    },
                    {
                        "first": "Phil",
                        "middle": [],
                        "last": "Blunsom",
                        "suffix": ""
                    },
                    {
                        "first": "Stephen",
                        "middle": [],
                        "last": "Pulman",
                        "suffix": ""
                    }
                ],
                "year": 2014,
                "venue": "NIPS deep learning workshop",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Lei Yu, Karl Moritz Hermann, Phil Blunsom, and Stephen Pulman. 2014. Deep learning for answer sentence selection. NIPS deep learning workshop.",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF0": {
                "uris": null,
                "num": null,
                "type_str": "figure",
                "text": "Examples for paraphrasing and clause coherence tasks"
            },
            "FIGREF1": {
                "uris": null,
                "num": null,
                "type_str": "figure",
                "text": "gpCNN: learning g-phrase representations. This figure only shows two convolution layers (i.e., L = 2) for saving space."
            },
            "FIGREF2": {
                "uris": null,
                "num": null,
                "type_str": "figure",
                "text": "CONCAT match feature model"
            },
            "FIGREF3": {
                "uris": null,
                "num": null,
                "type_str": "figure",
                "text": "mfCNN & MLP for matching score learning. s * = 10, f b = 5, k b = 2, c b = 4 in this example."
            },
            "FIGREF4": {
                "uris": null,
                "num": null,
                "type_str": "figure",
                "text": "Effect on dev acc (clause coherence) of different factors: # convolution blocks, match feature model, freeze vs. fine-tune, pooling method."
            },
            "FIGREF5": {
                "uris": null,
                "num": null,
                "type_str": "figure",
                "text": "Figure 7 (3rd table) demonstrates that finetuning g-phrase representations gives better performance than freezing them. Also, grid-based and phrase-focused pooling outperform dynamic pooling (Socher et al., 2011) (4th table). Phrasefocused pooling performs best. Table 1 compares MultiGranCNN to ARC-I and ARC-II, the architectures proposed by Hu et al."
            },
            "TABREF1": {
                "num": null,
                "type_str": "table",
                "text": "Performance on clause coherence test set. SEPIA), computed on entire sentences. Bach et al. (2014) applied MT metrics to elementary discourse units. We integrate these eight MT metrics from prior work.",
                "content": "<table><tr><td>used</td></tr></table>",
                "html": null
            }
        }
    }
}