File size: 94,307 Bytes
6fa4bc9
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
106
107
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
142
143
144
145
146
147
148
149
150
151
152
153
154
155
156
157
158
159
160
161
162
163
164
165
166
167
168
169
170
171
172
173
174
175
176
177
178
179
180
181
182
183
184
185
186
187
188
189
190
191
192
193
194
195
196
197
198
199
200
201
202
203
204
205
206
207
208
209
210
211
212
213
214
215
216
217
218
219
220
221
222
223
224
225
226
227
228
229
230
231
232
233
234
235
236
237
238
239
240
241
242
243
244
245
246
247
248
249
250
251
252
253
254
255
256
257
258
259
260
261
262
263
264
265
266
267
268
269
270
271
272
273
274
275
276
277
278
279
280
281
282
283
284
285
286
287
288
289
290
291
292
293
294
295
296
297
298
299
300
301
302
303
304
305
306
307
308
309
310
311
312
313
314
315
316
317
318
319
320
321
322
323
324
325
326
327
328
329
330
331
332
333
334
335
336
337
338
339
340
341
342
343
344
345
346
347
348
349
350
351
352
353
354
355
356
357
358
359
360
361
362
363
364
365
366
367
368
369
370
371
372
373
374
375
376
377
378
379
380
381
382
383
384
385
386
387
388
389
390
391
392
393
394
395
396
397
398
399
400
401
402
403
404
405
406
407
408
409
410
411
412
413
414
415
416
417
418
419
420
421
422
423
424
425
426
427
428
429
430
431
432
433
434
435
436
437
438
439
440
441
442
443
444
445
446
447
448
449
450
451
452
453
454
455
456
457
458
459
460
461
462
463
464
465
466
467
468
469
470
471
472
473
474
475
476
477
478
479
480
481
482
483
484
485
486
487
488
489
490
491
492
493
494
495
496
497
498
499
500
501
502
503
504
505
506
507
508
509
510
511
512
513
514
515
516
517
518
519
520
521
522
523
524
525
526
527
528
529
530
531
532
533
534
535
536
537
538
539
540
541
542
543
544
545
546
547
548
549
550
551
552
553
554
555
556
557
558
559
560
561
562
563
564
565
566
567
568
569
570
571
572
573
574
575
576
577
578
579
580
581
582
583
584
585
586
587
588
589
590
591
592
593
594
595
596
597
598
599
600
601
602
603
604
605
606
607
608
609
610
611
612
613
614
615
616
617
618
619
620
621
622
623
624
625
626
627
628
629
630
631
632
633
634
635
636
637
638
639
640
641
642
643
644
645
646
647
648
649
650
651
652
653
654
655
656
657
658
659
660
661
662
663
664
665
666
667
668
669
670
671
672
673
674
675
676
677
678
679
680
681
682
683
684
685
686
687
688
689
690
691
692
693
694
695
696
697
698
699
700
701
702
703
704
705
706
707
708
709
710
711
712
713
714
715
716
717
718
719
720
721
722
723
724
725
726
727
728
729
730
731
732
733
734
735
736
737
738
739
740
741
742
743
744
745
746
747
748
749
750
751
752
753
754
755
756
757
758
759
760
761
762
763
764
765
766
767
768
769
770
771
772
773
774
775
776
777
778
779
780
781
782
783
784
785
786
787
788
789
790
791
792
793
794
795
796
797
798
799
800
801
802
803
804
805
806
807
808
809
810
811
812
813
814
815
816
817
818
819
820
821
822
823
824
825
826
827
828
829
830
831
832
833
834
835
836
837
838
839
840
841
842
843
844
845
846
847
848
849
850
851
852
853
854
855
856
857
858
859
860
861
862
863
864
865
866
867
868
869
870
871
872
873
874
875
876
877
878
879
880
881
882
883
884
885
886
887
888
889
890
891
892
893
894
895
896
897
898
899
900
901
902
903
904
905
906
907
908
909
910
911
912
913
914
915
916
917
918
919
920
921
922
923
924
925
926
927
928
929
930
931
932
933
934
935
936
937
938
939
940
941
942
943
944
945
946
947
948
949
950
951
952
953
954
955
956
957
958
959
960
961
962
963
964
965
966
967
968
969
970
971
972
973
974
975
976
977
978
979
980
981
982
983
984
985
986
987
988
989
990
991
992
993
994
995
996
997
998
999
1000
1001
1002
1003
1004
1005
1006
1007
1008
1009
1010
1011
1012
1013
1014
1015
1016
1017
1018
1019
1020
1021
1022
1023
1024
1025
1026
1027
1028
1029
1030
1031
1032
1033
1034
1035
1036
1037
1038
1039
1040
1041
1042
1043
1044
1045
1046
1047
1048
1049
1050
1051
1052
1053
1054
1055
1056
1057
1058
1059
1060
1061
1062
1063
1064
1065
1066
1067
1068
1069
1070
1071
1072
1073
1074
1075
1076
1077
1078
1079
1080
1081
1082
1083
1084
1085
1086
1087
1088
1089
1090
1091
1092
1093
1094
1095
1096
1097
1098
1099
1100
1101
1102
1103
1104
1105
1106
1107
1108
1109
1110
1111
1112
1113
1114
1115
1116
1117
1118
1119
1120
1121
1122
1123
1124
1125
1126
1127
1128
1129
1130
1131
1132
1133
1134
1135
1136
1137
1138
1139
1140
1141
1142
1143
1144
1145
1146
1147
1148
1149
1150
1151
1152
1153
1154
1155
1156
1157
1158
1159
1160
1161
1162
1163
1164
1165
1166
1167
1168
1169
1170
1171
1172
1173
1174
1175
1176
1177
1178
1179
1180
1181
1182
1183
1184
1185
1186
1187
1188
1189
1190
1191
1192
1193
1194
1195
1196
1197
1198
1199
1200
1201
1202
1203
1204
1205
1206
1207
1208
1209
1210
1211
1212
1213
1214
1215
1216
1217
1218
1219
1220
1221
1222
1223
1224
1225
1226
1227
1228
1229
1230
1231
1232
1233
1234
1235
1236
1237
1238
1239
1240
1241
1242
1243
1244
1245
1246
1247
1248
1249
1250
1251
1252
1253
1254
1255
1256
1257
1258
1259
1260
1261
1262
1263
1264
1265
1266
1267
1268
1269
1270
1271
1272
1273
1274
1275
1276
1277
1278
1279
1280
1281
1282
1283
1284
1285
1286
1287
1288
1289
1290
1291
1292
1293
1294
1295
1296
1297
1298
1299
1300
1301
1302
1303
1304
1305
1306
1307
1308
1309
1310
1311
1312
1313
1314
1315
1316
1317
1318
1319
1320
1321
1322
1323
1324
1325
1326
1327
1328
1329
1330
1331
1332
1333
1334
1335
1336
1337
1338
1339
1340
1341
1342
1343
1344
1345
1346
1347
1348
1349
1350
1351
1352
1353
1354
1355
1356
1357
1358
1359
1360
1361
1362
1363
1364
1365
1366
1367
1368
1369
1370
1371
1372
1373
1374
1375
1376
1377
1378
1379
1380
1381
1382
1383
1384
1385
1386
1387
1388
1389
1390
1391
1392
1393
1394
1395
1396
1397
1398
1399
1400
1401
1402
1403
1404
1405
1406
1407
1408
1409
1410
1411
1412
1413
1414
1415
1416
1417
1418
1419
1420
1421
1422
1423
1424
1425
1426
1427
1428
1429
1430
1431
1432
1433
1434
1435
1436
1437
1438
1439
1440
1441
1442
1443
1444
1445
1446
1447
1448
1449
1450
1451
1452
1453
1454
1455
1456
1457
1458
1459
1460
1461
1462
1463
1464
1465
1466
1467
1468
1469
1470
1471
1472
1473
1474
1475
1476
1477
1478
1479
1480
1481
1482
1483
1484
1485
1486
1487
1488
1489
1490
1491
1492
1493
1494
1495
1496
1497
1498
1499
1500
1501
1502
1503
1504
1505
1506
1507
1508
1509
1510
1511
1512
1513
1514
1515
1516
1517
1518
1519
1520
1521
1522
1523
1524
1525
1526
1527
1528
1529
1530
1531
1532
1533
1534
1535
1536
1537
1538
{
    "paper_id": "P13-1018",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T09:37:16.527571Z"
    },
    "title": "Microblogs as Parallel Corpora",
    "authors": [
        {
            "first": "Wang",
            "middle": [],
            "last": "Ling",
            "suffix": "",
            "affiliation": {},
            "email": "lingwang@cs.cmu.edu"
        },
        {
            "first": "Guang",
            "middle": [],
            "last": "Xiang",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "INESC-ID",
                "location": {
                    "settlement": "Lisbon",
                    "country": "Portugal ("
                }
            },
            "email": "guangx@cs.cmu.edu"
        },
        {
            "first": "Chris",
            "middle": [],
            "last": "Dyer",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "INESC-ID",
                "location": {
                    "settlement": "Lisbon",
                    "country": "Portugal ("
                }
            },
            "email": "cdyer@cs.cmu.edu"
        },
        {
            "first": "Alan",
            "middle": [],
            "last": "Black",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "INESC-ID",
                "location": {
                    "settlement": "Lisbon",
                    "country": "Portugal ("
                }
            },
            "email": ""
        },
        {
            "first": "Isabel",
            "middle": [],
            "last": "Trancoso",
            "suffix": "",
            "affiliation": {},
            "email": "isabel.trancoso@inesc-id.pt"
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "In the ever-expanding sea of microblog data, there is a surprising amount of naturally occurring parallel text: some users create post multilingual messages targeting international audiences while others \"retweet\" translations. We present an efficient method for detecting these messages and extracting parallel segments from them. We have been able to extract over 1M Chinese-English parallel segments from Sina Weibo (the Chinese counterpart of Twitter) using only their public APIs. As a supplement to existing parallel training data, our automatically extracted parallel data yields substantial translation quality improvements in translating microblog text and modest improvements in translating edited news commentary. The resources in described in this paper are available at http://www.cs.cmu.edu/ \u223c lingwang/utopia.",
    "pdf_parse": {
        "paper_id": "P13-1018",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "In the ever-expanding sea of microblog data, there is a surprising amount of naturally occurring parallel text: some users create post multilingual messages targeting international audiences while others \"retweet\" translations. We present an efficient method for detecting these messages and extracting parallel segments from them. We have been able to extract over 1M Chinese-English parallel segments from Sina Weibo (the Chinese counterpart of Twitter) using only their public APIs. As a supplement to existing parallel training data, our automatically extracted parallel data yields substantial translation quality improvements in translating microblog text and modest improvements in translating edited news commentary. The resources in described in this paper are available at http://www.cs.cmu.edu/ \u223c lingwang/utopia.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "Microblogs such as Twitter and Facebook have gained tremendous popularity in the past 10 years. In addition to being an important form of communication for many people, they often contain extremely current, even breaking, information about world events. However, the writing style of microblogs tends to be quite colloquial, with frequent orthographic innovation (R U still with me or what?) and nonstandard abbreviations (idk! shm)-quite unlike the style found in more traditional, edited genres. This poses considerable problems for traditional NLP tools, which were developed with other domains in mind, which often make strong assumptions about orthographic uniformity (i.e., there is just one way to spell you). One approach to cope with this problem is to annotate in-domain data (Gimpel et al., 2011) .",
                "cite_spans": [
                    {
                        "start": 786,
                        "end": 807,
                        "text": "(Gimpel et al., 2011)",
                        "ref_id": "BIBREF5"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Machine translation suffers acutely from the domain-mismatch problem caused by microblog text. On one hand, standard models are probably suboptimal since they (like many models) assume orthographic uniformity in the input. However, more acutely, the data used to develop these systems and train their models is drawn from formal and carefully edited domains, such as parallel web pages and translated legal documents. MT training data seldom looks anything like microblog text.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "This paper introduces a method for finding naturally occurring parallel microblog text, which helps address the domain-mismatch problem. Our method is inspired by the perhaps surprising observation that a reasonable number of microblog users tweet \"in parallel\" in two or more languages. For instance, the American entertainer Snoop Dogg regularly posts parallel messages on Sina Weibo (Mainland China's equivalent of Twitter), for example, watup Kenny Mayne!! -Kenny Mayne\uff0c\u6700\u8fd1\u8fd9\u4e48\u6837\u554a\uff01\uff01, where an English message and its Chinese translation are in the same post, separated by a dash. Our method is able to identify and extract such translations. Briefly, this requires determining if a tweet contains more than one language, if these multilingual utterances contain translated material (or are due to something else, such as code switching), and what the translated spans are.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "The paper is organized as follows. Section 2 describes the related work in parallel data extraction. Section 3 presents our model to extract parallel data within the same document. Section 4 describes our extraction pipeline. Section 5 describes the data we gathered from both Sina Weibo (Chinese-English) and Twitter (Chinese-English and Arabic-English). We then present experiments showing that our harvested data not only substantially improves translations of microblog text with existing (and arguably inappropriate) translation models, but that it improves the translation of more traditional MT genres, like newswire. We conclude in Section 6.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Automatic collection of parallel data is a wellstudied problem. Approaches to finding parallel web documents automatically have been particularly important (Resnik and Smith, 2003; Fukushima et al., 2006; Li and Liu, 2008; Uszkoreit et al., 2010; Ture and Lin, 2012) . These broadly work by identifying promising candidates using simple features, such as URL similarity or \"gist translations\" and then identifying truly parallel segments with more expensive classifiers. More specialized resources were developed using manual procedures to leverage special features of very large collections, such as Europarl (Koehn, 2005) .",
                "cite_spans": [
                    {
                        "start": 156,
                        "end": 180,
                        "text": "(Resnik and Smith, 2003;",
                        "ref_id": "BIBREF13"
                    },
                    {
                        "start": 181,
                        "end": 204,
                        "text": "Fukushima et al., 2006;",
                        "ref_id": "BIBREF4"
                    },
                    {
                        "start": 205,
                        "end": 222,
                        "text": "Li and Liu, 2008;",
                        "ref_id": null
                    },
                    {
                        "start": 223,
                        "end": 246,
                        "text": "Uszkoreit et al., 2010;",
                        "ref_id": "BIBREF16"
                    },
                    {
                        "start": 247,
                        "end": 266,
                        "text": "Ture and Lin, 2012)",
                        "ref_id": "BIBREF15"
                    },
                    {
                        "start": 610,
                        "end": 623,
                        "text": "(Koehn, 2005)",
                        "ref_id": "BIBREF8"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "2"
            },
            {
                "text": "Mining parallel or comparable messages from microblogs has mainly relied on Cross-Lingual Information Retrieval techniques (CLIR). Jelh et al. (2012) attempt to find pairs of tweets in Twitter using Arabic tweets as search queries in a CLIR system. Afterwards, the model described in (Xu et al., 2001 ) is applied to retrieve a set of ranked translation candidates for each Arabic tweet, which are then used as parallel candidates.",
                "cite_spans": [
                    {
                        "start": 131,
                        "end": 149,
                        "text": "Jelh et al. (2012)",
                        "ref_id": "BIBREF6"
                    },
                    {
                        "start": 284,
                        "end": 300,
                        "text": "(Xu et al., 2001",
                        "ref_id": "BIBREF18"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "2"
            },
            {
                "text": "The work on mining parenthetical translations (Lin et al., 2008) , which attempts to find translations within the same document, has some similarities with our work, since parenthetical translations are within the same document. However, parenthetical translations are generally used to translate names or terms, which is more limited than our work which extracts whole sentence translations.",
                "cite_spans": [
                    {
                        "start": 46,
                        "end": 64,
                        "text": "(Lin et al., 2008)",
                        "ref_id": "BIBREF9"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "2"
            },
            {
                "text": "Finally, crowd-sourcing techniques to obtain translations have been previously studied and applied to build datasets for casual domains (Zbib et al., 2012; Post et al., 2012) . These approaches require remunerated workers to translate the messages, and the amount of messages translated per day is limited. We aim to propose a method that acquires large amounts of parallel data for free. The drawback is that there is a margin of error in the parallel segment identification and alignment. However, our system can be tuned for precision or for recall.",
                "cite_spans": [
                    {
                        "start": 136,
                        "end": 155,
                        "text": "(Zbib et al., 2012;",
                        "ref_id": "BIBREF20"
                    },
                    {
                        "start": 156,
                        "end": 174,
                        "text": "Post et al., 2012)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "2"
            },
            {
                "text": "We will first abstract from the domain of Microblogs and focus on the task of retrieving parallel segments from single documents. Prior work on finding parallel data attempts to reason about the probability that pairs of documents (x, y) are parallel. In contrast, we only consider one document at a time, defined by x = x 1 , x 2 , . . . , x n , and consisting of n tokens, and need to determine whether there is parallel data in x, and if so, where are the parallel segments and their languages. For simplicity, we assume that there are at most 2 continuous segments that are parallel.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Parallel Segment Retrieval",
                "sec_num": "3"
            },
            {
                "text": "As representation for the parallel segments within the document, we use the tuple ([p, q], l, [u, v] , r, a). The word indexes [p, q] and [u, v] are used to identify the left segment (from p to q) and right segment (from u to v), which are parallel. We shall refer [p, q] and [u, v] as the spans of the left and right segments. To avoid overlaps, we set the constraint p \u2264 q < u \u2264 v. Then, we use l and r to identify the language of the left and right segments, respectively. Finally, a represents the word alignment between the words in the left and the right segments.",
                "cite_spans": [
                    {
                        "start": 82,
                        "end": 100,
                        "text": "([p, q], l, [u, v]",
                        "ref_id": null
                    },
                    {
                        "start": 127,
                        "end": 144,
                        "text": "[p, q] and [u, v]",
                        "ref_id": null
                    },
                    {
                        "start": 265,
                        "end": 271,
                        "text": "[p, q]",
                        "ref_id": null
                    },
                    {
                        "start": 276,
                        "end": 282,
                        "text": "[u, v]",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Parallel Segment Retrieval",
                "sec_num": "3"
            },
            {
                "text": "The main problem we address is to find the parallel data when the boundaries of the parallel segments are not defined explicitly. If we knew the indexes [p, q] and [u, v] , we could simply run a language detector for these segments to find l and r. Then, we would use an word alignment model (Brown et al., 1993; Vogel et al., 1996) , with source s = x p , . . . , x q , target t = x u , . . . , x v and lexical table \u03b8 l,r to calculate the Viterbi alignment a. Finally, from the probability of the word alignments, we can determine whether the segments are parallel. Thus, our model will attempt to find the optimal values for the segments [p, q] [u, v] , languages l, r and word alignments a jointly. However, there are two problems with this approach. Firstly, word alignment models generally attribute higher probabilities to smaller segments, since these are the result of a smaller product chain of probabilities. In fact, because our model can freely choose the segments to align, choosing only one word as the left segment that is well aligned to a word in the right segment would be the best choice. This is obviously not our goal, since we would not obtain any useful sentence pairs. Secondly, inference must be performed over the combination of all latent variables, which is intractable using a brute force algorithm. We shall describe our model to solve the first problem in 3.1 and our dynamic programming approach to make the inference tractable in 3.2.",
                "cite_spans": [
                    {
                        "start": 153,
                        "end": 159,
                        "text": "[p, q]",
                        "ref_id": null
                    },
                    {
                        "start": 164,
                        "end": 170,
                        "text": "[u, v]",
                        "ref_id": null
                    },
                    {
                        "start": 292,
                        "end": 312,
                        "text": "(Brown et al., 1993;",
                        "ref_id": "BIBREF3"
                    },
                    {
                        "start": 313,
                        "end": 332,
                        "text": "Vogel et al., 1996)",
                        "ref_id": "BIBREF17"
                    },
                    {
                        "start": 641,
                        "end": 647,
                        "text": "[p, q]",
                        "ref_id": null
                    },
                    {
                        "start": 648,
                        "end": 654,
                        "text": "[u, v]",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Parallel Segment Retrieval",
                "sec_num": "3"
            },
            {
                "text": "We propose a simple (non-probabilistic) threefactor model that models the spans of the parallel segments, their languages, and word alignments jointly. This model is defined as follows:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model",
                "sec_num": "3.1"
            },
            {
                "text": "S([u, v], r, [p, q],l, a | x) = S \u03b1 S ([p, q], [u, v] | x)\u00d7 S \u03b2 L (l, r | [p, q], [u, v], x)\u00d7 S \u03b3 T (a | [p, q], l, [u, v], r, x)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model",
                "sec_num": "3.1"
            },
            {
                "text": "Each of the components is weighted by the parameters \u03b1, \u03b2 and \u03b3. We set these values empirically \u03b1 = 0.3, \u03b2 = 0.3 and \u03b3 = 0.4, and leave the optimization of these parameters as future work. We discuss the components of this model in turn.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model",
                "sec_num": "3.1"
            },
            {
                "text": "Span score S S . We define the score of hypothesized pair of spans [p, q] , [u, v] as:",
                "cite_spans": [
                    {
                        "start": 67,
                        "end": 73,
                        "text": "[p, q]",
                        "ref_id": null
                    },
                    {
                        "start": 76,
                        "end": 82,
                        "text": "[u, v]",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model",
                "sec_num": "3.1"
            },
            {
                "text": "S S ([p, q], [u, v] | x) = (q \u2212 p + 1) + (v \u2212 u + 1) 0<p \u2264q <u \u2264v \u2264n (q \u2212 p + 1) + (v \u2212 u + 1) \u00d7 \u03c8([p, q], [u, v], x)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model",
                "sec_num": "3.1"
            },
            {
                "text": "The first factor is a distribution over all spans that assigns higher probability to segmentations that cover more words in the document. It is highest for segmentations that cover all the words in the document (this is desirable since there are many sentence pairs that can be extracted but we want to find the largest sentence pair in the document). The function \u03c8 takes on values of 0 or 1 depending on whether certain constraints are violated, these include: parenthetical constraints that enforce that spans must not break text within parenthetical characters and language constraints that ensure that we do break a sequence of Mandarin characters, Arabic words or Latin words.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model",
                "sec_num": "3.1"
            },
            {
                "text": "Language score S L . The language score",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model",
                "sec_num": "3.1"
            },
            {
                "text": "S L (l, r | [p, q], [u, v], x)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model",
                "sec_num": "3.1"
            },
            {
                "text": "indicates whether the language labels l, r are appropriate to the document contents:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model",
                "sec_num": "3.1"
            },
            {
                "text": "S L (l, r | [p, q], [u, v], x) = q i=p L(l, x i ) + v i=u L(r, x i ) n",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model",
                "sec_num": "3.1"
            },
            {
                "text": "where L(l, x) is a language detection function that yields 1 if the word x i is in language l, and 0 otherwise. We build the function simply by considering all words that are composed of Latin characters as English, Arabic characters as Arabic and Han characters as Mandarin. This approach is not perfect, but it is simple and works reasonably well for our purposes.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model",
                "sec_num": "3.1"
            },
            {
                "text": "Translation score S T . The translation score",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model",
                "sec_num": "3.1"
            },
            {
                "text": "S T (a | [p, q], l, [u, v], r) indicates whether [p, q]",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model",
                "sec_num": "3.1"
            },
            {
                "text": "is a reasonable translation of [u, v] with the alignment a. We rely on IBM Model 1 probabilities for this score:",
                "cite_spans": [
                    {
                        "start": 31,
                        "end": 37,
                        "text": "[u, v]",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model",
                "sec_num": "3.1"
            },
            {
                "text": "S T (a | [p, q], l, [u, v], r, x) = 1 (q \u2212 p + 1) v\u2212u+2 v i=u P M1 (x i | x a i ).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model",
                "sec_num": "3.1"
            },
            {
                "text": "The lexical tables P M1 for the various language pairs are trained a priori using available parallel corpora. While IBM Model 1 produces worse alignments than other models, in our problem, we need to efficiently consider all possible spans, language pairs and word alignments, which makes the problem intractable. We will show that dynamic programing can be used to make this problem tractable, using Model 1. Furthermore, IBM Model 1 has shown good performance for sentence alignment systems previously (Xu et al., 2005; Braune and Fraser, 2010) .",
                "cite_spans": [
                    {
                        "start": 504,
                        "end": 521,
                        "text": "(Xu et al., 2005;",
                        "ref_id": "BIBREF19"
                    },
                    {
                        "start": 522,
                        "end": 546,
                        "text": "Braune and Fraser, 2010)",
                        "ref_id": "BIBREF2"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model",
                "sec_num": "3.1"
            },
            {
                "text": "Our goal is to find the spans, language pair and alignments such that:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Inference",
                "sec_num": "3.2"
            },
            {
                "text": "arg max [p,q],l,[u,v],r,a S([p, q], l, [u, v], r, a | x) (1)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Inference",
                "sec_num": "3.2"
            },
            {
                "text": "A high score indicates that the predicted bispan is likely to correspond to a valid parallel span, so we set a constant threshold \u03c4 to determine whether a document has parallel data, i.e., the value of z:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Inference",
                "sec_num": "3.2"
            },
            {
                "text": "z * = max [u,v],r,[p,q],l,a S([u, v], r, [p, q], l, a | x) > \u03c4",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Inference",
                "sec_num": "3.2"
            },
            {
                "text": "Naively maximizing Eq. 1 would require O(|x| 6 ) operations, which is too inefficient to be practical on large datasets. To process millions of documents, this process would need to be optimized.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Inference",
                "sec_num": "3.2"
            },
            {
                "text": "The main bottleneck of the naive algorithm is finding new Viterbi Model 1 word alignments every time we change the spans. Thus, we propose an iterative approach to compute the Viterbi word alignments for IBM Model 1 using dynamic programming.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Inference",
                "sec_num": "3.2"
            },
            {
                "text": "Dynamic programming search. The insight we use to improve the runtime is that the Viterbi word alignment of a bispan can be reused to calculate the Viterbi word alignments of larger bispans. The algorithm operates on a 4-dimensional chart of bispans. It starts with the minimal valid span (i.e., [0, 0], [1, 1]) and progressively builds larger spans from smaller ones. Let A p,q,u,v represent the Viterbi alignment (under S T ) of the bispan [p, q] , [u, v] . The algorithm uses the following recursions defined in terms of four operations \u03bb {+v,+u,+p,+q} that manipulate a single dimension of the bispan to construct larger spans:",
                "cite_spans": [
                    {
                        "start": 442,
                        "end": 448,
                        "text": "[p, q]",
                        "ref_id": null
                    },
                    {
                        "start": 451,
                        "end": 457,
                        "text": "[u, v]",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Inference",
                "sec_num": "3.2"
            },
            {
                "text": "\u2022 A p,q,u,v+1 = \u03bb +v (A p,q,u,v",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Inference",
                "sec_num": "3.2"
            },
            {
                "text": ") adds one token to the end of the right span with index v + 1 and find the viterbi alignment for that token. This requires iterating over all the tokens in the left span, [p, q] and possibly updating their alignments. See Fig. 1 for an illustration.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 223,
                        "end": 229,
                        "text": "Fig. 1",
                        "ref_id": "FIGREF0"
                    }
                ],
                "eq_spans": [],
                "section": "Inference",
                "sec_num": "3.2"
            },
            {
                "text": "\u2022 A p,q,u+1,v = \u03bb +u (A p,q,u,v",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Inference",
                "sec_num": "3.2"
            },
            {
                "text": ") removes the first token of the right span with index u, so we only need to remove the alignment from u, which can be done in time O(1).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Inference",
                "sec_num": "3.2"
            },
            {
                "text": "\u2022 A p,q+1,u,v = \u03bb +q (A p,q,u,v",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Inference",
                "sec_num": "3.2"
            },
            {
                "text": ") adds one token to the end of the left span with index q + 1, we need to check for each word in the right span, if aligning to the word in index q+1 yields a better translation probability. This update requires n\u2212 q + 1 operations.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Inference",
                "sec_num": "3.2"
            },
            {
                "text": "\u2022 A p+1,q,u,v = \u03bb +p (A p,q,u,v",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Inference",
                "sec_num": "3.2"
            },
            {
                "text": ") removes the first token of the left span with index p. After removing the token, we need to find new alignments for all tokens that were aligned to p. Thus, the number of operations for this update is K \u00d7 (q \u2212 p + 1), where K is the number of words that were aligned to p. In the best case, no words are aligned to the token in p, and we can simply remove it. In the worst case, if all target words were aligned to p, this update will result in the recalculation of all Viterbi Alignments.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Inference",
                "sec_num": "3.2"
            },
            {
                "text": "The algorithm proceeds until all valid cells have been computed. One important aspect is that the update functions differ in complexity, so the sequence of updates we apply will impact the performance of the system. Most spans are reachable using any of the four update functions. For instance, the span A 2,3,4,5 can be reached using \u03bb +v (A 2,3,4,4 ), \u03bb +u (A 2,3,3,5 ), \u03bb +q (A 2,2,4,5 ) or \u03bb +p (A 1,3,4,5 ). However, we want to use \u03bb +u In this example, the parallel message contains a \"translation\" of a b to A B.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Inference",
                "sec_num": "3.2"
            },
            {
                "text": "whenever possible, since it only requires one operation, although that is not always possible. For instance, the state A 2,2,2,4 cannot be reached using \u03bb +u , since the state A 2,2,1,4 is not valid, because the spans overlap. If this happens, incrementally more expensive updates need to be used, such as \u03bb +v , then \u03bb +q , which are in the same order of complexity. Finally, we want to minimize the use of \u03bb +p , which is quadratic in the worst case. Thus, we use the following recursive formulation that guarantees the optimal outcome:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Inference",
                "sec_num": "3.2"
            },
            {
                "text": "A p,q,u,v = \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f3 \u03bb +u (A p,q,u\u22121,v ) if u > q + 1 \u03bb +v (A p,q,u,v\u22121 ) else if v > q + 1 \u03bb +p (A p\u22121,q,u,v ) else if q = p + 1 \u03bb +q (A p,q\u22121,u,v ) otherwise",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Inference",
                "sec_num": "3.2"
            },
            {
                "text": "This transition function applies the cheapest possible update to reach state A p,q,u,v .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Inference",
                "sec_num": "3.2"
            },
            {
                "text": "Complexity analysis. We can see that \u03bb +u is only needed in the following the cases",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Inference",
                "sec_num": "3.2"
            },
            {
                "text": "[0, 1][2, 2], [1, 2][3, 3], \u2022 \u2022 \u2022 , [n \u2212 2, n \u2212 1][n, n].",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Inference",
                "sec_num": "3.2"
            },
            {
                "text": "Since, this update is quadratic in the worst case, the complexity of this operations is O(n 3 ). The update \u03bb +q , is applied to the cases",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Inference",
                "sec_num": "3.2"
            },
            {
                "text": "[ * , 1][2, 2], [ * , 2][3, 3], \u2022 \u2022 \u2022 , [ * , n \u2212 1], [n, n],",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Inference",
                "sec_num": "3.2"
            },
            {
                "text": "where * denotes any number within the span constraints but not present in previous updates. Since, the update is linear and we need to iterate through all tokens twice, this update takes O(n 3 ) operations. The update \u03bb +v is applied for the cases",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Inference",
                "sec_num": "3.2"
            },
            {
                "text": "[ * , 1][2, * ], [ * , 2][3, * ], \u2022 \u2022 \u2022 , [ * , n \u2212 1], [n, * ].",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Inference",
                "sec_num": "3.2"
            },
            {
                "text": "Thus, with three degrees of freedom and a linear update, it runs in O(n 4 ) time. Finally, update \u03bb +u runs in constant time, but is run for all remaining cases, which constitute O(n 4 ) space. By summing the executions of all updates, we observe that the order of magnitude of our exact inference process is O(n 4 ). Note that for exact inference, it is not possible to get a lower order of magnitude, since we need to at least iterate through all possible span values once, which takes O(n 4 ) time.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Inference",
                "sec_num": "3.2"
            },
            {
                "text": "We will now describe our method to extract parallel data from Microblogs. The target domains in this work are Twitter and Sina Weibo, and the main language pair is Chinese-English. Furthermore, we also run the system for the Arabic-English language pair using the Twitter data. For the Twitter domain, we use a previously crawled dataset from the years 2008 to 2013, where one million tweets are crawled every day. In total, we processed 1.6 billion tweets.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Parallel Data Extraction",
                "sec_num": "4"
            },
            {
                "text": "Regarding Sina Weibo, we built a crawler that continuously collects tweets from Weibo. We start from one seed user and collect his posts, and then we find the users he follows that we have not considered, and repeat. Due to the rate limiting established by the Weibo API 1 , we are restricted in terms of number of requests every hour, which greatly limits the amount of messages we can collect. Furthermore, each request can only fetch up to 100 posts from a user, and subsequent pages of 100 posts require additional API calls. Thus, to optimize the number of parallel posts we can collect per request, we only crawl all messages from users that have at least 10 parallel tweets in their first 100 posts. The number of parallel messages is estimated by running our alignment model, and checking if \u03c4 > \u03c6, where \u03c6 was set empirically initially, and optimized after obtaining annotated data, which will be detailed in 5.1. Using this process, we crawled 65 million tweets from Sina Weibo within 4 months.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Parallel Data Extraction",
                "sec_num": "4"
            },
            {
                "text": "In both cases, we first filter the collection of tweets for messages containing at least one trigram in each language of the target language pair, determined by their Unicode ranges. This means that for the Chinese-English language pair, we only keep tweets with more than 3 Mandarin characters and 3 latin words. Furthermore, based on the work in (Jelh et al., 2012) , if a tweet A is identified as a retweet, meaning that it references another tweet B, we also consider the hypothesis that these tweets may be mutual translations. Thus, if A and B contain trigrams in different languages, 1 http://open.weibo.com/wiki/API\u6587\u6863/en these are also considered for the extraction of parallel data. This is done by concatenating tweets A and B, and adding the constraint that [p, q] must be within A and [u, v] must be within B. Finally, identical duplicate tweets are removed.",
                "cite_spans": [
                    {
                        "start": 348,
                        "end": 367,
                        "text": "(Jelh et al., 2012)",
                        "ref_id": "BIBREF6"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Parallel Data Extraction",
                "sec_num": "4"
            },
            {
                "text": "After filtering, we obtained 1124k ZH-EN tweets from Sina Weibo, 868k ZH-EN and 136k AR-EN tweets from Twitter. These language pairs are not definite, since we simply check if there is a trigram in each language.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Parallel Data Extraction",
                "sec_num": "4"
            },
            {
                "text": "Finally, we run our alignment model described in section 3, and obtain the parallel segments and their scores, which measure how likely those segments are parallel. In this process, lexical tables for EN-ZH language pair used by Model 1 were built using the FBIS dataset (LDC2003E14) for both directions, a corpus of 300K sentence pairs from the news domain. Likewise, for the EN-AR language pair, we use a fraction of the NIST dataset, by removing the data originated from UN, which leads to approximately 1M sentence pairs.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Parallel Data Extraction",
                "sec_num": "4"
            },
            {
                "text": "We evaluate our method in two ways. First, intrinsically, by observing how well our method identifies tweets containing parallel data, the language pair and what their spans are. Second, extrinsically, by looking at how well the data improves a translation task. This methodology is similar to that of Smith et al. (2010) .",
                "cite_spans": [
                    {
                        "start": 302,
                        "end": 321,
                        "text": "Smith et al. (2010)",
                        "ref_id": "BIBREF14"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experiments",
                "sec_num": "5"
            },
            {
                "text": "Data. Our method needs to determine if a given tweet contains parallel data, and if so, what is the language pair of the data, and what segments are parallel. Thus, we had a native Mandarin speaker, also fluent in English, to annotate 2000 tweets sampled from crawled Weibo tweets. One important question of answer is what portion of the Microblogs contains parallel data. Thus, we also use the random sample Twitter and annotated 1200 samples, identifying whether each sample contains parallel data, for the EN-ZH and AR-EN filtered tweets.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Parallel Data Extraction",
                "sec_num": "5.1"
            },
            {
                "text": "Metrics. To test the accuracy of the score S, we ordered all 2000 samples by score. Then, we calculate the precision, recall and accuracy at increasing intervals of 10% of the top samples. We count as a true positive (tp) if we correctly identify a parallel tweet, and as a false positive (f p) spuriously detect a parallel tweet. Finally, a true negative (tn) occurs when we correctly detect a non-parallel tweet, and a false negative (f n) if we miss a parallel tweet. Then, we set the precision as tp tp+f p , recall as tp tp+f n and accuracy as tp+tn tp+f p+tn+f n . For language identification, we calculate the accuracy based on the number of instances that were identified with the correct language pair. Finally, to evaluate the segment alignment, we use the Word Error Rate (WER) metric, without substitutions, where we compare the left and right spans of our system and the respective spans of the reference. We count an insertion error (I) for each word in our system's spans that is not present in the reference span and a deletion error (D) for each word in the reference span that is not present in our system's spans. Thus, we set W ER = D+I N , where N is the number of tokens in the tweet. To compute this score for the whole test set, we compute the average of the W ER for each sample.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Parallel Data Extraction",
                "sec_num": "5.1"
            },
            {
                "text": "Results. The precision, recall and accuracy curves are shown in Figure 2 . The quality of the parallel sentence detection did not vary significantly with different setups, so we will only show the results for the best setup, which is the baseline model with span constraints. Figure 2 : Precision, recall and accuracy curves for parallel data detection. The y-axis denotes the scores for each metric, and the x-axis denotes the percentage of the highest scoring sentence pairs that are kept. From the precision and recall curves, we observe that most of the parallel data can be found at the top 30% of the filtered tweets, where 5 in 6 tweets are detected correctly as parallel, and only 1 in every 6 parallel sentences is lost. We will denote the score threshold at this point as \u03c6, which is a good threshold to estimate on whether the tweet is parallel. However, this parameter can be tuned for precision or recall. We also see that in total, 30% of the filtered tweets are parallel. If we generalize this ratio for the complete set with 1124k tweets, we can expect approximately 337k parallel sentences. Finally, since 65 million tweets were extracted to generate the 337k tweets, we estimate that approximately 1 parallel tweet can be found for every 200 tweets we process using our targeted approach. On the other hand, from the 1200 tweets from Twitter, we found that 27 had parallel data in the ZH-EN pair, if we extrapolate for the whole 868k filtered tweets, we expect that we can find 19530. 19530 parallel sentences from 1.6 billion tweets crawled randomly, represents 0.001% of the total corpora. For AR-EN, a similar result was obtained where we expect 12407 tweets out of the 1.6 billion to be parallel. This shows that targeted approaches can substantially reduce the crawling effort required to find parallel tweets. Still, considering that billions of tweets are posted daily, this is a substantial source of parallel data. The remainder of the tests will be performed on the Weibo dataset, which contains more parallel data. Tests on the Twitter data will be conducted as future work, when we process Twitter data on a larger scale to obtain more parallel sentences. For the language identification task, we had an accuracy of 99.9%, since distinguishing English and Mandarin is trivial. The small percentage of errors originated from other latin languages (Ex: French) due to our naive language detector.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 64,
                        "end": 72,
                        "text": "Figure 2",
                        "ref_id": null
                    },
                    {
                        "start": 276,
                        "end": 284,
                        "text": "Figure 2",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Parallel Data Extraction",
                "sec_num": "5.1"
            },
            {
                "text": "As for the segment alignment task. Our baseline system with no constraints obtains a WER of 12.86%, and this can be improved to 11.66% by adding constraints to possible spans. This shows that, on average, approximately 1 in 9 words on the parallel segments is incorrect. However, translation models are generally robust to such kinds of errors and can learn good translations even in the presence of imperfect sentence pairs.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Parallel Data Extraction",
                "sec_num": "5.1"
            },
            {
                "text": "Among the 578 tweets that are parallel, 496 were extracted within the same tweet and 82 were extracted from retweets. Thus, we see that the majority of the parallel data comes from within the same tweet.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Parallel Data Extraction",
                "sec_num": "5.1"
            },
            {
                "text": "Topic analysis. To give an intuition about the contents of the parallel data we found, we looked at the distribution over topics of the parallel dataset inferred by LDA (Blei et al., 2003) . Thus, we grouped the Weibo filtered tweets by users, and ran LDA over the predicted English segments, with 12 topics. The 7 most interpretable topics are shown in Table 1 . We see that the data contains a # Topic Most probable words in topic 1 (Dating) love time girl live mv back word night rt wanna 2 (Entertainment) news video follow pong image text great day today fans 3 (Music) cr day tour cn url amazon music full concert alive 4 (Religion) man god good love life heart would give make lord 5 (Nightlife) cn url beijing shanqi party adj club dj beijiner vt 6 (Chinese News) china chinese year people world beijing years passion country government 7 (Fashion) street fashion fall style photo men model vogue spring magazine Table 1 : Most probable words inferred using LDA in several topics from the parallel data extracted from Weibo. Topic labels (in parentheses) were assigned manually for illustration purposes.",
                "cite_spans": [
                    {
                        "start": 169,
                        "end": 188,
                        "text": "(Blei et al., 2003)",
                        "ref_id": "BIBREF1"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 354,
                        "end": 361,
                        "text": "Table 1",
                        "ref_id": null
                    },
                    {
                        "start": 921,
                        "end": 928,
                        "text": "Table 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Parallel Data Extraction",
                "sec_num": "5.1"
            },
            {
                "text": "variety of topics, both formal (Chinese news, religion) and informal (entertainment, music).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Parallel Data Extraction",
                "sec_num": "5.1"
            },
            {
                "text": "Example sentence pairs. To gain some perspective on the type of sentence pairs we are extracting, we will illustrate some sentence pairs we crawled and aligned automatically. Table 2 contains 5 English-Mandarin and 4 English-Arabic sentence pairs that were extracted automatically. These were chosen, since they contain some aspects that are characteristic of the text present in Microblogs and Social Media. These are:",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 175,
                        "end": 182,
                        "text": "Table 2",
                        "ref_id": "TABREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Parallel Data Extraction",
                "sec_num": "5.1"
            },
            {
                "text": "\u2022 Abbreviations -In most sentence pairs examples, we can witness the use of abbreviated forms of English words, such as wanna, TMI, 4 and imma. These can be normalized as want to, too much information, for and I am going to, respectively. In sentence 5, we observe that this phenomena also occurs in Mandarin. We find that TMD is a popular way to write \u4ed6\u5988\u7684 whose Pinyin rendering is t\u0101 m\u0101 de. The meaning of this expression depends on the context it is used, and can convey a similar connotation as adding the intensifier the hell to an English sentence. \u2022 Jargon -Another common phenomena is the appearance of words that are only used in subcommunities. For instance, in sentence pair 4, we the jargon word cday is used, which is a colloquial variant for birthday. \u2022 Emoticons -In sentence 8, we observe the presence of the emoticon :), which is frequently used in this media. We found that emoticons are either translated as they are or simply removed, in most cases. \u2022 Syntax errors -In the domain of microblogs, it is also common that users do not write strictly syntactic sentences, for instance, in sentence pair 7, the sentence onni this gift only 4 u, is clearly not syntactically correct. Firstly, onni is a named entity, yet it is not capitalized. Secondly, a comma should follow onni. Thirdly, the verb is should be used after gift. Having examples of these sentences in the training set, with common mistakes (intentional or not), might become a key factor in training MT systems that can be robust to such errors. \u2022 Dialects -We can observe a much broader range of dialects in our data, since there are no dialect standards in microblogs. For instance, in sentence pair 6, we observe an arabic word (in bold) used in the spoken Arabic dialect used in some countries along the shores of the Persian Gulf, which means means the next. In standard Arabic, a significantly different form is used.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Parallel Data Extraction",
                "sec_num": "5.1"
            },
            {
                "text": "We can also see in sentence pair 9 that our aligner does not alway make the correct choice when determining spans. In this case, the segment RT @MARYAMALKHAWAJA: was included in the English segment spuriously, since it does not correspond to anything in the Arabic counterpart.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Parallel Data Extraction",
                "sec_num": "5.1"
            },
            {
                "text": "We report on machine translation experiments using our harvested data in two domains: edited news and microblogs.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Machine Translation Experiments",
                "sec_num": "5.2"
            },
            {
                "text": "News translation. For the news test, we created a new test set from a crawl of the Chinese-English documents on the Project Syndicate website 2 , which contains news commentary articles. We chose to use this data set, rather than more standard NIST test sets to ensure that we had recent documents in the test set (the most recent NIST test sets contain documents published in 2007, well before our microblog data was created). We extracted 1386 parallel sentences for tuning and another 1386 sentences for testing, from the manually aligned segments. For this test set, we used 8 million sentences from the full NIST parallel dataset as the language model training data. We shall call this test set Syndicate. Microblog translation. To carry out the microblog translation experiments, we need a high quality parallel test set. Since we are not aware of such a test set, we created one by manually selecting parallel messages from Weibo. Our procedure was as follows. We selected 2000 candidate Weibo posts from users who have a high number of parallel tweets according to our automatic method (at least 2 in every 5 tweets). To these, we added another 2000 messages from our targeted Weibo crawl, but these had no requirement on the proportion of parallel tweets they had produced. We identified 2374 parallel segments, of which we used 1187 for development and 1187 for testing. We refer to this test set as Weibo. 3 Obviously, we removed the development and test sets from our training data. Furthermore, to ensure that our training data was not too similar to the test set in the Weibo translation task, we filtered the training data to remove near duplicates by computing edit distance between each parallel sentence in the heldout set and each training instance. If either the source or the target sides of the a training instance had an edit distance of less than 10%, we removed it. 4 As for the language models, we collected a further 10M tweets from Twitter for the English language model and another 10M tweets from Weibo for the Chinese language model. 3 We acknowledge that self-translated messages are probably not a typically representative sample of all microblog messages. However, we do not have the resources to produce a carefully curated test set with a more broadly representative distribution. Still, we believe these results are informative as long as this is kept in mind. 4 Approximately 150,000 training instances removed. Baselines. We report results on these test sets using different training data. First, we use the FBIS dataset which contains 300K high quality sentence pairs, mostly in the broadcast news domain. Second, we use the full 2012 NIST Chinese-English dataset (approximately 8M sentence pairs, including FBIS). Finally, we use our crawled data (referred as Weibo) by itself and also combined with the two previous training sets.",
                "cite_spans": [
                    {
                        "start": 1417,
                        "end": 1418,
                        "text": "3",
                        "ref_id": null
                    },
                    {
                        "start": 1891,
                        "end": 1892,
                        "text": "4",
                        "ref_id": null
                    },
                    {
                        "start": 2065,
                        "end": 2066,
                        "text": "3",
                        "ref_id": null
                    },
                    {
                        "start": 2398,
                        "end": 2399,
                        "text": "4",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Machine Translation Experiments",
                "sec_num": "5.2"
            },
            {
                "text": "Setup. We use the Moses phrase-based MT system with standard features (Koehn et al., 2003) . For reordering, we use the MSD reordering model (Axelrod et al., 2005) . As the language model, we use a 5-gram model with Kneser-Ney smoothing. The weights were tuned using MERT . Results are presented with BLEU-4 (Papineni et al., 2002) .",
                "cite_spans": [
                    {
                        "start": 70,
                        "end": 90,
                        "text": "(Koehn et al., 2003)",
                        "ref_id": "BIBREF7"
                    },
                    {
                        "start": 141,
                        "end": 163,
                        "text": "(Axelrod et al., 2005)",
                        "ref_id": null
                    },
                    {
                        "start": 308,
                        "end": 331,
                        "text": "(Papineni et al., 2002)",
                        "ref_id": "BIBREF11"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Syndicate Weibo",
                "sec_num": null
            },
            {
                "text": "Results. The BLEU scores for the different parallel corpora are shown in Table 3 and the top 10 out-of-vocabulary (OOV) words for each dataset are shown in Table 4 . We observe that for the Syndicate test set, the NIST and FBIS datasets 7hollande (5) wolfowitz (7) itunes (8) iheartradio (5) zeman (2) gaddafi 7wikileaks (4) revolutions (7) havoc 8xoxo (4) @yaptv (2) merkel 7wilders (3) qaddafi (7) sammy (6) snoop (4) witnessing (2) fats 7rant (3) geopolitical (7) obama (6) shinoda (4) whoohooo (2) dialogue 7esm (3) genome (7) lol (6) scrapbook (4) wbr 2Table 4: The most frequent out-of-vocabulary (OOV) words and their counts for the two English-source test sets with three different training sets.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 73,
                        "end": 80,
                        "text": "Table 3",
                        "ref_id": "TABREF3"
                    },
                    {
                        "start": 156,
                        "end": 163,
                        "text": "Table 4",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Syndicate Weibo",
                "sec_num": null
            },
            {
                "text": "perform better than our extracted parallel data. This is to be expected, since our dataset was extracted from an extremely different domain. However, by combining the Weibo parallel data with this standard data, improvements in BLEU are obtained. Error analysis indicates that one major factor is that names from current events, such as Romney and Wikileaks do not occur in the older NIST and FBIS datasets, but they are represented in the Weibo dataset. Furthermore, we also note that the system built on the Weibo dataset does not perform substantially worse than the one trained on the FBIS dataset, a further indication that harvesting parallel microblog data yields a diverse collection of translated material. For the Weibo test set, a significant improvement over the news datasets can be achieved using our crawled parallel data. Once again newer terms, such as iTunes, are one of the reasons older datasets perform less well. However, in this case, the top OOV words of the news domain datasets are not the most accurate representation of coverage problems in this domain. This is because many frequent words in microblogs, e.g., nonstandard abbreviations, like u and 4 are found in the news domain as words, albeit with different meanings. Thus, the OOV table gives an incomplete picture of the translation problems when using the news domain corpora to translate microblogs. Also, some structural errors occur when training with the news domain datasets, one such example is shown in table 5, where the character \u8bf4 is incorrectly translated to said. This occurs because this type of constructions is infrequent in news datasets. Furthermore, we can see that compound expressions, such as the translation from \u6d3e\u5bf9\u65f6 \u523b to party time are also learned.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Syndicate Weibo",
                "sec_num": null
            },
            {
                "text": "Finally, we observe that combining the datasets Source \u5bf9sam farrar \u8bf4\uff0c\u6d3e\u5bf9\u65f6\u523b Reference to sam farrar , party time FBIS farrar to sam said , in time NIST to sam farrar said , the moment WEIBO to sam farrar , party time ",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Syndicate Weibo",
                "sec_num": null
            },
            {
                "text": "We presented a framework to crawl parallel data from microblogs. We find parallel data from single posts, with translations of the same sentence in two languages. We show that a considerable amount of parallel sentence pairs can be crawled from microblogs and these can be used to improve Machine Translation by updating our translation tables with translations of newer terms. Furthermore, the in-domain data can substantially improve the translation quality on microblog data. The resources described in this paper and further developments are available to the general public at http://www.cs.cmu.edu/ \u223c lingwang/utopia.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "6"
            },
            {
                "text": "http://www.project-syndicate.org/",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            }
        ],
        "back_matter": [
            {
                "text": "The PhD thesis of Wang Ling is supported by FCT grant SFRH/BD/51157/2010. The authors wish to express their gratitude to thank William Cohen, Noah Smith, Waleed Ammar, and the anonymous reviewers for their insight and comments. We are also extremely grateful to Brendan O'Connor for providing the Twitter data and to Philipp Koehn and Barry Haddow for providing the Project Syndicate data.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Acknowledgements",
                "sec_num": null
            }
        ],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Edinburgh system description for the 2005 iwslt speech translation evaluation",
                "authors": [],
                "year": 2005,
                "venue": "Proceedings of the International Workshop on Spoken Language Translation",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "References [Axelrod et al.2005] Amittai Axelrod, Ra Birch Mayne, Chris Callison-burch, Miles Osborne, and David Talbot. 2005. Edinburgh system description for the 2005 iwslt speech translation evaluation. In Pro- ceedings of the International Workshop on Spoken Language Translation (IWSLT.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "Latent dirichlet allocation",
                "authors": [
                    {
                        "first": "[",
                        "middle": [],
                        "last": "Blei",
                        "suffix": ""
                    }
                ],
                "year": 2003,
                "venue": "J. Mach. Learn. Res",
                "volume": "3",
                "issue": "",
                "pages": "993--1022",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "[Blei et al.2003] David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent dirichlet allocation. J. Mach. Learn. Res., 3:993-1022, March.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "Improved unsupervised sentence alignment for symmetrical and asymmetrical parallel corpora",
                "authors": [
                    {
                        "first": "Fabienne",
                        "middle": [],
                        "last": "Braune",
                        "suffix": ""
                    },
                    {
                        "first": "Alexander",
                        "middle": [],
                        "last": "Fraser",
                        "suffix": ""
                    }
                ],
                "year": 2010,
                "venue": "Proceedings of the 23rd International Conference on Computational Linguistics: Posters, COLING '10",
                "volume": "",
                "issue": "",
                "pages": "81--89",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "[Braune and Fraser2010] Fabienne Braune and Alexan- der Fraser. 2010. Improved unsupervised sentence alignment for symmetrical and asymmetrical paral- lel corpora. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters, COLING '10, pages 81-89, Stroudsburg, PA, USA. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "The mathematics of statistical machine translation: parameter estimation",
                "authors": [
                    {
                        "first": "[",
                        "middle": [],
                        "last": "Brown",
                        "suffix": ""
                    }
                ],
                "year": 1993,
                "venue": "Comput. Linguist",
                "volume": "19",
                "issue": "",
                "pages": "263--311",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "[Brown et al.1993] Peter F. Brown, Vincent J. Della Pietra, Stephen A. Della Pietra, and Robert L. Mer- cer. 1993. The mathematics of statistical machine translation: parameter estimation. Comput. Lin- guist., 19:263-311, June.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "A fast and accurate method for detecting English-Japanese parallel texts",
                "authors": [
                    {
                        "first": "[",
                        "middle": [],
                        "last": "Fukushima",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "Proceedings of the Workshop on Multilingual Language Resources and Interoperability",
                "volume": "",
                "issue": "",
                "pages": "60--67",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "[Fukushima et al.2006] Ken'ichi Fukushima, Kenjiro Taura, and Takashi Chikayama. 2006. A fast and accurate method for detecting English-Japanese par- allel texts. In Proceedings of the Workshop on Mul- tilingual Language Resources and Interoperability, pages 60-67, Sydney, Australia, July. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Partof-speech tagging for twitter: annotation, features, and experiments",
                "authors": [
                    {
                        "first": "[",
                        "middle": [],
                        "last": "Gimpel",
                        "suffix": ""
                    }
                ],
                "year": 2011,
                "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers",
                "volume": "2",
                "issue": "",
                "pages": "42--47",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "[Gimpel et al.2011] Kevin Gimpel, Nathan Schneider, Brendan O'Connor, Dipanjan Das, Daniel Mills, Ja- cob Eisenstein, Michael Heilman, Dani Yogatama, Jeffrey Flanigan, and Noah A. Smith. 2011. Part- of-speech tagging for twitter: annotation, features, and experiments. In Proceedings of the 49th An- nual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers -Volume 2, HLT '11, pages 42-47, Strouds- burg, PA, USA. Association for Computational Lin- guistics.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "Twitter translation using translationbased cross-lingual retrieval",
                "authors": [
                    {
                        "first": "Laura",
                        "middle": [],
                        "last": "Jelh",
                        "suffix": ""
                    },
                    {
                        "first": "Felix",
                        "middle": [],
                        "last": "Hiebel",
                        "suffix": ""
                    },
                    {
                        "first": "Stefan",
                        "middle": [],
                        "last": "Riezler",
                        "suffix": ""
                    }
                ],
                "year": 2012,
                "venue": "Proceedings of the Seventh Workshop on Statistical Machine Translation",
                "volume": "",
                "issue": "",
                "pages": "410--421",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "[Jelh et al.2012] Laura Jelh, Felix Hiebel, and Stefan Riezler. 2012. Twitter translation using translation- based cross-lingual retrieval. In Proceedings of the Seventh Workshop on Statistical Machine Transla- tion, pages 410-421, Montr\u00e9al, Canada, June. Asso- ciation for Computational Linguistics.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "Statistical phrase-based translation",
                "authors": [
                    {
                        "first": "[",
                        "middle": [],
                        "last": "Koehn",
                        "suffix": ""
                    }
                ],
                "year": 2003,
                "venue": "Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology",
                "volume": "1",
                "issue": "",
                "pages": "48--54",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "[Koehn et al.2003] Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology -Volume 1, NAACL '03, pages 48-54, Morristown, NJ, USA. Association for Computa- tional Linguistics.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "Europarl: A Parallel Corpus for Statistical Machine Translation",
                "authors": [
                    {
                        "first": "Philipp",
                        "middle": [],
                        "last": "Koehn",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "Proceedings of the 3rd International Joint Conference on Natural Language Processing",
                "volume": "",
                "issue": "",
                "pages": "79--86",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Philipp Koehn. 2005. Europarl: A Par- allel Corpus for Statistical Machine Translation. In Proceedings of the tenth Machine Translation Sum- mit, pages 79-86, Phuket, Thailand. AAMT, AAMT. [Li and Liu2008] Bo Li and Juan Liu. 2008. Mining Chinese-English parallel corpora from the web. In Proceedings of the 3rd International Joint Confer- ence on Natural Language Processing (IJCNLP).",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "Mining parenthetical translations from the web by word alignment",
                "authors": [
                    {
                        "first": "",
                        "middle": [],
                        "last": "Lin",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "Proceedings of ACL-08: HLT",
                "volume": "",
                "issue": "",
                "pages": "994--1002",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "[Lin et al.2008] Dekang Lin, Shaojun Zhao, Benjamin Van Durme, and Marius Pa\u015fca. 2008. Mining par- enthetical translations from the web by word align- ment. In Proceedings of ACL-08: HLT, pages 994- 1002, Columbus, Ohio, June. Association for Com- putational Linguistics.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "Minimum error rate training in statistical machine translation",
                "authors": [
                    {
                        "first": "Franz Josef",
                        "middle": [],
                        "last": "Och",
                        "suffix": ""
                    }
                ],
                "year": 2003,
                "venue": "Proceedings of the 41st Annual Meeting on Association for Computational Linguistics",
                "volume": "1",
                "issue": "",
                "pages": "160--167",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Pro- ceedings of the 41st Annual Meeting on Association for Computational Linguistics -Volume 1, ACL '03, pages 160-167, Stroudsburg, PA, USA. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "Bleu: a method for automatic evaluation of machine translation",
                "authors": [
                    {
                        "first": "[",
                        "middle": [],
                        "last": "Papineni",
                        "suffix": ""
                    }
                ],
                "year": 2002,
                "venue": "Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL '02",
                "volume": "",
                "issue": "",
                "pages": "311--318",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "[Papineni et al.2002] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine trans- lation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL '02, pages 311-318, Stroudsburg, PA, USA. Associ- ation for Computational Linguistics.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "Constructing parallel corpora for six indian languages via crowdsourcing",
                "authors": [],
                "year": 2012,
                "venue": "Proceedings of the Seventh Workshop on Statistical Machine Translation",
                "volume": "",
                "issue": "",
                "pages": "401--409",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "[Post et al.2012] Matt Post, Chris Callison-Burch, and Miles Osborne. 2012. Constructing parallel cor- pora for six indian languages via crowdsourcing. In Proceedings of the Seventh Workshop on Statisti- cal Machine Translation, pages 401-409, Montr\u00e9al, Canada, June. Association for Computational Lin- guistics.",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "The web as a parallel corpus",
                "authors": [
                    {
                        "first": "Philip",
                        "middle": [],
                        "last": "Resnik",
                        "suffix": ""
                    },
                    {
                        "first": "Noah",
                        "middle": [
                            "A"
                        ],
                        "last": "Smith",
                        "suffix": ""
                    }
                ],
                "year": 2003,
                "venue": "Computational Linguistics",
                "volume": "29",
                "issue": "",
                "pages": "349--380",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "[Resnik and Smith2003] Philip Resnik and Noah A. Smith. 2003. The web as a parallel corpus. Compu- tational Linguistics, 29:349-380.",
                "links": null
            },
            "BIBREF14": {
                "ref_id": "b14",
                "title": "Extracting parallel sentences from comparable corpora using document level alignment",
                "authors": [
                    {
                        "first": "[",
                        "middle": [],
                        "last": "Smith",
                        "suffix": ""
                    }
                ],
                "year": 2010,
                "venue": "Proceedings of the 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "[Smith et al.2010] Jason R. Smith, Chris Quirk, and Kristina Toutanova. 2010. Extracting parallel sen- tences from comparable corpora using document level alignment. In Proceedings of the 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "Why not grab a free lunch? mining large corpora for parallel sentences to improve translation modeling",
                "authors": [
                    {
                        "first": "Ferhan",
                        "middle": [],
                        "last": "Ture",
                        "suffix": ""
                    },
                    {
                        "first": "Jimmy",
                        "middle": [],
                        "last": "Lin",
                        "suffix": ""
                    }
                ],
                "year": 2012,
                "venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
                "volume": "",
                "issue": "",
                "pages": "626--630",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "[Ture and Lin2012] Ferhan Ture and Jimmy Lin. 2012. Why not grab a free lunch? mining large corpora for parallel sentences to improve translation modeling. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 626-630, Montr\u00e9al, Canada, June. Associa- tion for Computational Linguistics.",
                "links": null
            },
            "BIBREF16": {
                "ref_id": "b16",
                "title": "Large scale parallel document mining for machine translation",
                "authors": [
                    {
                        "first": "[",
                        "middle": [],
                        "last": "Uszkoreit",
                        "suffix": ""
                    }
                ],
                "year": 2010,
                "venue": "Proceedings of the 23rd International Conference on Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "1101--1109",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "[Uszkoreit et al.2010] Jakob Uszkoreit, Jay Ponte, Ashok C. Popat, and Moshe Dubiner. 2010. Large scale parallel document mining for machine transla- tion. In Proceedings of the 23rd International Con- ference on Computational Linguistics, pages 1101- 1109.",
                "links": null
            },
            "BIBREF17": {
                "ref_id": "b17",
                "title": "Hmm-based word alignment in statistical translation",
                "authors": [
                    {
                        "first": "",
                        "middle": [],
                        "last": "Vogel",
                        "suffix": ""
                    }
                ],
                "year": 1996,
                "venue": "Proceedings of the 16th conference on Computational linguistics",
                "volume": "",
                "issue": "",
                "pages": "836--841",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "[Vogel et al.1996] Stephan Vogel, Hermann Ney, and Christoph Tillmann. 1996. Hmm-based word align- ment in statistical translation. In Proceedings of the 16th conference on Computational linguistics -Vol- ume 2, COLING '96, pages 836-841, Stroudsburg, PA, USA. Association for Computational Linguis- tics.",
                "links": null
            },
            "BIBREF18": {
                "ref_id": "b18",
                "title": "Evaluating a probabilistic model for cross-lingual information retrieval",
                "authors": [
                    {
                        "first": "[",
                        "middle": [],
                        "last": "Xu",
                        "suffix": ""
                    }
                ],
                "year": 2001,
                "venue": "Proceedings of the 24th annual international ACM SIGIR conference on Research and development in information retrieval, SIGIR '01",
                "volume": "",
                "issue": "",
                "pages": "105--110",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "[Xu et al.2001] Jinxi Xu, Ralph Weischedel, and Chanh Nguyen. 2001. Evaluating a probabilistic model for cross-lingual information retrieval. In Proceed- ings of the 24th annual international ACM SIGIR conference on Research and development in infor- mation retrieval, SIGIR '01, pages 105-110, New York, NY, USA. ACM.",
                "links": null
            },
            "BIBREF19": {
                "ref_id": "b19",
                "title": "Sentence segmentation using ibm word alignment model 1",
                "authors": [
                    {
                        "first": "[",
                        "middle": [],
                        "last": "Xu",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "Proceedings of EAMT 2005 (10th Annual Conference of the European Association for Machine Translation",
                "volume": "",
                "issue": "",
                "pages": "280--287",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "[Xu et al.2005] Jia Xu, Richard Zens, and Hermann Ney. 2005. Sentence segmentation using ibm word alignment model 1. In Proceedings of EAMT 2005 (10th Annual Conference of the European Associa- tion for Machine Translation, pages 280-287.",
                "links": null
            },
            "BIBREF20": {
                "ref_id": "b20",
                "title": "Machine translation of Arabic dialects",
                "authors": [
                    {
                        "first": "[",
                        "middle": [],
                        "last": "Zbib",
                        "suffix": ""
                    }
                ],
                "year": 2012,
                "venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "[Zbib et al.2012] Rabih Zbib, Erika Malchiodi, Jacob Devlin, David Stallard, Spyros Matsoukas, Richard Schwarz, John Makhoul, Omar F. Zaidan, and Chris Callison-Burch. 2012. Machine translation of Ara- bic dialects. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF0": {
                "uris": null,
                "text": "Illustration of the \u03bb +v operator. The light gray boxes show the parallel span and the dark boxes show the span's Viterbi alignment.",
                "type_str": "figure",
                "num": null
            },
            "TABREF0": {
                "content": "<table><tr><td>ENGLISH</td><td>MANDARIN</td></tr><tr><td>1 i wanna live in a wes anderson world 2 ENGLISH</td><td>\u6211\u60f3\u8981\u751f\u6d3b\u5728Wes Anderson\u7684\u4e16\u754c\u91cc ARABIC</td></tr><tr><td>6 It's gonna be a warm week!</td><td/></tr><tr><td>7 onni this gift only 4 u</td><td/></tr><tr><td>8 sunset in aqaba :)</td><td>(:</td></tr><tr><td>9 RT @MARYAMALKHAWAJA: there is a call for widespread protests in #bahrain tmrw</td><td/></tr></table>",
                "num": null,
                "html": null,
                "text": "Chicken soup, corn never truly digests. TMI. \u9e21\u6c64\u5427\uff0c\u7389\u7c73\u795e\u9a6c\u7684\u4ece\u6765\u6ca1\u6709\u771f\u6b63\u6d88\u5316\u8fc7.\u6076\u5fc3 3 To DanielVeuleman yea iknw imma work on that \u5bf9DanielVeuleman\u8bf4\uff0c\u662f\u7684\u6211\u77e5\u9053\uff0c\u6211\u6b63\u5728\u5411\u90a3\u65b9\u9762\u52aa\u529b 4 msg 4 Warren G his cday is today 1 yr older.\u53d1\u4fe1\u606f\u7ed9Warren G\uff0c\u4eca\u5929\u662f\u4ed6\u7684\u751f\u65e5\uff0c\u53c8\u8001\u4e86\u4e00\u5c81\u4e86\u3002 5 Where the hell have you been all these years? \u8fd9\u4e9b\u5e74\u4f60TMD\u5230\u54ea\u53bb\u4e86",
                "type_str": "table"
            },
            "TABREF1": {
                "content": "<table/>",
                "num": null,
                "html": null,
                "text": "Examples of English-Mandarin and English-Arabic sentence pairs. The English-Mandarin sentences were extracted from Sina Weibo and the English-Arabic sentences were extracted from Twitter. Some messages have been shorted to fit into the table. Some interesting aspects of these sentence pairs are marked in bold.",
                "type_str": "table"
            },
            "TABREF3": {
                "content": "<table/>",
                "num": null,
                "html": null,
                "text": "BLEU scores for different datasets in different translation directions (left to right), broken with different training corpora (top to bottom).",
                "type_str": "table"
            },
            "TABREF5": {
                "content": "<table/>",
                "num": null,
                "html": null,
                "text": "Translation Examples using different training sets. yields another gain over individual datasets, both in the Syndicate and in the Weibo test sets.",
                "type_str": "table"
            }
        }
    }
}