{
    "paper_id": "P13-1006",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T09:33:48.132975Z"
    },
    "title": "Grounded Language Learning from Video Described with Sentences",
    "authors": [
        {
            "first": "Haonan",
            "middle": [],
            "last": "Yu",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "Purdue University",
                "location": {
                    "addrLine": "465 Northwestern Ave. West Lafayette",
                    "postCode": "47907-2035",
                    "region": "IN",
                    "country": "USA"
                }
            },
            "email": "haonan@haonanyu.com"
        },
        {
            "first": "Jeffrey",
            "middle": [
                "Mark"
            ],
            "last": "Siskind",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "Purdue University",
                "location": {
                    "addrLine": "465 Northwestern Ave. West Lafayette",
                    "postCode": "47907-2035",
                    "region": "IN",
                    "country": "USA"
                }
            },
            "email": ""
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "We present a method that learns representations for word meanings from short video clips paired with sentences. Unlike prior work on learning language from symbolic input, our input consists of video of people interacting with multiple complex objects in outdoor environments. Unlike prior computer-vision approaches that learn from videos with verb labels or images with noun labels, our labels are sentences containing nouns, verbs, prepositions, adjectives, and adverbs. The correspondence between words and concepts in the video is learned in an unsupervised fashion, even when the video depicts simultaneous events described by multiple sentences or when different aspects of a single event are described with multiple sentences. The learned word meanings can be subsequently used to automatically generate description of new video.",
    "pdf_parse": {
        "paper_id": "P13-1006",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "We present a method that learns representations for word meanings from short video clips paired with sentences. Unlike prior work on learning language from symbolic input, our input consists of video of people interacting with multiple complex objects in outdoor environments. Unlike prior computer-vision approaches that learn from videos with verb labels or images with noun labels, our labels are sentences containing nouns, verbs, prepositions, adjectives, and adverbs. The correspondence between words and concepts in the video is learned in an unsupervised fashion, even when the video depicts simultaneous events described by multiple sentences or when different aspects of a single event are described with multiple sentences. The learned word meanings can be subsequently used to automatically generate description of new video.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "People learn language through exposure to a rich perceptual context. Language is grounded by mapping words, phrases, and sentences to meaning representations referring to the world. Siskind (1996) has shown that even with referential uncertainty and noise, a system based on crosssituational learning can robustly acquire a lexicon, mapping words to word-level meanings from sentences paired with sentence-level meanings. However, it did so only for symbolic representations of word-and sentence-level meanings that were not perceptually grounded. An ideal system would not require detailed word-level labelings to acquire word meanings from video but rather could learn language in a largely unsupervised fashion, just as a child does, from video paired with sentences.",
                "cite_spans": [
                    {
                        "start": 182,
                        "end": 196,
                        "text": "Siskind (1996)",
                        "ref_id": "BIBREF19"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "There has been recent research on grounded language learning. Roy (2002) pairs training sentences with vectors of real-valued features extracted from synthesized images which depict 2D blocks-world scenes, to learn a specific set of features for adjectives, nouns, and adjuncts. Yu and Ballard (2004) paired training images containing multiple objects with spoken name candidates for the objects to find the correspondence between lexical items and visual features. Dominey and Boucher (2005) paired narrated sentences with symbolic representations of their meanings, automatically extracted from video, to learn object names, spatial-relation terms, and event names as a mapping from the grammatical structure of a sentence to the semantic structure of the associated meaning representation. Chen and Mooney (2008) learned the language of sportscasting by determining the mapping between game commentaries and the meaning representations output by a rulebased simulation of the game. Kwiatkowski et al. (2012) present an approach that learns Montaguegrammar representations of word meanings together with a combinatory categorial grammar (CCG) from child-directed sentences paired with first-order formulas that represent their meaning.",
                "cite_spans": [
                    {
                        "start": 62,
                        "end": 72,
                        "text": "Roy (2002)",
                        "ref_id": "BIBREF16"
                    },
                    {
                        "start": 279,
                        "end": 300,
                        "text": "Yu and Ballard (2004)",
                        "ref_id": "BIBREF25"
                    },
                    {
                        "start": 466,
                        "end": 492,
                        "text": "Dominey and Boucher (2005)",
                        "ref_id": "BIBREF9"
                    },
                    {
                        "start": 793,
                        "end": 815,
                        "text": "Chen and Mooney (2008)",
                        "ref_id": "BIBREF7"
                    },
                    {
                        "start": 985,
                        "end": 1010,
                        "text": "Kwiatkowski et al. (2012)",
                        "ref_id": "BIBREF14"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Although most of these methods succeed in learning word meanings from sentential descriptions they do so only for symbolic or simple visual input (often synthesized); they fail to bridge the gap between language and computer vision, i.e., they do not attempt to extract meaning representations from complex visual scenes. On the other hand, there has been research on training object and event models from large corpora of complex images and video in the computer-vision community (Kuznetsova et al., 2012; Sadanand and Corso, 2012; Ordonez et al., 2011; Yao et al., 2010) . However, most such work requires training data that labels individual concepts with individual words (i.e., ob-jects delineated via bounding boxes in images as nouns and events that occur in short video clips as verbs). There is no attempt to model phrasal or sentential meaning, let alone acquire the object or event models from training data labeled with phrasal or sentential annotations. Moreover, such work uses distinct representations for different parts of speech; i.e., object and event recognizers use different representations.",
                "cite_spans": [
                    {
                        "start": 481,
                        "end": 506,
                        "text": "(Kuznetsova et al., 2012;",
                        "ref_id": "BIBREF13"
                    },
                    {
                        "start": 507,
                        "end": 532,
                        "text": "Sadanand and Corso, 2012;",
                        "ref_id": "BIBREF17"
                    },
                    {
                        "start": 533,
                        "end": 554,
                        "text": "Ordonez et al., 2011;",
                        "ref_id": "BIBREF15"
                    },
                    {
                        "start": 555,
                        "end": 572,
                        "text": "Yao et al., 2010)",
                        "ref_id": "BIBREF24"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "In this paper, we present a method that learns representations for word meanings from short video clips paired with sentences. Our work differs from prior work in three ways. First, our input consists of realistic video filmed in an outdoor environment. Second, we learn the entire lexicon, including nouns, verbs, prepositions, adjectives, and adverbs, simultaneously from video described with whole sentences. Third we adopt a uniform representation for the meanings of words in all parts of speech, namely Hidden Markov Models (HMMs) whose states and distributions allow for multiple possible interpretations of a word or a sentence in an ambiguous perceptual context.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "We employ the following representation to ground the meanings of words, phrases, and sentences in video clips. We first run an object detector on each video frame to yield a set of detections, each a subregion of the frame. In principle, the object detector need just detect the objects rather than classify them. In practice, we employ a collection of class-, shape-, pose-, and viewpoint-specific detectors and pool the detections to account for objects whose shape, pose, and viewpoint may vary over time. Our methods can learn to associate a single noun with detections produced by multiple detectors. We then string together detections from individual frames to yield tracks for objects that temporally span the video clip. We associate a feature vector with each frame (detection) of each such track. This feature vector can encode image features (including the identity of the particular detector that produced that detection) that correlate with object class; region color, shape, and size features that correlate with object properties; and motion features, such as linear and angular object position, velocity, and acceleration, that correlate with event properties. We also compute features between pairs of tracks to encode the relative position and motion of the pairs of objects that participate in events that involve two participants. In principle, we can also compute features between tuples of any number of tracks.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Following Yamoto et al. (1992) , Siskind and Morris (1996) , and Starner et al. (1998) , we represent the meaning of an intransitive verb, like jump, as a two-state HMM over the velocity-direction feature, modeling the requirement that the participant move upward then downward. We represent the meaning of a transitive verb, like pick up, as a two-state HMM over both single-object and object-pair features: the agent moving toward the patient while the patient is as rest, followed by the agent moving together with the patient. We extend this general approach to other parts of speech. Nouns, like person, can be represented as one-state HMMs over image features that correlate with the object classes denoted by those nouns. Adjectives, like red, round, and big, can be represented as one-state HMMs over region color, shape, and size features that correlate with object properties denoted by such adjectives. Adverbs, like quickly, can be represented as one-state HMMs over object-velocity features. Intransitive prepositions, like leftward, can be represented as one-state HMMs over velocity-direction features. Static transitive prepositions, like to the left of, can be represented as one-state HMMs over the relative position of a pair of objects. Dynamic transitive prepositions, like towards, can be represented as HMMs over the changing distance between a pair of objects. Note that with this formulation, the representation of a verb, like approach, might be the same as a dynamic transitive preposition, like towards. While it might seem like overkill to represent the meanings of words as one-state-HMMs, in practice, we often instead encode such concepts with multiple states to allow for temporal variation in the associated features due to changing pose and viewpoint as well as deal with noise and occlusion. Moreover, the general framework of modeling word meanings as temporally variant time series via multi-state HMMs allows one to model denominalized verbs, i.e., nouns that denote events, as in The jump was fast.",
                "cite_spans": [
                    {
                        "start": 10,
                        "end": 30,
                        "text": "Yamoto et al. (1992)",
                        "ref_id": "BIBREF23"
                    },
                    {
                        "start": 33,
                        "end": 58,
                        "text": "Siskind and Morris (1996)",
                        "ref_id": "BIBREF18"
                    },
                    {
                        "start": 61,
                        "end": 86,
                        "text": "and Starner et al. (1998)",
                        "ref_id": "BIBREF20"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Our HMMs are parameterized with varying arity.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Some, like jump(\u03b1), person(\u03b1), red(\u03b1), round(\u03b1), big(\u03b1), quickly(\u03b1), and leftward(\u03b1) have one argument, while others, like pick-up(\u03b1, \u03b2), to-the-left-of(\u03b1, \u03b2), and towards(\u03b1, \u03b2), have two arguments (In principle, any arity can be supported.). HMMs are instantiated by mapping their arguments to tracks. This involves computing the associated feature vector for that HMM over the detections in the tracks chosen to fill its arguments. This is done with a two-step process to support compositional semantics. The meaning of a multi-word phrase or sentence is represented as a joint likelihood of the HMMs for the words in that phrase or sentence. Compositionality is handled by linking or coindexing the arguments of the conjoined HMMs. Thus a sentence like The person to the left of the backpack approached the trashcan would be represented as a conjunction of person(p 0 ), to-the-left-of(p 0 , p 1 ), backback(p 1 ), approached(p 0 , p 2 ), and trash-can(p 2 ) over the three participants p 0 , p 1 , and p 2 . This whole sentence is then grounded in a particular video by mapping these participants to particular tracks and instantiating the associated HMMs over those tracks, by computing the feature vectors for each HMM from the tracks chosen to fill its arguments.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
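            {
                "text": "To make the compositional grounding concrete, here is a minimal worked sketch (illustrative only; the track names \tau_a, \tau_b, and \tau_c are hypothetical). If the participants are bound as p_0 \rightarrow \tau_a, p_1 \rightarrow \tau_b, and p_2 \rightarrow \tau_c, the sentence above is scored by instantiating each word's HMM on the tracks bound to its arguments and, because the component HMMs are conditionally independent given the tracks, summing their log likelihoods: \log P(sentence) = \log P_{person}(\tau_a) + \log P_{to-the-left-of}(\tau_a, \tau_b) + \log P_{backpack}(\tau_b) + \log P_{approached}(\tau_a, \tau_c) + \log P_{trash-can}(\tau_c). Every word thus constrains, and is constrained by, the tracks chosen for the participants it shares with the other words.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },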
            {
                "text": "Our algorithm makes six assumptions. First, we assume that we know the part of speech C m associated with each lexical entry m, along with the part-of-speech dependent number of states I c in the HMMs used to represent word meanings in that part of speech, the part-of-speech dependent number of features N c in the feature vectors used by HMMs to represent word meanings in that part of speech, and the part-of-speech dependent feature-vector computation \u03a6 c used to compute the features used by HMMs to represent word meanings in that part of speech. Second, we pair individual sentences each with a short video clip that depicts that sentence. The algorithm is not able to determine the alignment between multiple sentences and longer video segments. Note that there is no requirement that the video depict only that sentence. Other objects may be present and other events may occur. In fact, nothing precludes a training corpus with multiple copies of the same video, each paired with a different sentence describing a different aspect of that video. Moreover, our algorithm potentially can handle a small amount of noise, where a video clip is paired with an incorrect sentence that the video does not depict. Third, we assume that we already have (pre-trained) low-level object detectors capable of detecting instances of our target event participants in individual frames of the video. We allow such detections to be unreliable; our method can handle a moderate amount of false positives and false negatives. We do not need to know the mapping from these object-detection classes to words; our algorithm determines that. Fourth, we assume that we know the arity of each word in the corpus, i.e., the number of arguments that that word takes. For example, we assume that we know that the word person(\u03b1) takes one argument and the word approached(\u03b1, \u03b2) takes two arguments. Fifth, we assume that we know the total number of distinct participants that collectively fill all of the arguments for all of the words in each training sentence. For example, for the sentence The person to the left of the backpack approached the trash-can, we assume that we know that there are three distinct objects that participate in the event denoted. Sixth, we assume that we know the argument-to-participant mapping for each training sentence. Thus, for example, for the above sentence we would know",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "person(p 0 ), to-the-left-of(p 0 , p 1 ), backback(p 1 ), approached(p 0 , p 2 )",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": ", and trash-can(p 2 ). The latter two items can be determined by parsing the sentence, which is what we do. One can imagine learning the ability to automatically perform the latter two items, and even the fourth item above, by learning the grammar and the part of speech of each word, such as done by Kwiatkowski et al. (2012) . We leave such for future work. Figure 1 illustrates a single frame from a potential training sample provided as input to our learner. It consists of a video clip paired with a sentence, where the arguments of the words in the sentence are mapped to participants. From a sequence of such training samples, our learner determines the objects tracks and the mapping from participants to those tracks, together with the meanings of the words.",
                "cite_spans": [
                    {
                        "start": 301,
                        "end": 326,
                        "text": "Kwiatkowski et al. (2012)",
                        "ref_id": "BIBREF14"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 360,
                        "end": 368,
                        "text": "Figure 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "The remainder of the paper is organized as follows. Section 2 generally describes our problem of lexical acquisition from video. Section 3 introduces our work on the sentence tracker, a method for jointly tracking the motion of multiple objects in a video that participate in a sententiallyspecified event. Section 4 elaborates on the details of our problem formulation in the context of this sentence tracker. Section 5 describes how to generalize and extend the sentence tracker so that it can be used to support lexical acquisition. We demonstrate this lexical acquisition algorithm on a small example in Section 6. Finally, we conclude with a discussion in Section 7. Figure 1: An illustration of our problem. Each word in the sentence has one or more arguments (\u03b1 and possibly \u03b2), each argument of each word is assigned to a participant (p 0 , . . . , p 3 ) in the event described by the sentence, and each participant can be assigned to any object track in the video. This figure shows a possible (but erroneous) interpretation of the sentence where the mapping is:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "p 0 \u2192 Track 3, p 1 \u2192 Track 0, p 2 \u2192 Track 1,",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "and p 3 \u2192 Track 2, which might (incorrectly) lead the learner to conclude that the word person maps to the backpack, the word backpack maps to the chair, the word trash-can maps to the trash-can, and the word chair maps to the person.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Throughout this paper, lowercase letters are used for variables or hidden quantities while uppercase ones are used for constants or observed quantities. We are given a lexicon {1, . . . , M }, letting m denote a lexical entry. We are given a sequence D = (D 1 , . . . , D R ) of video clips D r , each paired with a sentence S r from a sequence S = (S 1 , . . . , S R ) of sentences. We refer to D r paired with S r as a training sample. Each sentence S r is a sequence (S r,1 , . . . , S r,Lr ) of words S r,l , each an entry from the lexicon. A given entry may potentially appear in multiple sentences and even multiple times in a given sentence. For example, the third word in the first sentence might be the same entry as the second word in the fourth sentence, in which case S 1,3 = S 4,2 . This is what allows cross-situational learning in our algorithm.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "General Problem Formulation",
                "sec_num": "2"
            },
            {
                "text": "Let us assume, for a moment, that we can process each video clip D r to yield a sequence (\u03c4 r,1 , . . . , \u03c4 r,Ur ) of object tracks \u03c4 r,u . Let us also assume that D r is paired with a sen-tence S r = The person approached the chair, specified to have two participants, p r,0 and p r,1 , with the mapping person(p r,0 ), chair(p r,1 ), and approached(p r,0 , p r,1 ). Let us further assume, for a moment, that we are given a mapping from participants to object tracks, say p r,0 \u2192 \u03c4 r,39 and p r,1 \u2192 \u03c4 r,51 . This would allow us to instantiate the HMMs with object tracks for a given video clip: person(\u03c4 r,39 ), chair(\u03c4 r,51 ), and approached(\u03c4 r,39 , \u03c4 r,51 ). Let us further assume that we can score each such instantiated HMM and aggregate the scores for all of the words in a sentence to yield a sentence score and further aggregate the scores for all of the sentences in the corpus to yield a corpus score. However, we don't know the parameters of the HMMs. These constitute the unknown meanings of the words in our corpus which we wish to learn. The problem is to simultaneously determine (a) those parameters along with (b) the object tracks and (c) the mapping from participants to object tracks. We do this by finding (a)-(c) that maximizes the corpus score.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "General Problem Formulation",
                "sec_num": "2"
            },
            {
                "text": "3 The Sentence Tracker Barbu et al. (2012a) presented a method that first determines object tracks from a single video clip and then uses these fixed tracks with HMMs to recognize actions corresponding to verbs and construct sentential descriptions with templates. Later Barbu et al. (2012b) addressed the problem of solving (b) and (c), for a single object track constrained by a single intransitive verb, without solving (a), in the context of a single video clip. Our group has generalized this work to yield an algorithm called the sentence tracker which operates by way of a factorial HMM framework. We introduce that here as the foundation of our extension.",
                "cite_spans": [
                    {
                        "start": 23,
                        "end": 43,
                        "text": "Barbu et al. (2012a)",
                        "ref_id": "BIBREF0"
                    },
                    {
                        "start": 271,
                        "end": 291,
                        "text": "Barbu et al. (2012b)",
                        "ref_id": "BIBREF1"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "General Problem Formulation",
                "sec_num": "2"
            },
            {
                "text": "Each video clip D r contains T r frames. We run an object detector on each frame to yield a set D t r of detections. Since our object detector is unreliable, we bias it to have high recall but low precision, yielding multiple detections in each frame. We form an object track by selecting a single detection for that track for each frame. For a moment, let us consider a single video clip with length T , with detections D t in frame t. Further, let us assume that we seek a single object track in that video clip. Let j t denote the index of the detection from D t in frame t that is selected to form the track. The object detector scores each detection. Let F (D t , j t ) denote that score. More-over, we wish the track to be temporally coherent; we want the objects in a track to move smoothly over time and not jump around the field of view. Let G(D t\u22121 , j t\u22121 , D t , j t ) denote some measure of coherence between two detections in adjacent frames. (One possible such measure is consistency of the displacement of D t relative to D t\u22121 with the velocity of D t\u22121 computed from the image by optical flow.) One can select the detections to yield a track that maximizes both the aggregate detection score and the aggregate temporal coherence score.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "General Problem Formulation",
                "sec_num": "2"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "max j 1 ,...,j T \uf8eb \uf8ec \uf8ec \uf8ec \uf8ec \uf8ed T t=1 F (D t , j t ) + T t=2 G(D t\u22121 , j t\u22121 , D t , j t ) \uf8f6 \uf8f7 \uf8f7 \uf8f7 \uf8f7 \uf8f8",
                        "eq_num": "(1)"
                    }
                ],
                "section": "General Problem Formulation",
                "sec_num": "2"
            },
            {
                "text": "This can be determined with the Viterbi (1967) algorithm and is known as detection-based tracking (Viterbi, 1971) . Recall that we model the meaning of an intransitive verb as an HMM over a time series of features extracted for its participant in each frame. Let \u03bb denote the parameters of this HMM, (q 1 , . . . , q T ) denote the sequence of states q t that leads to an observed track, B(D t , j t , q t , \u03bb) denote the conditional log probability of observing the feature vector associated with the detection selected by j t among the detections D t in frame t, given that the HMM is in state q t , and A(q t\u22121 , q t , \u03bb) denote the log transition probability of the HMM. For a given track (j 1 , . . . , j T ), the state sequence that yields the maximal likelihood is given by:",
                "cite_spans": [
                    {
                        "start": 32,
                        "end": 46,
                        "text": "Viterbi (1967)",
                        "ref_id": "BIBREF21"
                    },
                    {
                        "start": 98,
                        "end": 113,
                        "text": "(Viterbi, 1971)",
                        "ref_id": "BIBREF22"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "General Problem Formulation",
                "sec_num": "2"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "max q 1 ,...,q T \uf8eb \uf8ec \uf8ec \uf8ec \uf8ec \uf8ed T t=1 B(D t , j t , q t , \u03bb) + T t=2 A(q t\u22121 , q t , \u03bb) \uf8f6 \uf8f7 \uf8f7 \uf8f7 \uf8f7 \uf8f8",
                        "eq_num": "(2)"
                    }
                ],
                "section": "General Problem Formulation",
                "sec_num": "2"
            },
            {
                "text": "which can also be found by the Viterbi algorithm. A given video clip may depict multiple objects, each moving along its own trajectory. There may be both a person jumping and a ball rolling. How are we to select one track over the other? The key insight of the sentence tracker is to bias the selection of a track so that it matches an HMM. This is done by combining the cost function of Eq. 1 with the cost function of Eq. 2 to yield Eq. 3, which can also be determined using the Viterbi algorithm. This is done by forming the cross product of the two lattices. This jointly selects the optimal detections to form the track, together with the optimal state sequence, and scores that combination.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "General Problem Formulation",
                "sec_num": "2"
            },
            {
                "text": "max j 1 ,...,j T q 1 ,...,q T \uf8eb \uf8ec \uf8ec \uf8ec \uf8ec \uf8ec \uf8ed T t=1 F (D t , j t ) + B(D t , j t , q t , \u03bb) + T t=2 G(D t\u22121 , j t\u22121 , D t , j t ) + A(q t\u22121 , q t , \u03bb) \uf8f6 \uf8f7 \uf8f7 \uf8f7 \uf8f7 \uf8f7 \uf8f8 (3)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "General Problem Formulation",
                "sec_num": "2"
            },
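            {
                "text": "One way to read Eq. 3 is as a standard Viterbi recurrence over the cross-product lattice (a sketch of our reading; the value function V^t is not named in the original). Let V^t(j, q) be the best score of any partial track and state sequence ending at frame t with detection index j and HMM state q. Then V^1(j, q) = F(D^1, j) + B(D^1, j, q, \lambda), and for t > 1, V^t(j, q) = F(D^t, j) + B(D^t, j, q, \lambda) + \max_{j', q'} \left[ V^{t-1}(j', q') + G(D^{t-1}, j', D^t, j) + A(q', q, \lambda) \right], with the value of Eq. 3 given by \max_{j, q} V^T(j, q). Backpointers recover the optimal track and state sequence, and the per-frame cost is roughly quadratic in the number of lattice nodes, i.e., in the product of the number of detections per frame and the number of HMM states.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "General Problem Formulation",
                "sec_num": "2"
            },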
            {
                "text": "While we formulated the above around a single track and a word that contains a single participant, it is straightforward to extend this so that it supports multiple tracks and words of higher arity by forming a larger cross product. When doing so, we generalize j t to denote a sequence of detections from D t , one for each of the tracks. We further need to generalize F so that it computes the joint score of a sequence of detections, one for each track, G so that it computes the joint measure of coherence between a sequence of pairs of detections in two adjacent frames, and B so that it computes the joint conditional log probability of observing the feature vectors associated with the sequence of detections selected by j t . When doing this, note that Eqs. 1 and 3 maximize over j 1 , . . . , j T which denotes T sequences of detection indices, rather than T individual indices.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "General Problem Formulation",
                "sec_num": "2"
            },
            {
                "text": "It is further straightforward to extend the above to support a sequence (S 1 , . . . , S L ) of words S l denoting a sentence, each of which applies to different subsets of the multiple tracks, again by forming a larger cross product. When doing so, we generalize q t to denote a sequence (q t 1 , . . . , q t L ) of states q t l , one for each word l in the sentence, and use q l to denote the sequence (q 1 l , . . . , q T l ) and q to denote the sequence (q 1 , . . . , q L ). We further need to generalize B so that it computes the joint conditional log probability of observing the feature vectors for the detections in the tracks that are assigned to the arguments of the HMM for each word in the sentence and A so that it computes the joint log transition probability for the HMMs for all words in the sentence. This allows selection of an optimal sequence of tracks that yields the highest score for the sentential meaning of a sequence of words. Modeling the meaning of a sentence through a sequence of words whose meanings are modeled by HMMs, defines a factorial HMM for that sentence, since the overall Markov process for that sentence can be factored into inde-pendent component processes (Brand et al., 1997; Zhong and Ghosh, 2001 ) for the individual words. In this view, q denotes the state sequence for the combined factorial HMM and q l denotes the factor of that state sequence for word l. The remainder of this paper wraps this sentence tracker in Baum Welch (Baum et al., 1970; Baum, 1972) .",
                "cite_spans": [
                    {
                        "start": 1202,
                        "end": 1222,
                        "text": "(Brand et al., 1997;",
                        "ref_id": "BIBREF6"
                    },
                    {
                        "start": 1223,
                        "end": 1244,
                        "text": "Zhong and Ghosh, 2001",
                        "ref_id": "BIBREF26"
                    },
                    {
                        "start": 1479,
                        "end": 1498,
                        "text": "(Baum et al., 1970;",
                        "ref_id": "BIBREF3"
                    },
                    {
                        "start": 1499,
                        "end": 1510,
                        "text": "Baum, 1972)",
                        "ref_id": "BIBREF4"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "General Problem Formulation",
                "sec_num": "2"
            },
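            {
                "text": "As an illustrative count of the resulting search space (our back-of-the-envelope figure, assuming U tracks, L words, and I states per word-level HMM): the combined lattice node at frame t is the tuple (j^t, q^t_1, \ldots, q^t_L), where j^t selects one detection per track, so a frame has on the order of |D^t|^U \cdot I^L nodes, and the Viterbi and forward computations described below run over exactly this cross product. The cost therefore grows exponentially in the number of participants and words per sentence but only linearly in the number of frames.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "General Problem Formulation",
                "sec_num": "2"
            },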
            {
                "text": "We adapt the sentence tracker to training a corpus of R video clips, each paired with a sentence. Thus we augment our notation, generalizing j t to j t r and q t l to q t r,l . Below, we use j r to denote (j 1 r , . . . , j Tr r ), j to denote (j 1 , . . . , j R ), q r,l to denote (q 1 r,l , . . . , q Tr r,l ), q r to denote (q r,1 , . . . , q r,Lr ), and q to denote (q 1 , . . . , q R ).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Detailed Problem Formulation",
                "sec_num": "4"
            },
            {
                "text": "We use discrete features, namely natural numbers, in our feature vectors, quantized by a binning process. We assume the part of speech of entry m is known as C m . The length of the feature vector may vary across parts of speech. Let N c denote the length of the feature vector for part of speech c, x r,l denote the time-series (x 1 r,l , . . . , x Tr r,l ) of feature vectors x t r,l , associated with S r,l (which recall is some entry m), and x r denote the sequence (x r,1 , . . . , x r,Lr ). We assume that we are given a function \u03a6 c (D t r , j t r ) that computes the feature vector x t r,l for the word S r,l whose part of speech is C S r,l = c. Note that we allow \u03a6 to be dependent on c allowing different features to be computed for different parts of speech, since we can determine m and thus C m from S r,l . We choose to have N c and \u03a6 c depend on the part of speech c and not on the entry m since doing so would be tantamount to encoding the to-be-learned word meaning in the provided feature-vector computation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Detailed Problem Formulation",
                "sec_num": "4"
            },
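            {
                "text": "As a concrete illustration of part-of-speech dependent features (hypothetical choices consistent with the representation sketched in Section 1, not a specification of our actual feature set): \Phi_{noun} might return the binned identity of the detector that produced the selected detection, \Phi_{adjective} binned region color, shape, and size, \Phi_{adverb} the binned magnitude of the participant's velocity, \Phi_{intransitive-preposition} its binned velocity direction, and \Phi_{verb} and \Phi_{transitive-preposition} binned single-object motion features together with binned relative-position and relative-velocity features of the pair of argument tracks. N_c is then simply the length of the feature vector chosen for part of speech c, and each b^m_{i,n} is a discrete distribution over the bins of feature n.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Detailed Problem Formulation",
                "sec_num": "4"
            },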
            {
                "text": "The goal of training is to find a sequence \u03bb = (\u03bb 1 , . . . , \u03bb M ) of parameters \u03bb m that best explains the R training samples. The parameters \u03bb m constitute the meaning of the entry m in the lexicon. Collectively, these are the initial state probabilities a m 0,k , for 1 \u2264 k \u2264 I Cm , the transition probabilities a m i,k , for 1 \u2264 i, k \u2264 I Cm , and the output probabilities b m i,n (x), for 1 \u2264 i \u2264 I Cm and 1 \u2264 n \u2264 N Cm , where I Cm denotes the number of states in the HMM for entry m. Like before, we could have a distinct I m for each entry m but instead have I Cm depend only on the part of speech of entry m, and assume that we know the fixed I for each part of speech. In our case, b m i,n is a discrete distribution because the features are binned.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Detailed Problem Formulation",
                "sec_num": "4"
            },
            {
                "text": "Instantiating the above approach requires a definition for what it means to best explain the R training samples. Towards this end, we define the score of a video clip D r paired with sentence S r given the parameter set \u03bb to characterize how well this training sample is explained. While the cost function in Eq. 3 may qualify as a score, it is easier to fit a likelihood calculation into the Baum-Welch framework than a MAP estimate. Thus we replace the max in Eq. 3 with a and redefine our scoring function as follows:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Learning Algorithm",
                "sec_num": "5"
            },
            {
                "text": "L(D r ; S r , \u03bb) = jr P (j r |D r )P (x r |S r , \u03bb) (4)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Learning Algorithm",
                "sec_num": "5"
            },
            {
                "text": "The score in Eq. 4 can be interpreted as an expectation of the HMM likelihood over all possible mappings from participants to all possible tracks. By definition, P (j r |D r ) = V (Dr,jr) j r V (Dr,j r ) , where the numerator is the score of a particular track sequence j r while the denominator sums the scores over all possible track sequences. The log of the numerator V (D r , j r ) is simply Eq. 1 without the max. The log of the denominator can be computed efficiently by the forward algorithm (Baum and Petrie, 1966) . The likelihood for a factorial HMM can be computed as:",
                "cite_spans": [
                    {
                        "start": 178,
                        "end": 187,
                        "text": "V (Dr,jr)",
                        "ref_id": null
                    },
                    {
                        "start": 500,
                        "end": 523,
                        "text": "(Baum and Petrie, 1966)",
                        "ref_id": "BIBREF2"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Learning Algorithm",
                "sec_num": "5"
            },
            {
                "text": "P (x r |S r , \u03bb) = qr l P (x r,l , q r,l |S r,l , \u03bb) (5)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Learning Algorithm",
                "sec_num": "5"
            },
            {
                "text": "i.e., summing the likelihoods for all possible state sequences. Each summand is simply the joint likelihood for all the words in the sentence conditioned on a state sequence q r . For HMMs we have",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Learning Algorithm",
                "sec_num": "5"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "P (x r,l , q r,l |S r,l , \u03bb) = t a S r,l q t\u22121 r,l ,q t r,l n b S r,l q t r,l ,n (x t r,l,n )",
                        "eq_num": "(6)"
                    }
                ],
                "section": "The Learning Algorithm",
                "sec_num": "5"
            },
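            {
                "text": "A worked special case of Eq. 6 (our illustration, not stated in the original): for a word whose part of speech uses a single state, such as a noun with I_c = 1, every q^t_{r,l} is that one state, all transition factors equal one, and the word's contribution reduces to \prod_{t} \prod_{n} b^{S_{r,l}}_{1,n}(x^t_{r,l,n}), i.e., the product over frames and features of the output probabilities of the observed bins. Multi-state words additionally weight each state path by its transition probabilities, which is what lets them capture temporal structure such as the two phases of pick up.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Learning Algorithm",
                "sec_num": "5"
            },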
            {
                "text": "Finally, for a training corpus of R samples, we seek to maximize the joint score:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Learning Algorithm",
                "sec_num": "5"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "L(D; S, \u03bb) = r L(D r ; S r , \u03bb)",
                        "eq_num": "(7)"
                    }
                ],
                "section": "The Learning Algorithm",
                "sec_num": "5"
            },
            {
                "text": "A local maximum can be found by employing the Baum-Welch algorithm (Baum et al., 1970; Baum, 1972) . By constructing an auxiliary function (Bilmes, 1997) , one can derive the reestimation formulas in Eq. 8, where x t r,l,n = h denotes the selection of all possible j t r such that the nth",
                "cite_spans": [
                    {
                        "start": 67,
                        "end": 86,
                        "text": "(Baum et al., 1970;",
                        "ref_id": "BIBREF3"
                    },
                    {
                        "start": 87,
                        "end": 98,
                        "text": "Baum, 1972)",
                        "ref_id": "BIBREF4"
                    },
                    {
                        "start": 139,
                        "end": 153,
                        "text": "(Bilmes, 1997)",
                        "ref_id": "BIBREF5"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Learning Algorithm",
                "sec_num": "5"
            },
            {
                "text": "a m i,k = \u03b8 m i R r=1 Lr l=1 s.t.S r,l =m Tr t=1 L(q t\u22121 r,l = i, q t r,l = k, D r ; S r , \u03bb ) L(D r ; S r , \u03bb ) \u03be(r,l,i,k,t) b m i,n (h) = \u03c8 m i,n R r=1 Lr l=1 s.t.S r,l =m Tr t=1 L(q t r,l = i, x t r,l,n = h, D r ; S r , \u03bb ) L(D r ; S r , \u03bb ) \u03b3(r,l,n,i,h,t) (8) feature computed by \u03a6 Cm (D t r , j t r )",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Learning Algorithm",
                "sec_num": "5"
            },
            {
                "text": "is h. The coefficients \u03b8 m i and \u03c8 m i,n are for normalization. The reestimation formulas involve occurrence counting. However, since we use a factorial HMM that involves a cross-product lattice and use a scoring function derived from Eq. 3 that incorporates both tracking (Eq. 1) and word models (Eq. 2), we need to count the frequency of transitions in the whole cross-product lattice. As an example of such cross-product occurrence counting, when counting the transitions from state i to k for the lth word from frame t \u2212 1 to t, i.e., \u03be(r, l, i, k, t), we need to count all the possible paths through the adjacent factorial states",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Learning Algorithm",
                "sec_num": "5"
            },
            {
                "text": "(j t\u22121 r , q t\u22121 r,1 , . . . , q t\u22121 r,Lr ) and (j t r , q t r,1 , . . . , q t r,Lr ) such that q t\u22121 r,l = i and q t r,l = k.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Learning Algorithm",
                "sec_num": "5"
            },
            {
                "text": "Similarly, when counting the frequency of being at state i while observing h as the nth feature in frame t for the lth word of entry m, i.e., \u03b3(r, l, n, i, h, t), we need to count all the possible paths through the factorial state (j t r , q t r,1 , . . . , q t r,Lr ) such that q t r,l = i and the nth feature computed by \u03a6 Cm (D t r , j t r ) is h. The reestimation of a single component HMM can depend on the previous estimate for other component HMMs. This dependence happens because of the argument-to-participant mapping which coindexes arguments of different component HMMs to the same track. It is precisely this dependence that leads to cross-situational learning of two kinds: both inter-sentential and intra-sentential. Acquisition of a word meaning is driven across sentences by entries that appear in more than one training sample and within sentences by the requirement that the meanings of all of the individual words in a sentence be consistent with the collective sentential meaning.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Learning Algorithm",
                "sec_num": "5"
            },
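            {
                "text": "As a simplified illustration of the expected counts behind Eq. 8, the following sketch (ours; it uses a single HMM with one feature and an assumed initial distribution pi, rather than the cross-product lattice described above) accumulates the expected transition counts xi and emission counts gamma with forward-backward and normalizes them into new parameter estimates:\n\nimport numpy as np\n\ndef expected_counts(pi, a, b, x):\n    # One observation sequence x[t]; a[i, k]: transitions, b[i, h]: emissions.\n    T, I = len(x), len(pi)\n    alpha = np.zeros((T, I)); beta = np.zeros((T, I))\n    alpha[0] = pi * b[:, x[0]]\n    for t in range(1, T):\n        alpha[t] = (alpha[t - 1] @ a) * b[:, x[t]]\n    beta[T - 1] = 1.0\n    for t in range(T - 2, -1, -1):\n        beta[t] = a @ (b[:, x[t + 1]] * beta[t + 1])\n    L = alpha[T - 1].sum()                       # sequence likelihood\n    xi = np.zeros((I, I)); gamma = np.zeros_like(b)\n    for t in range(T):\n        gamma[:, x[t]] += alpha[t] * beta[t] / L                          # expected emissions\n        if t > 0:\n            xi += np.outer(alpha[t - 1], b[:, x[t]] * beta[t]) * a / L    # expected transitions\n    return xi, gamma\n\ndef reestimate(xi, gamma):\n    # The row normalizers play the role of theta and psi in Eq. 8.\n    return xi / xi.sum(axis=1, keepdims=True), gamma / gamma.sum(axis=1, keepdims=True)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Learning Algorithm",
                "sec_num": "5"
            },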
            {
                "text": "We filmed 61 video clips (each 3-5 seconds at 640\u00d7480 resolution and 40 fps) that depict a variety of different compound events. Each clip depicts multiple simultaneous events between some Table 1 : The grammar used for our annotation and generation. Our lexicon contains 1 determiner, 4 nouns, 2 spatial relation prepositions, 4 verbs, 2 adverbs, and 2 motion prepositions for a total of 15 lexical entries over 6 parts of speech. subset of four objects: a person, a backpack, a chair, and a trash-can. These clips were filmed in three different outdoor environments which we use for cross validation. We manually annotated each video with several sentences that describe what occurs in that video. The sentences were constrained to conform to the grammar in Table 1 . Our corpus of 159 training samples pairs some videos with more than one sentence and some sentences with more than one video, with an average of 2.6 sentences per video 1 .",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 189,
                        "end": 196,
                        "text": "Table 1",
                        "ref_id": null
                    },
                    {
                        "start": 760,
                        "end": 767,
                        "text": "Table 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Experiment",
                "sec_num": "6"
            },
            {
                "text": "S \u2192 NP VP NP \u2192 D N [PP] D \u2192 the N \u2192 person | backpack | trash-can | chair PP \u2192 P NP P \u2192 to the left of | to the right of VP \u2192 V NP [ADV] [PPM] V \u2192 picked up | put down | carried | approached ADV \u2192 quickly | slowly PPM \u2192 PM NP PM \u2192 towards | away from",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experiment",
                "sec_num": "6"
            },
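            {
                "text": "As a small illustration of the grammar in Table 1, the following sketch (ours; the corpus sentences were written by hand, not sampled) generates random sentences from these productions:\n\nimport random\n\n# Productions of Table 1; optional constituents appear as alternative expansions.\nGRAMMAR = {\n    'S':   [['NP', 'VP']],\n    'NP':  [['D', 'N'], ['D', 'N', 'PP']],\n    'D':   [['the']],\n    'N':   [['person'], ['backpack'], ['trash-can'], ['chair']],\n    'PP':  [['P', 'NP']],\n    'P':   [['to the left of'], ['to the right of']],\n    'VP':  [['V', 'NP'], ['V', 'NP', 'ADV'], ['V', 'NP', 'PPM'], ['V', 'NP', 'ADV', 'PPM']],\n    'V':   [['picked up'], ['put down'], ['carried'], ['approached']],\n    'ADV': [['quickly'], ['slowly']],\n    'PPM': [['PM', 'NP']],\n    'PM':  [['towards'], ['away from']],\n}\n\ndef generate(symbol='S'):\n    if symbol not in GRAMMAR:                 # terminal\n        return symbol\n    expansion = random.choice(GRAMMAR[symbol])\n    return ' '.join(generate(s) for s in expansion)\n\nprint(generate())   # e.g. 'the person carried the backpack towards the chair'",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experiment",
                "sec_num": "6"
            },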
            {
                "text": "We model and learn the semantics of all words except determiners. Table 2 specifies the arity, the state number I c , and the features computed by \u03a6 c for the semantic models for words of each part of speech c. While we specify a different subset of features for each part of speech, we presume that, in principle, with enough training data, we could include all features in all parts of speech and automatically learn which ones are noninformative and lead to uniform distributions.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 66,
                        "end": 73,
                        "text": "Table 2",
                        "ref_id": "TABREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Experiment",
                "sec_num": "6"
            },
            {
                "text": "We use an off-the-shelf object detector (Felzenszwalb et al., 2010a; Felzenszwalb et al., 2010b) which outputs detections in the form of scored axis-aligned rectangles. We trained four object detectors, one for each of the four object classes in our corpus: person, backpack, chair, and trashcan. For each frame, we pick the two highestscoring detections produced by each object detector and pool the results yielding eight detections per frame. Having a larger pool of detections per frame can better compensate for false negatives in the object detection and potentially yield smoother tracks but it increases the size of the lattice and the concomitant running time and does not lead to appreciably better performance on our corpus. We compute continuous features, such as velocity, distance, size ratio, and x-position solely from the detection rectangles and quantize the features into bins as follows: velocity To reduce noise, we compute the velocity of a participant by averaging the optical flow in the detection rectangle. The velocity magnitude is quantized into 5 levels: absolutely stationary, stationary, moving, fast moving, and quickly. The velocity orientation is quantized into 4 directions: left, up, right, and down. distance We compute the Euclidean distance between the detection centers of two participants, which is quantized into 3 levels: near, normal, and far away. size ratio We compute the ratio of detection area of the first participant to the detection area of the second participant, quantized into 2 possibilities: larger/smaller than. x-position We compute the difference between the x-coordinates of the participants, quantized into 2 possibilities: to the left/right of. The binning process was determined by a preprocessing step that clustered a subset of the training data. We also incorporate the index of the detector that produced the detection as a feature. The par-ticular features computed for each part of speech are given in Table 2 . Note that while we use English phrases, like to the left of, to refer to particular bins of particular features, and we have object detectors which we train on samples of a particular object class such as backpack, such phrases are only mnemonic of the clustering and object-detector training process. We do not have a fixed correspondence between the lexical entries and any particular feature value. Moreover, that correspondence need not be oneto-one: a given lexical entry may correspond to a (time variant) constellation of feature values and any given feature value may participate in the meaning of multiple lexical entries.",
                "cite_spans": [
                    {
                        "start": 40,
                        "end": 68,
                        "text": "(Felzenszwalb et al., 2010a;",
                        "ref_id": "BIBREF10"
                    },
                    {
                        "start": 69,
                        "end": 96,
                        "text": "Felzenszwalb et al., 2010b)",
                        "ref_id": "BIBREF11"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 1972,
                        "end": 1979,
                        "text": "Table 2",
                        "ref_id": "TABREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Experiment",
                "sec_num": "6"
            },
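            {
                "text": "A minimal sketch of this kind of binning (ours; the numeric cut-points below are illustrative assumptions, whereas the paper derives its bins by clustering a subset of the training data):\n\ndef quantize_velocity(speed):\n    # 5 magnitude levels; thresholds are illustrative, not the learned cluster boundaries.\n    for level, upper in [('absolutely stationary', 0.5), ('stationary', 2.0),\n                         ('moving', 6.0), ('fast moving', 12.0)]:\n        if speed < upper:\n            return level\n    return 'quickly'\n\ndef quantize_direction(vx, vy):\n    # 4 orientation bins from the dominant velocity component.\n    if abs(vx) >= abs(vy):\n        return 'right' if vx >= 0 else 'left'\n    return 'down' if vy >= 0 else 'up'\n\ndef quantize_distance(c1, c2):\n    # 3 levels from the Euclidean distance between detection centers.\n    d = ((c1[0] - c2[0]) ** 2 + (c1[1] - c2[1]) ** 2) ** 0.5\n    return 'near' if d < 100 else ('normal' if d < 250 else 'far away')\n\ndef quantize_size_ratio(area1, area2):\n    return 'larger than' if area1 > area2 else 'smaller than'\n\ndef quantize_x_position(x1, x2):\n    return 'to the left of' if x1 < x2 else 'to the right of'",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experiment",
                "sec_num": "6"
            },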
            {
                "text": "We perform a three-fold cross validation, taking the test data for each fold to be the videos filmed in a given outdoor environment and the training data for that fold to be all training samples that contain other videos. For testing, we hand selected 24 sentences generated by the grammar in Table 1 , where each sentence is true for at least one test video. Half of these sentences (designated NV) contain only nouns and verbs while the other half (designated ALL) contain other parts of speech. The latter are longer and more complicated than the former. We score each testing video paired with every sentence in both NV and ALL. To evaluate our results, we manually annotated the correctness of each such pair.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 293,
                        "end": 300,
                        "text": "Table 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Experiment",
                "sec_num": "6"
            },
            {
                "text": "Video-sentence pairs could be scored with Eq. 4. However, the score depends on the sentence length, the collective numbers of states and features in the HMMs for words in that sentence, and the length of the video clip. To render the scores comparable across such variation we incorporate a sentence prior to the per-frame score:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experiment",
                "sec_num": "6"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "L(D r , S r ; \u03bb) = [L(D r ; S r , \u03bb)] 1 Tr \u03c0(S r ) (9) where \u03c0(S r ) = exp Lr l=1 \uf8eb \uf8ec \uf8ec \uf8ed E(I C S r,l ) + N C S r,l n=1 E(Z C S r,l ,n ) \uf8f6 \uf8f7 \uf8f7 \uf8f8",
                        "eq_num": "(10)"
                    }
                ],
                "section": "Experiment",
                "sec_num": "6"
            },
            {
                "text": "In the above, Z C S r,l ,n is the number of bins for the nth feature of S r,l of part of speech C S r,l and",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experiment",
                "sec_num": "6"
            },
            {
                "text": "E(Y ) = \u2212 Y y=1 1",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experiment",
                "sec_num": "6"
            },
            {
                "text": "Y log 1 Y = log Y is the entropy of a uniform distribution over Y bins. This prior prefers longer sentences which describe more information in the video. The scores are thresholded to decide hits, which together with the manual annotations, can generate TP, TN, FP, and FN counts. We select the threshold that leads to the maximal F1 score on the training set, use this threshold to compute F1 scores on the test set in each fold, and average F1 scores across the folds.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experiment",
                "sec_num": "6"
            },
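            {
                "text": "The following sketch (ours; all function and variable names are assumptions) computes the per-frame normalized score with the sentence prior of Eqs. 9 and 10 and selects the decision threshold that maximizes F1 on training pairs:\n\nimport math\n\ndef sentence_prior(words, num_states, num_bins):\n    # pi(S): exp of summed entropies of uniform distributions over each word's\n    # states and feature bins; num_states[c] = I_c, num_bins[c] = [Z_{c,1}, ...].\n    log_pi = sum(math.log(num_states[c]) + sum(math.log(z) for z in num_bins[c])\n                 for c in words)          # words are given as their parts of speech\n    return math.exp(log_pi)\n\ndef normalized_score(log_likelihood, num_frames, prior):\n    # [L(D; S, lambda)]^(1/T) * pi(S), computed in log space for stability.\n    return math.exp(log_likelihood / num_frames + math.log(prior))\n\ndef best_threshold(scores, labels):\n    # Scan candidate thresholds; keep the one with the highest F1 on the training set.\n    best_t, best_f1 = None, -1.0\n    for t in sorted(set(scores)):\n        tp = sum(s >= t and y for s, y in zip(scores, labels))\n        fp = sum(s >= t and not y for s, y in zip(scores, labels))\n        fn = sum(s < t and y for s, y in zip(scores, labels))\n        f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0\n        if f1 > best_f1:\n            best_t, best_f1 = t, f1\n    return best_t",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experiment",
                "sec_num": "6"
            },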
            {
                "text": "The F1 scores are listed in the column labeled Our in Table 3 . For comparison, we also report F1 scores for three baselines: Chance, Blind, and Hand. The Chance baseline randomly classifies a video-sentence pair as a hit with probability 0.5. The Blind baseline determines hits by potentially looking at the sentence but never looking at the video. We can find an upper bound on the F1 score that any blind method could have on each of our test sets by solving a 0-1 fractional programming problem (Dinkelbach, 1967 ) (see Appendix A for details). The Hand baseline determines hits with hand-coded HMMs, carefully designed to yield what we believe is near-optimal performance. As can be seen from Table 3 , our trained models perform substantially better than the Chance and Blind baselines and approach the performance of the ideal Hand baseline. One can further see from the ROC curves in Figure 2 , comparing the trained and hand-written models on both NV and ALL, that the trained models are close to optimal. Note that performance on ALL exceeds that on NV with the trained models. This is because longer sentences with varied parts of speech incorporate more information into the scoring process.",
                "cite_spans": [
                    {
                        "start": 499,
                        "end": 516,
                        "text": "(Dinkelbach, 1967",
                        "ref_id": "BIBREF8"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 54,
                        "end": 61,
                        "text": "Table 3",
                        "ref_id": "TABREF3"
                    },
                    {
                        "start": 698,
                        "end": 705,
                        "text": "Table 3",
                        "ref_id": "TABREF3"
                    },
                    {
                        "start": 892,
                        "end": 900,
                        "text": "Figure 2",
                        "ref_id": "FIGREF2"
                    }
                ],
                "eq_spans": [],
                "section": "Experiment",
                "sec_num": "6"
            },
            {
                "text": "We presented a method that learns word meanings from video paired with sentences. Unlike prior work, our method deals with realistic video scenes labeled with whole sentences, not individual words labeling hand delineated objects or events. The experiment shows that it can correctly learn the meaning representations in terms of HMM parameters for our lexical entries, from highly ambiguous training data. Our maximumlikelihood method makes use of only positive sentential labels. As such, it might require more training data for convergence than a method that also makes use of negative training sentences that are not true of a given video. Such can be handled with discriminative training, a topic we plan to address in the future. We believe that this will allow learning larger lexicons from more complex video without excessive amounts of training data. 0060. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either express or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes, notwithstanding any copyright notation herein.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "7"
            },
            {
                "text": "A Blind algorithm makes identical decisions on the same sentence paired with different video clips. An optimal algorithm will try to find a decision s i for each test sentence i that maximizes the F1 score. Suppose, the ground-truth yields FP i false positives and TP i true positives on the test set when s i = 1. Also suppose that setting s i = 0 yields FN i false negatives. Then the F1 score is",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "A An Upper Bound on the F1 Score of any Blind Method",
                "sec_num": null
            },
            {
                "text": "F 1 = 1 1 + i s i FP i + (1 \u2212 s i )FN i i 2s i TP i \u2206",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "A An Upper Bound on the F1 Score of any Blind Method",
                "sec_num": null
            },
            {
                "text": "Thus we want to minimize the term \u2206. This is an instance of a 0-1 fractional programming problem which can be solved by binary search or Dinkelbach's algorithm (Dinkelbach, 1967) .",
                "cite_spans": [
                    {
                        "start": 160,
                        "end": 178,
                        "text": "(Dinkelbach, 1967)",
                        "ref_id": "BIBREF8"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "A An Upper Bound on the F1 Score of any Blind Method",
                "sec_num": null
            },
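            {
                "text": "A minimal sketch of this computation (ours; the per-sentence counts FP_i, TP_i, and FN_i are assumed to have been tabulated from the ground-truth annotations) applies Dinkelbach's iteration to the 0-1 fractional program above:\n\ndef blind_f1_upper_bound(fp, tp, fn, iters=50):\n    # Minimize Delta = sum_i [s_i*FP_i + (1-s_i)*FN_i] / sum_i [2*s_i*TP_i] over s_i in {0, 1}.\n    n = len(fp)\n    s = [1] * n                                   # start from the all-positive decision\n    for _ in range(iters):\n        num = sum(s[i] * fp[i] + (1 - s[i]) * fn[i] for i in range(n))\n        den = sum(2 * s[i] * tp[i] for i in range(n))\n        if den == 0:\n            break\n        lam = num / den\n        # For fixed lam, minimizing num - lam * den decomposes over sentences.\n        new_s = [1 if fp[i] - 2 * lam * tp[i] <= fn[i] else 0 for i in range(n)]\n        if new_s == s:\n            break\n        s = new_s\n    num = sum(s[i] * fp[i] + (1 - s[i]) * fn[i] for i in range(n))\n    den = sum(2 * s[i] * tp[i] for i in range(n))\n    return den / (den + num) if den else 0.0      # F1 = 1 / (1 + Delta)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "A An Upper Bound on the F1 Score of any Blind Method",
                "sec_num": null
            },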
            {
                "text": "Our code, videos, and sentential annotations are available at http://haonanyu.com/research/ acl2013/.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            }
        ],
        "back_matter": [
            {
                "text": "This research was sponsored by the Army Research Laboratory and was accomplished under Cooperative Agreement Number W911NF-10-2-",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Acknowledgments",
                "sec_num": null
            }
        ],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Video in sentences out",
                "authors": [
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Barbu",
                        "suffix": ""
                    },
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Bridge",
                        "suffix": ""
                    },
                    {
                        "first": "Z",
                        "middle": [],
                        "last": "Burchill",
                        "suffix": ""
                    },
                    {
                        "first": "D",
                        "middle": [],
                        "last": "Coroian",
                        "suffix": ""
                    },
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Dickinson",
                        "suffix": ""
                    },
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Fidler",
                        "suffix": ""
                    },
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Michaux",
                        "suffix": ""
                    },
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Mussman",
                        "suffix": ""
                    },
                    {
                        "first": "N",
                        "middle": [],
                        "last": "Siddharth",
                        "suffix": ""
                    },
                    {
                        "first": "D",
                        "middle": [],
                        "last": "Salvi",
                        "suffix": ""
                    },
                    {
                        "first": "L",
                        "middle": [],
                        "last": "Schmidt",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Shangguan",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [
                            "M"
                        ],
                        "last": "Siskind",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Waggoner",
                        "suffix": ""
                    },
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Wang",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Wei",
                        "suffix": ""
                    },
                    {
                        "first": "Y",
                        "middle": [],
                        "last": "Yin",
                        "suffix": ""
                    },
                    {
                        "first": "Z",
                        "middle": [],
                        "last": "Zhang",
                        "suffix": ""
                    }
                ],
                "year": 2012,
                "venue": "Proceedings of the Twenty-Eighth Conference on Uncertainty in Artificial Intelligence",
                "volume": "",
                "issue": "",
                "pages": "102--112",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "A. Barbu, A. Bridge, Z. Burchill, D. Coroian, S. Dick- inson, S. Fidler, A. Michaux, S. Mussman, N. Sid- dharth, D. Salvi, L. Schmidt, J. Shangguan, J. M. Siskind, J. Waggoner, S. Wang, J. Wei, Y. Yin, and Z. Zhang. 2012a. Video in sentences out. In Pro- ceedings of the Twenty-Eighth Conference on Un- certainty in Artificial Intelligence, pages 102-112.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "Simultaneous object detection, tracking, and event recognition",
                "authors": [
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Barbu",
                        "suffix": ""
                    },
                    {
                        "first": "N",
                        "middle": [],
                        "last": "Siddharth",
                        "suffix": ""
                    },
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Michaux",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [
                            "M"
                        ],
                        "last": "Siskind",
                        "suffix": ""
                    }
                ],
                "year": 2012,
                "venue": "Advances in Cognitive Systems",
                "volume": "2",
                "issue": "",
                "pages": "203--220",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "A. Barbu, N. Siddharth, A. Michaux, and J. M. Siskind. 2012b. Simultaneous object detection, tracking, and event recognition. Advances in Cognitive Systems, 2:203-220, December.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "Statistical inference for probabilistic functions of finite state Markov chains",
                "authors": [
                    {
                        "first": "L",
                        "middle": [
                            "E"
                        ],
                        "last": "Baum",
                        "suffix": ""
                    },
                    {
                        "first": "T",
                        "middle": [],
                        "last": "Petrie",
                        "suffix": ""
                    }
                ],
                "year": 1966,
                "venue": "The Annals of Mathematical Statistics",
                "volume": "37",
                "issue": "",
                "pages": "1554--1563",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "L. E. Baum and T. Petrie. 1966. Statistical inference for probabilistic functions of finite state Markov chains. The Annals of Mathematical Statistics, 37:1554-1563.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "A maximization technique occuring in the statistical analysis of probabilistic functions of Markov chains",
                "authors": [
                    {
                        "first": "L",
                        "middle": [
                            "E"
                        ],
                        "last": "Baum",
                        "suffix": ""
                    },
                    {
                        "first": "T",
                        "middle": [],
                        "last": "Petrie",
                        "suffix": ""
                    },
                    {
                        "first": "G",
                        "middle": [],
                        "last": "Soules",
                        "suffix": ""
                    },
                    {
                        "first": "N",
                        "middle": [],
                        "last": "Weiss",
                        "suffix": ""
                    }
                ],
                "year": 1970,
                "venue": "The Annals of Mathematical Statistics",
                "volume": "41",
                "issue": "1",
                "pages": "164--171",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "L. E. Baum, T. Petrie, G. Soules, and N. Weiss. 1970. A maximization technique occuring in the statistical analysis of probabilistic functions of Markov chains. The Annals of Mathematical Statistics, 41(1):164- 171.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "An inequality and associated maximization technique in statistical estimation of probabilistic functions of a Markov process",
                "authors": [
                    {
                        "first": "L",
                        "middle": [
                            "E"
                        ],
                        "last": "Baum",
                        "suffix": ""
                    }
                ],
                "year": 1972,
                "venue": "Inequalities",
                "volume": "3",
                "issue": "",
                "pages": "1--8",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "L. E. Baum. 1972. An inequality and associated maxi- mization technique in statistical estimation of proba- bilistic functions of a Markov process. Inequalities, 3:1-8.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "A gentle tutorial of the EM algorithm and its application to parameter estimation for Gaussian mixture and hidden Markov models",
                "authors": [
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Bilmes",
                        "suffix": ""
                    }
                ],
                "year": 1997,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "J. Bilmes. 1997. A gentle tutorial of the EM algorithm and its application to parameter estimation for Gaus- sian mixture and hidden Markov models. Technical Report TR-97-021, ICSI.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "Coupled hidden Markov models for complex action recognition",
                "authors": [
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Brand",
                        "suffix": ""
                    },
                    {
                        "first": "N",
                        "middle": [],
                        "last": "Oliver",
                        "suffix": ""
                    },
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Pentland",
                        "suffix": ""
                    }
                ],
                "year": 1997,
                "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
                "volume": "",
                "issue": "",
                "pages": "994--999",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "M. Brand, N. Oliver, and A. Pentland. 1997. Coupled hidden Markov models for complex action recog- nition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 994-999.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "Learning to sportscast: A test of grounded language acquisition",
                "authors": [
                    {
                        "first": "D",
                        "middle": [
                            "L"
                        ],
                        "last": "Chen",
                        "suffix": ""
                    },
                    {
                        "first": "R",
                        "middle": [
                            "J"
                        ],
                        "last": "Mooney",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "Proceedings of the 25th International Conference on Machine Learning",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "D. L. Chen and R. J. Mooney. 2008. Learning to sportscast: A test of grounded language acquisition. In Proceedings of the 25th International Conference on Machine Learning.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "On nonlinear fractional programming",
                "authors": [
                    {
                        "first": "W",
                        "middle": [],
                        "last": "Dinkelbach",
                        "suffix": ""
                    }
                ],
                "year": 1967,
                "venue": "Management Science",
                "volume": "13",
                "issue": "7",
                "pages": "492--498",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "W. Dinkelbach. 1967. On nonlinear fractional pro- gramming. Management Science, 13(7):492-498.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "Learning to talk about events from narrated video in a construction grammar framework",
                "authors": [
                    {
                        "first": "P",
                        "middle": [
                            "F"
                        ],
                        "last": "Dominey",
                        "suffix": ""
                    },
                    {
                        "first": "J.-D",
                        "middle": [],
                        "last": "Boucher",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "Artificial Intelligence",
                "volume": "167",
                "issue": "12",
                "pages": "31--61",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "P. F. Dominey and J.-D. Boucher. 2005. Learning to talk about events from narrated video in a construc- tion grammar framework. Artificial Intelligence, 167(12):31-61.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "Object detection with discriminatively trained part-based models",
                "authors": [
                    {
                        "first": "P",
                        "middle": [
                            "F"
                        ],
                        "last": "Felzenszwalb",
                        "suffix": ""
                    },
                    {
                        "first": "R",
                        "middle": [
                            "B"
                        ],
                        "last": "Girshick",
                        "suffix": ""
                    },
                    {
                        "first": "D",
                        "middle": [],
                        "last": "Mcallester",
                        "suffix": ""
                    },
                    {
                        "first": "D",
                        "middle": [],
                        "last": "Ramanan",
                        "suffix": ""
                    }
                ],
                "year": 2010,
                "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence",
                "volume": "32",
                "issue": "9",
                "pages": "1627--1645",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan. 2010a. Object detection with discrim- inatively trained part-based models. IEEE Transac- tions on Pattern Analysis and Machine Intelligence, 32(9):1627-1645.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "Cascade object detection with deformable part models",
                "authors": [
                    {
                        "first": "P",
                        "middle": [
                            "F"
                        ],
                        "last": "Felzenszwalb",
                        "suffix": ""
                    },
                    {
                        "first": "R",
                        "middle": [
                            "B"
                        ],
                        "last": "Girshick",
                        "suffix": ""
                    },
                    {
                        "first": "D",
                        "middle": [
                            "A"
                        ],
                        "last": "Mcallester",
                        "suffix": ""
                    }
                ],
                "year": 2010,
                "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
                "volume": "",
                "issue": "",
                "pages": "2241--2248",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "P. F. Felzenszwalb, R. B. Girshick, and D. A. McAllester. 2010b. Cascade object detection with deformable part models. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recog- nition, pages 2241-2248.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "Baby talk: Understanding and generating simple image descriptions",
                "authors": [
                    {
                        "first": "G",
                        "middle": [],
                        "last": "Kulkarni",
                        "suffix": ""
                    },
                    {
                        "first": "V",
                        "middle": [],
                        "last": "Premraj",
                        "suffix": ""
                    },
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Dhar",
                        "suffix": ""
                    },
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Li",
                        "suffix": ""
                    },
                    {
                        "first": "Y",
                        "middle": [],
                        "last": "Choi",
                        "suffix": ""
                    },
                    {
                        "first": "A",
                        "middle": [
                            "C"
                        ],
                        "last": "Berg",
                        "suffix": ""
                    },
                    {
                        "first": "T",
                        "middle": [
                            "L"
                        ],
                        "last": "Berg",
                        "suffix": ""
                    }
                ],
                "year": 2011,
                "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
                "volume": "",
                "issue": "",
                "pages": "1601--1608",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "G. Kulkarni, V. Premraj, S. Dhar, S. Li, Y. Choi, A. C. Berg, and T. L. Berg. 2011. Baby talk: Understand- ing and generating simple image descriptions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1601-1608.",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "Collective generation of natural image descriptions",
                "authors": [
                    {
                        "first": "P",
                        "middle": [],
                        "last": "Kuznetsova",
                        "suffix": ""
                    },
                    {
                        "first": "V",
                        "middle": [],
                        "last": "Ordonez",
                        "suffix": ""
                    },
                    {
                        "first": "A",
                        "middle": [
                            "C"
                        ],
                        "last": "Berg",
                        "suffix": ""
                    },
                    {
                        "first": "T",
                        "middle": [
                            "L"
                        ],
                        "last": "Berg",
                        "suffix": ""
                    },
                    {
                        "first": "Y",
                        "middle": [],
                        "last": "Choi",
                        "suffix": ""
                    }
                ],
                "year": 2012,
                "venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers",
                "volume": "1",
                "issue": "",
                "pages": "359--368",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "P. Kuznetsova, V. Ordonez, A. C. Berg, T. L. Berg, and Y. Choi. 2012. Collective generation of natural im- age descriptions. In Proceedings of the 50th Annual Meeting of the Association for Computational Lin- guistics: Long Papers -Volume 1, pages 359-368.",
                "links": null
            },
            "BIBREF14": {
                "ref_id": "b14",
                "title": "A probabilistic model of syntactic and semantic acquisition from child-directed utterances and their meanings",
                "authors": [
                    {
                        "first": "T",
                        "middle": [],
                        "last": "Kwiatkowski",
                        "suffix": ""
                    },
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Goldwater",
                        "suffix": ""
                    },
                    {
                        "first": "L",
                        "middle": [],
                        "last": "Zettlemoyer",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Steedman",
                        "suffix": ""
                    }
                ],
                "year": 2012,
                "venue": "Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "234--244",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "T. Kwiatkowski, S. Goldwater, L. Zettlemoyer, and M. Steedman. 2012. A probabilistic model of syn- tactic and semantic acquisition from child-directed utterances and their meanings. In Proceedings of the 13th Conference of the European Chapter of the As- sociation for Computational Linguistics, pages 234- 244.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "Im2text: Describing images using 1 million captioned photographs",
                "authors": [
                    {
                        "first": "V",
                        "middle": [],
                        "last": "Ordonez",
                        "suffix": ""
                    },
                    {
                        "first": "G",
                        "middle": [],
                        "last": "Kulkarni",
                        "suffix": ""
                    },
                    {
                        "first": "T",
                        "middle": [
                            "L"
                        ],
                        "last": "Berg",
                        "suffix": ""
                    }
                ],
                "year": 2011,
                "venue": "Proceedings of Neural Information Processing Systems",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "V. Ordonez, G. Kulkarni, and T. L. Berg. 2011. Im2text: Describing images using 1 million cap- tioned photographs. In Proceedings of Neural In- formation Processing Systems.",
                "links": null
            },
            "BIBREF16": {
                "ref_id": "b16",
                "title": "Learning visually-grounded words and syntax for a scene description task",
                "authors": [
                    {
                        "first": "D",
                        "middle": [],
                        "last": "Roy",
                        "suffix": ""
                    }
                ],
                "year": 2002,
                "venue": "Computer Speech and Language",
                "volume": "16",
                "issue": "",
                "pages": "353--385",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "D. Roy. 2002. Learning visually-grounded words and syntax for a scene description task. Computer Speech and Language, 16:353-385.",
                "links": null
            },
            "BIBREF17": {
                "ref_id": "b17",
                "title": "Action bank: A high-level representation of activity in video",
                "authors": [
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Sadanand",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [
                            "J"
                        ],
                        "last": "Corso",
                        "suffix": ""
                    }
                ],
                "year": 2012,
                "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
                "volume": "",
                "issue": "",
                "pages": "1234--1241",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "S. Sadanand and J. J. Corso. 2012. Action bank: A high-level representation of activity in video. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1234-1241.",
                "links": null
            },
            "BIBREF18": {
                "ref_id": "b18",
                "title": "A maximumlikelihood approach to visual event classification",
                "authors": [
                    {
                        "first": "J",
                        "middle": [
                            "M"
                        ],
                        "last": "Siskind",
                        "suffix": ""
                    },
                    {
                        "first": "Q",
                        "middle": [],
                        "last": "Morris",
                        "suffix": ""
                    }
                ],
                "year": 1996,
                "venue": "Proceedings of the Fourth European Conference on Computer Vision",
                "volume": "",
                "issue": "",
                "pages": "347--360",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "J. M. Siskind and Q. Morris. 1996. A maximum- likelihood approach to visual event classification. In Proceedings of the Fourth European Conference on Computer Vision, pages 347-360.",
                "links": null
            },
            "BIBREF19": {
                "ref_id": "b19",
                "title": "A computational study of crosssituational techniques for learning word-to-meaning mappings",
                "authors": [
                    {
                        "first": "J",
                        "middle": [
                            "M"
                        ],
                        "last": "Siskind",
                        "suffix": ""
                    }
                ],
                "year": 1996,
                "venue": "Cognition",
                "volume": "61",
                "issue": "",
                "pages": "39--91",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "J. M. Siskind. 1996. A computational study of cross- situational techniques for learning word-to-meaning mappings. Cognition, 61:39-91.",
                "links": null
            },
            "BIBREF20": {
                "ref_id": "b20",
                "title": "Realtime American Sign Language recognition using desk and wearable computer based video",
                "authors": [
                    {
                        "first": "T",
                        "middle": [],
                        "last": "Starner",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Weaver",
                        "suffix": ""
                    },
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Pentland",
                        "suffix": ""
                    }
                ],
                "year": 1998,
                "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence",
                "volume": "20",
                "issue": "12",
                "pages": "1371--1375",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "T. Starner, J. Weaver, and A. Pentland. 1998. Real- time American Sign Language recognition using desk and wearable computer based video. IEEE Transactions on Pattern Analysis and Machine In- telligence, 20(12):1371-1375.",
                "links": null
            },
            "BIBREF21": {
                "ref_id": "b21",
                "title": "Error bounds for convolutional codes and an asymtotically optimum decoding algorithm",
                "authors": [
                    {
                        "first": "A",
                        "middle": [
                            "J"
                        ],
                        "last": "Viterbi",
                        "suffix": ""
                    }
                ],
                "year": 1967,
                "venue": "IEEE Transactions on Information Theory",
                "volume": "13",
                "issue": "",
                "pages": "260--267",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "A. J. Viterbi. 1967. Error bounds for convolutional codes and an asymtotically optimum decoding algo- rithm. IEEE Transactions on Information Theory, 13:260-267.",
                "links": null
            },
            "BIBREF22": {
                "ref_id": "b22",
                "title": "Convolutional codes and their performance in communication systems",
                "authors": [
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Viterbi",
                        "suffix": ""
                    }
                ],
                "year": 1971,
                "venue": "IEEE Transactions on Communication Technology",
                "volume": "19",
                "issue": "5",
                "pages": "751--772",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "A. Viterbi. 1971. Convolutional codes and their per- formance in communication systems. IEEE Trans- actions on Communication Technology, 19(5):751- 772.",
                "links": null
            },
            "BIBREF23": {
                "ref_id": "b23",
                "title": "Recognizing human action in time-sequential images using hidden Markov model",
                "authors": [
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Yamoto",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Ohya",
                        "suffix": ""
                    },
                    {
                        "first": "K",
                        "middle": [],
                        "last": "Ishii",
                        "suffix": ""
                    }
                ],
                "year": 1992,
                "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
                "volume": "",
                "issue": "",
                "pages": "379--385",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "J. Yamoto, J. Ohya, and K. Ishii. 1992. Recogniz- ing human action in time-sequential images using hidden Markov model. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recog- nition, pages 379-385.",
                "links": null
            },
            "BIBREF24": {
                "ref_id": "b24",
                "title": "I2T: Image parsing to text description. Proceedings of the IEEE",
                "authors": [
                    {
                        "first": "B",
                        "middle": [
                            "Z"
                        ],
                        "last": "Yao",
                        "suffix": ""
                    },
                    {
                        "first": "X",
                        "middle": [],
                        "last": "Yang",
                        "suffix": ""
                    },
                    {
                        "first": "L",
                        "middle": [],
                        "last": "Lin",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [
                            "W"
                        ],
                        "last": "Lee",
                        "suffix": ""
                    },
                    {
                        "first": "S.-C",
                        "middle": [],
                        "last": "Zhu",
                        "suffix": ""
                    }
                ],
                "year": 2010,
                "venue": "",
                "volume": "98",
                "issue": "",
                "pages": "1485--1508",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "B. Z. Yao, X. Yang, L. Lin, M. W. Lee, and S.-C. Zhu. 2010. I2T: Image parsing to text description. Pro- ceedings of the IEEE, 98(8):1485-1508, August.",
                "links": null
            },
            "BIBREF25": {
                "ref_id": "b25",
                "title": "On the integration of grounding language and learning objects",
                "authors": [
                    {
                        "first": "C",
                        "middle": [],
                        "last": "Yu",
                        "suffix": ""
                    },
                    {
                        "first": "D",
                        "middle": [
                            "H"
                        ],
                        "last": "Ballard",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "Proceedings of the 19th National Conference on Artifical intelligence",
                "volume": "",
                "issue": "",
                "pages": "488--493",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "C. Yu and D. H. Ballard. 2004. On the integration of grounding language and learning objects. In Pro- ceedings of the 19th National Conference on Artifi- cal intelligence, pages 488-493.",
                "links": null
            },
            "BIBREF26": {
                "ref_id": "b26",
                "title": "A new formulation of coupled hidden Markov models",
                "authors": [
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Zhong",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Ghosh",
                        "suffix": ""
                    }
                ],
                "year": 2001,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "S. Zhong and J. Ghosh. 2001. A new formulation of coupled hidden Markov models. Technical report, Department of Electrical and Computer Engineer- ing, The University of Texas at Austin.",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF0": {
                "text": "the left of the backpack carried the trash-can towards the chair.",
                "num": null,
                "type_str": "figure",
                "uris": null
            },
            "FIGREF2": {
                "text": "ROC curves of trained models and handwritten models.",
                "num": null,
                "type_str": "figure",
                "uris": null
            },
            "TABREF1": {
                "num": null,
                "type_str": "table",
                "content": "<table><tr><td>: Arguments and model configurations for</td></tr><tr><td>different parts of speech c. VEL stands for veloc-</td></tr><tr><td>ity, MAG for magnitude, ORIENT for orientation,</td></tr><tr><td>and DIST for distance.</td></tr></table>",
                "text": "",
                "html": null
            },
            "TABREF3": {
                "num": null,
                "type_str": "table",
                "content": "<table/>",
                "text": "F1 scores of different methods.",
                "html": null
            }
        }
    }
}