{
    "paper_id": "P09-1010",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T08:55:13.421394Z"
    },
    "title": "Reinforcement Learning for Mapping Instructions to Actions",
    "authors": [
        {
            "first": "S",
            "middle": [
                "R K"
            ],
            "last": "Branavan",
            "suffix": "",
            "affiliation": {
                "laboratory": "Artificial Intelligence Laboratory",
                "institution": "Massachusetts Institute of Technology",
                "location": {}
            },
            "email": "branavan@csail.mit.edu"
        },
        {
            "first": "Harr",
            "middle": [],
            "last": "Chen",
            "suffix": "",
            "affiliation": {
                "laboratory": "Artificial Intelligence Laboratory",
                "institution": "Massachusetts Institute of Technology",
                "location": {}
            },
            "email": ""
        },
        {
            "first": "Luke",
            "middle": [
                "S"
            ],
            "last": "Zettlemoyer",
            "suffix": "",
            "affiliation": {
                "laboratory": "Artificial Intelligence Laboratory",
                "institution": "Massachusetts Institute of Technology",
                "location": {}
            },
            "email": ""
        },
        {
            "first": "Regina",
            "middle": [],
            "last": "Barzilay",
            "suffix": "",
            "affiliation": {
                "laboratory": "Artificial Intelligence Laboratory",
                "institution": "Massachusetts Institute of Technology",
                "location": {}
            },
            "email": "regina@csail.mit.edu"
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "In this paper, we present a reinforcement learning approach for mapping natural language instructions to sequences of executable actions. We assume access to a reward function that defines the quality of the executed actions. During training, the learner repeatedly constructs action sequences for a set of documents, executes those actions, and observes the resulting reward. We use a policy gradient algorithm to estimate the parameters of a log-linear model for action selection. We apply our method to interpret instructions in two domains-Windows troubleshooting guides and game tutorials. Our results demonstrate that this method can rival supervised learning techniques while requiring few or no annotated training examples. 1",
    "pdf_parse": {
        "paper_id": "P09-1010",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "In this paper, we present a reinforcement learning approach for mapping natural language instructions to sequences of executable actions. We assume access to a reward function that defines the quality of the executed actions. During training, the learner repeatedly constructs action sequences for a set of documents, executes those actions, and observes the resulting reward. We use a policy gradient algorithm to estimate the parameters of a log-linear model for action selection. We apply our method to interpret instructions in two domains-Windows troubleshooting guides and game tutorials. Our results demonstrate that this method can rival supervised learning techniques while requiring few or no annotated training examples. 1",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "The problem of interpreting instructions written in natural language has been widely studied since the early days of artificial intelligence (Winograd, 1972; Di Eugenio, 1992) . Mapping instructions to a sequence of executable actions would enable the automation of tasks that currently require human participation. Examples include configuring software based on how-to guides and operating simulators using instruction manuals. In this paper, we present a reinforcement learning framework for inducing mappings from text to actions without the need for annotated training examples.",
                "cite_spans": [
                    {
                        "start": 141,
                        "end": 157,
                        "text": "(Winograd, 1972;",
                        "ref_id": "BIBREF18"
                    },
                    {
                        "start": 158,
                        "end": 175,
                        "text": "Di Eugenio, 1992)",
                        "ref_id": "BIBREF3"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "For concreteness, consider instructions from a Windows troubleshooting guide on deleting temporary folders, shown in Figure 1 . We aim to map Figure 1 : A Windows troubleshooting article describing how to remove the \"msdownld.tmp\" temporary folder. this text to the corresponding low-level commands and parameters. For example, properly interpreting the third instruction requires clicking on a tab, finding the appropriate option in a tree control, and clearing its associated checkbox.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 117,
                        "end": 125,
                        "text": "Figure 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "In this and many other applications, the validity of a mapping can be verified by executing the induced actions in the corresponding environment and observing their effects. For instance, in the example above we can assess whether the goal described in the instructions is achieved, i.e., the folder is deleted. The key idea of our approach is to leverage the validation process as the main source of supervision to guide learning. This form of supervision allows us to learn interpretations of natural language instructions when standard supervised techniques are not applicable, due to the lack of human-created annotations.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Reinforcement learning is a natural framework for building models using validation from an environment (Sutton and Barto, 1998) . We assume that supervision is provided in the form of a reward function that defines the quality of executed actions. During training, the learner repeatedly constructs action sequences for a set of given documents, executes those actions, and observes the resulting reward. The learner's goal is to estimate a policy -a distribution over actions given instruction text and environment state -that maximizes future expected reward. Our policy is modeled in a log-linear fashion, allowing us to incorporate features of both the instruction text and the environment. We employ a policy gradient algorithm to estimate the parameters of this model. We evaluate our method on two distinct applications: Windows troubleshooting guides and puzzle game tutorials. The key findings of our experiments are twofold. First, models trained only with simple reward signals achieve surprisingly high results, coming within 11% of a fully supervised method in the Windows domain. Second, augmenting unlabeled documents with even a small fraction of annotated examples greatly reduces this performance gap, to within 4% in that domain. These results indicate the power of learning from this new form of automated supervision.",
                "cite_spans": [
                    {
                        "start": 103,
                        "end": 127,
                        "text": "(Sutton and Barto, 1998)",
                        "ref_id": "BIBREF16"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Grounded Language Acquisition Our work fits into a broader class of approaches that aim to learn language from a situated context (Mooney, 2008a; Mooney, 2008b; Fleischman and Roy, 2005; Yu and Ballard, 2004; Siskind, 2001; Oates, 2001) . Instances of such approaches include work on inferring the meaning of words from video data (Roy and Pentland, 2002; Barnard and Forsyth, 2001) , and interpreting the commentary of a simulated soccer game (Chen and Mooney, 2008) . Most of these approaches assume some form of parallel data, and learn perceptual cooccurrence patterns. In contrast, our emphasis is on learning language by proactively interacting with an external environment.",
                "cite_spans": [
                    {
                        "start": 130,
                        "end": 145,
                        "text": "(Mooney, 2008a;",
                        "ref_id": "BIBREF7"
                    },
                    {
                        "start": 146,
                        "end": 160,
                        "text": "Mooney, 2008b;",
                        "ref_id": "BIBREF8"
                    },
                    {
                        "start": 161,
                        "end": 186,
                        "text": "Fleischman and Roy, 2005;",
                        "ref_id": "BIBREF4"
                    },
                    {
                        "start": 187,
                        "end": 208,
                        "text": "Yu and Ballard, 2004;",
                        "ref_id": "BIBREF19"
                    },
                    {
                        "start": 209,
                        "end": 223,
                        "text": "Siskind, 2001;",
                        "ref_id": "BIBREF15"
                    },
                    {
                        "start": 224,
                        "end": 236,
                        "text": "Oates, 2001)",
                        "ref_id": "BIBREF10"
                    },
                    {
                        "start": 331,
                        "end": 355,
                        "text": "(Roy and Pentland, 2002;",
                        "ref_id": "BIBREF11"
                    },
                    {
                        "start": 356,
                        "end": 382,
                        "text": "Barnard and Forsyth, 2001)",
                        "ref_id": "BIBREF0"
                    },
                    {
                        "start": 454,
                        "end": 467,
                        "text": "Mooney, 2008)",
                        "ref_id": "BIBREF1"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "2"
            },
            {
                "text": "Reinforcement Learning for Language Processing Reinforcement learning has been previously applied to the problem of dialogue management (Scheffler and Young, 2002; Roy et al., 2000; Litman et al., 2000; Singh et al., 1999) . These systems converse with a human user by taking actions that emit natural language utterances. The reinforcement learning state space encodes information about the goals of the user and what they say at each time step. The learning problem is to find an optimal policy that maps states to actions, through a trial-and-error process of repeated interaction with the user.",
                "cite_spans": [
                    {
                        "start": 136,
                        "end": 163,
                        "text": "(Scheffler and Young, 2002;",
                        "ref_id": "BIBREF13"
                    },
                    {
                        "start": 164,
                        "end": 181,
                        "text": "Roy et al., 2000;",
                        "ref_id": "BIBREF12"
                    },
                    {
                        "start": 182,
                        "end": 202,
                        "text": "Litman et al., 2000;",
                        "ref_id": "BIBREF6"
                    },
                    {
                        "start": 203,
                        "end": 222,
                        "text": "Singh et al., 1999)",
                        "ref_id": "BIBREF14"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "2"
            },
            {
                "text": "Reinforcement learning is applied very differently in dialogue systems compared to our setup.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "2"
            },
            {
                "text": "In some respects, our task is more easily amenable to reinforcement learning. For instance, we are not interacting with a human user, so the cost of interaction is lower. However, while the state space can be designed to be relatively small in the dialogue management task, our state space is determined by the underlying environment and is typically quite large. We address this complexity by developing a policy gradient algorithm that learns efficiently while exploring a small subset of the states.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "2"
            },
            {
                "text": "Our task is to learn a mapping between documents and the sequence of actions they express. Figure 2 shows how one example sentence is mapped to three actions.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 91,
                        "end": 99,
                        "text": "Figure 2",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Problem Formulation",
                "sec_num": "3"
            },
            {
                "text": "Mapping Text to Actions As input, we are given a document d, comprising a sequence of sentences (u 1 , . . . , u ), where each u i is a sequence of words. Our goal is to map d to a sequence of actions a = (a 0 , . . . , a n\u22121 ). Actions are predicted and executed sequentially. 2 An action a = (c, R, W ) encompasses a command c, the command's parameters R, and the words W specifying c and R. Elements of R refer to objects available in the environment state, as described below. Some parameters can also refer to words in document d. Additionally, to account for words that do not describe any actions, c can be a null command.",
                "cite_spans": [
                    {
                        "start": 278,
                        "end": 279,
                        "text": "2",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Problem Formulation",
                "sec_num": "3"
            },
            {
                "text": "The Environment The environment state E specifies the set of objects available for interaction, and their properties. In Figure 2 , E is shown on the right. The environment state E changes in response to the execution of command c with parameters R according to a transition distribution p(E |E, c, R). This distribution is a priori unknown to the learner. As we will see in Section 5, our approach avoids having to directly estimate this distribution.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 121,
                        "end": 129,
                        "text": "Figure 2",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Problem Formulation",
                "sec_num": "3"
            },
            {
                "text": "State To predict actions sequentially, we need to track the state of the document-to-actions mapping over time. A mapping state s is a tuple (E, d, j, W ), where E refers to the current environment state; j is the index of the sentence currently being interpreted in document d; and W contains words that were mapped by previous actions for Figure 2 : A three-step mapping from an instruction sentence to a sequence of actions in Windows 2000. For each step, the figure shows the words selected by the action, along with the corresponding system command and its parameters. The words of W are underlined, and the words of W are highlighted in grey.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 341,
                        "end": 349,
                        "text": "Figure 2",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Problem Formulation",
                "sec_num": "3"
            },
            {
                "text": "the same sentence. The mapping state s is observed after each action.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Problem Formulation",
                "sec_num": "3"
            },
            {
                "text": "The initial mapping state",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Problem Formulation",
                "sec_num": "3"
            },
            {
                "text": "s 0 for document d is (E d , d, 0, \u2205); E d is the unique starting environment state for d. Performing action a in state s = (E, d, j, W ) leads to a new state s according to distribution p(s |s, a), defined as follows: E tran- sitions according to p(E |E, c, R), W",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Problem Formulation",
                "sec_num": "3"
            },
            {
                "text": "is updated with a's selected words, and j is incremented if all words of the sentence have been mapped. For the applications we consider in this work, environment state transitions, and consequently mapping state transitions, are deterministic.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Problem Formulation",
                "sec_num": "3"
            },
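
The action and mapping-state tuples above translate directly into simple data structures. The following is a minimal sketch, assuming a deterministic exec_command callback supplied by the environment; all names here are illustrative, not from the paper.

```python
from dataclasses import dataclass
from typing import FrozenSet, Sequence, Tuple

@dataclass(frozen=True)
class Action:
    command: str                  # c, e.g. "left-click"; None models the null command
    params: Tuple[str, ...]       # R: references to environment objects (or words in d)
    words: FrozenSet[str]         # W: the words of the sentence this action consumes

@dataclass(frozen=True)
class MappingState:
    env: object                   # E: current environment state
    doc: Sequence[Sequence[str]]  # d: a document as a list of sentences (word lists)
    j: int                        # index of the sentence being interpreted
    used: FrozenSet[str]          # W: words already mapped in sentence j

def step(s: MappingState, a: Action, exec_command) -> MappingState:
    """Deterministic transition p(s'|s, a) as described in Section 3."""
    env2 = exec_command(s.env, a.command, a.params)  # E transitions via p(E'|E, c, R)
    used2 = s.used | a.words                         # W is updated with a's words
    done = used2 >= frozenset(s.doc[s.j])            # all words of sentence j mapped?
    return MappingState(env2, s.doc, s.j + 1 if done else s.j,
                        frozenset() if done else used2)
```

When the sentence is exhausted, j advances and the word set resets, matching the initial state (E_d, d, 0, \u2205) convention above.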
            {
                "text": "Training During training, we are provided with a set D of documents, the ability to sample from the transition distribution, and a reward function r(h). Here, h = (s 0 , a 0 , . . . , s n\u22121 , a n\u22121 , s n ) is a history of states and actions visited while interpreting one document. r(h) outputs a realvalued score that correlates with correct action selection. 3 We consider both immediate reward, which is available after each action, and delayed reward, which does not provide feedback until the last action. For example, task completion is a delayed reward that produces a positive value after the final action only if the task was completed successfully. We will also demonstrate how manually annotated action sequences can be incorporated into the reward.",
                "cite_spans": [
                    {
                        "start": 361,
                        "end": 362,
                        "text": "3",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Problem Formulation",
                "sec_num": "3"
            },
            {
                "text": "The goal of training is to estimate parameters \u03b8 of the action selection distribution p(a|s, \u03b8), called the policy. Since the reward correlates with action sequence correctness, the \u03b8 that maximizes expected reward will yield the best actions.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Problem Formulation",
                "sec_num": "3"
            },
            {
                "text": "Our goal is to predict a sequence of actions. We construct this sequence by repeatedly choosing an action given the current mapping state, and applying that action to advance to a new state.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "A Log-Linear Model for Actions",
                "sec_num": "4"
            },
            {
                "text": "Given a state s = (E, d, j, W ), the space of possible next actions is defined by enumerating subspans of unused words in the current sentence (i.e., subspans of the jth sentence of d not in W ), and the possible commands and parameters in environment state E. 4 We model the policy distribution p(a|s; \u03b8) over this action space in a log-linear fashion (Della Pietra et al., 1997; Lafferty et al., 2001) , giving us the flexibility to incorporate a diverse range of features. Under this representation, the policy distribution is:",
                "cite_spans": [
                    {
                        "start": 261,
                        "end": 262,
                        "text": "4",
                        "ref_id": null
                    },
                    {
                        "start": 353,
                        "end": 380,
                        "text": "(Della Pietra et al., 1997;",
                        "ref_id": "BIBREF2"
                    },
                    {
                        "start": 381,
                        "end": 403,
                        "text": "Lafferty et al., 2001)",
                        "ref_id": "BIBREF5"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "A Log-Linear Model for Actions",
                "sec_num": "4"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "p(a|s; \u03b8) = e \u03b8\u2022\u03c6(s,a) a e \u03b8\u2022\u03c6(s,a ) ,",
                        "eq_num": "(1)"
                    }
                ],
                "section": "A Log-Linear Model for Actions",
                "sec_num": "4"
            },
            {
                "text": "where \u03c6(s, a) \u2208 R n is an n-dimensional feature representation. During test, actions are selected according to the mode of this distribution.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "A Log-Linear Model for Actions",
                "sec_num": "4"
            },
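
As a concrete reading of Equation 1, the sketch below computes the policy distribution with a numerically stable softmax; the feature map phi and the candidate-action list are assumed to be supplied by the caller.

```python
import numpy as np

def policy_distribution(theta, state, actions, phi):
    """p(a|s; theta) from Equation 1: softmax over theta . phi(s, a)."""
    scores = np.array([theta @ phi(state, a) for a in actions])
    scores -= scores.max()            # stabilize the exponentials
    probs = np.exp(scores)
    return probs / probs.sum()

def best_action(theta, state, actions, phi):
    """At test time, select the mode of the distribution."""
    probs = policy_distribution(theta, state, actions, phi)
    return actions[int(np.argmax(probs))]
```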
            {
                "text": "During training, our goal is to find the optimal policy p(a|s; \u03b8). Since reward correlates with correct action selection, a natural objective is to maximize expected future reward -that is, the reward we expect while acting according to that policy from state s. Formally, we maximize the value function:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Reinforcement Learning",
                "sec_num": "5"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "V \u03b8 (s) = E p(h|\u03b8) [r(h)] ,",
                        "eq_num": "(2)"
                    }
                ],
                "section": "Reinforcement Learning",
                "sec_num": "5"
            },
            {
                "text": "where the history h is the sequence of states and actions encountered while interpreting a single document d \u2208 D. This expectation is averaged over all documents in D. The distribution p(h|\u03b8) returns the probability of seeing history h when starting from state s and acting according to a policy with parameters \u03b8. This distribution can be decomposed into a product over time steps:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Reinforcement Learning",
                "sec_num": "5"
            },
            {
                "text": "p(h|\u03b8) = n\u22121 t=0 p(a t |s t ; \u03b8)p(s t+1 |s t , a t ).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Reinforcement Learning",
                "sec_num": "5"
            },
            {
                "text": "(3)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Reinforcement Learning",
                "sec_num": "5"
            },
            {
                "text": "Our reinforcement learning problem is to find the parameters \u03b8 that maximize V \u03b8 from equation 2.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "A Policy Gradient Algorithm",
                "sec_num": "5.1"
            },
            {
                "text": "Although there is no closed form solution, policy gradient algorithms (Sutton et al., 2000) estimate the parameters \u03b8 by performing stochastic gradient ascent. The gradient of V \u03b8 is approximated by interacting with the environment, and the resulting reward is used to update the estimate of \u03b8. Policy gradient algorithms optimize a non-convex objective and are only guaranteed to find a local optimum. However, as we will see, they scale to large state spaces and can perform well in practice. To find the parameters \u03b8 that maximize the objective, we first compute the derivative of V \u03b8 . Expanding according to the product rule, we have:",
                "cite_spans": [
                    {
                        "start": 70,
                        "end": 91,
                        "text": "(Sutton et al., 2000)",
                        "ref_id": "BIBREF17"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "A Policy Gradient Algorithm",
                "sec_num": "5.1"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "\u2202 \u2202\u03b8 V \u03b8 (s) = E p(h|\u03b8) r(h) t \u2202 \u2202\u03b8 log p(a t |s t ; \u03b8) ,",
                        "eq_num": "(4)"
                    }
                ],
                "section": "A Policy Gradient Algorithm",
                "sec_num": "5.1"
            },
            {
                "text": "where the inner sum is over all time steps t in the current history h. Expanding the inner partial derivative we observe that:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "A Policy Gradient Algorithm",
                "sec_num": "5.1"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "\u2202 \u2202\u03b8 log p(a|s; \u03b8) = \u03c6(s, a)\u2212 a \u03c6(s, a )p(a |s; \u03b8),",
                        "eq_num": "(5)"
                    }
                ],
                "section": "A Policy Gradient Algorithm",
                "sec_num": "5.1"
            },
            {
                "text": "which is the derivative of a log-linear distribution.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "A Policy Gradient Algorithm",
                "sec_num": "5.1"
            },
            {
                "text": "Equation 5 is easy to compute directly. However, the complete derivative of V_\u03b8 in equation 4 is intractable, because computing the expectation would require summing over all possible histories. Instead, policy gradient algorithms employ stochastic gradient ascent by computing a noisy estimate of the expectation using just a subset of the histories. Specifically, we draw samples from p(h|\u03b8) by acting in the target environment, and use these samples to approximate the expectation in equation 4. In practice, it is often sufficient to sample a single history h for this approximation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "A Policy Gradient Algorithm",
                "sec_num": "5.1"
            },
            {
                "text": "Algorithm 1: (1) for i = 1 . . . T do; (2) foreach d \u2208 D do; (3) sample history h \u223c p(h|\u03b8), where h = (s_0, a_0, . . . , a_{n-1}, s_n), as follows: (3a) for t = 0 . . . n - 1 do; (3b) sample action a_t \u223c p(a|s_t; \u03b8); (3c) execute a_t on state s_t: s_{t+1} \u223c p(s|s_t, a_t); end; (4) \u0394 \u2190 \u03a3_t [ \u03c6(s_t, a_t) \u2212 \u03a3_{a'} \u03c6(s_t, a') p(a'|s_t; \u03b8) ]; (5) \u03b8 \u2190 \u03b8 + r(h)\u0394.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "A Policy Gradient Algorithm",
                "sec_num": "5.1"
            },
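
Under the same assumptions as the earlier sketches (a feature map phi, a candidate-action enumerator, an environment step function, and a reward function), Algorithm 1 can be sketched in a few lines; the document interface (initial_state, finished) is hypothetical, not the authors' implementation.

```python
import numpy as np

def policy_gradient(docs, T, theta, phi, candidate_actions, step, reward, rng):
    """Algorithm 1: stochastic gradient ascent on expected reward (Eqs. 4-5)."""
    for _ in range(T):                                  # step 1: T iterations
        for d in docs:                                  # step 2: each document
            s = d.initial_state()
            history = []
            delta = np.zeros_like(theta)
            while not s.finished():                     # step 3: sample h ~ p(h|theta)
                acts = candidate_actions(s)
                feats = np.stack([phi(s, a) for a in acts])
                scores = feats @ theta
                probs = np.exp(scores - scores.max())   # log-linear policy, Eq. 1
                probs /= probs.sum()
                i = rng.choice(len(acts), p=probs)      # step 3b: a_t ~ p(a|s_t; theta)
                delta += feats[i] - probs @ feats       # step 4 term, Eq. 5
                history.append((s, acts[i]))
                s = step(s, acts[i])                    # step 3c: execute a_t
            theta = theta + reward(history, s) * delta  # step 5: theta += r(h) * delta
    return theta
```

A single sampled history per document drives each update, mirroring the observation above that one sample often suffices for the gradient estimate.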
            {
                "text": "Algorithm 1 details the complete policy gradient algorithm. It performs T iterations over the set of documents D.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "A Policy Gradient Algorithm",
                "sec_num": "5.1"
            },
            {
                "text": "Step 3 samples a history that maps each document to actions. This is done by repeatedly selecting actions according to the current policy, and updating the state by executing the selected actions. Steps 4 and 5 compute the empirical gradient and update the parameters \u03b8.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "A Policy Gradient Algorithm",
                "sec_num": "5.1"
            },
            {
                "text": "In many domains, interacting with the environment is expensive. Therefore, we use two techniques that allow us to take maximum advantage of each environment interaction. First, a history h = (s 0 , a 0 , . . . , s n ) contains subsequences (s i , a i , . . . s n ) for i = 1 to n \u2212 1, each with its own reward value given by the environment as a side effect of executing h. We apply the update from equation 5 for each subsequence. Second, for a sampled history h, we can propose alternative histories h that result in the same commands and parameters with different word spans. We can again apply equation 5 for each h , weighted by its probability under the current policy, p(h |\u03b8) p(h|\u03b8) .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "A Policy Gradient Algorithm",
                "sec_num": "5.1"
            },
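
Both reuse techniques rely only on quantities the sampler already computes. The snippet below sketches the reweighting factor for an alternative history: with the deterministic transitions considered here, the transition factors in Equation 3 cancel, leaving a ratio of per-step policy probabilities.

```python
import numpy as np

def history_weight(logp_alt, logp_sampled):
    """p(h'|theta) / p(h|theta) for histories sharing the same commands.

    With deterministic transitions, only the policy factors differ, so the
    ratio is exp(sum of alternative log-probs - sum of sampled log-probs).
    Both arguments are sequences of per-step log p(a_t|s_t; theta) values.
    """
    return float(np.exp(np.sum(logp_alt) - np.sum(logp_sampled)))
```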
            {
                "text": "The algorithm we have presented belongs to a family of policy gradient algorithms that have been successfully used for complex tasks such as robot control (Ng et al., 2003) . Our formulation is unique in how it represents natural language in the reinforcement learning framework.",
                "cite_spans": [
                    {
                        "start": 155,
                        "end": 172,
                        "text": "(Ng et al., 2003)",
                        "ref_id": "BIBREF9"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "A Policy Gradient Algorithm",
                "sec_num": "5.1"
            },
            {
                "text": "We can design a range of reward functions to guide learning, depending on the availability of annotated data and environment feedback. Consider the case when every training document d \u2208 D is annotated with its correct sequence of actions, and state transitions are deterministic. Given these examples, it is straightforward to construct a reward function that connects policy gradient to maximum likelihood. Specifically, define a reward function r(h) that returns one when h matches the annotation for the document being analyzed, and zero otherwise. Policy gradient performs stochastic gradient ascent on the objective from equation 2, performing one update per document. For document d, this objective becomes:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Reward Functions and ML Estimation",
                "sec_num": "5.2"
            },
            {
                "text": "E p(h|\u03b8) [r(h)] = h r(h)p(h|\u03b8) = p(h d |\u03b8),",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Reward Functions and ML Estimation",
                "sec_num": "5.2"
            },
            {
                "text": "where h d is the history corresponding to the annotated action sequence. Thus, with this reward policy gradient is equivalent to stochastic gradient ascent with a maximum likelihood objective.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Reward Functions and ML Estimation",
                "sec_num": "5.2"
            },
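
The annotation-matching reward described above is a one-line check: whenever the sampled history matches the gold sequence, the update r(h) * delta reduces to a maximum likelihood gradient step, and otherwise it is zero. A minimal sketch, assuming annotations are stored as action lists:

```python
def annotation_reward(history, annotated_actions):
    """r(h) = 1 if the sampled actions match the annotation exactly, else 0."""
    sampled = [(a.command, a.params) for _, a in history]
    gold = [(a.command, a.params) for a in annotated_actions]
    return 1.0 if sampled == gold else 0.0
```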
            {
                "text": "At the other extreme, when annotations are completely unavailable, learning is still possible given informative feedback from the environment. Crucially, this feedback only needs to correlate with action sequence quality. We detail environment-based reward functions in the next section. As our results will show, reward functions built using this kind of feedback can provide strong guidance for learning. We will also consider reward functions that combine annotated supervision with environment feedback.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Reward Functions and ML Estimation",
                "sec_num": "5.2"
            },
            {
                "text": "We study two applications of our model: following instructions to perform software tasks, and solving a puzzle game using tutorial guides.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Applying the Model",
                "sec_num": "6"
            },
            {
                "text": "On its Help and Support website, 5 Microsoft publishes a number of articles describing how to per-5 support.microsoft.com form tasks and troubleshoot problems in the Windows operating systems. Examples of such tasks include installing patches and changing security settings. Figure 1 shows one such article.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 275,
                        "end": 283,
                        "text": "Figure 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Microsoft Windows Help and Support",
                "sec_num": "6.1"
            },
            {
                "text": "Our goal is to automatically execute these support articles in the Windows 2000 environment. Here, the environment state is the set of visible user interface (UI) objects, and object properties such as label, location, and parent window. Possible commands include left-click, right-click, double-click, and type-into, all of which take a UI object as a parameter; type-into additionally requires a parameter for the input text. Table 1 lists some of the features we use for this domain. These features capture various aspects of the action under consideration, the current Windows UI state, and the input instructions. For example, one lexical feature measures the similarity of a word in the sentence to the UI labels of objects in the environment. Environment-specific features, such as whether an object is currently in focus, are useful when selecting the object to manipulate. In total, there are 4,438 features.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 428,
                        "end": 435,
                        "text": "Table 1",
                        "ref_id": "TABREF0"
                    }
                ],
                "eq_spans": [],
                "section": "Microsoft Windows Help and Support",
                "sec_num": "6.1"
            },
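
Features of this kind are straightforward to realize. The sketch below implements one plausible lexical feature, the similarity of an instruction word to a UI object's label; the paper does not specify the similarity measure, so a character-level ratio is assumed purely for illustration.

```python
import difflib

def label_similarity_feature(word: str, ui_label: str) -> float:
    """Lexical feature: similarity of an instruction word to a UI object label.

    The measure is an assumption; difflib's character-based ratio stands in
    for whatever similarity the authors used.
    """
    return difflib.SequenceMatcher(None, word.lower(), ui_label.lower()).ratio()
```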
            {
                "text": "Reward Function Environment feedback can be used as a reward function in this domain. An obvious reward would be task completion (e.g., whether the stated computer problem was fixed). Unfortunately, verifying task completion is a challenging system issue in its own right.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Microsoft Windows Help and Support",
                "sec_num": "6.1"
            },
            {
                "text": "Instead, we rely on a noisy method of checking whether execution can proceed from one sentence to the next: at least one word in each sentence has to correspond to an object in the envi- If no words in a sentence match a current environment object, then one of the previous sentences was analyzed incorrectly. In this case, we assign the history a reward of -1. This reward is not guaranteed to penalize all incorrect histories, because there may be false positive matches between the sentence and the environment. When at least one word matches, we assign a positive reward that linearly increases with the percentage of words assigned to non-null commands, and linearly decreases with the number of output actions. This reward signal encourages analyses that interpret all of the words without producing spurious actions.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Microsoft Windows Help and Support",
                "sec_num": "6.1"
            },
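
Putting the two cases together, the Windows-domain reward can be sketched as follows. The linear coefficients are not given in the paper, so illustrative values are assumed.

```python
def windows_reward(all_sentences_matched, frac_words_non_null, num_actions,
                   length_penalty=0.1):
    """Noisy environment reward for the Windows domain (coefficients assumed).

    Returns -1 when some sentence had no word matching a current environment
    object (an earlier sentence was misanalyzed); otherwise a score that rises
    with the fraction of words mapped to non-null commands and falls with the
    number of output actions.
    """
    if not all_sentences_matched:
        return -1.0
    return frac_words_non_null - length_penalty * num_actions
```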
            {
                "text": "Our second application is to a puzzle game called Crossblock, available online as a Flash game. 7 Each of 50 puzzles is played on a grid, where some grid positions are filled with squares. The object of the game is to clear the grid by drawing vertical or horizontal line segments that remove groups of squares. Each segment must exactly cross a specific number of squares, ranging from two to seven depending on the puzzle. Humans players have found this game challenging and engaging enough to warrant posting textual tutorials. 8 A sample puzzle and tutorial are shown in Figure 3 .",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 575,
                        "end": 583,
                        "text": "Figure 3",
                        "ref_id": "FIGREF3"
                    }
                ],
                "eq_spans": [],
                "section": "Crossblock: A Puzzle Game",
                "sec_num": "6.2"
            },
            {
                "text": "The environment is defined by the state of the grid. The only command is clear, which takes a parameter specifying the orientation (row or column) and grid location of the line segment to be removed. The challenge in this domain is to segment the text into the phrases describing each action, and then correctly identify the line segments from references such as \"the bottom four from the second column from the left.\"",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Crossblock: A Puzzle Game",
                "sec_num": "6.2"
            },
            {
                "text": "For this domain, we use two sets of binary features on state-action pairs (s, a). First, for each vocabulary word w, we define a feature that is one if w is the last word of a's consumed words W . These features help identify the proper text segmentation points between actions. Second, we introduce features for pairs of vocabulary word w and attributes of action a, e.g., the line orientation and grid locations of the squares that a would remove. This set of features enables us to match words (e.g., \"row\") with objects in the environment (e.g., a move that removes a horizontal series of squares). In total, there are 8,094 features.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Crossblock: A Puzzle Game",
                "sec_num": "6.2"
            },
            {
                "text": "Reward Function For Crossblock it is easy to directly verify task completion, which we use as the basis of our reward function. The reward r(h) is -1 if h ends in a state where the puzzle cannot be completed. For solved puzzles, the reward is a positive value proportional to the percentage of words assigned to non-null commands.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Crossblock: A Puzzle Game",
                "sec_num": "6.2"
            },
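
The corresponding Crossblock reward is even simpler to sketch, since completion is directly observable; the proportionality constant is an assumption.

```python
def crossblock_reward(solved: bool, frac_words_non_null: float) -> float:
    """Crossblock reward: -1 if the history ends in a state from which the
    puzzle cannot be completed; for solved puzzles, a positive value
    proportional to the fraction of words assigned to non-null commands
    (proportionality constant assumed to be 1)."""
    return frac_words_non_null if solved else -1.0
```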
            {
                "text": "Datasets For the Windows domain, our dataset consists of 128 documents, divided into 70 for training, 18 for development, and 40 for test. In the puzzle game domain, we use 50 tutorials, divided into 40 for training and 10 for test. 9 Statistics for the datasets are shown below. The data exhibits certain qualities that make for a challenging learning problem. For instance, there are a surprising variety of linguistic constructs -as Figure 4 shows, in the Windows domain even a simple command is expressed in at least six different ways. Experimental Framework To apply our algorithm to the Windows domain, we use the Win32 application programming interface to simulate human interactions with the user interface, and to gather environment state information. The operating system environment is hosted within a virtual machine, 10 allowing us to rapidly save and reset system state snapshots. For the puzzle game domain, we replicated the game with an implementation that facilitates automatic play.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 436,
                        "end": 444,
                        "text": "Figure 4",
                        "ref_id": "FIGREF4"
                    }
                ],
                "eq_spans": [],
                "section": "Experimental Setup",
                "sec_num": "7"
            },
            {
                "text": "As is commonly done in reinforcement learning, we use a softmax temperature parameter to smooth the policy distribution (Sutton and Barto, 1998) , set to 0.1 in our experiments. For Windows, the development set is used to select the best parameters. For Crossblock, we choose the parameters that produce the highest reward during training. During evaluation, we use these parameters to predict mappings for the test documents.",
                "cite_spans": [
                    {
                        "start": 120,
                        "end": 144,
                        "text": "(Sutton and Barto, 1998)",
                        "ref_id": "BIBREF16"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experimental Setup",
                "sec_num": "7"
            },
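            {
                "text": "A minimal sketch of such a temperature-smoothed softmax over candidate-action scores; only the temperature value (0.1) is taken from the text above, and the scores stand in for the model's linear action scores.

import math

def policy(scores, temperature=0.1):
    # Softmax over candidate-action scores with a temperature parameter;
    # subtracting the max stabilizes the exponentials.
    m = max(scores)
    exps = [math.exp((x - m) / temperature) for x in scores]
    z = sum(exps)
    return [e / z for e in exps]

# e.g. three candidate actions with linear scores 1.2, 1.0, and 0.2
probs = policy([1.2, 1.0, 0.2])",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experimental Setup",
                "sec_num": "7"
            },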
            {
                "text": "Evaluation Metrics For evaluation, we compare the results to manually constructed sequences of actions. We measure the number of correct actions, sentences, and documents. An action is correct if it matches the annotations in terms of command and parameters. A sentence is correct if all of its actions are correctly identified, and analogously for documents. 11 Statistical significance is measured with the sign test.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experimental Setup",
                "sec_num": "7"
            },
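            {
                "text": "The hierarchical accuracy metrics above can be sketched as follows; the prediction and annotation structures are illustrative, not the paper's evaluation code.

def action_correct(pred, gold):
    # An action is correct if it matches the annotated command and parameters.
    return pred['command'] == gold['command'] and pred['params'] == gold['params']

def sentence_correct(pred_actions, gold_actions):
    # A sentence is correct if all of its actions are correctly identified.
    return (len(pred_actions) == len(gold_actions)
            and all(action_correct(p, g)
                    for p, g in zip(pred_actions, gold_actions)))

def document_correct(pred_sentences, gold_sentences):
    # Analogously, a document is correct if all of its sentences are.
    return all(sentence_correct(p, g)
               for p, g in zip(pred_sentences, gold_sentences))",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experimental Setup",
                "sec_num": "7"
            },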
            {
                "text": "Additionally, we compute a word alignment score to investigate the extent to which the input text is used to construct correct analyses. This score measures the percentage of words that are aligned to the corresponding annotated actions in correctly analyzed documents.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experimental Setup",
                "sec_num": "7"
            },
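            {
                "text": "A corresponding sketch of the alignment score; the per-document bookkeeping fields are assumptions, not the paper's data structures.

def word_alignment_score(documents):
    # Percentage of words aligned to their annotated actions, computed only
    # over documents whose full action sequence was analyzed correctly.
    correct = [d for d in documents if d['document_correct']]
    aligned = sum(d['aligned_words'] for d in correct)
    total = sum(d['total_words'] for d in correct)
    return aligned / total if total else 0.0",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experimental Setup",
                "sec_num": "7"
            },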
            {
                "text": "Baselines We consider the following baselines to characterize the performance of our approach.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experimental Setup",
                "sec_num": "7"
            },
            {
                "text": "\u2022 Full Supervision Sequence prediction problems like ours are typically addressed using supervised techniques. We measure how a standard supervised approach would perform on this task by using a reward signal based on manual annotations of output action sequences, as defined in Section 5.2. As shown there, policy gradient with this reward is equivalent to stochastic gradient ascent with a maximum likelihood objective.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experimental Setup",
                "sec_num": "7"
            },
            {
                "text": "\u2022 Partial Supervision We consider the case when only a subset of training documents is annotated, and environment reward is used for the remainder. Our method seamlessly combines these two kinds of rewards.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experimental Setup",
                "sec_num": "7"
            },
            {
                "text": "\u2022 Random and Majority (Windows) We consider two na\u00efve baselines. Both scan through each sentence from left to right. A command c is executed on the object whose name is encountered first in the sentence. This command c is either selected randomly, or set to the majority command, which is leftclick. This procedure is repeated until no more words match environment objects.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experimental Setup",
                "sec_num": "7"
            },
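            {
                "text": "A minimal sketch of both Windows baselines; the command inventory is a hypothetical stand-in for the domain's command set, and word-object matching is simplified to exact name lookup.

import random

def baseline_actions(sentence_words, object_names, majority=True):
    # Scan the sentence left to right; execute a command on the first word
    # matching an environment object name, then repeat on the rest.
    commands = ['left-click', 'right-click', 'double-click', 'type']
    actions = []
    remaining = list(sentence_words)
    while True:
        target = next((w for w in remaining if w in object_names), None)
        if target is None:
            break
        cmd = 'left-click' if majority else random.choice(commands)
        actions.append((cmd, target))
        remaining.remove(target)
    return actions",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experimental Setup",
                "sec_num": "7"
            },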
            {
                "text": "\u2022 Random (Puzzle) We consider a baseline that randomly selects among the actions that are valid in the current game state. 12 Table 2 presents evaluation results on the test sets. There are several indicators of the difficulty of this task. The random and majority baselines' poor performance in both domains indicates that na\u00efve approaches are inadequate for these tasks. The performance of the fully supervised approach provides further evidence that the task is challenging. This difficulty can be attributed in part to the large branching factor of possible actions at each stepon average, there are 27.14 choices per action in the Windows domain, and 9.78 in the Crossblock domain.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 126,
                        "end": 133,
                        "text": "Table 2",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Experimental Setup",
                "sec_num": "7"
            },
            {
                "text": "In both domains, the learners relying only on environment reward perform well. Although the fully supervised approach performs the best, adding just a few annotated training examples to the environment-based learner significantly reduces the performance gap. Table 2 : Performance on the test set with different reward signals and baselines. Our evaluation measures the proportion of correct actions, sentences, and documents. We also report the percentage of correct word alignments for the successfully completed documents. Note the puzzle domain has only singlesentence documents, so its sentence and document scores are identical. The partial supervision line refers to 20 out of 70 annotated training documents for Windows, and 10 out of 40 for the puzzle. Each result marked with * or is a statistically significant improvement over the result immediately above it; * indicates p < 0.01 and indicates p < 0.05. Figure 5 : Comparison of two training scenarios where training is done using a subset of annotated documents, with and without environment reward for the remaining unannotated documents. Figure 5 shows the overall tradeoff between annotation effort and system performance for the two domains. The ability to make this tradeoff is one of the advantages of our approach. The figure also shows that augmenting annotated documents with additional environment-reward documents invariably improves performance.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 259,
                        "end": 266,
                        "text": "Table 2",
                        "ref_id": null
                    },
                    {
                        "start": 917,
                        "end": 925,
                        "text": "Figure 5",
                        "ref_id": null
                    },
                    {
                        "start": 1104,
                        "end": 1112,
                        "text": "Figure 5",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Results",
                "sec_num": "8"
            },
            {
                "text": "The word alignment results from Table 2 indicate that the learners are mapping the correct words to actions for documents that are successfully completed. For example, the models that perform best in the Windows domain achieve nearly perfect word alignment scores.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 32,
                        "end": 39,
                        "text": "Table 2",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Results",
                "sec_num": "8"
            },
            {
                "text": "To further assess the contribution of the instruction text, we train a variant of our model without access to text features. This is possible in the game domain, where all of the puzzles share a single goal state that is independent of the instructions. This variant solves 34% of the puzzles, suggesting that access to the instructions significantly improves performance.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Results",
                "sec_num": "8"
            },
            {
                "text": "In this paper, we presented a reinforcement learning approach for inducing a mapping between instructions and actions. This approach is able to use environment-based rewards, such as task completion, to learn to analyze text. We showed that having access to a suitable reward function can significantly reduce the need for annotations.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusions",
                "sec_num": "9"
            },
            {
                "text": "Code, data, and annotations used in this work are available at http://groups.csail.mit.edu/rbg/code/rl/",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "That is, action ai is executed before ai+1 is predicted.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "In most reinforcement learning problems, the reward function is defined over state-action pairs, as r(s, a) -in this case, r(h) = P t r(st, at), and our formulation becomes a standard finite-horizon Markov decision process. Policy gradient approaches allow us to learn using the more general case of history-based reward.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
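            {
                "text": "As a sketch of that decomposed case (the step_reward callback and the pair-list history are illustrative):

def history_reward(history, step_reward):
    # When the reward decomposes over state-action pairs, the history reward
    # is simply the sum of per-step rewards; policy gradient only requires
    # the general history-based form r(h).
    return sum(step_reward(s, a) for (s, a) in history)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },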
            {
                "text": "For parameters that refer to words, the space of possible values is defined by the unused words in the current sentence.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "We assume that a word maps to an environment object if the edit distance between the word and the object's name is below a threshold value.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "hexaditidom.deviantart.com/art/Crossblock-108669149",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "www.jayisgames.com/archives/2009/01/crossblock.php",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
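            {
                "text": "A minimal sketch of the edit-distance matching heuristic from the footnote above; the threshold value is illustrative, as the text does not specify it.

def edit_distance(a, b):
    # Standard Levenshtein distance via dynamic programming.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def word_maps_to_object(word, object_name, threshold=2):
    return edit_distance(word.lower(), object_name.lower()) < threshold",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },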
            {
                "text": "For Crossblock, because the number of puzzles is limited, we did not hold out a separate development set, and report averaged results over five training/test splits.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "VMware Workstation, available at www.vmware.com",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "In these tasks, each action depends on the correct execution of all previous actions, so a single error can render the remainder of that document's mapping incorrect. In addition, due to variability in document lengths, overall action accuracy is not guaranteed to be higher than document accuracy.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "Since action selection is among objects, there is no natural majority baseline for the puzzle.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            }
        ],
        "back_matter": [
            {
                "text": "The authors acknowledge the support of the NSF (CAREER grant IIS-0448168, grant IIS-0835445, grant IIS-0835652, and a Graduate Research Fellowship) and the ONR. Thanks to Michael Collins, Amir Globerson, Tommi Jaakkola, Leslie Pack Kaelbling, Dina Katabi, Martin Rinard, and members of the MIT NLP group for their suggestions and comments. Any opinions, findings, conclusions, or recommendations expressed in this paper are those of the authors, and do not necessarily reflect the views of the funding organizations.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Acknowledgments",
                "sec_num": null
            }
        ],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Learning the semantics of words and pictures",
                "authors": [
                    {
                        "first": "Kobus",
                        "middle": [],
                        "last": "Barnard",
                        "suffix": ""
                    },
                    {
                        "first": "David",
                        "middle": [
                            "A"
                        ],
                        "last": "Forsyth",
                        "suffix": ""
                    }
                ],
                "year": 2001,
                "venue": "Proceedings of ICCV",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Kobus Barnard and David A. Forsyth. 2001. Learning the semantics of words and pictures. In Proceedings of ICCV.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "Learning to sportscast: a test of grounded language acquisition",
                "authors": [
                    {
                        "first": "L",
                        "middle": [],
                        "last": "David",
                        "suffix": ""
                    },
                    {
                        "first": "Raymond",
                        "middle": [
                            "J"
                        ],
                        "last": "Chen",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Mooney",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "Proceedings of ICML",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "David L. Chen and Raymond J. Mooney. 2008. Learn- ing to sportscast: a test of grounded language acqui- sition. In Proceedings of ICML.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "Inducing features of random fields",
                "authors": [
                    {
                        "first": "Vincent",
                        "middle": [
                            "J"
                        ],
                        "last": "Stephen Della Pietra",
                        "suffix": ""
                    },
                    {
                        "first": "John",
                        "middle": [
                            "D"
                        ],
                        "last": "Della Pietra",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Lafferty",
                        "suffix": ""
                    }
                ],
                "year": 1997,
                "venue": "IEEE Trans. Pattern Anal. Mach. Intell",
                "volume": "19",
                "issue": "4",
                "pages": "380--393",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Stephen Della Pietra, Vincent J. Della Pietra, and John D. Lafferty. 1997. Inducing features of ran- dom fields. IEEE Trans. Pattern Anal. Mach. Intell., 19(4):380-393.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "Understanding natural language instructions: the case of purpose clauses",
                "authors": [
                    {
                        "first": "Barbara",
                        "middle": [],
                        "last": "Di",
                        "suffix": ""
                    },
                    {
                        "first": "Eugenio",
                        "middle": [],
                        "last": "",
                        "suffix": ""
                    }
                ],
                "year": 1992,
                "venue": "Proceedings of ACL",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Barbara Di Eugenio. 1992. Understanding natural lan- guage instructions: the case of purpose clauses. In Proceedings of ACL.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "Intentional context in situated language learning",
                "authors": [
                    {
                        "first": "Michael",
                        "middle": [],
                        "last": "Fleischman",
                        "suffix": ""
                    },
                    {
                        "first": "Deb",
                        "middle": [],
                        "last": "Roy",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "Proceedings of CoNLL",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Michael Fleischman and Deb Roy. 2005. Intentional context in situated language learning. In Proceed- ings of CoNLL.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data",
                "authors": [
                    {
                        "first": "John",
                        "middle": [],
                        "last": "Lafferty",
                        "suffix": ""
                    },
                    {
                        "first": "Andrew",
                        "middle": [],
                        "last": "Mccallum",
                        "suffix": ""
                    },
                    {
                        "first": "Fernando",
                        "middle": [],
                        "last": "Pereira",
                        "suffix": ""
                    }
                ],
                "year": 2001,
                "venue": "Proceedings of ICML",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Prob- abilistic models for segmenting and labeling se- quence data. In Proceedings of ICML.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "Automatic optimization of dialogue management",
                "authors": [
                    {
                        "first": "Diane",
                        "middle": [
                            "J"
                        ],
                        "last": "Litman",
                        "suffix": ""
                    },
                    {
                        "first": "Michael",
                        "middle": [
                            "S"
                        ],
                        "last": "Kearns",
                        "suffix": ""
                    },
                    {
                        "first": "Satinder",
                        "middle": [],
                        "last": "Singh",
                        "suffix": ""
                    },
                    {
                        "first": "Marilyn",
                        "middle": [
                            "A"
                        ],
                        "last": "Walker",
                        "suffix": ""
                    }
                ],
                "year": 2000,
                "venue": "Proceedings of COLING",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Diane J. Litman, Michael S. Kearns, Satinder Singh, and Marilyn A. Walker. 2000. Automatic optimiza- tion of dialogue management. In Proceedings of COLING.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "Learning language from its perceptual context",
                "authors": [
                    {
                        "first": "Raymond",
                        "middle": [
                            "J"
                        ],
                        "last": "Mooney",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "Proceedings of ECML/PKDD",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Raymond J. Mooney. 2008a. Learning language from its perceptual context. In Proceedings of ECML/PKDD.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "Learning to connect language and perception",
                "authors": [
                    {
                        "first": "Raymond",
                        "middle": [
                            "J"
                        ],
                        "last": "Mooney",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "Proceedings of AAAI",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Raymond J. Mooney. 2008b. Learning to connect lan- guage and perception. In Proceedings of AAAI.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "Autonomous helicopter flight via reinforcement learning",
                "authors": [
                    {
                        "first": "Andrew",
                        "middle": [
                            "Y"
                        ],
                        "last": "Ng",
                        "suffix": ""
                    },
                    {
                        "first": "H",
                        "middle": [
                            "Jin"
                        ],
                        "last": "Kim",
                        "suffix": ""
                    },
                    {
                        "first": "Michael",
                        "middle": [
                            "I"
                        ],
                        "last": "Jordan",
                        "suffix": ""
                    },
                    {
                        "first": "Shankar",
                        "middle": [],
                        "last": "Sastry",
                        "suffix": ""
                    }
                ],
                "year": 2003,
                "venue": "Advances in NIPS",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Andrew Y. Ng, H. Jin Kim, Michael I. Jordan, and Shankar Sastry. 2003. Autonomous helicopter flight via reinforcement learning. In Advances in NIPS.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "Grounding knowledge in sensors: Unsupervised learning for language and planning",
                "authors": [
                    {
                        "first": "James Timothy",
                        "middle": [],
                        "last": "Oates",
                        "suffix": ""
                    }
                ],
                "year": 2001,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "James Timothy Oates. 2001. Grounding knowledge in sensors: Unsupervised learning for language and planning. Ph.D. thesis, University of Massachusetts Amherst.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "Learning words from sights and sounds: a computational model",
                "authors": [
                    {
                        "first": "Deb",
                        "middle": [
                            "K"
                        ],
                        "last": "Roy",
                        "suffix": ""
                    },
                    {
                        "first": "Alex",
                        "middle": [
                            "P"
                        ],
                        "last": "Pentland",
                        "suffix": ""
                    }
                ],
                "year": 2002,
                "venue": "Cognitive Science",
                "volume": "26",
                "issue": "",
                "pages": "113--146",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Deb K. Roy and Alex P. Pentland. 2002. Learn- ing words from sights and sounds: a computational model. Cognitive Science 26, pages 113-146.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "Spoken dialogue management using probabilistic reasoning",
                "authors": [
                    {
                        "first": "Nicholas",
                        "middle": [],
                        "last": "Roy",
                        "suffix": ""
                    },
                    {
                        "first": "Joelle",
                        "middle": [],
                        "last": "Pineau",
                        "suffix": ""
                    },
                    {
                        "first": "Sebastian",
                        "middle": [],
                        "last": "Thrun",
                        "suffix": ""
                    }
                ],
                "year": 2000,
                "venue": "Proceedings of ACL",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Nicholas Roy, Joelle Pineau, and Sebastian Thrun. 2000. Spoken dialogue management using proba- bilistic reasoning. In Proceedings of ACL.",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "Automatic learning of dialogue strategy using dialogue simulation and reinforcement learning",
                "authors": [
                    {
                        "first": "Konrad",
                        "middle": [],
                        "last": "Scheffler",
                        "suffix": ""
                    },
                    {
                        "first": "Steve",
                        "middle": [],
                        "last": "Young",
                        "suffix": ""
                    }
                ],
                "year": 2002,
                "venue": "Proceedings of HLT",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Konrad Scheffler and Steve Young. 2002. Automatic learning of dialogue strategy using dialogue simula- tion and reinforcement learning. In Proceedings of HLT.",
                "links": null
            },
            "BIBREF14": {
                "ref_id": "b14",
                "title": "Reinforcement learning for spoken dialogue systems",
                "authors": [
                    {
                        "first": "P",
                        "middle": [],
                        "last": "Satinder",
                        "suffix": ""
                    },
                    {
                        "first": "Michael",
                        "middle": [
                            "J"
                        ],
                        "last": "Singh",
                        "suffix": ""
                    },
                    {
                        "first": "Diane",
                        "middle": [
                            "J"
                        ],
                        "last": "Kearns",
                        "suffix": ""
                    },
                    {
                        "first": "Marilyn",
                        "middle": [
                            "A"
                        ],
                        "last": "Litman",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Walker",
                        "suffix": ""
                    }
                ],
                "year": 1999,
                "venue": "Advances in NIPS",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Satinder P. Singh, Michael J. Kearns, Diane J. Litman, and Marilyn A. Walker. 1999. Reinforcement learn- ing for spoken dialogue systems. In Advances in NIPS.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "Grounding the lexical semantics of verbs in visual perception using force dynamics and event logic",
                "authors": [
                    {
                        "first": "Jeffrey",
                        "middle": [
                            "Mark"
                        ],
                        "last": "Siskind",
                        "suffix": ""
                    }
                ],
                "year": 2001,
                "venue": "J. Artif. Intell. Res. (JAIR)",
                "volume": "15",
                "issue": "",
                "pages": "31--90",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Jeffrey Mark Siskind. 2001. Grounding the lexical se- mantics of verbs in visual perception using force dy- namics and event logic. J. Artif. Intell. Res. (JAIR), 15:31-90.",
                "links": null
            },
            "BIBREF16": {
                "ref_id": "b16",
                "title": "Reinforcement Learning: An Introduction",
                "authors": [
                    {
                        "first": "Richard",
                        "middle": [
                            "S"
                        ],
                        "last": "Sutton",
                        "suffix": ""
                    },
                    {
                        "first": "Andrew",
                        "middle": [
                            "G"
                        ],
                        "last": "Barto",
                        "suffix": ""
                    }
                ],
                "year": 1998,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Richard S. Sutton and Andrew G. Barto. 1998. Re- inforcement Learning: An Introduction. The MIT Press.",
                "links": null
            },
            "BIBREF17": {
                "ref_id": "b17",
                "title": "Policy gradient methods for reinforcement learning with function approximation",
                "authors": [
                    {
                        "first": "Richard",
                        "middle": [
                            "S"
                        ],
                        "last": "Sutton",
                        "suffix": ""
                    },
                    {
                        "first": "David",
                        "middle": [],
                        "last": "Mcallester",
                        "suffix": ""
                    },
                    {
                        "first": "Satinder",
                        "middle": [],
                        "last": "Singh",
                        "suffix": ""
                    },
                    {
                        "first": "Yishay",
                        "middle": [],
                        "last": "Mansour",
                        "suffix": ""
                    }
                ],
                "year": 2000,
                "venue": "Advances in NIPS",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Richard S. Sutton, David McAllester, Satinder Singh, and Yishay Mansour. 2000. Policy gradient meth- ods for reinforcement learning with function approx- imation. In Advances in NIPS.",
                "links": null
            },
            "BIBREF18": {
                "ref_id": "b18",
                "title": "Understanding Natural Language",
                "authors": [
                    {
                        "first": "Terry",
                        "middle": [],
                        "last": "Winograd",
                        "suffix": ""
                    }
                ],
                "year": 1972,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Terry Winograd. 1972. Understanding Natural Lan- guage. Academic Press.",
                "links": null
            },
            "BIBREF19": {
                "ref_id": "b19",
                "title": "On the integration of grounding language and learning objects",
                "authors": [
                    {
                        "first": "Chen",
                        "middle": [],
                        "last": "Yu",
                        "suffix": ""
                    },
                    {
                        "first": "Dana",
                        "middle": [
                            "H"
                        ],
                        "last": "Ballard",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "Proceedings of AAAI",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Chen Yu and Dana H. Ballard. 2004. On the integra- tion of grounding language and learning objects. In Proceedings of AAAI.",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF0": {
                "type_str": "figure",
                "text": "Input: A document set D, Feature representation \u03c6, Reward function r(h), Number of iterations T Initialization: Set \u03b8 to small random values.",
                "num": null,
                "uris": null
            },
            "FIGREF1": {
                "type_str": "figure",
                "text": "Estimate of parameters \u03b8Algorithm 1: A policy gradient algorithm.",
                "num": null,
                "uris": null
            },
            "FIGREF2": {
                "type_str": "figure",
                "text": "an environment object L Set of object class names (e.g. \"button\") V Vocabulary Features on W and object o Test if o is visible in s Test if o has input focus Test if o is in the foreground Test if o was previously interacted with Test if o came into existence since last action Min. edit distance between w \u2208 W and object labels in sFeatures on words in W , command c, and object o \u2200c \u2208 C, w \u2208 V : test if c = c and w \u2208 W \u2200c \u2208 C, l \u2208 L: test if c = c and l is the class of o",
                "num": null,
                "uris": null
            },
            "FIGREF3": {
                "type_str": "figure",
                "text": "Crossblock puzzle with tutorial. For this level, four squares in a row or column must be removed at once. The first move specified by the tutorial is greyed in the puzzle.ronment. 6 For instance, in the sentence from Figure 2 the word \"Run\" matches the Run... menu item.",
                "num": null,
                "uris": null
            },
            "FIGREF4": {
                "type_str": "figure",
                "text": "Variations of \"click internet options on the tools menu\" present in the Windows corpus.",
                "num": null,
                "uris": null
            },
            "TABREF0": {
                "num": null,
                "type_str": "table",
                "content": "<table/>",
                "text": "Example features in the Windows domain. All features are binary, except for the normalized edit distance which is real-valued.",
                "html": null
            },
            "TABREF2": {
                "num": null,
                "type_str": "table",
                "content": "<table><tr><td/><td/><td colspan=\"2\">Windows</td><td/><td/><td>Puzzle</td><td/></tr><tr><td/><td>Action</td><td>Sent.</td><td>Doc.</td><td colspan=\"2\">Word Action</td><td>Doc.</td><td>Word</td></tr><tr><td>Random baseline</td><td>0.128</td><td>0.101</td><td>0.000</td><td>--</td><td>0.081</td><td>0.111</td><td>--</td></tr><tr><td>Majority baseline</td><td>0.287</td><td>0.197</td><td>0.100</td><td>--</td><td>--</td><td>--</td><td>--</td></tr><tr><td>Partial supervision</td><td colspan=\"2\">0.723  *  0.702</td><td colspan=\"2\">0.475 0.989</td><td colspan=\"3\">0.575  *  0.523 0.850</td></tr><tr><td>Full supervision</td><td>0.756</td><td>0.714</td><td colspan=\"2\">0.525 0.991</td><td>0.632</td><td colspan=\"2\">0.630 0.869</td></tr></table>",
                "text": "Environment reward * 0.647 * 0.590 * 0.375 0.819 * 0.428 * 0.453 0.686",
                "html": null
            }
        }
    }
}