# Actor-Critic based Improper Reinforcement Learning

Mohammadi Zaki<sup>1</sup> Avinash Mohan<sup>2</sup> Aditya Gopalan<sup>1</sup> Shie Mannor<sup>3</sup>

# Abstract

We consider an improper reinforcement learning setting where a learner is given  $M$  base controllers for an unknown Markov decision process, and wishes to combine them optimally to produce a potentially new controller that can outperform each of the base ones. This can be useful in tuning across controllers, learnt possibly in mismatched or simulated environments, to obtain a good controller for a given target environment with relatively few trials. Towards this, we propose two algorithms: (1) a Policy Gradient-based approach; and (2) an algorithm that can switch between a simple Actor-Critic (AC) based scheme and a Natural Actor-Critic (NAC) scheme depending on the available information. Both algorithms operate over a class of improper mixtures of the given controllers. For the first case, we derive convergence rate guarantees assuming access to a gradient oracle. For the AC-based approach we provide convergence rate guarantees to a stationary point in the basic AC case and to a global optimum in the NAC case. Numerical results on (i) the standard control theoretic benchmark of stabilizing a cartpole; and (ii) a constrained queueing task show that our improper policy optimization algorithm can stabilize the system even when the base policies at its disposal are unstable.

# 1. Introduction

A natural approach to design effective controllers for large, complex systems is to first approximate the system using a tried-and-true Markov decision process (MDP) model, such as the Linear Quadratic Regulator (LQR) (Dean et al., 2017) or tabular MDPs (Auer et al., 2009), and then compute (near-) optimal policies for the assumed model. Though this

$^{1}$ Department of ECE, IISc, Bangalore, India  $^{2}$ Boston University, Massachusetts, USA  $^{3}$ Faculty of Electrical Engineering, Technion, Haifa, Israel and NVIDIA Research, Israel. Correspondence to: Mohammadi Zaki <mohammadi@iisc.ac.in>.

Proceedings of the  $39^{th}$  International Conference on Machine Learning, Baltimore, Maryland, USA, PMLR 162, 2022. Copyright 2022 by the author(s).

yields favorable results in principle, it is quite possible that errors in describing or understanding the system – leading to misspecified models – may lead to ‘overfitting’, resulting in subpar controllers in practice. Moreover, in many cases, the stability of the designed controller may be crucial and more desirable than optimizing a fine-grained cost function. From the controller design standpoint, it is often easier, cheaper and more interpretable to specify or hardcode control policies based on domain-specific principles, e.g., anti-lock braking system (ABS) controllers (Radac & Precup, 2018). For these reasons, we investigate in this paper a promising, general-purpose reinforcement learning (RL) approach towards designing controllers<sup>1</sup> given predefined ensembles of basic or atomic controllers, which (a) allows for flexibly combining the given controllers to obtain richer policies than the atomic policies, and, at the same time, (b) can preserve the basic structure of the given class of controllers and confer a high degree of interpretability on the resulting hybrid policy.

Overview of the approach. We consider a situation where we are given 'black-box access' to  $M$  controllers (maps from state to action distributions)  $\{K_1, \dots, K_M\}$  for an unknown MDP. By this we mean that we can choose to invoke any of the given controllers at any point during the operation of the system. With the understanding that the given family of controllers is 'reasonable,' we frame the problem of learning the best combination of the controllers by trial and error. We first set up an improper policy class of all randomized mixtures of the  $M$  given controllers - each such mixture is parameterized by a probability distribution over the  $M$  base controllers. Applying an improper policy in this class amounts to selecting independently at each time a base controller according to this distribution and implementing the recommended action as a function of the present state of the system. The learner's goal is to find the best performing mixture policy by iteratively testing from the pool of given controllers and observing the resulting state-action-reward trajectory.
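Operationally, applying such an improper mixture is simple: at every step, sample a base controller i.i.d. from the mixture distribution and delegate the action choice to it. The sketch below illustrates this with a hypothetical wrapper; `make_mixture_policy` and the toy constant-action controllers are illustrative names, not from the paper.

```python
import numpy as np

def make_mixture_policy(controllers, p, rng=None):
    """Improper mixture of black-box controllers.

    `controllers` is a list of callables mapping a state to an action
    (the base controllers K_1..K_M are treated as black boxes); `p` is
    a probability distribution over them.  A fresh controller index is
    sampled independently from `p` at every step.
    """
    rng = rng or np.random.default_rng()
    p = np.asarray(p, dtype=float)
    assert np.isclose(p.sum(), 1.0) and (p >= 0).all()

    def policy(state):
        m = rng.choice(len(controllers), p=p)  # pick a base controller
        return controllers[m](state)           # delegate the action choice
    return policy

# Toy usage: two constant-action controllers mixed 30/70.
pi = make_mixture_policy([lambda s: 0, lambda s: 1], [0.3, 0.7],
                         rng=np.random.default_rng(0))
actions = [pi(None) for _ in range(1000)]
```

Note that the learner only ever interacts with the environment through the sampled controller, which is what makes black-box access sufficient.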

Note that the underlying parameterization in our setting is over a set of given controllers which could be potentially abstract and defined for complex MDPs with continuous

state/action spaces, instead of the (standard) policy gradient (PG) view where the parameterization directly defines the policy in terms of the state-action map. Our problem, therefore, hews more closely to a meta RL framework, in that we operate over a set of controllers that have themselves been designed using some optimization framework to which we are agnostic. This has the advantage of conferring a great deal of generality, since the class of controllers can now be chosen to promote any desirable secondary characteristic such as interpretability, ease of implementation or cost effectiveness.

It is also worth noting that our approach is different from treating each of the base controllers as an 'expert' and applying standard mixture-of-experts algorithms, e.g., Hedge or Exponentiated Gradient (Littlestone & Warmuth, 1994; Auer et al., 1995; Kocák et al., 2014; Neu, 2015). Whereas the latter approach is tailored to converge to the best single controller (under the usual gradient approximation framework) and hence qualifies as a 'proper' learning algorithm, the former optimization problem is in the improper class of mixture policies which not only contains each atomic controller but also allows for a true mixture (i.e., one which puts positive probability on at least two elements) of many atomic controllers to achieve optimality; we exhibit concrete examples where this is indeed possible.

Our Contributions. We make the following contributions in this context:

- We develop a gradient-based RL algorithm to iteratively tune a softmax parameterization of an improper (mixture) policy defined over the base controllers (Algorithm 1). While this algorithm, Softmax Policy Gradient (or Softmax PG), relies on the availability of value function gradients, we later propose a modification to Softmax PG, called GradEst (see Alg. 6 in the appendix), to rectify this. GradEst uses a combination of rollouts and Simultaneous Perturbation Stochastic Approximation (SPSA) (Borkar, 2008) to estimate the value gradient at the current mixture distribution.  
- We show a convergence rate of  $\mathcal{O}(1 / t)$  to the optimal value function for finite state-action MDPs. To do this, we employ a novel Non-uniform Lojasiewicz-type inequality (Lojasiewicz, 1963), that lower bounds the 2-norm of the value gradient in terms of the suboptimality of the current mixture policy's value. Essentially, this helps establish that when the gradient of the value function hits zero, the value function is itself close to the optimum.  
- Policy-gradient methods are well-known to suffer from high variance (Peters & Schaal, 2008; Bhatnagar et al., 2009). To circumvent this issue, we develop an algorithm that can switch between a simple Actor-Critic (AC) based scheme and a Natural Actor-Critic (NAC) scheme depending on the available information. The algorithm, 'ACIL' (Sec. 5), executes on a single sample path, without requiring any

forced resets, as is common in many RL algorithms. We provide convergence rate guarantees to a stationary point in the basic AC case and to a global optimum in the NAC case, under some additional (but standard) assumptions (of uniform ergodicity). The total complexity of AC is that required to attain an  $(\varepsilon + \text{critic error})$ -accurate stationary point; that of NAC is the complexity required to attain an  $(\varepsilon + \text{critic error} + \text{actor error})$ -accurate globally optimal policy. We use linear function approximation for the value function, and our convergence analysis shows exactly how this approximation affects the final complexity bounds.

- We corroborate our theory using extensive simulation studies. For the PG-based method we use GradEst in two different settings: (a) the well-known CartPole system and (b) a scheduling task in a constrained queueing system. We discuss both these settings in detail in Sec. 2, where we also demonstrate the power of our improper learning approach in finding control policies with provably good performance. In our experiments (see Sec. 6), we eschew access to exact value gradients and instead rely on a combination of rollouts and SPSA to estimate them. For the actor-critic based learner, we run simulations on various queueing-theoretic tasks using the natural actor-critic variant of ACIL. All the results show that our proposed algorithms quickly converge to the correct mixture of available atomic controllers.
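As a rough illustration of the two-point SPSA idea mentioned above, here is a minimal gradient estimator. The function name `spsa_gradient`, the perturbation size `c`, and the quadratic test function are hypothetical stand-ins, not the paper's exact GradEst procedure (which estimates the value via rollouts).

```python
import numpy as np

def spsa_gradient(f, theta, c=0.05, rng=None):
    """Single-pair SPSA estimate of the gradient of f at theta.

    `f` stands in for a (possibly noisy) rollout-based estimate of the
    value V^{pi_theta}.  Both evaluations share one Rademacher
    perturbation `delta`, so only two function calls are needed per
    estimate regardless of the dimension of theta.
    """
    rng = rng or np.random.default_rng()
    delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Rademacher directions
    f_plus = f(theta + c * delta)
    f_minus = f(theta - c * delta)
    return (f_plus - f_minus) / (2.0 * c) / delta      # coordinate-wise estimate

# Usage on a smooth test function: estimate grad of -||theta||^2 at theta0.
theta0 = np.array([1.0, -2.0])
g = spsa_gradient(lambda th: -np.sum(th ** 2), theta0,
                  rng=np.random.default_rng(1))
```

A single estimate is noisy, but it is unbiased for quadratics and averages to the true gradient  $[-2, 4]$  over many perturbations.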

Related Work (brief). We provide a quick survey of relevant literature. A detailed survey is deferred to the appendix.

Policy gradient. The basic policy gradient method has become a cornerstone of modern RL and given birth to an entire class of highly efficient policy search techniques such as CPI (Kakade & Langford, 2002), TRPO (Schulman et al., 2015), PPO (Schulman et al., 2017), and MADDPG (Lowe et al., 2020). A growing body of recent work shows promising results about convergence rates for PG algorithms over finite state-action MDPs (Agarwal et al., 2020a; Shani et al., 2020; Bhandari & Russo, 2019; Mei et al., 2020), where the parameterization is over the entire space of state-action pairs, i.e.,  $\mathbb{R}^{S\times A}$ . These advances, however, are partially offset by negative results such as those in Li et al. (2021), which show that the convergence time is  $\Omega \left(|\mathcal{S}|^{2^{1 / (1 - \gamma)}}\right)$  where  $\mathcal{S}$  is the state space of the MDP and  $\gamma$  the discount factor, even with exact gradient knowledge.

Improper learning. The above works concern proper learning, where the policy search space is usually taken to be the set of all deterministic policies for an MDP. Improper learning, on the other hand, has been studied in statistical learning theory for the IID setting (Daniely et al., 2014; 2013). In this representation independent learning framework, the learning algorithm is not restricted to output a hypothesis from a given set of hypotheses.

Boosting. Agarwal et al. (2020b) attempts to frame and solve policy optimization over an improper class by boosting

a given class of controllers. This work, however, is situated in the context of non-stochastic control and assumes perfect knowledge of (i) the memory-boundedness of the MDP, and (ii) the state noise vector in every round, which amounts to essentially knowing the MDP transition dynamics. We work in the stochastic MDP setting and assume no access to the MDP's transition kernel. Further, it is assumed in (Agarwal et al., 2020b) that all the atomic controllers available are stabilizing which, when working with an unknown MDP, is a very strong assumption to make. We make no such assumptions on our atomic controller class; we show our algorithms can begin with provably unstable controllers and yet succeed in stabilizing the system (Sec. 2.2 and 6).

Options framework. Our work differs from the options framework (Barreto et al., 2017; Sutton et al., 1999) for hierarchical RL in spirit, in that we allow for each controller to be applied in each round rather than waiting for a subtask to complete. The current work deals with finding an optimal mixture of basic controllers to solve a particular task. However, if we allow for a state-dependent choice of controllers, then the methods proposed can be generalized for solving hierarchical RL tasks.

Ensemble policy-based RL. Our current work deals with accessing given (possibly separately trained) controllers as black-boxes and learning to combine them optimally. In contrast, in ensemble RL approaches (Maclin & Opitz, 2011; Xiliang et al., 2018; Wiering & van Hasselt, 2008) the base policies are learnt on the fly (e.g., Q-learning, SARSA) by the agent whereas the combining rule is fixed upfront (e.g., majority voting, rank voting, Boltzmann multiplication, etc.). Moreover, the base policies have access to the new system in Ensemble RL, which gives them a distinct advantage. Our method can serve as a meta-RL adaptation framework with theoretical guarantees which can use such pre-trained models to combine them optimally. To the best of our knowledge, ensemble RL works like (Xiliang et al., 2018; Wiering & van Hasselt, 2008) do not provide theoretical guarantees on the learnt combined policy. Our work on the other hand provides a firm theoretical as well as empirical basis for the methods we propose.

Improper learning with given base controllers. The work most closely resembling ours is probably that of Banijamali et al. (2019), which aims at finding the best convex combination of a given set of base controllers for a given MDP. They, however, frame it as a planning problem where the transition kernel  $P$  is known to the agent. Furthermore, we treat the base controllers as black-box entities, whereas they exploit their structure to compute the state-occupancy measures.

Actor-critic methods. Actor-critic (AC) methods were first introduced in Konda & Tsitsiklis (2000). Natural actor-critic methods were first introduced in (Peters & Schaal, 2008; Bhatnagar et al., 2009). While many studies are available on the asymptotic convergence of AC and NAC, we use the new techniques proposed by Xu et al. (2020) and Barakat et al. (2021) for showing our convergence results.

# 2. Motivating Examples

We begin with two examples that help illustrate the need for improper learning over a given set of atomic controllers. These examples concretely demonstrate the power of this approach to find (improper) control policies that go well beyond what the atomic set can accomplish, while retaining some of their desirable properties (such as interpretability and simplicity of implementation).

# 2.1. Ergodic Control of the Cartpole System

Consider the Cartpole system, which has, over the years, become a benchmark for testing control strategies (Khalil, 2015). The system's dynamics, evolving in  $\mathbb{R}^4$ , can be approximated via a Linear Quadratic Regulator around an (unstable) equilibrium state vector that we designate the origin  $(\mathbf{x} = \mathbf{0})$ . The objective now reduces to finding a (potentially randomized) control policy  $u \equiv \{u(t), t \geqslant 0\}$  that solves  $\inf_{u} J(u)$ , where  $J(u) := \mathbb{E}_{u} \sum_{t=0}^{\infty} \left( \mathbf{x}^{\intercal}(t) Q \mathbf{x}(t) + R u^{2}(t) \right)$ , subject to  $\mathbf{x}(t+1) = A_{open} \mathbf{x}(t) + \mathbf{b} u(t)$  at all times  $t \geqslant 0$ .

Under standard assumptions of controllability and observability, this optimization has a stationary, linear solution  $u^{*}(t) = -\mathbf{K}^{\intercal}\mathbf{x}(t)$  (Bertsekas, 2011). Moreover, setting  $A := A_{open} - \mathbf{b}\mathbf{K}^{\intercal}$ , it is well known that the dynamics  $\mathbf{x}(t + 1) = A\mathbf{x}(t)$ ,  $t \geqslant 0$ , are stable. The usual design strategy for a given Cartpole involves a combination of system identification, followed by linearization and computing the controller gain  $\mathbf{K}$ . This would typically produce a controller with tolerable performance fairly quickly, but would also suffer from nonidealities of parameter estimation.
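The stability of the closed loop  $A = A_{open} - \mathbf{b}\mathbf{K}^{\intercal}$  can be checked numerically by comparing spectral radii. The  $2 \times 2$  matrices below are illustrative placeholders, not the cartpole's actual linearization or gain.

```python
import numpy as np

# Illustrative open-loop system (spectral radius > 1, hence unstable)
# and a gain K chosen so that the closed loop is stable.
A_open = np.array([[1.1, 0.2],
                   [0.0, 1.05]])
b = np.array([[0.0], [1.0]])
K = np.array([[0.6], [1.0]])            # hypothetical controller gain

A_closed = A_open - b @ K.T             # closed-loop dynamics matrix
open_radius = max(abs(np.linalg.eigvals(A_open)))
closed_radius = max(abs(np.linalg.eigvals(A_closed)))
```

For discrete-time dynamics  $\mathbf{x}(t+1) = A\mathbf{x}(t)$ , stability is equivalent to the spectral radius of  $A$  being strictly less than 1, which is what the comparison above verifies.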

To alleviate this problem, first consider a generic (ergodic) control policy that builds on this strategy by switching across a menu of controllers  $\{K_1,\dots ,K_N\}$  produced as above. That is, at any time  $t$ , this policy chooses  $K_{i}$ ,  $i\in [N]$ , w.p.  $p_i$ , so that the control input at time  $t$  is  $u(t) = -\mathbf{K}_i^\top \mathbf{x}(t)$  w.p.  $p_i$ . Let  $A(i)\coloneqq A_{open} - \mathbf{b}\mathbf{K}_i^\top$ . The resulting controlled dynamics are given by  $\mathbf{x}(t + 1) = A(r(t))\mathbf{x}(t)$ ,  $t\geqslant 0$ , where  $r(t) = i$  w.p.  $p_i$ , IID across  $t$ .

This is an example of an ergodic parameter linear system (EPLS) (Bolzern et al., 2008), which is said to be Exponentially Almost Surely Stable (EAS) if the state norm decays at least exponentially fast with time:  $\mathbb{P}\left\{\limsup_{t\to \infty}\frac{1}{t}\log \| \mathbf{x}(t)\| \leqslant -\rho \right\} = 1$  for some  $\rho >0$ . Let the random variable  $\lambda (\omega)\coloneqq \limsup_{t\to \infty}\frac{1}{t}\log \| \mathbf{x}(t,\omega)\|$ . For our dynamics  $\mathbf{x}(t + 1) = A(r(t))\mathbf{x}(t)$ ,  $t\geqslant 0$ , it is seen that the Lyapunov exponent  $\frac{1}{t}\log \| \mathbf{x}(t)\|$  is at most the quantity  $\sum_{i = 1}^{N}p_{i}\log \| A(i)\|$  a.s. (see appendix for details).

A good mixture controller can now be designed by choosing  $\{p_1,\dots ,p_N\}$  such that  $\lambda (\omega) < - \rho$  for some  $\rho >0$ , ensuring exponentially almost sure stability (subject to  $\log \| A(i)\| < 0$  for some  $i$ ). As we show in the sequel, our policy gradient algorithm (SoftMax PG) learns an improper mixture  $\{p_1,\dots ,p_N\}$  that (i) can stabilize the system even when a majority of the constituent atomic controllers  $\{K_{1},\dots ,K_{N}\}$  are unstable, i.e., converges to a mixture that ensures that the average exponent  $\lambda (\omega) < 0$  and (ii) shows better performance than each of the atomic controllers.
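This Lyapunov-exponent criterion is easy to illustrate with a small Monte-Carlo estimate. The scalar switched system below uses illustrative gains (not the cartpole's): one constituent is unstable on its own, yet the simulated exponent matches  $\sum_{i} p_i \log \| A(i)\|$  and is negative, so the mixture is exponentially a.s. stable.

```python
import numpy as np

# EPLS x(t+1) = A(r(t)) x(t) with r(t) ~ p i.i.d.  Scalar "matrices":
# |A(0)| = 1.2 > 1 (unstable alone), |A(1)| = 0.5 < 1.
A = [np.array([[1.2]]), np.array([[0.5]])]
p = np.array([0.4, 0.6])
rng = np.random.default_rng(0)

T = 5000
x = np.array([1.0])
log_growth = 0.0
for _ in range(T):
    x = A[rng.choice(2, p=p)] @ x
    # Renormalise to avoid under/overflow while accumulating log growth.
    n = np.linalg.norm(x)
    log_growth += np.log(n)
    x = x / n

lyap = log_growth / T              # empirical Lyapunov exponent
bound = p @ np.log([1.2, 0.5])     # sum_i p_i log||A(i)||, here = lyap's mean
```

Negative `lyap` corresponds to the EAS condition above; sweeping `p` shows the exponent crossing zero exactly where the weighted log-norm sum does.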

# 2.2. Scheduling in Constrained Queueing Networks

We consider a system that comprises two queues fed by independent, stochastic arrival processes  $A_{i}(t), i \in \{1,2\}$ ,  $t \in \mathbb{N}$ . The length of queue  $i$ , measured at the beginning of time slot  $t$ , is denoted by  $Q_{i}(t) \in \mathbb{Z}_{+}$ .

![](images/5bf1758d08a6ddd2362f9c4d8813ebf5d9c9c9fc478c518ae278c457deb0f854.jpg)  
Figure 1:  $K_{1}$  and  $K_{2}$  by themselves can only stabilize  $\mathcal{C}_1 \cup \mathcal{C}_2$  (gray rectangles). With improper learning, we enlarge the set of stabilizable arrival rates by the triangle  $\Delta ABC$  shown in purple, above.

A common server serves both queues and can drain at most one packet from the system in a time slot. The server, therefore, needs to decide which of the two queues it intends to serve in a given slot (we assume that once the server chooses to serve a packet, service succeeds with probability 1). The server's decision is denoted by the vector  $\mathbf{D}(t) \in \mathcal{A} := \{[0,0], [1,0], [0,1]\}$ , where a "1" denotes service and a "0" denotes lack thereof. Let  $\mathbb{E}A_{i}(t) = \lambda_{i}$  and note that the arrival rate  $\lambda = [\lambda_1, \lambda_2]$  is unknown to the learner. We aim to find a (potentially randomized) policy  $\pi$  to minimize the discounted system backlog given by  $J_{\pi}(\mathbf{Q}(0)) := \mathbb{E}_{\mathbf{Q}(0)}^{\pi} \sum_{t=0}^{\infty} \gamma^{t} (Q_{1}(t) + Q_{2}(t))$ .

Any policy with  $J_{\pi}(\cdot) < \infty$ , is said to be stabilizing (or, equivalently, a stable policy). It is well known that there exist stabilizing policies iff  $\lambda_1 + \lambda_2 < 1$  (Tassiulas & Ephremides, 1992). A policy  $\pi_{\mu_1,\mu_2}$  that chooses Queue  $i$  w.p.  $\mu_i$  in every slot, can provably stabilize a system iff  $\mu_i > \lambda_i, \forall i \in \{1,2\}$ . Now, assume our control set consists of two stationary policies  $K_1, K_2$  with  $K_1 \equiv \pi_{\varepsilon,1 - \varepsilon}$ ,  $K_2 \equiv \pi_{1 - \varepsilon, \varepsilon}$  and sufficiently small  $\varepsilon > 0$ . That is, we have  $M = 2$  controllers  $K_1, K_2$ . Clearly, neither of these can, by itself, stabilize a network with  $\lambda = [0.49, 0.49]$ .

However, an improper mixture of the two that selects  $K_{1}$  and  $K_{2}$  each with probability  $1/2$  can. In fact, as Fig. 1 shows, our improper learning algorithm can stabilize all arrival rates in  $\mathcal{C}_1 \cup \mathcal{C}_2 \cup \Delta ABC$ , without prior knowledge of  $[\lambda_1, \lambda_2]$ . In other words, our algorithm enlarges the stability region by the triangle  $\Delta ABC$ , over and above  $\mathcal{C}_1 \cup \mathcal{C}_2$ . We will return to these examples in Sec. 6, and show, using experiments, (1) how our improper learner converges to the stabilizing mixture of the available policies and (2) if the optimal policy is among the available controllers, how our algorithm can find and converge to it.
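A quick simulation makes the contrast concrete. The sketch below uses illustrative rates ( $\lambda_i = 0.4$  rather than  $0.49$ , to keep the run short): the 50/50 mixture serves each queue w.p.  $1/2 > \lambda_i$  and keeps the backlog bounded, while  $K_1$  alone starves queue 1.

```python
import numpy as np

# Two-queue system: Bernoulli(lam) arrivals at each queue, one service
# opportunity per slot.  K1 serves queue 1 w.p. eps (queue 2 otherwise);
# K2 is the mirror image.  Rates are illustrative, not the lambda = 0.49
# instance from the text.
rng = np.random.default_rng(0)
lam, eps, T = 0.4, 0.05, 50_000

def simulate(p_mix):
    """Time-averaged total backlog when K1 is picked w.p. p_mix each slot."""
    q = np.zeros(2)
    total = 0.0
    for _ in range(T):
        p_serve_q1 = eps if rng.random() < p_mix else 1.0 - eps
        i = 0 if rng.random() < p_serve_q1 else 1
        q[i] = max(q[i] - 1.0, 0.0)        # serve one packet if present
        q += rng.random(2) < lam           # Bernoulli arrivals
        total += q.sum()
    return total / T

backlog_mixed = simulate(0.5)   # 50/50 mixture: each queue served w.p. 1/2
backlog_pure = simulate(1.0)    # K1 alone: queue 1 served only w.p. eps
```

Under the pure policy, queue 1's backlog grows roughly linearly at rate  $\lambda - \varepsilon$ , so its time-average is orders of magnitude larger than the mixture's.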

# 3. Problem Statement and Notation

A (finite) Markov Decision Process  $(S, \mathcal{A}, \mathbb{P}, r, \rho, \gamma)$  is specified by a finite state space  $S$ , a finite action space  $\mathcal{A}$ , a transition probability matrix  $\mathbb{P}$ , where  $\mathbb{P}(\tilde{s}|s, a)$  is the probability of transitioning into state  $\tilde{s}$  upon taking action  $a \in \mathcal{A}$  in state  $s$ , a single stage reward function  $r: S \times \mathcal{A} \to \mathbb{R}$ , a starting state distribution  $\rho$  over  $S$  and a discount factor  $\gamma \in (0,1)$ . A (stationary) policy or controller  $\pi: S \to \mathcal{P}(\mathcal{A})$  specifies a decision-making strategy in which the learner chooses actions  $(a_t)$  adaptively based on the current state  $(s_t)$ , i.e.,  $a_t \sim \pi(s_t)$ .  $\pi$  and  $\rho$ , together with  $\mathbb{P}$ , induce a probability measure  $\mathbb{P}_{\rho}^{\pi}$  on the space of all sample paths of the underlying Markov process and we denote by  $\mathbb{E}_{\rho}^{\pi}$  the associated expectation operator. The value function of policy  $\pi$  (also called the value of policy  $\pi$ ), denoted by  $V^{\pi}$  is the total discounted reward obtained by following  $\pi$ , i.e.,  $V^{\pi}(\rho) := \mathbb{E}_{\rho}^{\pi} \sum_{t=0}^{\infty} \gamma^t r(s_t, a_t)$ .

Improper Learning. We assume that the learner is provided with a finite number of (stationary) controllers  $\mathcal{C} \coloneqq \{K_1, \dots, K_M\}$  and, as described below, set up a parameterized improper policy class  $\mathcal{I}_{soft}(\mathcal{C})$  that depends on  $\mathcal{C}$ . The aim, therefore, is to identify the best policy for the given MDP within this class, i.e.,

$$
\pi^ {*} = \underset {\pi \in \mathcal {I} _ {s o f t} (\mathcal {C})} {\operatorname {a r g m a x}} V ^ {\pi} (\rho). \tag {1}
$$

We now describe the construction of the class  $\mathcal{I}_{soft}(\mathcal{C})$ .

The Softmax Policy Class. We assign a weight  $\theta_{m}\in \mathbb{R}$  to each controller  $K_{m}\in \mathcal{C}$  and define  $\theta \coloneqq [\theta_1,\dots ,\theta_M]$ . The improper class  $\mathcal{I}_{soft}$  is parameterized by  $\theta$  as follows. In each round, the policy  $\pi_{\theta}\in \mathcal{I}_{soft}(\mathcal{C})$  chooses a controller drawn from  $\mathrm{softmax}(\theta)$ , i.e., the probability of choosing Controller  $K_{m}$  is given by  $\pi_{\theta}(m)\coloneqq e^{\theta_m} / \left(\sum_{m' = 1}^{M}e^{\theta_{m'}}\right)$ . Note, therefore, that in every round, our algorithm interacts with the MDP only through the controller sampled in that round. In the rest of the paper, we will deal exclusively with a fixed and given  $\mathcal{C}$  and the resultant  $\mathcal{I}_{soft}$ . Therefore, we overload the notation  $\pi_{\theta_t}(a|s)$  for any  $a\in \mathcal{A}$  and  $s\in S$  to denote the probability with which the algorithm chooses action  $a$  in state  $s$  at time  $t$ . For ease of notation, whenever the context is clear, we will also drop the subscript  $\theta$ , i.e.,  $\pi_{\theta_t}\equiv \pi_t$ . Hence, we have at any time  $t\geqslant 0$ :  $\pi_{\theta_t}(a|s) = \sum_{m = 1}^{M}\pi_{\theta_t}(m)K_m(s,a)$ . Since we deal with gradient-based methods in the sequel, we define the value gradient of policy  $\pi_{\theta}\in \mathcal{I}_{soft}$  by  $\nabla_{\theta}V^{\pi_{\theta}}\equiv \frac{dV^{\pi_{\theta}}}{d\theta}$ . We say that  $V^{\pi_{\theta}}$  is  $\beta$ -smooth if  $\nabla_{\theta}V^{\pi_{\theta}}$  is  $\beta$ -Lipschitz (Agarwal et al., 2020a). Finally, for any two integers  $a$  and  $b$ , let  $\mathbb{I}_{ab}$  denote the indicator that  $a = b$ .

Algorithm 1 SoftMax PG
Input: learning rate  $\eta > 0$ , initial state distribution  $\mu$
Initialize:  $\theta_{m}^{1} = 1$  for all  $m\in [M]$ ;  $s_1\sim \mu$
for  $t = 1$  to  $T$  do
  Choose controller  $m_t\sim \pi_t$
  Play action  $a_{t}\sim K_{m_{t}}(s_{t},:)$
  Observe  $s_{t + 1}\sim \mathbb{P}(\cdot|s_t,a_t)$
  Update:  $\theta_{t + 1} = \theta_t + \eta \nabla_{\theta_t}V^{\pi_{\theta_t}}$
end for
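As a concrete sketch of the softmax mixture above, the probabilities $\pi_\theta(m)$ and the induced state-action distribution $\pi_\theta(a|s) = \sum_m \pi_\theta(m) K_m(s,a)$ can be computed as follows (Python; the two controllers `K1`, `K2` are hypothetical examples, not from the paper):

```python
import numpy as np

def softmax(theta):
    z = np.exp(theta - np.max(theta))   # shift by max for numerical stability
    return z / z.sum()

def mixture_action_probs(theta, controllers, s):
    """pi_theta(a|s) = sum_m pi_theta(m) * K_m(s, a); each controller
    K_m is an |S| x |A| row-stochastic matrix."""
    w = softmax(theta)
    return sum(w[m] * controllers[m][s] for m in range(len(controllers)))

# Two hypothetical controllers on a single-state, two-action problem:
K1 = np.array([[0.9, 0.1]])
K2 = np.array([[0.2, 0.8]])
p = mixture_action_probs(np.array([0.0, 0.0]), [K1, K2], s=0)
```

With equal weights $\theta = [0, 0]$, the mixture is the plain average of the two controllers' rows, here `[0.55, 0.45]`.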

Comparison to the standard PG setting. The problem we define differs from the usual policy gradient setting, in which the parameterization completely defines the policy in terms of the state-action mapping. One could follow the methodology of (Mei et al., 2020) by assigning a parameter  $\theta_{s,m}$  for every  $s\in S,m\in [M]$ . With some calculation, this can be shown to be equivalent to the tabular setting with  $S$  states and  $M$  actions, with the new 'reward' defined by  $r(s,m)\coloneqq \sum_{a\in \mathcal{A}}K_m(s,a)r(s,a)$ , where  $r(s,a)$  is the usual expected reward obtained by playing action  $a\in \mathcal{A}$  in state  $s$ . Following the approach of (Mei et al., 2020) on this modified setting, one can show that the policy converges to the optimal policy for each state, i.e.,  $\pi_{\theta}(m^{*}(s)\mid s)\to 1$  for every  $s\in S$ . However, the problem we address is to select a single mixture (from within  $\mathcal{I}_{soft}$ , the convex hull of the given  $M$  controllers) that guarantees maximum return when that single mixture is played for all time.

# 4. Improper Learning using Gradients

In this and the following sections, we propose and analyze a policy gradient-based algorithm that provably finds the best, potentially improper, mixture of controllers for the given MDP. While we employ gradient ascent to optimize the mixture weights, the fact that this procedure works at all is far from obvious. We begin by noting that  $V^{\pi_{\theta}}$ , as described in Section 3, is nonconcave in  $\theta$  for both direct and softmax parameterizations, which renders analysis with standard tools of convex optimization inapplicable.

Lemma 4.1. (Non-concavity of Value function) There is an MDP and a set of controllers, for which the maximization problem of the value function (i.e. (1)) is non-concave for both the SoftMax and direct parameterizations, i.e.,  $\theta \mapsto V^{\pi_{\theta}}$  is non-concave.

The proof follows from a counterexample whose construction we show in the appendix. Our PG algorithm, SoftMax PG, is shown in Algorithm 1. The parameters  $\theta \in \mathbb{R}^{M}$  which define the policy are updated by following the gradient of the value function at the current policy parameters.

Convergence Guarantees. The following result shows that with SoftMax PG, the value function converges to that of the best in-class policy at a rate  $\mathcal{O}(1/t)$ . Furthermore, the theorem shows an explicit dependence on the number of controllers  $M$ , in place of the usual  $|\mathcal{S}|$ . Note that with perfect gradient knowledge the algorithm becomes deterministic. This is a standard assumption in the analysis of PG algorithms (Fazel et al., 2018; Agarwal et al., 2020a; Mei et al., 2020).

Theorem 4.2 (Convergence of Policy Gradient). With  $\{\theta_t\}_{t\geqslant 1}$  generated as in Algorithm 1 and using a learning rate  $\eta = \frac{(1 - \gamma)^2}{7\gamma^2 + 4\gamma + 5}$ , for all  $t\geqslant 1$ ,  $V^{*}(\rho) - V^{\pi_{\theta_t}}(\rho) = \mathcal{O}\left(\frac{1}{t}\frac{M\gamma^2}{c_t^2(1 - \gamma)^3}\right)$ , where  $c_{t}\coloneqq \min_{1\leqslant s\leqslant t}\min_{m:\pi^{*}(m) > 0}\pi_{\theta_s}(m)$ .

Remark 4.3. The quantity  $c_{t}$  in the statement is the minimum probability that SoftMax PG puts on the controllers to which the best mixture  $\pi^{*}$  assigns positive probability mass. Empirical evidence (Sec. 6) leads us to conjecture that  $\lim_{t\to \infty}c_t$  is positive, which would imply a convergence rate of  $\mathcal{O}(1 / t)$ .

Remark 4.4. The proof of the above theorem uses the  $\beta$ -smoothness of the value function under the softmax parameterization, along with a new non-uniform Łojasiewicz-type inequality (NULI) for our probabilistic mixture class, stated below, which lower bounds the magnitude of the gradient of the value function.

Lemma 4.5 (NULI).  $\left\| \frac{\partial V^{\pi_{\theta}}(\mu)}{\partial \theta} \right\|_2 \geqslant \frac{1}{\sqrt{M}} \left( \min_{m: \pi^{*}(m) > 0} \pi_{\theta}(m) \right) \left\| \frac{d_{\rho}^{\pi^*}}{d_{\mu}^{\pi_{\theta}}} \right\|_\infty^{-1} \left[V^*(\rho) - V^{\pi_{\theta}}(\rho)\right].$

The proof of Theorem 4.2, then follows by an induction argument over  $t \geqslant 1$ .

Technical Challenges. We note here that while the basic recipe for the analysis of Theorem 4.2 is similar to (Mei et al., 2020), our setting does not directly inherit the intuition of standard PG (sPG) analysis. (1) With  $|\mathcal{S} \times \mathcal{A}| < \infty$ , the sPG analysis critically depends on the fact that a deterministic optimal policy exists and shows convergence to it. In contrast, in our setting,  $\pi^{*}$  could be a strictly randomized mixture of the base controllers (see Sec. 2). (2) A crucial step in sPG analysis is establishing that the value function  $V^{\pi}(s), \forall s \in S$ , increases monotonically with time, so that the parameter of the optimal action  $\theta_{s,a^{*}} \uparrow \infty$ . In the appendix, we supply a simple counterexample showing that monotonicity of the  $V$  function is not guaranteed in our setting for every  $s \in S$ . (3) The value function gradient in sPG has no 'cross contamination' from other states, in the sense that modifying the parameter at one state does not affect the values of the others. This plays a crucial part in simplifying the proof of global convergence to the optimal policy in sPG analysis. Our setting cannot leverage this property, since the value-function gradient at a given controller receives contributions from all states.

For the special case of  $S = 1$ , which corresponds to multi-armed bandits, each controller is a probability distribution over the  $A$  arms of the bandit. We call this special case Bandit-over-Bandits. We obtain a convergence rate of  $\mathcal{O}\left(M^2 /t\right)$  to the optimum and recover an  $\mathcal{O}(M^2\log T)$  regret bound when our softmax PG algorithm is applied to this special case. We refer to the appendix for details.
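For intuition, the $S = 1$ case admits exact gradients: up to the constant factor $1/(1-\gamma)$, the value is linear in the mixture weights, $V(\theta) = \sum_m \pi_\theta(m)\, v_m$ with $v_m = \sum_a K_m(a) r(a)$, and a short computation gives $\partial V / \partial \theta_m = \pi_\theta(m)(v_m - V)$. A sketch of SoftMax PG on this case follows (Python; the controllers and arm rewards are hypothetical examples):

```python
import numpy as np

def softmax(theta):
    z = np.exp(theta - theta.max())
    return z / z.sum()

def softmax_pg_bandit(K, r, eta=1.0, T=5000):
    """Exact-gradient SoftMax PG for the bandit-over-bandits case.
    K: M x A matrix whose rows are the base controllers (arm distributions),
    r: length-A vector of expected arm rewards."""
    v = K @ r                         # v_m: expected reward of controller m
    theta = np.zeros(len(v))
    for _ in range(T):
        pi = softmax(theta)
        V = pi @ v                    # value of the current mixture
        theta += eta * pi * (v - V)   # dV/dtheta_m = pi(m) * (v_m - V)
    return softmax(theta)

K = np.array([[0.9, 0.1], [0.1, 0.9], [0.5, 0.5]])
r = np.array([0.2, 0.8])
pi_final = softmax_pg_bandit(K, r)
```

Since the objective is linear over the simplex here, the mass concentrates on the single best controller (the second one, which favors the better arm).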

Discussion on  $c_t$ . The convergence rate in Theorem 4.2 depends inversely on  $c_t^2$ . It follows that for SoftMax PG to converge,  $c_t$  must either (a) converge to a positive constant, or (b) decay to 0 more slowly than  $\mathcal{O}\left(1 / \sqrt{t}\right)$ . The technical challenges discussed above make proving this analytically extremely hard. Hence, while we currently do not show this theoretically, our experiments in Sec. 6 repeatedly confirm that its empirical analog  $\bar{c}_t$  (defined formally in Sec. 6) approaches a positive value. Hence, we conjecture that the rate of convergence in Thm 4.2 is  $\mathcal{O}(1 / t)$ .

# 5. Actor-Critic based Improper Learning

Softmax PG follows a gradient ascent scheme to solve the optimization problem (1), but is limited by its requirement of the true gradient in every round. To address situations where this is unavailable, one can resort to a Monte Carlo sampling based procedure (see appendix: Alg. 6), which may suffer from high variance. In this section, we take an alternative approach and provide a new algorithm based on an actor-critic framework for solving our problem. Actor-critic methods are well known to have lower variance than their Monte Carlo counterparts (Konda & Tsitsiklis, 2000).

We begin by proposing modifications to the standard  $Q$ -function and advantage function definitions. Recall that we wish to solve the following optimization problem:  $\max_{\pi \in \mathcal{I}_{\mathrm{soft}}}\mathbb{E}_{s\sim \rho}[V^{\pi}(s)]$ , where  $\pi$  is some distribution over the  $M$  base controllers. Let  $\tilde{Q}^{\pi}(s,m)\coloneqq \sum_{a\in \mathcal{A}}K_m(s,a)Q^{\pi}(s,a)$  and  $\tilde{A}^{\pi}(s,m)\coloneqq \sum_{a\in \mathcal{A}}K_m(s,a)A^{\pi}(s,a) = \sum_{a\in \mathcal{A}}K_m(s,a)Q^{\pi}(s,a) - V^{\pi}(s)$ , where  $Q^{\pi}$  and  $A^{\pi}$  are the usual action-value and advantage functions, respectively. We also define the new reward function  $\tilde{r} (s,m)\coloneqq \sum_{a\in \mathcal{A}}K_m(s,a)r(s,a)$  and a new transition kernel  $\tilde{P}(s^{\prime}|s,m)\coloneqq \sum_{a\in \mathcal{A}}K_m(s,a)P(s^{\prime}|s,a)$ . Then, following the distribution  $\pi$  over the controllers induces a Markov chain on the state space  $S$ . Define  $\nu_{\pi}(s,m)$  as the state-controller visitation measure induced by the policy  $\pi$ :  $\nu_{\pi}(s,m)\coloneqq (1 - \gamma)\sum_{t\geqslant 0}\gamma^{t}\mathbb{P}^{\pi}(s_{t} = s,m_{t} = m) = d_{\rho}^{\pi}(s)\pi (m)$ . With these definitions, we have the following variant of the policy-gradient theorem.
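For tabular problems, the controller-level reward $\tilde r(s,m)$ and kernel $\tilde P(s'|s,m)$ are simple contractions over the action index. A sketch (Python; the array shapes are our own convention, not from the paper):

```python
import numpy as np

def induced_mdp(K, r, P):
    """Controller-level quantities from base controllers.
    K: M x S x A (each K[m] row-stochastic in A), r: S x A, P: S x A x S'.
    Returns r~(s, m) of shape S x M and P~(s'|s, m) of shape S x M x S'."""
    r_tilde = np.einsum('msa,sa->sm', K, r)     # r~(s,m) = sum_a K_m(s,a) r(s,a)
    P_tilde = np.einsum('msa,sap->smp', K, P)   # P~(s'|s,m) = sum_a K_m(s,a) P(s'|s,a)
    return r_tilde, P_tilde

# Tiny check: one state, two actions, one controller mixing them equally.
K = np.array([[[0.5, 0.5]]])
r = np.array([[0.0, 1.0]])
P = np.ones((1, 2, 1))
r_t, P_t = induced_mdp(K, r, P)
```

Each row of the induced kernel remains a probability distribution, since it is a convex combination of the base transition rows.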

Lemma 5.1 (Modified Policy Gradient Theorem).  $\nabla_{\theta}V^{\pi_{\theta}}(\rho) = \mathbb{E}_{(s,m)\sim \nu_{\pi_{\theta}}}[\tilde{Q}^{\pi_{\theta}}(s,m)\psi_{\theta}(m)] = \mathbb{E}_{(s,m)\sim \nu_{\pi_{\theta}}}[\tilde{A}^{\pi_{\theta}}(s,m)\psi_{\theta}(m)]$ , where  $\psi_{\theta}(m) \coloneqq \nabla_{\theta}\log (\pi_{\theta}(m))$ .

Note the independence of the score function  $\psi$  from the state  $s$ . For the gradient ascent update of the parameters  $\theta$  we need to estimate  $\tilde{A}^{\pi_{\theta}}(s,m)$ , where  $(s,m)$  are drawn according to  $\nu_{\pi_{\theta}}(\cdot ,\cdot)$ . We recall how to sample from  $\nu_{\pi}$ . Following Konda & Tsitsiklis (2000) and recent works such as Xu et al. (2020) and Barakat et al. (2021), cast into our setting, observe that  $\nu_{\pi}$  is a stationary distribution of a Markov chain over the pair  $(s,m)$  with state-to-state transition kernel defined by  $\bar{P} (s'|s,m)\coloneqq \gamma \tilde{P} (s'|s,m) + (1 - \gamma)\rho (s')$  and  $m\sim \pi (\cdot)$ .

Algorithm 2 Actor-Critic based Improper RL (ACIL)
Input: feature map  $\varphi$ , actor stepsize  $\alpha$ , critic stepsize  $\beta$ , regularization parameter  $\lambda$ , 'AC' or 'NAC'
Initialize:  $\theta_0 = (1,1,\dots ,1)_{M\times 1}$ ,  $s_0\sim \rho$
flag  $= \mathbb{1}\{\mathrm{NAC}\}$  {Selects AC or NAC}
for  $t\gets 0$  to  $T - 1$  do
  $s_{init} = s_{t - 1,B}$  (when  $t = 0$ ,  $s_{init} = s_0$ )
  $w_{t},s_{t,0}\leftarrow \mathrm{Critic\text{-}TD}(s_{init},\pi_{\theta_t},\varphi ,\beta ,T_c,H)$
  $F_{t}(\theta_{t})\gets 0$
  for  $i\gets 0$  to  $B - 1$  do
    $m_{t,i}\sim \pi_{\theta_t}$ ,  $a_{t,i}\sim K_{m_{t,i}}(s_{t,i},\cdot)$ ,  $s_{t,i + 1}\sim \tilde{P} (\cdot|s_{t,i},m_{t,i})$
    $\mathcal{E}_{w_t}(s_{t,i},m_{t,i},s_{t,i + 1}) = \tilde{r} (s_{t,i},m_{t,i}) + (\gamma \varphi (s_{t,i + 1}) - \varphi (s_{t,i}))^{\top}w_{t}$
    $F_{t}(\theta_{t})\leftarrow F_{t}(\theta_{t}) + \frac{1}{B}\psi_{\theta_{t}}(m_{t,i})\psi_{\theta_{t}}(m_{t,i})^{\top}$
  end for
  if flag then
    $G_{t}\coloneqq [F_{t}(\theta_{t}) + \lambda I]$
    $\theta_{t + 1} = \theta_t + G_{t}^{-1}\frac{\alpha}{B}\sum_{i = 0}^{B - 1}\mathcal{E}_{w_{t}}(s_{t,i},m_{t,i},s_{t,i + 1})\psi_{\theta_{t}}(m_{t,i})$
  else
    $\theta_{t + 1} = \theta_t + \frac{\alpha}{B}\sum_{i = 0}^{B - 1}\mathcal{E}_{w_t}(s_{t,i},m_{t,i},s_{t,i + 1})\psi_{\theta_t}(m_{t,i})$
  end if
  $\pi_{\theta_{t + 1}} = \mathrm{softmax}(\theta_{t + 1})$
end for
Output:  $\theta_{\widehat{T}}$  with  $\widehat{T}$  chosen uniformly at random from  $\{1,\ldots ,T\}$

Algorithm Description. We present the algorithm in detail in Algorithm 2 along with a subroutine Alg 3 which updates the critic's parameters. ACIL is a single-trajectory based algorithm, in the sense that it does not require a forced reset along the run. We begin with the critic's updates. The critic uses linear function approximation  $V_{w}(s) \coloneqq \varphi(s)^{\top} w$ , and uses TD learning to update its parameters  $w \in \mathbb{R}^d$ . We assume that  $\varphi(\cdot): S \to \mathbb{R}^d$  is a known feature mapping. Let  $\Phi$  be the corresponding  $|S| \times d$  matrix. We assume that the columns of  $\Phi$  are linearly independent. Next, based on the critic's parameters, the actor approximates the  $\tilde{A}(s, m)$  function using the TD error:  $\mathcal{E}_{w}(s, m, s') = \tilde{r}(s, m) + (\gamma \varphi(s') - \varphi(s))^{\top} w$ .
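One averaged TD(0) step of such a linear critic might look as follows (Python sketch; tabular features `phi` and the controller-level reward `r_tilde` are assumed given as arrays, and the function name is ours):

```python
import numpy as np

def critic_td_step(w, batch, phi, r_tilde, beta=0.1, gamma=0.9):
    """Averaged TD(0) update of the critic weights w from a batch of
    (s, m, s') transitions, using the TD error
    E_w(s,m,s') = r~(s,m) + (gamma*phi(s') - phi(s))^T w."""
    grad = np.zeros_like(w)
    for s, m, s_next in batch:
        td_err = r_tilde[s, m] + (gamma * phi[s_next] - phi[s]) @ w
        grad += td_err * phi[s]
    return w + beta * grad / len(batch)

# One state, reward 1 per step: the TD fixed point is V = 1/(1-gamma) = 10.
phi = np.array([[1.0]])
r_tilde = np.array([[1.0]])
w = np.zeros(1)
for _ in range(1500):
    w = critic_td_step(w, [(0, 0, 0)], phi, r_tilde)
```

On this one-state example the iteration contracts geometrically toward the TD fixed point $w^* = 10$.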

Algorithm 3 Critic-TD Subroutine
Input:  $s_{init},\pi ,\varphi ,\beta ,T_c,H$
Initialize:  $w_{0}$
for  $k\gets 0$  to  $T_{c} - 1$  do
  $s_{k,0} = s_{k - 1,H}$  (when  $k = 0$ ,  $s_{k,0} = s_{init}$ )
  for  $j\gets 0$  to  $H - 1$  do
    $m_{k,j}\sim \pi (\cdot)$ ,  $a_{k,j}\sim K_{m_{k,j}}(s_{k,j},\cdot)$ ,  $s_{k,j + 1}\sim \tilde{P} (\cdot|s_{k,j},m_{k,j})$
    $\mathcal{E}_{w_k}(s_{k,j},m_{k,j},s_{k,j + 1}) = \tilde{r} (s_{k,j},m_{k,j}) + (\gamma \varphi (s_{k,j + 1}) - \varphi (s_{k,j}))^\top w_k$
  end for
  $w_{k + 1} = w_k + \frac{\beta}{H}\sum_{i = 0}^{H - 1}\mathcal{E}_{w_k}(s_{k,i},m_{k,i},s_{k,i + 1})\varphi (s_{k,i})$
end for
Output:  $w_{T_c},s_{T_c - 1,H}$

In order to provide convergence-rate guarantees for Algorithm ACIL, we make the following assumptions, which are standard in the RL literature (Konda & Tsitsiklis, 2000; Bhandari et al., 2018; Xu et al., 2020).

Assumption 5.2 (Uniform Ergodicity). For any  $\theta \in \mathbb{R}^M$ , consider the Markov chain induced by the policy  $\pi_{\theta}$  following the transition kernel  $\bar{P}(.|s, m)$ . Let  $\xi_{\pi_{\theta}}$  be the stationary distribution of this Markov chain. We assume that there exist constants  $\kappa > 0$  and  $\xi \in (0,1)$  such that

$$
\sup _ {s \in \mathcal {S}} \| \mathbb {P} (s _ {t} \in \cdot | s _ {0} = s, \pi_ {\theta}) - \xi_ {\pi_ {\theta}} (\cdot) \| _ {T V} \leqslant \kappa \xi^ {t}.
$$

Further, let  $L_{\pi} \coloneqq \mathbb{E}_{\nu_{\pi}}[\varphi(s)(\gamma \varphi(s') - \varphi(s))^{\top}]$  and  $v_{\pi} \coloneqq \mathbb{E}_{\nu_{\pi}}[\tilde{r}(s,m)\varphi(s)]$ . The optimal solution to the critic's TD learning is then  $w^{*} \coloneqq -L_{\pi}^{-1}v_{\pi}$ .

Assumption 5.3. There exists a positive constant  $\Gamma_L$  such that for all  $w\in \mathbb{R}^d$ , we have  $\langle w - w^{*},L_{\pi}(w - w^{*})\rangle \leqslant -\Gamma_L\| w - w^{*}\| _2^2$ .

Based on the above two assumptions, let  $L_{V} \coloneqq \frac{2\sqrt{2}C_{\kappa\xi} + 1}{1 - \gamma}$ , where  $C_{\kappa\xi} = \left(1 + \left\lceil \log_{\xi}\frac{1}{\kappa}\right\rceil +\frac{1}{1 - \xi}\right)$ .

Theorem 5.4. Consider the actor-critic improper learning algorithm ACIL (Alg. 2) and assume  $\sup_{s\in S}\| \varphi (s)\| _2\leqslant 1$ . Under Assumptions 5.2 and 5.3, with step-sizes  $\alpha = \frac{1}{4L_V\sqrt{M}}$ ,  $\beta = \min \{\mathcal{O}(\Gamma_L),\mathcal{O}(1 / \Gamma_L)\}$ , batch sizes  $H = \mathcal{O}\left(\frac{1}{\varepsilon}\right)$ ,  $B = \mathcal{O}(1 / \varepsilon)$ ,  $T_c = \mathcal{O}\left(\frac{\sqrt{M}}{\Gamma_L}\log (1 / \varepsilon)\right)$  and  $T = \mathcal{O}\left(\frac{\sqrt{M}}{(1 - \gamma)^2\varepsilon}\right)$ , we have  $\mathbb{E}[\| \nabla_{\theta}V(\theta_{\hat{T}})\| _2^2 ]\leqslant \varepsilon +\mathcal{O}(\Delta_{\mathrm{critic}})$ . Hence, the total sample complexity is  $\mathcal{O}\left(M(1 - \gamma)^{-2}\varepsilon^{-2}\log (1 / \varepsilon)\right)$ .

Here,  $\Delta_{\text{critic}} := \max_{\theta \in \mathbb{R}^M} \mathbb{E}_{\nu_{\pi_\theta}} \left[ \left| V^{\pi_\theta}(s) - V^{w_{\pi_\theta}^*} \right|^2 \right]$ , which equals zero, if the value function lies in the linear space spanned by the features.

Next we provide the global optimality guarantee for the Natural-Actor-Critic version of ACIL.

Theorem 5.5. Assume  $\sup_{s\in S}\| \varphi (s)\| _2\leqslant 1$ . Under Assumptions 5.2 and 5.3, with step-sizes  $\alpha = \frac{\lambda^2}{2\sqrt{M}L_V(1 + \lambda)}$ ,  $\beta = \min \{\mathcal{O}(\Gamma_L),\mathcal{O}(1 / \Gamma_L)\}$ , batch sizes  $H = \mathcal{O}\left(\frac{1}{\Gamma_L\varepsilon^2}\right)$ ,  $B = \mathcal{O}\left(\frac{1}{(1 - \gamma)^2\varepsilon^2}\right)$ ,  $T_{c} = \mathcal{O}\left(\frac{\sqrt{M}}{\Gamma_{L}}\log (1 / \varepsilon)\right)$ ,  $T = \mathcal{O}\left(\frac{\sqrt{M}}{(1 - \gamma)^{2}\varepsilon}\right)$  and  $\lambda = \mathcal{O}(\Delta_{\text{critic}})$ , we have  $V(\pi^{*}) - \frac{1}{T}\sum_{t = 0}^{T - 1}\mathbb{E}[V(\pi_{\theta_t})]\leqslant \varepsilon + \mathcal{O}\left(\sqrt{\frac{\Delta_{\text{actor}}}{(1 - \gamma)^3}}\right) + \mathcal{O}(\Delta_{\text{critic}})$ . Hence, the total sample complexity is  $\mathcal{O}\left(\frac{M}{(1 - \gamma)^4\varepsilon^3}\log \frac{1}{\varepsilon}\right)$ , where  $\Delta_{\text{actor}} \coloneqq \max_{\theta \in \mathbb{R}^{M}}\min_{w\in \mathbb{R}^{d}}\mathbb{E}_{\nu_{\pi_{\theta}}}\left[(\psi_{\theta}^{\top}w - \tilde{A}^{\pi_{\theta}}(s,m))^{2}\right]$  and  $\Delta_{\text{critic}}$  is the same as before.

# 6. Numerical Results

# 6.1. Simulations with Softmax PG

We now discuss the results of implementing Softmax PG (Alg. 1) on the cartpole system and on the constrained queueing examples described in Sec. 2. Since neither value functions nor value gradients for these problems are available in closed form, we modify SoftMax PG (Algorithm 1) to make it generally implementable using a combination of (1) rollouts to estimate the value function of the current (improper) policy and (2) simultaneous perturbation stochastic approximation (SPSA) to estimate its value gradient. Specifically, we use the approach in (Flaxman et al., 2005), noting that for a function  $V: \mathbb{R}^M \to \mathbb{R}$ ,  $\nabla V(\theta) \approx \frac{M}{\alpha}\,\mathbb{E}\left[(V(\theta + \alpha u) - V(\theta))\,u\right]$ , where the perturbation parameter  $\alpha \in (0,1)$  and  $u$  is sampled uniformly at random from the unit sphere. This expression requires evaluating the value function at the point  $\theta + \alpha u$ . Since the value function may not be explicitly computable, we employ rollouts for its evaluation. The full algorithm, GradEst, can be found in the appendix (Alg. 6).
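The one-point gradient estimator above can be sketched as follows (Python; in SoftMax PG the evaluations of $V$ would themselves come from rollouts, while here $V$ is any black-box function, and the function name is ours):

```python
import numpy as np

def spsa_grad_estimate(V, theta, alpha=0.1, n_samples=2000, seed=0):
    """One-point gradient estimate in the style of Flaxman et al. (2005):
    grad V(theta) ~ (M/alpha) * E[(V(theta + alpha*u) - V(theta)) * u],
    with u drawn uniformly from the unit sphere."""
    rng = np.random.default_rng(seed)
    M = len(theta)
    base = V(theta)
    g = np.zeros(M)
    for _ in range(n_samples):
        u = rng.normal(size=M)
        u /= np.linalg.norm(u)          # uniform direction on the sphere
        g += (V(theta + alpha * u) - base) * u
    return (M / alpha) * g / n_samples

# Sanity check on a known function: V(x) = -||x||^2 has gradient -2x.
g = spsa_grad_estimate(lambda x: -x @ x, np.array([1.0, 0.0]))
```

Averaging over many directions trades computation for variance; a single sample per round (as in online use) is unbiased but noisy.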

Note that all of the simulations shown have been averaged over  $\# \mathbb{L} = 20$  trials, with the means and standard deviations plotted. We also show empirically that  $c_{t}$  in Theorem 4.2 is indeed strictly positive. In the sequel, for every trial  $l\in [\# \mathbb{L}]$ , let  $\bar{c}_t^l\coloneqq \inf_{1\leqslant s\leqslant t}\min_{m\in \{m'\in [M]:\pi^* (m') > 0\}}\pi_{\theta_s}(m)$ , and  $\bar{c}_t\coloneqq \frac{1}{\# \mathbb{L}}\sum_{l = 1}^{\# \mathbb{L}}\bar{c}_t^l$ . Also let  $\bar{c}^T\coloneqq \min_{l\in [\# \mathbb{L}]}\min_{1\leqslant t\leqslant T}\bar{c}_t^l$ . That is, the sequences  $\{\bar{c}_t^l\}_{t = 1,l = 1}^{T,\# \mathbb{L}}$  record the minimum probability that the algorithm puts, over rounds  $1:t$  in trial  $l$ , on controllers with  $\pi^{*}(\cdot) > 0$ ;  $\{\bar{c}_t\}_{t = 1}^T$  is its average across the different trials; and  $\bar{c}^T$  is the minimum such probability that the algorithm learns across all rounds  $1\leqslant t\leqslant T$  and across trials.
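The per-trial running minimum $\bar{c}_t^l$ can be computed directly from a stored probability trajectory (Python sketch; the array names are our own):

```python
import numpy as np

def running_min_prob(prob_history, support):
    """bar{c}_t for one trial: the minimum probability placed over rounds
    1..t on any controller in the support of the best mixture pi*.
    prob_history: T x M array whose row s is pi_{theta_s};
    support: indices m with pi*(m) > 0."""
    per_round_min = prob_history[:, support].min(axis=1)
    return np.minimum.accumulate(per_round_min)   # running min over rounds

hist = np.array([[0.5, 0.5], [0.7, 0.3], [0.6, 0.4]])
cbar = running_min_prob(hist, [0, 1])
```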

Simulations for the Cartpole. We study two different settings for the Cartpole example. Let  $K_{opt}$  be the optimal controller for the given system, computed via standard procedures (details can be found in (Bertsekas, 2011)). We set  $M = 2$  and consider two scenarios: (i) the two base controllers are  $\mathcal{C} \equiv \{K_{opt}, K_{opt} + \Delta\}$ , where  $\Delta$  is a random matrix, each entry of which is drawn IID  $\mathcal{N}(0,0.1)$ ; (ii)  $\mathcal{C} \equiv \{K_{opt} - \Delta, K_{opt} + \Delta\}$ . In the first case a corner point of the simplex is optimal; in the second, a strict improper mixture of the available controllers is optimal. As Fig. 2(a) and 2(b) show, our policy gradient algorithm converges to the best controller/mixture in both cases. The details of all the hyperparameters for this setting are provided in the appendix. We note here that in the second setting, even though neither controller, applied individually, stabilizes the system, our Softmax PG algorithm finds and follows an improper mixture of the controllers which stabilizes the given Cartpole.

![](images/2dce24619510d8ab6da0272df7fa87afdefef8ae00979d622f49c0f1c59e62b7.jpg)
(a) Cartpole with  $\{K_1 = K_{opt}, K_2 = K_{opt} + \Delta\}$ .

![](images/b2442df4492a62d2eeea80b5ffde5857d987c351bc247092467fb7fd3fcf017f.jpg)
(b) Cartpole with  $\{K_1 = K_{opt} - \Delta, K_2 = K_{opt} + \Delta\}$ .

![](images/5b965f5945f344d302a091cbcb9abbd012e9817edf2593131734cb4ee79e6b7e.jpg)
(c) Softmax PG applied to a Path Graph Network.

![](images/31aad81ef6beaaeb7d662d23ac4c09b42faa99c2e3638ce4dded190b04890613.jpg)
(d) 2-queue system with time-varying arrival rates.

Figure 2: Softmax PG algorithm applied to the cartpole control and path graph scheduling tasks. Each plot shows (i) the learnt probabilities of the various base controllers over time, and (ii) the minimum probabilities  $\bar{c}_t$  and  $\bar{c}^T$  as described in the text.

![](images/e884872078bb3957d8db203d0459ce119e7bbd5656b99ee0a0d9c183fcdbdc42.jpg)
(a) Arrival rate:  $(\lambda_1, \lambda_2) = (0.4, 0.4)$ .

![](images/1a21e5785a7b0657684f7e5f40439a04fa3ddefba2aed270d5573abca7ed7bea.jpg)
(b) Arrival rate:  $(\lambda_1, \lambda_2) = (0.35, 0.35)$ .

![](images/d6832208dd316fbcc33ebecc913f7136613df2eab6f59097acdcab7d188702fc.jpg)
(c) (Estimated) 2-queue system with time-varying arrival rates.

Figure 3: Natural-actor-critic based improper learning algorithm applied to various queueing networks shows convergence to the best mixture policy.

Constrained Queueing Networks. We present simulation results for the following networks.

(i) Path Graph Networks. The scheduling constraints in the first network we study dictate that Queues  $i$  and  $i + 1$  cannot be served simultaneously for  $i\in [N - 1]$  in any round  $t\geqslant 0$ . Such queueing systems are called path graph networks (Mohan et al., 2020). We work with  $N = 4$ . Therefore, the sets of queues which can be served simultaneously are  $\mathcal{A} = \{\emptyset ,\{1\} ,\{2\} ,\{3\} ,\{4\} ,\{1,3\} ,\{2,4\} ,\{1,4\} \}$ . The constituents of  $\mathcal{A}$  are called independent sets in the literature. In each round  $t$ , the scheduler selects an independent set and serves the queues therein. Let  $Q_{j}(t)$  be the backlog of Queue  $j$  at time  $t$ . We use the following base controllers: (i)  $K_{1}$ : the Max Weight (MW) controller (Tassiulas & Ephremides, 1992) chooses the set  $s_t \coloneqq \operatorname{argmax}_{\underline{\mathbf{S}} \in \mathcal{A}} \sum_{j \in \underline{\mathbf{S}}} Q_j(t)$ , i.e., the set with the largest total backlog; (ii)  $K_{2}$ : the Maximum Egress Rate (MER) controller chooses the set  $s_t \coloneqq \operatorname{argmax}_{\underline{\mathbf{S}} \in \mathcal{A}} \sum_{j \in \underline{\mathbf{S}}} \mathbb{I}\{Q_j(t) > 0\}$ , i.e., the set with the maximum number of non-empty queues. We also choose  $K_{3}, K_{4}$  and  $K_{5}$ , which serve the sets  $\{1, 3\}, \{2, 4\}, \{1, 4\}$ , respectively, with probability 1. We fix the arrival rates to the queues at  $(0.495, 0.495, 0.495, 0.495)$ . It is well known that the MER rule is mean-delay optimal in this case (Mohan et al., 2020). In Fig. 2(c), we plot the probability of choosing  $K_i, i \in [5]$ , learnt by our algorithm. The probability of choosing MER indeed converges to 1.

(ii) Non-stationary arrival rates. Recall the two-queue example discussed in Sec. 2.2. The scheduler there is now given two base/atomic controllers  $\mathcal{C} \coloneqq \{K_1, K_2\}$ , i.e.,  $M = 2$ . Controller  $K_i$  serves Queue  $i$  with probability  $1$ ,  $i = 1, 2$ . As can be seen in Fig. 2(d), the arrival rates  $\lambda$  to the two queues vary (adversarially) over time during learning. In particular,  $\lambda$  varies from  $(0.3, 0.6) \to (0.6, 0.3) \to (0.49, 0.49)$ . Our PG algorithm successfully tracks these changes and adapts to the optimal improper stationary policy in each case.
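The MW and MER rules from the path-graph example above can be sketched directly over the independent sets (Python; queue indices follow the text, and breaking argmax ties by list order is one arbitrary choice):

```python
# Independent sets of the 4-queue path graph (queues i and i+1 conflict).
IND_SETS = [(), (1,), (2,), (3,), (4,), (1, 3), (2, 4), (1, 4)]

def max_weight(q):
    """MW: serve the independent set with the largest total backlog.
    q[j-1] is the backlog of Queue j."""
    return max(IND_SETS, key=lambda S: sum(q[j - 1] for j in S))

def max_egress(q):
    """MER: serve the independent set with the most non-empty queues."""
    return max(IND_SETS, key=lambda S: sum(q[j - 1] > 0 for j in S))
```

For example, with backlogs $(5, 1, 4, 1)$, MW picks $\{1,3\}$ (total weight 9); with backlogs $(0, 2, 0, 3)$, MER picks $\{2,4\}$ (two non-empty queues served).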

In all the simulations shown above we note that the empirical trajectories of  $\bar{c}_t$  and  $\bar{c}^T$  become flat after some initial rounds and are bounded away from zero. This supports our conjecture that  $\lim_{t\to \infty}c_t$  in Theorem 4.2 is bounded away from zero, rendering the theorem statement non-vacuous. Note that Alg. 1 performs well in challenging scenarios, even with estimates of the value function and its gradient.

# 6.2. Simulations with ACIL

We perform some queueing-theoretic simulations on the natural actor-critic version of ACIL, which we call NACIL in this section. Unlike Softmax PG, ACIL estimates gradients using temporal differences instead of SPSA. We study three different settings: (1) the optimal policy is a strict improper combination of the available controllers; (2) it is at a corner point, i.e., one of the available controllers is itself optimal; (3) the arrival rates are time-varying, as in the previous section. Our simulations show that in all three cases, ACIL converges to the correct controller mixture.

Recall the example that we discussed in Sec. 2.2. We consider the case with Bernoulli arrivals with rates  $\lambda = [\lambda_1, \lambda_2]$ , where the learner is given two base/atomic controllers  $\{K_1, K_2\}$  and controller  $K_i$  serves Queue  $i$  with probability  $1$ ,  $i = 1, 2$ . As can be seen in Fig. 3(a), when  $\lambda = [0.4, 0.4]$  (equal arrival rates), NACIL converges to an improper mixture policy that serves each queue with probability  $[0.5, 0.5]$ . Next, Fig. 3(b) shows a situation where one of the base controllers, namely "Longest-Queue-First" (LQF), is the optimal controller. NACIL correctly converges to this corner point.

Lastly, Fig. 3(c) shows a setting similar to (ii) in Sec. 6.1 above. Here there is a single transition of  $(\lambda_1,\lambda_2)$  from  $(0.4,0.3)\rightarrow (0.3,0.4)$ , occurring at  $t = \lceil 10^{5} / 3\rceil$ , which is unknown to the learner. We show the probability of choosing controller 1. NACIL tracks the changing arrival rates over time. Due to space limitations, we provide further simulations with NACIL in the appendix.

# Acknowledgment

This work was partially supported by the Israel Science Foundation under contract 2199/20. Mohammadi Zaki was supported by the Aerospace Network Research Consortium (ANRC) Grant on Airplane IOT Data Management.

# References

Abbasi-Yadkori, Y. and Szepesvári, C. Regret bounds for the adaptive control of linear quadratic systems. In Proceedings of the 24th Annual Conference on Learning Theory, volume 19 of Proceedings of Machine Learning Research, pp. 1-26, Budapest, Hungary, 09-11 Jun 2011. JMLR Workshop and Conference Proceedings.  
Agarwal, A., Kakade, S. M., Lee, J. D., and Mahajan, G. Optimality and approximation with policy gradient methods in markov decision processes. In Proceedings of Thirty Third Conference on Learning Theory, pp. 64-66. PMLR, 2020a.  
Agarwal, N., Brukhim, N., Hazan, E., and Lu, Z. Boosting for control of dynamical systems. In III, H. D. and Singh, A. (eds.), Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pp. 96-103. PMLR, 13-18 Jul 2020b.  
Auer, P., Cesa-Bianchi, N., Freund, Y., and Schapire, R. E. Gambling in a rigged casino: The adversarial multi-armed bandit problem. In Proceedings of IEEE 36th Annual Foundations of Computer Science, pp. 322-331, 1995.  
Auer, P., Jaksch, T., and Ortner, R. Near-optimal regret bounds for reinforcement learning. In Advances in Neural Information Processing Systems, volume 21, pp. 89-96. Curran Associates, Inc., 2009.  
Banijamali, E., Abbasi-Yadkori, Y., Ghavamzadeh, M., and Vlassis, N. Optimizing over a restricted policy class in mdps. In Chaudhuri, K. and Sugiyama, M. (eds.), Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, volume 89 of Proceedings of Machine Learning Research, pp. 3042-3050. PMLR, 16-18 Apr 2019.  
Barakat, A., Bianchi, P., and Lehmann, J. Analysis of a target-based actor-critic algorithm with linear function approximation. CoRR, abs/2106.07472, 2021.  
Barreto, A., Dabney, W., Munos, R., Hunt, J. J., Schaul, T., van Hasselt, H. P., and Silver, D. Successor features for transfer in reinforcement learning. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R. (eds.), Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017.  
Bertsekas, D. P. Dynamic Programming and Optimal Control, volume II. Belmont, MA: Athena Scientific, 3rd edition, 2011.  
Bhandari, J. and Russo, D. Global optimality guarantees for policy gradient methods. ArXiv, abs/1906.01786, 2019.

Bhandari, J., Russo, D., and Singal, R. A finite time analysis of temporal difference learning with linear function approximation. *Oper. Res.*, 69:950-973, 2018.  
Bhatnagar, S., Sutton, R. S., Ghavamzadeh, M., and Lee, M. Natural actor-critic algorithms. Automatica, 45(11): 2471-2482, 2009. ISSN 0005-1098.  
Bolzern, P., Colaneri, P., and De Nicolao, G. Almost sure stability of stochastic linear systems with ergodic parameters. European Journal of Control, 14(2):114-123, 2008.  
Borkar, V. S. Stochastic Approximation. Cambridge Books. Cambridge University Press, December 2008.  
Cassel, A., Cohen, A., and Koren, T. Logarithmic regret for learning linear quadratic regulators efficiently. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pp. 1328-1337. PMLR, 13-18 Jul 2020.  
Chen, X. and Hazan, E. Black-box control for linear dynamical systems. arXiv preprint arXiv:2007.06650, 2020.  
Daniely, A., Linial, N., and Shalev-Shwartz, S. More data speeds up training time in learning halfspaces over sparse vectors. In Advances in Neural Information Processing Systems, volume 26, pp. 145-153. Curran Associates, Inc., 2013.  
Daniely, A., Linial, N., and Shalev-Shwartz, S. From average case complexity to improper learning complexity. In Proceedings of the Forty-Sixth Annual ACM Symposium on Theory of Computing, STOC '14, pp. 441-448, New York, NY, USA, 2014. Association for Computing Machinery.  
Dean, S., Mania, H., Matni, N., Recht, B., and Tu, S. On the Sample Complexity of the Linear Quadratic Regulator. arXiv e-prints, art. arXiv:1710.01688, October 2017.  
Denisov, D. and Walton, N. Regret analysis of a markov policy gradient algorithm for multi-arm bandits. *ArXiv*, abs/2007.10229, 2020.  
Durrett, R. Probability: Theory and examples, 2011.  
Fazel, M., Ge, R., Kakade, S. M., and Mesbahi, M. Global convergence of policy gradient methods for the linear quadratic regulator, 2018.  
Flaxman, A. D., Kalai, A. T., and McMahan, H. B. Online convex optimization in the bandit setting: Gradient descent without a gradient. SODA '05, pp. 385-394, USA, 2005. Society for Industrial and Applied Mathematics.

Gao, B. and Pavel, L. On the properties of the softmax function with application in game theory and reinforcement learning. *ArXiv*, abs/1704.00805, 2017.  
Gopalan, A. and Mannor, S. Thompson Sampling for Learning Parameterized Markov Decision Processes. In Proceedings of The 28th Conference on Learning Theory, volume 40 of Proceedings of Machine Learning Research, pp. 861-898, Paris, France, 03-06 Jul 2015. PMLR.  
Ibrahimi, M., Javanmard, A., and Roy, B. Efficient reinforcement learning for high dimensional linear quadratic systems. In Advances in Neural Information Processing Systems, volume 25, pp. 2636-2644. Curran Associates, Inc., 2012.  
Kakade, S. and Langford, J. Approximately optimal approximate reinforcement learning. In In Proc. 19th International Conference on Machine Learning, 2002.  
Khalil, H. K. Nonlinear Control. Pearson, 2015.  
Kocák, T., Neu, G., Valko, M., and Munos, R. Efficient learning by implicit exploration in bandit problems with side observations. In Advances in Neural Information Processing Systems, volume 27, pp. 613-621. Curran Associates, Inc., 2014.  
Konda, V. and Tsitsiklis, J. Actor-critic algorithms. In Solla, S., Leen, T., and Müller, K. (eds.), Advances in Neural Information Processing Systems, volume 12. MIT Press, 2000.  
Lai, T. and Robbins, H. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6(1): 4 - 22, 1985. ISSN 0196-8858.  
Li, G., Wei, Y., Chi, Y., Gu, Y., and Chen, Y. Softmax policy gradient methods can take exponential time to converge. arXiv preprint arXiv:2102.11270, 2021.  
Littlestone, N. and Warmuth, M. K. The weighted majority algorithm. Inform. Comput., 108(2):212-261, 1994.  
Łojasiewicz, S. Les équations aux dérivées partielles (paris, 1962), 1963.  
Lowe, R., Wu, Y., Tamar, A., Harb, J., Abbeel, P., and Mordatch, I. Multi-agent actor-critic for mixed cooperative-competitive environments, 2020.  
Maclin, R. and Opitz, D. W. Popular ensemble methods: An empirical study. CoRR, abs/1106.0257, 2011.  
Mania, H., Tu, S., and Recht, B. Certainty equivalence is efficient for linear quadratic control. In Advances in Neural Information Processing Systems, volume 32, pp. 10154-10164. Curran Associates, Inc., 2019.

Mei, J., Xiao, C., Szepesvari, C., and Schuurmans, D. On the global convergence rates of softmax policy gradient methods. In Proceedings of the 37th International Conference on Machine Learning, pp. 6820-6829. PMLR, 2020.  
Mohan, A., Chattopadhyay, A., and Kumar, A. Hybrid mac protocols for low-delay scheduling. In 2016 IEEE 13th International Conference on Mobile Ad Hoc and Sensor Systems (MASS), pp. 47-55, Los Alamitos, CA, USA, oct 2016. IEEE Computer Society.  
Mohan, A., Gopalan, A., and Kumar, A. Throughput optimal decentralized scheduling with single-bit state feedback for a class of queueing systems. ArXiv, abs/2002.08141, 2020.  
Neu, G. Explore no more: Improved high-probability regret bounds for non-stochastic bandits. In Advances in Neural Information Processing Systems, volume 28, pp. 3168-3176. Curran Associates, Inc., 2015.  
Osband, I., Russo, D., and Van Roy, B. (more) efficient reinforcement learning via posterior sampling. In Advances in Neural Information Processing Systems, volume 26, pp. 3003-3011. Curran Associates, Inc., 2013.  
Ouyang, Y., Gagrani, M., Nayyar, A., and Jain, R. Learning unknown markov decision processes: A thompson sampling approach. In NIPS, 2017.  
Peters, J. and Schaal, S. Natural actor-critic. Neurocomputing, 71(7):1180-1190, 2008. ISSN 0925-2312. Progress in Modeling, Theory, and Application of Computational Intelligence.  
Radac, M.-B. and Precup, R.-E. Data-driven model-free slip control of anti-lock braking systems using reinforcement q-learning. Neurocomput., 275(C):317-329, January 2018.  
Rummery, G. A. and Niranjan, M. On-line q-learning using connectionist systems. Technical report, 1994.  
Schulman, J., Levine, S., Abbeel, P., Jordan, M., and Moritz, P. Trust region policy optimization. In Bach, F. and Blei, D. (eds.), Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pp. 1889-1897, Lille, France, 07-09 Jul 2015. PMLR.  
Schulman, J., Wolski, F., Dhariwal, P., Radford, A., and Klimov, O. Proximal policy optimization algorithms, 2017.  
Shani, L., Efroni, Y., and Mannor, S. Adaptive trust region policy optimization: Global convergence and faster rates for regularized mdps. ArXiv, abs/1909.02769, 2020.

Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., van den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M., Dieleman, S., Grewe, D., Nham, J., Kalchbrenner, N., Sutskever, I., Lillicrap, T., Leach, M., Kavukcuoglu, K., Graepel, T., and Hassabis, D. Mastering the game of go with deep neural networks and tree search. Nature, 529:484-503, 2016.  
Singh, S., Okun, A., and Jackson, A. Artificial intelligence: Learning to play Go from scratch. Nature, 550(7676):336-337, October 2017. doi: 10.1038/550336a.  
Sutton, R. S. and Barto, A. G. Reinforcement Learning: An Introduction. The MIT Press, second edition, 2018.  
Sutton, R. S., Precup, D., and Singh, S. Between mdps and semi-mdps: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112(1): 181-211, 1999. ISSN 0004-3702.  
Sutton, R. S., McAllester, D., Singh, S., and Mansour, Y. Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems, volume 12, pp. 1057-1063. MIT Press, 2000.  
Tassiulas, L. and Ephremides, A. Stability properties of constrained queueing systems and scheduling policies for maximum throughput in multihop radio networks. IEEE Transactions on Automatic Control, 37(12):1936-1948, 1992. doi: 10.1109/9.182479.  
Wiering, M. A. and van Hasselt, H. Ensemble algorithms in reinforcement learning. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 38(4):930-936, 2008.  
Xiliang, C., Cao, L., Li, C.-x., Xu, Z.-x., and Lai, J. Ensemble network architecture for deep reinforcement learning. Mathematical Problems in Engineering, 2018:1-6, 04 2018.  
Xu, T., Wang, Z., and Liang, Y. Improving sample complexity bounds for (natural) actor-critic algorithms. In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M. F., and Lin, H. (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 4358-4369. Curran Associates, Inc., 2020.

# A. Glossary of Symbols

1.  $\mathcal{S}$ : State space  
2.  $\mathcal{A}$ : Action space  
3.  $S$ : Cardinality of  $\mathcal{S}$  
4.  $A$ : Cardinality of  $\mathcal{A}$  
5.  $M$  : Number of controllers  
6.  $K_{i}$ : Controller  $i$ ,  $i = 1, \dots, M$ . For a finite state-action space MDP,  $K_{i}$  is a matrix of size  $S \times A$ , where each row is a probability distribution over the actions.  
7.  $\mathcal{C}$  : Given collection of  $M$  controllers.  
8.  $\mathcal{I}_{soft}(\mathcal{C})$  : Improper policy class set up by the learner.  
9.  $\theta \in \mathbb{R}^{M}$  : Parameter vector assigned to the controllers, representing weights, updated each round by the learner.  
10.  $\pi(\cdot)$ : Probability distribution over the controllers.  
11.  $\pi(\cdot|s)$ : Probability of choosing an action given state  $s$ . Note that in our setting, given  $\pi(\cdot)$  over controllers (see previous item) and the set of controllers,  $\pi(\cdot|s)$  is completely defined, i.e.,  $\pi(a|s) = \sum_{m=1}^{M} \pi(m) K_m(s,a)$ . Hence we use simply  $\pi$  to denote the policy followed, whenever the context is clear.  
12.  $r(s, a)$ : Immediate (one-step) reward obtained if action  $a$  is played in state  $s$ .  
13.  $\mathsf{P}(s^{\prime}\mid s,a)$ : Probability of transitioning to state  $s^\prime$  from state  $s$  having taken action  $a$ .  
14.  $V^{\pi}(\rho) \coloneqq \mathbb{E}_{s_0 \sim \rho} [V^{\pi}(s_0)] = \mathbb{E}_{\rho}^{\pi} \sum_{t=0}^{\infty} \gamma^t r(s_t, a_t)$  Value function starting with initial distribution  $\rho$  over states, and following policy  $\pi$ .  
15.  $Q^{\pi}(s,a)\coloneqq \mathbb{E}\left[r(s,a) + \gamma \sum_{s'\in \mathcal{S}}\mathsf{P}(s'\mid s,a)V^{\pi}(s')\right]$ .  
16.  $\tilde{Q}^{\pi}(s,m)\coloneqq \sum_{a\in \mathcal{A}}K_m(s,a)\left[r(s,a) + \gamma \sum_{s'\in \mathcal{S}}\mathsf{P}(s'\mid s,a)V^{\pi}(s')\right].$  
17.  $A^{\pi}(s,a)\coloneqq Q^{\pi}(s,a) - V^{\pi}(s)$  
18.  $\tilde{A}^{\pi}(s,m)\coloneqq \tilde{Q}^{\pi}(s,m) - V^{\pi}(s).$  
19.  $d_{\nu}^{\pi}(s) \coloneqq (1 - \gamma)\,\mathbb{E}_{s_0 \sim \nu}\left[\sum_{t = 0}^{\infty}\gamma^{t}\,\mathbb{P}\left[s_t = s \mid s_0, \pi, \mathsf{P}\right]\right]$ . A distribution over the states, called the "discounted state visitation measure".  
20.  $c \coloneqq \inf_{t\geqslant 1}\min_{m\in \{m^{\prime}\in [M]:\pi^{*}(m^{\prime}) > 0\}}\pi_{\theta_{t}}(m).$  
21.  $\left\| \frac{d_{\mu}^{\pi^{*}}}{\mu}\right\|_{\infty} = \max_{s}\frac{d_{\mu}^{\pi^{*}}(s)}{\mu(s)}.$  
22.  $\left\| \frac{1}{\mu}\right\|_{\infty} = \max_s\frac{1}{\mu(s)}.$
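To make items 9-11 concrete, the following sketch (with toy controllers and weights of our own choosing, not from the paper) computes the softmax distribution over controllers and the induced improper policy  $\pi(a|s) = \sum_{m} \pi(m) K_m(s,a)$ :

```python
import math

def softmax(theta):
    """pi(m) = exp(theta_m) / sum_m' exp(theta_m'): distribution over controllers."""
    mx = max(theta)
    exps = [math.exp(t - mx) for t in theta]
    z = sum(exps)
    return [e / z for e in exps]

def induced_policy(theta, controllers):
    """Item 11: pi(a|s) = sum_m pi(m) K_m(s, a)."""
    pi_m = softmax(theta)
    S, A = len(controllers[0]), len(controllers[0][0])
    return [[sum(pi_m[m] * controllers[m][s][a] for m in range(len(controllers)))
             for a in range(A)] for s in range(S)]

# Two toy controllers over S = 2 states and A = 2 actions (rows are distributions).
K1 = [[0.25, 0.75], [0.75, 0.25]]
K2 = [[0.75, 0.25], [0.25, 0.75]]
pi = induced_policy([0.0, 0.0], [K1, K2])   # equal weights: uniform mixture
assert all(abs(sum(row) - 1.0) < 1e-12 for row in pi)
```

Each row of the induced policy is again a probability distribution, so the improper mixture is itself a valid (randomized) policy.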

# B. Expanded Survey of Related Work

In this section, we provide a detailed survey of related works. It is vital to distinguish the approach investigated in the present paper from the plethora of existing algorithms based on 'proper learning'. Essentially, these algorithms try to find an (approximately) optimal policy for the MDP under investigation. These approaches can broadly be classified into two groups: model-based and model-free.

The former is based on first learning the dynamics of the unknown MDP followed by planning for this learnt model. Algorithms in this class include Thompson Sampling-based approaches (Osband et al., 2013; Ouyang et al., 2017; Gopalan & Mannor, 2015) and Optimism-based approaches such as the UCRL algorithm (Auer et al., 2009), both achieving order-wise optimal  $\mathcal{O}(\sqrt{T})$  regret bounds.

A particular class of MDPs which has been studied extensively is the Linear Quadratic Regulator (LQR) which is a continuous state-action MDP with linear state dynamics and quadratic cost (Dean et al., 2017). Let  $x_{t} \in \mathbb{R}^{m}$  be the current state and let  $u_{t} \in \mathbb{R}^{n}$  be the action applied at time  $t$ . The infinite horizon average cost minimization problem for LQR is to find a policy to choose actions  $\{u_{t}\}_{t \geqslant 1}$  so as to minimize

$$
\lim_{T \rightarrow \infty} \mathbb{E} \left[ \frac{1}{T} \sum_{t = 1}^{T} x_t^{\mathrm{T}} Q x_t + u_t^{\mathrm{T}} R u_t \right]
$$

such that  $x_{t + 1} = Ax_t + Bu_t + n(t)$ , where  $n(t)$  is IID zero-mean noise. Here the matrices  $A$  and  $B$  are unknown to the learner. Earlier works (Abbasi-Yadkori & Szepesvári, 2011; Ibrahimi et al., 2012) proposed algorithms based on the well-known optimism principle (with confidence ellipsoids around estimates of  $A$  and  $B$ ), achieving regret bounds of  $\mathcal{O}(\sqrt{T})$ .

However, these approaches do not focus on the stability of the closed-loop system. (Dean et al., 2017) describes a robust controller design which seeks to minimize the worst-case performance of the system given the error in the estimation process. They give a sample complexity analysis guaranteeing a convergence rate of  $\mathcal{O}(1 / \sqrt{N})$  to the optimal policy for the given LQR,  $N$  being the number of rollouts. More recently, certainty equivalence (Mania et al., 2019) was shown to achieve  $\mathcal{O}(\sqrt{T})$  regret for LQRs. Further, (Cassel et al., 2020) show that it is possible to achieve  $\mathcal{O}(\log T)$  regret if either one of the matrices  $A$  or  $B$  is known to the learner, and also provide a lower bound showing that  $\Omega (\sqrt{T})$  regret is unavoidable when both are unknown.

The model-free approach, on the other hand, bypasses model estimation and directly learns the value function of the unknown MDP. While the most popular among these have historically been Q-learning, TD-learning (Sutton & Barto, 2018) and SARSA (Rummery & Niranjan, 1994), algorithms based on gradient-based policy optimization have been gaining considerable attention of late, following their stunning success at playing the game of Go, long viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves. (Silver et al., 2016) and, more recently, (Singh et al., 2017) use policy gradient methods combined with a neural network representation to beat human experts. Indeed, the policy gradient method has become a cornerstone of modern RL and given birth to an entire class of highly efficient policy search algorithms such as TRPO (Schulman et al., 2015), PPO (Schulman et al., 2017), and MADDPG (Lowe et al., 2020).

Despite its excellent empirical performance, not much was known about theoretical guarantees for this approach until recently. There is now a growing body of promising results showing convergence rates for PG algorithms over finite state-action MDPs (Agarwal et al., 2020a; Shani et al., 2020; Bhandari & Russo, 2019; Mei et al., 2020), where the parameterization is over the entire space of state-action pairs, i.e.,  $\mathbb{R}^{S\times A}$ . In particular, (Bhandari & Russo, 2019) show that projected gradient descent does not suffer from spurious local optima on the simplex, and (Agarwal et al., 2020a) show that with softmax parameterization, PG converges to the global optimum asymptotically. (Shani et al., 2020) show a  $\mathcal{O}(1 / \sqrt{t})$  convergence rate for mirror descent. (Mei et al., 2020) show that with softmax policy gradient, convergence to the global optimum occurs at rate  $\mathcal{O}(1 / t)$ , and at rate  $\mathcal{O}(e^{-t})$  with entropy regularization.

We end this section noting once again that all of the above works concern proper learning. Improper learning, on the other hand, has been separately studied in statistical learning theory in the IID setting (Daniely et al., 2014; 2013). In this framework, which is also called Representation Independent learning, the learning algorithm is not restricted to output a hypothesis from a given set of hypotheses. We note that improper learning has not been studied in RL literature to the best of our knowledge.

To our knowledge, (Agarwal et al., 2020b) is the only existing work that attempts to frame and solve policy optimization

over an improper class via boosting a given class of controllers. However, the paper is situated in the rather different context of non-stochastic control and assumes perfect knowledge of (i) the memory-boundedness of the MDP, and (ii) the state noise vector in every round, which amounts to essentially knowing the MDP transition dynamics. We work in the stochastic MDP setting and moreover assume no access to the MDP's transition kernel. Further, (Agarwal et al., 2020b) also assumes that all the atomic controllers available to them are stabilizing which, when working with an unknown MDP, is a very strong assumption to make. We make no such assumptions on our atomic controller class and, as we show in Sec. 2 and Sec. 6, our algorithms even begin with provably unstable controllers and yet succeed in stabilizing the system.

In summary, the problem that we address concerns finding the best among a given class of controllers. None of these need be optimal for the MDP at hand. Moreover, our PG algorithm could very well converge to an improper mixture of these controllers, meaning that the output of our algorithms need not be any of the atomic controllers we are provided with. This setting, to the best of our knowledge, has not hitherto been investigated in the RL literature.

# C. Details of Setup and Modelling of the Cartpole

![](images/dc853ce557e6974fde92862f5bdb2c0398f60458f442680ddac7b9c0b8e2e1e0.jpg)  
Figure 4: The Cartpole system. The mass of the pendulum is denoted by  $m_{p}$ , that of the cart by  $m_{K}$ , the force used to drive the cart by  $F$ , and the distance of the center of mass of the cart from its starting position by  $s$ .  $\theta$  denotes the angle the pendulum makes with the normal and its length is denoted by  $2l$ . Gravity is denoted by  $g$ .

As shown in Fig. 4, it comprises a pendulum whose pivot is mounted on a cart which can be moved in the horizontal direction by applying a force. The objective is to modulate the direction and magnitude of this force  $F$  to keep the pendulum from keeling over under the influence of gravity. The state of the system at time  $t$  is given by the 4-tuple  $\mathbf{x}(t) \coloneqq [s, \dot{s}, \theta, \dot{\theta}]$ , with  $\mathbf{x}(\cdot) = \mathbf{0}$  corresponding to the pendulum being upright and stationary. One of the strategies used to design control policies for this system is to first approximate the dynamics around  $\mathbf{x}(\cdot) = \mathbf{0}$  with a linear model and quadratic cost, and design a linear controller for these approximate dynamics. After this approximation and time discretization, the objective reduces to finding a (potentially randomized) control policy  $u \equiv \{u(t), t \geqslant 0\}$  that solves:

$$
\begin{array}{l} \inf  _ {u} J (\mathbf {x} (0)) = \mathbb {E} _ {u} \sum_ {t = 0} ^ {\infty} \mathbf {x} ^ {\intercal} (t) Q \mathbf {x} (t) + R u ^ {2} (t), \\ s. t. \mathbf {x} (t + 1) = \underbrace {\left( \begin{array}{l l l l} 0 & 1 & 0 & 0 \\ 0 & 0 & \frac {g}{l \left(\frac {4}{3} - \frac {m _ {p}}{m _ {p} + m _ {k}}\right)} & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & \frac {g}{l \left(\frac {4}{3} - \frac {m _ {p}}{m _ {p} + m _ {k}}\right)} & 0 \end{array} \right)} _ {A _ {\text {o p e n}}} \mathbf {x} (t) + \underbrace {\left( \begin{array}{c} 0 \\ \frac {1}{m _ {p} + m _ {k}} \\ 0 \\ \frac {1}{l \left(\frac {4}{3} - \frac {m _ {p}}{m _ {p} + m _ {k}}\right)} \end{array} \right)} _ {\mathbf {b}} u (t). \tag {2} \\ \end{array}
$$

Under standard assumptions of controllability and observability, this optimization has a stationary, linear solution  $u^{*}(t) = -\mathbf{K}^{\top}\mathbf{x}(t)$  (details are available in (Bertsekas, 2011, Chap. 3)). Moreover, setting  $A := A_{open} - \mathbf{bK}^{\top}$ , it is well known that the dynamics  $\mathbf{x}(t + 1) = A\mathbf{x}(t)$ ,  $t \geqslant 0$ , are stable.

# C.1. Details of simulations settings for the cartpole system

In this section we describe the adjustments we made specifically for the cartpole experiments. First, we scale down the estimated gradient of the value function returned by the GradEst subroutine (Algorithm 6), in the cartpole simulation only. The scaling that worked for us is  $\frac{10}{\|\nabla V^{\pi}(\mu)\|}$ .

Next, Table 1 lists the values of the constants described in Sec. C.

| Parameter | Value |
| --- | --- |
| Gravity $g$ | 9.8 |
| Mass of pole $m_p$ | 0.1 |
| Length of pole $l$ | 1 |
| Mass of cart $m_k$ | 1 |
| Total mass $m_t$ | 1.1 |

Table 1: Values of the hyperparameters used for the cartpole simulation
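Using the values in Table 1, the matrices of Eq. (2) can be instantiated directly. The sketch below transcribes  $A_{open}$  and  $\mathbf{b}$  exactly as printed in the equation (the variable names follow the equation's labels):

```python
# Instantiate the linearized cartpole model of Eq. (2) with Table 1's values.
g, m_p, l, m_k = 9.8, 0.1, 1.0, 1.0
denom = l * (4.0 / 3.0 - m_p / (m_p + m_k))   # l * (4/3 - m_p / (m_p + m_k))

A_open = [
    [0.0, 1.0, 0.0,       0.0],
    [0.0, 0.0, g / denom, 0.0],
    [0.0, 0.0, 0.0,       1.0],
    [0.0, 0.0, g / denom, 0.0],
]
b = [0.0, 1.0 / (m_p + m_k), 0.0, 1.0 / denom]

# State x = [s, s_dot, theta, theta_dot]; closed loop: x(t+1) = (A_open - b K^T) x(t).
assert len(A_open) == 4 and all(len(row) == 4 for row in A_open)
```

With these parameters,  $g / (l(4/3 - m_p/(m_p+m_k))) \approx 7.888$ , which is the entry appearing in rows 2 and 4 of  $A_{open}$ .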

# D. Stability for Ergodic Parameter Linear Systems (EPLS)

For simplicity and ease of understanding, we connect our current discussion to the cartpole example discussed in Sec. 2.1. Consider a generic (ergodic) control policy that switches across a menu of controllers  $\{K_1,\dots ,K_N\}$ . That is, at any time  $t$ , it chooses controller  $K_{i}$ ,  $i\in [N]$ , w.p.  $p_i$ , so that the control input at time  $t$  is  $u(t) = -\mathbf{K}_i^\top \mathbf{x}(t)$  w.p.  $p_i$ . Let  $A(i)\coloneqq A_{open} - \mathbf{b}\mathbf{K}_i^\top$ . The resulting controlled dynamics are given by

$$
\begin{array}{l} \mathbf {x} (t + 1) = A (r (t)) \mathbf {x} (t) \\ \mathbf {x} (0) = \mathbf {0}, \tag {3} \\ \end{array}
$$

where  $r(t) = i$  w.p.  $p_i$ , IID across time. In the literature, this belongs to a class of systems known as Ergodic Parameter Linear Systems (EPLS) (Bolzern et al., 2008), which are said to be Exponentially Almost Surely Stable (EAS) if there exists  $\rho > 0$  such that for any  $\mathbf{x}(0)$ ,

$$
\mathbb{P} \left\{ \omega \in \Omega \,\left|\, \limsup_{t \rightarrow \infty} \frac{1}{t} \log \| \mathbf{x}(t, \omega) \| \leqslant -\rho \right. \right\} = 1. \tag{4}
$$

In other words, w.p. 1, the trajectories of the system decay to the origin exponentially fast. The random variable  $\lambda (\omega)\coloneqq$ $\lim \sup_{t\to \infty}\frac{1}{t}\log \| \mathbf{x}(t,\omega)\|$  in (4) is called the Lyapunov Exponent of the system. For our EPLS,

$$
\begin{array}{l} \lambda (\omega) = \limsup_{t\to \infty}\frac{1}{t}\log \| \mathbf{x}(t,\omega)\| = \limsup_{t\to \infty}\frac{1}{t}\log \left\| \prod_{s = 1}^{t}A(r(s,\omega))\,\mathbf{x}(0)\right\| \\ \leqslant \limsup_{t \to \infty} \frac{1}{t} \log \| \mathbf{x}(0) \| + \limsup_{t \to \infty} \frac{1}{t} \log \left\| \prod_{s = 1}^{t} A(r(s, \omega)) \right\| \\ \leqslant \limsup_{t \to \infty} \frac{1}{t} \sum_{s = 1}^{t} \log \| A(r(s, \omega)) \| \stackrel{(*)}{=} \lim_{t \to \infty} \frac{1}{t} \sum_{s = 1}^{t} \log \| A(r(s, \omega)) \| \\ \stackrel{(\dagger)}{=} \mathbb{E} \log \| A(r) \| = \sum_{i = 1}^{N} p_{i} \log \| A(i) \|, \tag{5} \\ \end{array}
$$

where the equalities  $(\ast)$  and  $(\dagger)$  are due to the ergodic law of large numbers. The control policy can now be designed by choosing  $\{p_1,\dots ,p_N\}$  such that  $\lambda (\omega) < -\rho$  for some  $\rho >0$ , ensuring exponentially almost sure stability.
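This design rule can be checked numerically. The sketch below uses a scalar instance of (3) with two illustrative gains of our own choosing, one of which is unstable on its own, and compares the bound from (5) with the empirical Lyapunov exponent:

```python
import math
import random

# Scalar instance of the EPLS (3): x(t+1) = a(r(t)) x(t), with r(t) IID.
# The gains and switching probabilities below are illustrative, not from the paper.
a = [1.2, 0.5]   # a[0] alone is unstable (|a| > 1), a[1] is stable
p = [0.3, 0.7]   # switching probabilities p_1, p_2

# Upper bound on the Lyapunov exponent from (5): sum_i p_i log|a_i|.
lam_bound = sum(pi * math.log(abs(ai)) for pi, ai in zip(p, a))

# Empirical exponent (1/T) log|x(T)|, accumulated in log-space to avoid underflow.
random.seed(0)
T = 2000
log_x = 0.0
for _ in range(T):
    log_x += math.log(a[0] if random.random() < p[0] else a[1])
lam_hat = log_x / T

# lam_bound < 0 certifies exponential almost-sure stability; lam_hat tracks it.
assert lam_bound < 0
```

Here  $\sum_i p_i \log |a_i| \approx -0.43 < 0$ , so the switched system is EAS even though one of the two gains is unstable in isolation, which is the phenomenon exploited in Sec. 2.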

# E. The Constrained Queuing Example

The system, shown in Fig. 5, comprises two queues fed by independent, stochastic arrival processes  $A_{i}(t), i \in \{1,2\}$ ,  $t \in \mathbb{N}$ . The length of Queue  $i$ , measured at the beginning of time slot  $t$ , is denoted by  $Q_{i}(t) \in \mathbb{Z}_{+}$ . A common server serves both queues and can drain at most one packet from the system in a time slot<sup>2</sup>. The server, therefore, needs to decide which of the two queues it intends to serve in a given slot (we assume that once the server chooses to serve a packet, service succeeds with probability 1). The server's decision is denoted by the vector  $\mathbf{D}(t) \in \mathcal{A} := \{[0,0], [1,0], [0,1]\}$ , where a "1" denotes service and a "0" denotes lack thereof.

![](images/ae6f720974bf48d726e403c3e94c1f6a8c3a88e524cc22c23b6d2c82e4d0832e.jpg)  
Figure 5:  $Q_{i}(t)$  is the length of Queue  $i$  ( $i \in \{1,2\}$ ) at the beginning of time slot  $t$ ,  $A_{i}(t)$  is its packet arrival process and  $\mathbf{D}(t) \in \{[0,0],[1,0],[0,1]\}$ .

For simplicity, we assume that the processes  $(A_{i}(t))_{t = 0}^{\infty}$  are both IID Bernoulli, with  $\mathbb{E}A_{i}(t) = \lambda_{i}$ . Note that the arrival rate  $\lambda = [\lambda_1,\lambda_2]$  is unknown to the learner. Defining  $(x)^{+}:= \max \{0,x\}$ ,  $\forall x\in \mathbb{R}$ , queue length evolution is given by the equations

$$
Q _ {i} (t + 1) = \left(Q _ {i} (t) - D _ {i} (t)\right) ^ {+} + A _ {i} (t + 1), i \in \{1, 2 \}. \tag {6}
$$
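The dynamics (6) are straightforward to simulate. The sketch below pairs them with an illustrative serve-the-longer-queue rule (one possible decision policy, chosen here for concreteness) and reports the time-averaged total backlog:

```python
import random

def simulate(lam, T, seed=0):
    """Simulate Eq. (6) under an illustrative serve-the-longer-queue rule.

    Q_i(t+1) = (Q_i(t) - D_i(t))^+ + A_i(t+1), with Bernoulli(lam_i) arrivals.
    Returns the time-averaged total backlog Q_1 + Q_2.
    """
    rng = random.Random(seed)
    q = [0, 0]
    total = 0
    for _ in range(T):
        # Decision D(t) in {[0,0],[1,0],[0,1]}: serve the longer nonempty queue.
        d = [0, 0]
        if max(q) > 0:
            d[0 if q[0] >= q[1] else 1] = 1
        arrivals = [1 if rng.random() < lam[i] else 0 for i in range(2)]
        q = [max(q[i] - d[i], 0) + arrivals[i] for i in range(2)]
        total += sum(q)
    return total / T

avg_backlog = simulate(lam=[0.3, 0.4], T=20000)  # lam_1 + lam_2 < 1: stabilizable
```

Since the server drains at most one packet per slot, arrival rates with  $\lambda_1 + \lambda_2 < 1$  are within the stability region, and the average backlog stays bounded under this rule.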

# F. Non-concavity of the Value function

We show here that the value function  $V^{\pi}(\rho)$  is in general non-concave, and hence standard convex optimization techniques for maximization may get stuck in local optima. We note once again that this is different from the non-concavity of  $V^{\pi}$  when the parameterization is over the entire state-action space, i.e.,  $\mathbb{R}^{S\times A}$ .

We show this for both SoftMax and direct parameterization, where by "direct" parameterization we mean that the controllers  $K_{m}$  are parameterized by weights  $\theta_{m} \in \mathbb{R}$ , with  $\theta_{i} \geqslant 0$ ,  $\forall i \in [M]$  and  $\sum_{i=1}^{M} \theta_{i} = 1$ . The proof below treats direct parameterization; a similar argument for softmax parameterization is outlined in Remark F.2.

Lemma F.1. (Non-concavity of Value function) There is an MDP and a set of controllers for which the maximization problem of the value function (i.e. (1)) is non-concave under both direct and SoftMax parameterization, i.e.,  $\theta \mapsto V^{\pi_{\theta}}$  is non-concave.

![](images/b7d6894d19b558de8e1551c28660c3829cbf387ea3402d1d9259f66bb5814aad.jpg)  
Figure 6: An example of an MDP with controllers as defined in (7) having a non-concave value function. The MDP has  $S = 5$  states and  $A = 2$  actions. States  $s_3, s_4$  and  $s_5$  are terminal states. The only transition with nonzero reward is  $s_2 \rightarrow s_4$ .

Proof. Consider the MDP shown in Figure 6 with 5 states,  $s_1,\ldots ,s_5$ . States  $s_3,s_4$  and  $s_5$  are terminal states. In the figure we also show the allowed transitions and the rewards obtained from those transitions. Let the action set  $\mathcal{A}$  consist of three actions  $\{a_{1},a_{2},a_{3}\} \equiv \{\mathrm{right},\mathrm{up},\mathrm{null}\}$ , where 'null' is a dummy action included to accommodate the three terminal states. Let us consider the case when  $M = 2$ . The two controllers  $K_{i}\in \mathbb{R}^{S\times A}$ ,  $i = 1,2$  (where each row is a probability distribution over  $\mathcal{A}$ ) are shown below.

$$
K _ {1} = \left[ \begin{array}{l l l} 1 / 4 & 3 / 4 & 0 \\ 3 / 4 & 1 / 4 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 1 \\ 0 & 0 & 1 \end{array} \right], K _ {2} = \left[ \begin{array}{l l l} 3 / 4 & 1 / 4 & 0 \\ 1 / 4 & 3 / 4 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 1 \\ 0 & 0 & 1 \end{array} \right]. \tag {7}
$$

Let  $\theta^{(1)} = (1,0)^{\mathrm{T}}$  and  $\theta^{(2)} = (0,1)^{\mathrm{T}}$ . Let us fix the initial state to be  $s_1$ . Since a nonzero reward is earned only during an  $s_2 \to s_4$  transition, we note for any policy  $\pi$  that  $V^{\pi}(s_1) = \pi(a_1|s_1)\pi(a_2|s_2)r$ . We also have,

$$
(K _ {1} + K _ {2}) / 2 = \left[ \begin{array}{c c c} 1 / 2 & 1 / 2 & 0 \\ 1 / 2 & 1 / 2 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 1 \\ 0 & 0 & 1 \end{array} \right].
$$

We will show that  $\frac{1}{2} V^{\pi_{\theta(1)}} + \frac{1}{2} V^{\pi_{\theta(2)}} > V^{\pi_{\left(\theta^{(1)} + \theta^{(2)}\right) / 2}}$ .

We observe the following.

$$
V^{\pi_{\theta^{(1)}}}(s_1) = V^{K_1}(s_1) = (1/4)\cdot (1/4)\cdot r = r/16.
$$

$$
V^{\pi_{\theta^{(2)}}}(s_1) = V^{K_2}(s_1) = (3/4)\cdot (3/4)\cdot r = 9r/16.
$$

where  $V^{K}(s)$  denotes the value obtained by starting from state  $s$  and following the controller matrix  $K$  for all time. On the other hand, we have,

$$
V^{\pi_{\left(\theta^{(1)} + \theta^{(2)}\right)/2}}(s_1) = V^{(K_1 + K_2)/2}(s_1) = (1/2)\cdot (1/2)\cdot r = r/4.
$$

Hence we see that,

$$
\frac{1}{2} V^{\pi_{\theta^{(1)}}} + \frac{1}{2} V^{\pi_{\theta^{(2)}}} = r/32 + 9r/32 = 10r/32 = 1.25\cdot r/4 > r/4 = V^{\pi_{\left(\theta^{(1)} + \theta^{(2)}\right)/2}}.
$$

This shows that  $\theta \mapsto V^{\pi_{\theta}}$  is non-concave, which concludes the proof for direct parameterization.

Remark F.2. For softmax parametrization, we choose the same two controllers  $K_{1}, K_{2}$  as above. Fix some  $\varepsilon \in (0,1)$ ,  $\varepsilon \neq 1/2$ , and set  $\theta^{(1)} = (\log (1 - \varepsilon), \log \varepsilon)^{\mathrm{T}}$  and  $\theta^{(2)} = (\log \varepsilon, \log (1 - \varepsilon))^{\mathrm{T}}$ . A similar calculation using the softmax projection, together with the fact that  $\pi_{\theta}(a|s) = \sum_{m=1}^{M} \pi_{\theta}(m) K_m(s, a)$ , shows that under  $\theta^{(1)}$  we follow the matrix  $(1 - \varepsilon)K_{1} + \varepsilon K_{2}$ , which yields a value of  $(1/4 + \varepsilon/2)^{2}r$ . Under  $\theta^{(2)}$  we follow the matrix  $\varepsilon K_{1} + (1 - \varepsilon)K_{2}$ , which yields a value of  $(3/4 - \varepsilon/2)^{2}r$ . On the other hand,  $(\theta^{(1)} + \theta^{(2)})/2$  amounts to playing the matrix  $(K_{1} + K_{2})/2$ , yielding a value of  $r/4$ , as above. One can easily verify that  $(1/4 + \varepsilon/2)^{2}r + (3/4 - \varepsilon/2)^{2}r > 2\cdot r/4$  for all such  $\varepsilon$ . This shows the non-concavity of  $\theta \mapsto V^{\pi_{\theta}}$  under softmax parameterization.
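The arithmetic in the proof and in Remark F.2 can be verified numerically. The sketch below evaluates the closed form  $V^{\pi}(s_1) = \pi(a_1|s_1)\pi(a_2|s_2)r$  for mixtures of the controllers  $K_1, K_2$  from (7), taking  $r = 1$ :

```python
# Closed form from the proof: V^pi(s1) = pi(a1|s1) * pi(a2|s2) * r, with pi the
# mixture w*K1 + (1-w)*K2 of the controllers in (7). We take r = 1.
def value(w):
    p_a1_s1 = w * 0.25 + (1 - w) * 0.75   # K1(s1,a1) = 1/4, K2(s1,a1) = 3/4
    p_a2_s2 = w * 0.25 + (1 - w) * 0.75   # K1(s2,a2) = 1/4, K2(s2,a2) = 3/4
    return p_a1_s1 * p_a2_s2

# Direct parameterization: chord midpoint vs. value at the midpoint.
lhs = 0.5 * value(1.0) + 0.5 * value(0.0)   # (V^{K1} + V^{K2}) / 2 = 10/32
rhs = value(0.5)                            # V^{(K1+K2)/2} = 1/4
assert lhs > rhs                            # violates concavity

# Softmax parameterization (Remark F.2): weights (1-eps, eps) and (eps, 1-eps).
eps = 0.1
assert value(1 - eps) + value(eps) > 2 * value(0.5)
```

Both assertions exhibit a chord lying strictly above the function at the midpoint, which is exactly the failure of concavity claimed in Lemma F.1.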

# G. Example showing that the value function need not be pointwise (over states) monotone over the improper class

Consider the same MDP as in Sec. F, but with different base controllers. Let the initial state be  $s_1$ .

The two base controllers  $K_{i} \in \mathbb{R}^{S \times A}$ ,  $i = 1, 2$  (where each row is a probability distribution over  $\mathcal{A}$ ) are shown below.

$$
K _ {1} = \left[ \begin{array}{l l l} 1 / 4 & 3 / 4 & 0 \\ 1 / 4 & 3 / 4 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 1 \\ 0 & 0 & 1 \end{array} \right], K _ {2} = \left[ \begin{array}{l l l} 3 / 4 & 1 / 4 & 0 \\ 3 / 4 & 1 / 4 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 1 \\ 0 & 0 & 1 \end{array} \right]. \tag {8}
$$

Let  $\theta^{(1)} = (1,0)^{\mathrm{T}}$  and  $\theta^{(2)} = (0,1)^{\mathrm{T}}$ . Since a nonzero reward is earned only during an  $s_2 \to s_4$  transition, we note for any policy  $\pi$  that  $V^{\pi}(s_1) = \pi(a_1|s_1)\pi(a_2|s_2)r$  and  $V^{\pi}(s_2) = \pi(a_2|s_2)r$ . Note that the optimal policy of this MDP is deterministic, with  $\pi^*(a_1|s_1) = 1$  and  $\pi^*(a_2|s_2) = 1$ ; all transitions are deterministic.

However, notice that the optimal policy (with initial state  $s_1$ ) over mixtures of  $K_1$  and  $K_2$  is a strict mixture: for any mixture weight  $\theta \in [0, 1]$ , i.e.,  $\pi_\theta = (\theta, 1 - \theta)$ , the value of the policy  $\pi_\theta$  is

$$
v ^ {\pi_ {\theta}} = \frac {1}{1 6} (3 - 2 \theta) (1 + 2 \theta) r, \tag {9}
$$

which is maximized at  $\theta = 1 / 2$ . This means that the optimal mixture policy chooses  $K_{1}$  and  $K_{2}$  with probabilities  $(1 / 2, 1 / 2)$ , i.e.,

$$
K ^ {*} = (K _ {1} + K _ {2}) / 2 = \left[ \begin{array}{c c c} 1 / 2 & 1 / 2 & 0 \\ 1 / 2 & 1 / 2 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 1 \\ 0 & 0 & 1 \end{array} \right].
$$

We observe the following.

$$
V ^ {\pi_ {\theta^ {(1)}}} (s _ {1}) = V ^ {K _ {1}} (s _ {1}) = (1 / 4) \cdot (3 / 4) \cdot r = 3 r / 1 6.
$$

$$
V ^ {\pi_ {\theta^ {(2)}}} (s _ {1}) = V ^ {K _ {2}} (s _ {1}) = (3 / 4) \cdot (1 / 4) \cdot r = 3 r / 1 6.
$$

$$
V ^ {\pi_ {\theta^ {(1)}}} (s _ {2}) = V ^ {K _ {1}} (s _ {2}) = (3 / 4) \cdot r = 3 r / 4.
$$

On the other hand we have,

$$
V ^ {\pi_ {\left(\theta^ {(1)} + \theta^ {(2)}\right) / 2}} (s _ {1}) = V ^ {K ^ {*}} (s _ {1}) = (1 / 2) \cdot (1 / 2) \cdot r = r / 4.
$$

$$
V ^ {\pi_ {\left(\theta^ {(1)} + \theta^ {(2)}\right) / 2}} (s _ {2}) = V ^ {K ^ {*}} (s _ {2}) = (1 / 2) \cdot r = r / 2.
$$

We see that  $V^{K^*}(s_1) > \max \{V^{K_1}(s_1), V^{K_2}(s_1)\}$ ; however,  $V^{K^*}(s_2) < V^{K_1}(s_2)$ . This shows that playing according to an improved mixture policy (here, the optimal one given that the initial state is  $s_1$ ) does not necessarily improve the value at every state.
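The observation can be reproduced with a short computation. The sketch below (illustrative; only the  $s_1, s_2$  rows of the controllers matter, and we take  $r = 1$ ) evaluates the closed forms  $V^{\pi}(s_1) = \pi(a_1|s_1)\pi(a_2|s_2)r$  and  $V^{\pi}(s_2) = \pi(a_2|s_2)r$ :

```python
import numpy as np

r = 1.0
K1 = np.array([[0.25, 0.75, 0.0],
               [0.25, 0.75, 0.0]])   # rows: s1, s2 (remaining states omitted)
K2 = np.array([[0.75, 0.25, 0.0],
               [0.75, 0.25, 0.0]])
K_star = (K1 + K2) / 2

def V(K):
    v_s2 = K[1, 1] * r        # V(s2) = pi(a2|s2) * r
    v_s1 = K[0, 0] * v_s2     # V(s1) = pi(a1|s1) * V(s2)
    return v_s1, v_s2

v1, v2, vs = V(K1), V(K2), V(K_star)
assert vs[0] > max(v1[0], v2[0])   # the mixture improves the value at s1 ...
assert vs[1] < v1[1]               # ... but strictly degrades it at s2
```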

# H. Proof details for Bandit-over-bandits

In this section we consider the instructive sub-case  $S = 1$ , which is the Multi-armed Bandit (MAB). We provide regret bounds for two cases: (1) when the value gradient  $\frac{dV^{\pi_{\theta_t}}(\mu)}{d\theta_t}$  (in the gradient update) is available in each round, and (2) when it needs to be estimated.

Note that each controller in this case is a probability distribution over the  $A$  arms of the bandit. We consider the scenario where the agent, at each time  $t \geqslant 1$ , has to choose a probability distribution  $K_{m_t}$  from a set of  $M$  probability distributions over the actions  $\mathcal{A}$ . She then plays an action  $a_t \sim K_{m_t}$ . This differs from the standard MAB because the learner cannot choose actions directly; instead, she chooses from a given set of controllers to play actions. Note that the  $V$  function has no argument, as  $S = 1$ . Let  $\mu \in [0,1]^A$  be the mean vector of the arms  $\mathcal{A}$ . The value function for any given mixture  $\pi \in \mathcal{P}([M])$  is

$$
\begin{array}{l} V ^ {\pi} := \mathbb {E} \left[ \sum_ {t = 0} ^ {\infty} \gamma^ {t} r _ {t} \mid \pi \right] = \sum_ {t = 0} ^ {\infty} \gamma^ {t} \mathbb {E} \left[ r _ {t} \mid \pi \right] \\ = \sum_ {t = 0} ^ {\infty} \gamma^ {t} \sum_ {a \in \mathcal {A}} \sum_ {m = 1} ^ {M} \pi (m) K _ {m} (a) \mu_ {a} \\ = \frac {1}{1 - \gamma} \sum_ {m = 1} ^ {M} \pi_ {m} \mu^ {\mathrm {T}} K _ {m} = \frac {1}{1 - \gamma} \sum_ {m = 1} ^ {M} \pi_ {m} \mathfrak {r} _ {m} ^ {\mu}, \tag {10} \\ \end{array}
$$

where the interpretation of  $\mathfrak{r}_m^\mu$  is the mean reward one obtains if controller  $m$  is chosen at any round  $t$ . Since  $V^{\pi}$  is linear in  $\pi$ , the maximum is attained at one of the base controllers:  $\pi^{*}$  puts mass 1 on  $m^{*}$ , where  $m^{*} := \operatorname{argmax}_{m \in [M]} V^{K_{m}}$  and  $V^{K_{m}}$  is the value obtained by using  $K_{m}$  for all time. In the sequel, we assume  $\Delta_{i} := \mathfrak{r}_{m^{*}}^{\mu} - \mathfrak{r}_{i}^{\mu} > 0$  for all  $i \neq m^{*}$ .
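As a quick illustration of eq. (10) and of the linearity in  $\pi$ , consider the following sketch (the instance  $\gamma$ ,  $\mu$ ,  $K_m$  is made up purely for illustration):

```python
import numpy as np

# V^pi = (1/(1-gamma)) * sum_m pi_m * mu^T K_m, eq. (10), on a toy instance.
gamma = 0.9
mu = np.array([0.2, 0.5, 0.9])            # mean rewards of the A = 3 arms
K = np.array([[0.8, 0.1, 0.1],            # controller 1
              [0.1, 0.1, 0.8]])           # controller 2 (M = 2)
r_frak = K @ mu                           # r^mu_m = mu^T K_m for each m

def value(pi):
    return (pi @ r_frak) / (1 - gamma)

m_star = int(np.argmax(r_frak))           # the optimal pi* puts mass 1 here
# linearity in pi: any strict mixture is dominated by the best controller
assert value(np.eye(2)[m_star]) >= value(np.array([0.5, 0.5]))
```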

# H.1. Proofs for MABs with perfect gradient knowledge

With access to the exact value gradient at each step, we have the following result, when Softmax PG (Algorithm 1) is applied for the bandits-over-bandits case.

Theorem H.1. With  $\eta = \frac{2(1 - \gamma)}{5}$  and  $\theta_m^{(1)} = 1 / M$  for all  $m\in [M]$ , and with access to the true gradient, we have  $\forall t\geqslant 1$ ,

$$
V ^ {\pi^ {*}} - V ^ {\pi_ {\theta_ {t}}} \leqslant \frac {5}{1 - \gamma} \frac {M ^ {2}}{t}.
$$

Also, defining regret for a time horizon of  $T$  rounds as

$$
\mathcal {R} (T) := \sum_ {t = 1} ^ {T} V ^ {\pi^ {*}} - V ^ {\pi_ {\theta_ {t}}}, \tag {11}
$$

we show as a corollary to Thm. H.4 that,

Corollary H.2.

$$
\mathcal {R} (T) \leqslant \min \left\{\frac {5 M ^ {2}}{1 - \gamma} \log T, \sqrt {\frac {5}{1 - \gamma}} M \sqrt {T} \right\}.
$$

Proof. Recall from eq (10) that the value function for any given policy  $\pi \in \mathcal{P}([M])$ , that is, a distribution over the given  $M$  controllers (which are themselves distributions over the actions  $\mathcal{A}$ ), can be simplified as:

$$
V ^ {\pi} = \frac {1}{1 - \gamma} \sum_ {m = 1} ^ {M} \pi_ {m} \mu^ {\mathrm {T}} K _ {m} = \frac {1}{1 - \gamma} \sum_ {m = 1} ^ {M} \pi_ {m} \mathfrak {r} _ {m} ^ {\mu}
$$

where  $\mu$  here is the (unknown) vector of mean rewards of the arms  $\mathcal{A}$ . Here,  $\mathfrak{r}_m^\mu \coloneqq \mu^{\mathrm{T}}K_m$ ,  $m = 1,\dots ,M$ , represents the mean reward obtained by choosing to play controller  $K_{m}$ ,  $m \in [M]$ . For ease of notation, we drop the superscript  $\mu$  in the proofs of this section. We first derive the gradient of the value function w.r.t. the parameter  $\theta$ . Fix an  $m' \in [M]$ :

$$
\frac {\partial}{\partial \theta_ {m ^ {\prime}}} V ^ {\pi_ {\theta}} = \frac {1}{1 - \gamma} \sum_ {m = 1} ^ {M} \frac {\partial \pi_ {\theta} (m)}{\partial \theta_ {m ^ {\prime}}} \mathfrak {r} _ {m} = \frac {1}{1 - \gamma} \sum_ {m = 1} ^ {M} \pi_ {\theta} (m) \left\{\mathbb {I} _ {m m ^ {\prime}} - \pi_ {\theta} (m ^ {\prime}) \right\} \mathfrak {r} _ {m} = \frac {1}{1 - \gamma} \pi_ {\theta} (m ^ {\prime}) \left(\mathfrak {r} _ {m ^ {\prime}} - \pi_ {\theta} ^ {\mathrm {T}} \mathfrak {r}\right). \tag {12}
$$
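Eq. (12) can be sanity-checked against finite differences; the sketch below uses a random instance (the values of  $M$ ,  $\mathfrak{r}$ , and  $\theta$  are illustrative):

```python
import numpy as np

# Finite-difference check of eq. (12):
# grad_{m'} V = (1/(1-gamma)) * pi(m') * (r(m') - pi^T r).
gamma, M = 0.9, 4
rng = np.random.default_rng(0)
r_frak = rng.uniform(size=M)              # mean reward of each controller
theta = rng.normal(size=M)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def V(th):
    return softmax(th) @ r_frak / (1 - gamma)

pi = softmax(theta)
grad_closed = pi * (r_frak - pi @ r_frak) / (1 - gamma)

eps = 1e-6
grad_fd = np.array([(V(theta + eps * np.eye(M)[j]) - V(theta - eps * np.eye(M)[j])) / (2 * eps)
                    for j in range(M)])
assert np.allclose(grad_closed, grad_fd, atol=1e-5)
```

Note that the gradient coordinates sum to zero, as must be the case for any softmax parametrization.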

Next we show that  $V^{\pi}$  is  $\beta$ -smooth. A function  $f:\mathbb{R}^{M}\to \mathbb{R}$  is  $\beta$ -smooth if, for all  $\theta ',\theta \in \mathbb{R}^{M}$ ,

$$
\left| f (\theta^ {\prime}) - f (\theta) - \left\langle \frac {d}{d \theta} f (\theta), \theta^ {\prime} - \theta \right\rangle \right| \leqslant \frac {\beta}{2} \| \theta^ {\prime} - \theta \| _ {2} ^ {2}.
$$

Let  $S \coloneqq \frac{d^2}{d\theta^2} V^{\pi_\theta}$ . This is a matrix of size  $M \times M$ . Let  $1 \leqslant i, j \leqslant M$ .

$$
\begin{array}{l} S _ {i, j} = \left(\frac {d}{d \theta} \left(\frac {d}{d \theta} V ^ {\pi_ {\theta}}\right)\right) _ {i, j} \qquad (13) \\ = \frac {1}{1 - \gamma} \frac {d \left(\pi_ {\theta} (i) \left(\mathfrak {r} (i) - \pi_ {\theta} ^ {\mathrm {T}} \mathfrak {r}\right)\right)}{d \theta_ {j}} \qquad (14) \\ = \frac {1}{1 - \gamma} \left(\frac {d \pi_ {\theta} (i)}{d \theta_ {j}} \left(\mathfrak {r} (i) - \pi_ {\theta} ^ {\mathrm {T}} \mathfrak {r}\right) + \pi_ {\theta} (i) \frac {d \left(\mathfrak {r} (i) - \pi_ {\theta} ^ {\mathrm {T}} \mathfrak {r}\right)}{d \theta_ {j}}\right) \qquad (15) \\ = \frac {1}{1 - \gamma} \left(\mathbb {I} _ {i j} \, \pi_ {\theta} (i) (\mathfrak {r} (i) - \pi_ {\theta} ^ {\mathrm {T}} \mathfrak {r}) - \pi_ {\theta} (i) \pi_ {\theta} (j) (\mathfrak {r} (i) - \pi_ {\theta} ^ {\mathrm {T}} \mathfrak {r}) - \pi_ {\theta} (i) \pi_ {\theta} (j) (\mathfrak {r} (j) - \pi_ {\theta} ^ {\mathrm {T}} \mathfrak {r})\right). \qquad (16) \\ \end{array}
$$

Next, let  $y \in \mathbb{R}^M$ ,

$$
\begin{array}{l} \left| y ^ {\mathrm {T}} S y \right| = \left| \sum_ {i = 1} ^ {M} \sum_ {j = 1} ^ {M} S _ {i j} y (i) y (j) \right| \\ = \frac {1}{1 - \gamma} \left| \sum_ {i = 1} ^ {M} \sum_ {j = 1} ^ {M} \left(\mathbb {I} _ {i j} \, \pi_ {\theta} (i) (\mathfrak {r} (i) - \pi_ {\theta} ^ {\mathrm {T}} \mathfrak {r}) - \pi_ {\theta} (i) \pi_ {\theta} (j) (\mathfrak {r} (i) - \pi_ {\theta} ^ {\mathrm {T}} \mathfrak {r}) - \pi_ {\theta} (i) \pi_ {\theta} (j) (\mathfrak {r} (j) - \pi_ {\theta} ^ {\mathrm {T}} \mathfrak {r})\right) y (i) y (j) \right| \\ = \frac {1}{1 - \gamma} \left| \sum_ {i = 1} ^ {M} \pi_ {\theta} (i) (\mathfrak {r} (i) - \pi_ {\theta} ^ {\mathrm {T}} \mathfrak {r}) y (i) ^ {2} - 2 \sum_ {i = 1} ^ {M} \sum_ {j = 1} ^ {M} \pi_ {\theta} (i) \pi_ {\theta} (j) (\mathfrak {r} (i) - \pi_ {\theta} ^ {\mathrm {T}} \mathfrak {r}) y (i) y (j) \right| \\ = \frac {1}{1 - \gamma} \left| \sum_ {i = 1} ^ {M} \pi_ {\theta} (i) (\mathfrak {r} (i) - \pi_ {\theta} ^ {\mathrm {T}} \mathfrak {r}) y (i) ^ {2} - 2 \sum_ {i = 1} ^ {M} \pi_ {\theta} (i) (\mathfrak {r} (i) - \pi_ {\theta} ^ {\mathrm {T}} \mathfrak {r}) y (i) \sum_ {j = 1} ^ {M} \pi_ {\theta} (j) y (j) \right| \\ \leqslant \frac {1}{1 - \gamma} \left| \sum_ {i = 1} ^ {M} \pi_ {\theta} (i) (\mathfrak {r} (i) - \pi_ {\theta} ^ {\mathrm {T}} \mathfrak {r}) y (i) ^ {2} \right| + \frac {2}{1 - \gamma} \left| \sum_ {i = 1} ^ {M} \pi_ {\theta} (i) (\mathfrak {r} (i) - \pi_ {\theta} ^ {\mathrm {T}} \mathfrak {r}) y (i) \sum_ {j = 1} ^ {M} \pi_ {\theta} (j) y (j) \right| \\ \leqslant \frac {1}{1 - \gamma} \left\| \pi_ {\theta} \odot (\mathfrak {r} - \pi_ {\theta} ^ {\mathrm {T}} \mathfrak {r}) \right\| _ {\infty} \| y \odot y \| _ {1} + \frac {2}{1 - \gamma} \left\| \pi_ {\theta} \odot (\mathfrak {r} - \pi_ {\theta} ^ {\mathrm {T}} \mathfrak {r}) \right\| _ {1} \cdot \| y \| _ {\infty} \cdot \| \pi_ {\theta} \| _ {1} \cdot \| y \| _ {\infty}. \\ \end{array}
$$

The last inequality is by Hölder's inequality, using that rewards are bounded in [0,1]. We observe that,

$$
\begin{array}{l} \left\| \pi_ {\theta} \odot (\mathfrak {r} - \pi_ {\theta} ^ {\mathrm {T}} \mathfrak {r}) \right\| _ {1} = \sum_ {i = 1} ^ {M} \left| \pi_ {\theta} (i) (\mathfrak {r} (i) - \pi_ {\theta} ^ {\mathrm {T}} \mathfrak {r}) \right| \\ = \sum_ {i = 1} ^ {M} \pi_ {\theta} (i) | \mathfrak {r} (i) - \pi_ {\theta} ^ {\mathrm {T}} \mathfrak {r} | \\ \leqslant \max  _ {i = 1, \dots , M} \left| \mathfrak {r} (i) - \pi_ {\theta} ^ {\mathrm {T}} \mathfrak {r} \right| \leqslant 1. \\ \end{array}
$$

Next, for any  $i \in [M]$ ,

$$
\begin{array}{l} \left| \pi_ {\theta} (i) (\mathfrak {r} (i) - \pi_ {\theta} ^ {\mathrm {T}} \mathfrak {r}) \right| = \left| \pi_ {\theta} (i) \mathfrak {r} (i) - \pi_ {\theta} (i) ^ {2} \mathfrak {r} (i) - \sum_ {j \neq i} \pi_ {\theta} (i) \pi_ {\theta} (j) \mathfrak {r} (j) \right| \\ \leqslant \pi_ {\theta} (i) \left(1 - \pi_ {\theta} (i)\right) + \pi_ {\theta} (i) \left(1 - \pi_ {\theta} (i)\right) \leqslant 2 \cdot 1 / 4 = 1 / 2. \\ \end{array}
$$

Combining the above two inequalities with the fact that  $\| \pi_{\theta}\| _1 = 1$  and  $\| y\|_{\infty}\leqslant \| y\|_{2}$ , we get,

$$
\left| y ^ {\mathrm {T}} S y \right| \leqslant \frac {1}{1 - \gamma} \left\| \pi_ {\theta} \odot (\mathfrak {r} - \pi_ {\theta} ^ {\mathrm {T}} \mathfrak {r}) \right\| _ {\infty} \| y \odot y \| _ {1} + \frac {2}{1 - \gamma} \left\| \pi_ {\theta} \odot (\mathfrak {r} - \pi_ {\theta} ^ {\mathrm {T}} \mathfrak {r}) \right\| _ {1} \cdot \| y \| _ {\infty} \cdot \| \pi_ {\theta} \| _ {1} \cdot \| y \| _ {\infty} \leqslant \frac {1}{1 - \gamma} (1 / 2 + 2) \| y \| _ {2} ^ {2}.
$$

Hence  $V^{\pi_{\theta}}$  is  $\beta$ -smooth with  $\beta = \frac{5}{2(1 - \gamma)}$ .
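The smoothness constant can be spot-checked numerically; the sketch below estimates the Hessian of  $\theta \mapsto V^{\pi_\theta}$  by finite differences of the closed-form gradient (eq. (12)) at random points and compares its spectral norm to  $\beta$  (the instance  $\gamma$ ,  $M$ ,  $\mathfrak{r}$  is illustrative):

```python
import numpy as np

# Spot-check of beta = 5/(2(1-gamma)): the spectral norm of the Hessian
# of theta -> V^{pi_theta} should stay below beta.
gamma, M = 0.9, 5
rng = np.random.default_rng(1)
r_frak = rng.uniform(size=M)
beta = 5 / (2 * (1 - gamma))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def grad(th):
    pi = softmax(th)
    return pi * (r_frak - pi @ r_frak) / (1 - gamma)   # eq. (12)

eps = 1e-5
norms = []
for _ in range(20):                      # random parameter points
    th = rng.normal(size=M)
    H = np.array([(grad(th + eps * np.eye(M)[j]) - grad(th - eps * np.eye(M)[j])) / (2 * eps)
                  for j in range(M)])
    norms.append(np.linalg.norm(H, 2))

assert max(norms) <= beta
```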

We establish a lower bound on the norm of the gradient of the value function at every step  $t$ , as below (inequalities of this type are called Łojasiewicz inequalities (Łojasiewicz, 1963)).

Lemma H.3. [Lower bound on norm of gradient]

$$
\left\| \frac {\partial V ^ {\pi_ {\theta}}}{\partial \theta} \right\| _ {2} \geqslant \pi_ {\theta} (m ^ {*}) \left(V ^ {\pi^ {*}} - V ^ {\pi_ {\theta}}\right).
$$

Proof of Lemma H.3. Recall the simplification of the gradient of  $V^{\pi}$ , i.e., eq (12):

$$
\begin{array}{l} \frac {\partial}{\partial \theta_ {m}} V ^ {\pi_ {\theta}} = \frac {1}{1 - \gamma} \sum_ {m ^ {\prime} = 1} ^ {M} \pi_ {\theta} (m ^ {\prime}) \left\{\mathbb {I} _ {m m ^ {\prime}} - \pi_ {\theta} (m) \right\} \mathfrak {r} _ {m ^ {\prime}} \\ = \frac {1}{1 - \gamma} \pi (m) \left(\mathfrak {r} (m) - \pi^ {\mathrm {T}} \mathfrak {r}\right). \\ \end{array}
$$

Taking norms on both sides,

$$
\begin{array}{l} \left\| \frac {\partial}{\partial \theta} V ^ {\pi_ {\theta}} \right\| = \frac {1}{1 - \gamma} \sqrt {\sum_ {m = 1} ^ {M} (\pi (m)) ^ {2} \left(\mathfrak {r} (m) - \pi^ {\mathrm {T}} \mathfrak {r}\right) ^ {2}} \\ \geqslant \frac {1}{1 - \gamma} \sqrt {\left(\pi \left(m ^ {*}\right)\right) ^ {2} \left(\mathfrak {r} \left(m ^ {*}\right) - \pi^ {\mathrm {T}} \mathfrak {r}\right) ^ {2}} \\ = \frac {1}{1 - \gamma} \left(\pi \left(m ^ {*}\right)\right) \left(\mathfrak {r} \left(m ^ {*}\right) - \pi^ {\mathrm {T}} \mathfrak {r}\right) \\ = \frac {1}{1 - \gamma} \left(\pi \left(m ^ {*}\right)\right) \left(\pi^ {*} - \pi\right) ^ {\mathrm {T}} \mathfrak {r} \\ = \left(\pi (m ^ {*})\right) \left[ V ^ {\pi^ {*}} - V ^ {\pi_ {\theta}} \right], \\ \end{array}
$$

where  $\pi^{*} = e_{m^{*}}$ .


We will now prove Theorem H.4 and Corollary H.2. We restate the theorem here.

Theorem H.4. With  $\eta = \frac{2(1 - \gamma)}{5}$  and  $\theta_m^{(1)} = 1 / M$  for all  $m\in [M]$ , and with access to the true gradient, we have  $\forall t\geqslant 1$ ,

$$
V ^ {\pi^ {*}} - V ^ {\pi_ {\theta_ {t}}} \leqslant \frac {5}{1 - \gamma} \frac {M ^ {2}}{t}.
$$

Proof. First, note that since  $V^{\pi}$  is smooth we have:

$$
\begin{array}{l} V ^ {\pi_ {\theta_ {t}}} - V ^ {\pi_ {\theta_ {t + 1}}} \leqslant - \left\langle \frac {d}{d \theta_ {t}} V ^ {\pi_ {\theta_ {t}}}, \theta_ {t + 1} - \theta_ {t} \right\rangle + \frac {5}{4 (1 - \gamma)} \| \theta_ {t + 1} - \theta_ {t} \| _ {2} ^ {2} \\ = - \eta \left\| \frac {d}{d \theta_ {t}} V ^ {\pi_ {\theta_ {t}}} \right\| _ {2} ^ {2} + \frac {5}{4 (1 - \gamma)} \eta^ {2} \left\| \frac {d}{d \theta_ {t}} V ^ {\pi_ {\theta_ {t}}} \right\| _ {2} ^ {2} \\ = \left\| \frac {d}{d \theta_ {t}} V ^ {\pi_ {\theta_ {t}}} \right\| _ {2} ^ {2} \left(\frac {5 \eta^ {2}}{4 (1 - \gamma)} - \eta\right) \\ = - \left(\frac {1 - \gamma}{5}\right) \left\| \frac {d}{d \theta_ {t}} V ^ {\pi_ {\theta_ {t}}} \right\| _ {2} ^ {2} \\ \leqslant - \left(\frac {1 - \gamma}{5}\right) \left(\pi_ {\theta_ {t}} \left(m ^ {*}\right)\right) ^ {2} \left[ V ^ {\pi^ {*}} - V ^ {\pi_ {\theta_ {t}}} \right] ^ {2} \quad \text {(Lemma H.3)} \\ \leqslant - \left(\frac {1 - \gamma}{5}\right) (\underbrace {\inf _ {1 \leqslant s \leqslant t} \pi_ {\theta_ {s}} (m ^ {*})} _ {=: c _ {t}}) ^ {2} \left[ V ^ {\pi^ {*}} - V ^ {\pi_ {\theta_ {t}}} \right] ^ {2}. \\ \end{array}
$$

The first inequality is by smoothness; the first equality is by the update equation in Algorithm 1.

Next, let  $\delta_t \coloneqq V^{\pi^*} - V^{\pi_{\theta_t}}$ . We have,

$$
\delta_ {t + 1} - \delta_ {t} \leqslant - \frac {(1 - \gamma)}{5} c _ {t} ^ {2} \delta_ {t} ^ {2}. \tag {17}
$$

Claim:  $\forall t\geqslant 1,\delta_t\leqslant \frac{5}{c_t^2(1 - \gamma)}\frac{1}{t}.$

We prove the claim by using induction on  $t \geqslant 1$ .

Base case. Since  $\delta_t \leqslant \frac{1}{1 - \gamma}$ , the claim is true for all  $t \leqslant 5$ .

Induction step: Let  $\varphi_t \coloneqq \frac{5}{c_t^2(1 - \gamma)}$ . Fix a  $t \geqslant 2$ , assume  $\delta_t \leqslant \frac{\varphi_t}{t}$ .

Let  $g: \mathbb{R} \to \mathbb{R}$  be the function  $g(x) = x - \frac{1}{\varphi_t} x^2$ . One can verify easily that  $g$  is monotonically increasing on  $\left[0, \frac{\varphi_t}{2}\right]$ , and that  $\delta_t \leqslant \varphi_t / t \leqslant \varphi_t / 2$  for  $t \geqslant 2$ . Next, with equation (17), we have

$$
\begin{array}{l} \delta_ {t + 1} \leqslant \delta_ {t} - \frac {1}{\varphi_ {t}} \delta_ {t} ^ {2} \\ = g \left(\delta_ {t}\right) \\ \leqslant g \left(\frac {\varphi_ {t}}{t}\right) \\ = \frac {\varphi_ {t}}{t} - \frac {\varphi_ {t}}{t ^ {2}} \\ = \varphi_ {t} \left(\frac {1}{t} - \frac {1}{t ^ {2}}\right) \\ \leqslant \varphi_ {t} \left(\frac {1}{t + 1}\right). \\ \end{array}
$$

This completes the proof of the claim; note that  $\varphi_{t+1} \geqslant \varphi_t$  since  $c_t$  is non-increasing in  $t$ , so the bound also holds with  $\varphi_{t+1}$ . We will show that  $c_{t} \geqslant 1 / M$  in the next lemma. We first complete the proof of the corollary assuming this.
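The induction claim can be sanity-checked by iterating the worst case of eq. (17) directly (the value of  $\varphi$  below is illustrative, standing in for  $5/(c_t^2(1-\gamma))$ ; it only needs to dominate  $\delta_1$ ):

```python
# Iterate the worst case delta_{t+1} = delta_t - delta_t^2 / phi of eq. (17)
# and check the claimed bound delta_t <= phi / t along the way.
phi = 50.0        # stands in for 5 / (c_t^2 (1 - gamma)); illustrative
delta = 1.0       # delta_1 <= 1/(1 - gamma) <= phi, as in the proof
for t in range(1, 10_000):
    assert delta <= phi / t
    delta = delta - delta ** 2 / phi
```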

We fix a  $T \geqslant 1$ . Observe that,  $\delta_t \leqslant \frac{5}{(1 - \gamma)c_t^2} \frac{1}{t} \leqslant \frac{5}{(1 - \gamma)c_T^2} \frac{1}{t}$ .

$$
\sum_ {t = 1} ^ {T} V ^ {\pi^ {*}} - V ^ {\pi_ {\theta_ {t}}} = \frac {1}{1 - \gamma} \sum_ {t = 1} ^ {T} (\pi^ {*} - \pi_ {\theta_ {t}}) ^ {\mathrm {T}} \mathfrak {r} \leqslant \frac {5 \log T}{(1 - \gamma) c _ {T} ^ {2}} + 1.
$$

Also we have that,

$$
\sum_ {t = 1} ^ {T} V ^ {\pi^ {*}} - V ^ {\pi_ {\theta_ {t}}} = \sum_ {t = 1} ^ {T} \delta_ {t} \leqslant \sqrt {T} \sqrt {\sum_ {t = 1} ^ {T} \delta_ {t} ^ {2}} \leqslant \sqrt {T} \sqrt {\sum_ {t = 1} ^ {T} \frac {5}{(1 - \gamma) c _ {T} ^ {2}} (\delta_ {t} - \delta_ {t + 1})} \leqslant \frac {1}{c _ {T}} \sqrt {\frac {5 T}{(1 - \gamma)}}.
$$

We next show that with  $\theta_m^{(1)} = 1 / M, \forall m$ , i.e., uniform initialization,  $\inf_{t \geqslant 1} c_t = 1 / M$ , which will then complete the proof of Theorem H.4 and of corollary H.2.

Lemma H.5. We have  $\inf_{t\geqslant 1}\pi_{\theta_t}(m^*) > 0$ . Furthermore, with uniform initialization of the parameters  $\theta_m^{(1)}$ , i.e.,  $1 / M$ ,  $\forall m\in [M]$ , we have  $\inf_{t\geqslant 1}\pi_{\theta_t}(m^*) = \frac{1}{M}$ .

Proof. We will show that there exists  $t_0$  such that  $\inf_{t \geqslant 1} \pi_{\theta_t}(m^*) = \min_{1 \leqslant t \leqslant t_0} \pi_{\theta_t}(m^*)$ , where  $t_0 = \min \{t : \pi_{\theta_t}(m^*) \geqslant C\}$ . We define the following sets.

$$
\mathcal {S} _ {1} = \left\{\theta : \frac {d V ^ {\pi_ {\theta}}}{d \theta_ {m ^ {*}}} \geqslant \frac {d V ^ {\pi_ {\theta}}}{d \theta_ {m}}, \forall m \neq m ^ {*} \right\}
$$

$$
\mathcal {S} _ {2} = \left\{\theta : \pi_ {\theta} \left(m ^ {*}\right) \geqslant \pi_ {\theta} (m), \forall m \neq m ^ {*} \right\}
$$

$$
\mathcal {S} _ {3} = \left\{\theta : \pi_ {\theta} \left(m ^ {*}\right) \geqslant C \right\}
$$

Note that  $S_{3}$  depends on the choice of  $C$ . Let  $C \coloneqq \frac{M - \Delta}{M + \Delta}$ . We claim the following:

Claim 2.  $(i)\ \theta_{t}\in \mathcal{S}_{1}\Rightarrow \theta_{t + 1}\in \mathcal{S}_{1}$ , and  $(ii)\ \theta_t\in \mathcal{S}_1\Rightarrow \pi_{\theta_{t + 1}}(m^*)\geqslant \pi_{\theta_t}(m^*)$ .

Proof of Claim 2. (i) Fix an  $m \neq m^*$ . We will show that if  $\frac{dV^{\pi_{\theta_t}}}{d\theta_t(m^*)} \geqslant \frac{dV^{\pi_{\theta_t}}}{d\theta_t(m)}$ , then  $\frac{dV^{\pi_{\theta_{t+1}}}}{d\theta_{t+1}(m^*)} \geqslant \frac{dV^{\pi_{\theta_{t+1}}}}{d\theta_{t+1}(m)}$ . This will prove the first part.

Case (a):  $\pi_{\theta_t}(m^*) \geqslant \pi_{\theta_t}(m)$ . This implies, by the softmax property, that  $\theta_t(m^*) \geqslant \theta_t(m)$ . After gradient ascent update step we have:

$$
\begin{array}{l} \theta_ {t + 1} (m ^ {*}) = \theta_ {t} (m ^ {*}) + \eta \frac {d V ^ {\pi_ {\theta_ {t}}}}{d \theta_ {t} (m ^ {*})} \\ \geqslant \theta_ {t} (m) + \eta \frac {d V ^ {\pi_ {\theta_ {t}}}}{d \theta_ {t} (m)} \\ = \theta_ {t + 1} (m). \\ \end{array}
$$

This implies that  $\pi_{\theta_{t + 1}}(m^{*})\geqslant \pi_{\theta_{t + 1}}(m)$ . By the expression for the derivative of  $V^{\pi_{\theta}}$  (see eq (12)), and using  $\mathfrak{r}(m^*) \geqslant \mathfrak{r}(m)$ ,

$$
\begin{array}{l} \frac {d V ^ {\pi_ {\theta_ {t + 1}}}}{d \theta_ {t + 1} (m ^ {*})} = \frac {1}{1 - \gamma} \pi_ {\theta_ {t + 1}} (m ^ {*}) (\mathfrak {r} (m ^ {*}) - \pi_ {\theta_ {t + 1}} ^ {\mathrm {T}} \mathfrak {r}) \\ \geqslant \frac {1}{1 - \gamma} \pi_ {\theta_ {t + 1}} (m) (\mathfrak {r} (m) - \pi_ {\theta_ {t + 1}} ^ {\mathrm {T}} \mathfrak {r}) \\ = \frac {d V ^ {\pi_ {\theta_ {t + 1}}}}{d \theta_ {t + 1} (m)}. \\ \end{array}
$$

This implies  $\theta_{t + 1}\in \mathcal{S}_1$ .

Case (b):  $\pi_{\theta_t}(m^*) < \pi_{\theta_t}(m)$ . We first note the following equivalence:

$$
\frac {d V ^ {\pi_ {\theta}}}{d \theta (m ^ {*})} \geqslant \frac {d V ^ {\pi_ {\theta}}}{d \theta (m)} \iff \mathfrak {r} (m ^ {*}) - \mathfrak {r} (m) \geqslant \left(1 - \frac {\pi_ {\theta} (m ^ {*})}{\pi_ {\theta} (m)}\right) \left(\mathfrak {r} (m ^ {*}) - \pi_ {\theta} ^ {\mathrm {T}} \mathfrak {r}\right),
$$

which follows by dividing both sides of the first inequality by  $\frac{1}{1-\gamma}\pi_\theta(m)$  and rearranging. Since  $\frac{\pi_\theta(m^*)}{\pi_\theta(m)} = \exp(\theta(m^*) - \theta(m))$  under the softmax parametrization, the condition at time  $t$  reads:

$$
\mathfrak {r} (m ^ {*}) - \mathfrak {r} (m) \geqslant \left(1 - \exp \left(\theta_ {t} (m ^ {*}) - \theta_ {t} (m)\right)\right) \left(\mathfrak {r} (m ^ {*}) - \pi_ {\theta_ {t}} ^ {\mathrm {T}} \mathfrak {r}\right).
$$

By lemma I.10, we have that  $V^{\pi_{\theta_{t + 1}}} \geqslant V^{\pi_{\theta_t}} \Rightarrow \pi_{\theta_{t + 1}}^{\mathrm{T}}\mathfrak{r} \geqslant \pi_{\theta_t}^{\mathrm{T}}\mathfrak{r}$ . Hence,

$$
0 <   \mathfrak {r} (m ^ {*}) - \pi_ {\theta_ {t + 1}} ^ {\mathrm {T}} \mathfrak {r} \leqslant \mathfrak {r} (m ^ {*}) - \pi_ {\theta_ {t}} ^ {\mathrm {T}} \mathfrak {r}.
$$

Also, we note:

$$
\theta_ {t + 1} \left(m ^ {*}\right) - \theta_ {t + 1} (m) = \theta_ {t} \left(m ^ {*}\right) + \eta \frac {d V ^ {\pi_ {\theta_ {t}}}}{d \theta_ {t} \left(m ^ {*}\right)} - \theta_ {t} (m) - \eta \frac {d V ^ {\pi_ {\theta_ {t}}}}{d \theta_ {t} (m)} \geqslant \theta_ {t} \left(m ^ {*}\right) - \theta_ {t} (m).
$$

This implies,  $1 - \exp \left(\theta_{t + 1}(m^{*}) - \theta_{t + 1}(m)\right) \leqslant 1 - \exp \left(\theta_{t}(m^{*}) - \theta_{t}(m)\right)$ .

Next, we observe that by the assumption  $\pi_t(m^*) < \pi_t(m)$ , we have

$$
1 - \exp \left(\theta_ {t} (m ^ {*}) - \theta_ {t} (m)\right) = 1 - \frac {\pi_ {t} (m ^ {*})}{\pi_ {t} (m)} > 0.
$$

Hence we have,

$$
\begin{array}{l} \left(1 - \exp \left(\theta_ {t + 1} \left(m ^ {*}\right) - \theta_ {t + 1} (m)\right)\right) \left(\mathfrak {r} \left(m ^ {*}\right) - \pi_ {\theta_ {t + 1}} ^ {\mathrm {T}} \mathfrak {r}\right) \leqslant \left(1 - \exp \left(\theta_ {t} \left(m ^ {*}\right) - \theta_ {t} (m)\right)\right) \left(\mathfrak {r} \left(m ^ {*}\right) - \pi_ {\theta_ {t}} ^ {\mathrm {T}} \mathfrak {r}\right) \\ \leqslant \mathfrak {r} (m ^ {*}) - \mathfrak {r} (m). \\ \end{array}
$$

Equivalently,

$$
\left(1 - \frac {\pi_ {t + 1} (m ^ {*})}{\pi_ {t + 1} (m)}\right) (\mathfrak {r} (m ^ {*}) - \pi_ {t + 1} ^ {\mathrm {T}} \mathfrak {r}) \leqslant \mathfrak {r} (m ^ {*}) - \mathfrak {r} (m).
$$

This finishes the proof of Claim 2(i).

(ii) Let  $\theta_t \in S_1$ . We observe that:

$$
\begin{array}{l} \pi_ {t + 1} (m ^ {*}) = \frac {\exp (\theta_ {t + 1} (m ^ {*}))}{\sum_ {m = 1} ^ {M} \exp (\theta_ {t + 1} (m))} \\ = \frac {\exp \left(\theta_ {t} \left(m ^ {*}\right) + \eta \frac {d V ^ {\pi_ {t}}}{d \theta_ {t} \left(m ^ {*}\right)}\right)}{\sum_ {m = 1} ^ {M} \exp \left(\theta_ {t} (m) + \eta \frac {d V ^ {\pi_ {t}}}{d \theta_ {t} (m)}\right)} \\ \geqslant \frac {\exp \left(\theta_ {t} \left(m ^ {*}\right) + \eta \frac {d V ^ {\pi_ {t}}}{d \theta_ {t} \left(m ^ {*}\right)}\right)}{\sum_ {m = 1} ^ {M} \exp \left(\theta_ {t} (m) + \eta \frac {d V ^ {\pi_ {t}}}{d \theta_ {t} \left(m ^ {*}\right)}\right)} \\ = \frac {\exp \left(\theta_ {t} \left(m ^ {*}\right)\right)}{\sum_ {m = 1} ^ {M} \exp \left(\theta_ {t} (m)\right)} = \pi_ {t} \left(m ^ {*}\right) \\ \end{array}
$$

This completes the proof of Claim 2(ii).

Claim 3.  $\mathcal{S}_{2}\subset \mathcal{S}_{1}$  and  $\mathcal{S}_{3}\subset \mathcal{S}_{1}$ .

Proof. To show that  $\mathcal{S}_{2} \subset \mathcal{S}_{1}$ , let  $\theta \in \mathcal{S}_{2}$ . We have  $\pi_{\theta}(m^{*}) \geqslant \pi_{\theta}(m)$  for all  $m \neq m^{*}$ .

$$
\begin{array}{l} \frac {d V ^ {\pi_ {\theta}}}{d \theta (m ^ {*})} = \frac {1}{1 - \gamma} \pi_ {\theta} (m ^ {*}) (\mathfrak {r} (m ^ {*}) - \pi_ {\theta} ^ {\mathrm {T}} \mathfrak {r}) \\ \geqslant \frac {1}{1 - \gamma} \pi_ {\theta} (m) (\mathfrak {r} (m) - \pi_ {\theta} ^ {\mathrm {T}} \mathfrak {r}) \\ = \frac {d V ^ {\pi_ {\theta}}}{d \theta (m)}. \\ \end{array}
$$

This shows that  $\theta \in \mathcal{S}_1$ . For the second part of the claim, we may assume  $\theta \in \mathcal{S}_3 \cap \mathcal{S}_2^c$ , because if  $\theta \in \mathcal{S}_2$  we are done. Let  $m \neq m^*$ . We have,

$$
\begin{array}{l} \frac {d V ^ {\pi_ {\theta}}}{d \theta (m ^ {*})} - \frac {d V ^ {\pi_ {\theta}}}{d \theta (m)} = \frac {1}{1 - \gamma} \left(\pi_ {\theta} (m ^ {*}) (\mathfrak {r} (m ^ {*}) - \pi_ {\theta} ^ {\mathrm {T}} \mathfrak {r}) - \pi_ {\theta} (m) (\mathfrak {r} (m) - \pi_ {\theta} ^ {\mathrm {T}} \mathfrak {r})\right) \\ = \frac {1}{1 - \gamma} \left(2 \pi_ {\theta} (m ^ {*}) (\mathfrak {r} (m ^ {*}) - \pi_ {\theta} ^ {\mathrm {T}} \mathfrak {r}) + \sum_ {i \neq m ^ {*} , m} ^ {M} \pi_ {\theta} (i) (\mathfrak {r} (i) - \pi_ {\theta} ^ {\mathrm {T}} \mathfrak {r})\right) \\ = \frac {1}{1 - \gamma} \left(\left(2 \pi_ {\theta} (m ^ {*}) + \sum_ {i \neq m ^ {*} , m} ^ {M} \pi_ {\theta} (i)\right) \left(\mathfrak {r} (m ^ {*}) - \pi_ {\theta} ^ {\mathrm {T}} \mathfrak {r}\right) - \sum_ {i \neq m ^ {*} , m} ^ {M} \pi_ {\theta} (i) (\mathfrak {r} (m ^ {*}) - \mathfrak {r} (i))\right) \\ \geqslant \frac {1}{1 - \gamma} \left(\left(2 \pi_ {\theta} (m ^ {*}) + \sum_ {i \neq m ^ {*}, m} ^ {M} \pi_ {\theta} (i)\right) (\mathfrak {r} (m ^ {*}) - \pi_ {\theta} ^ {\mathrm {T}} \mathfrak {r}) - \sum_ {i \neq m ^ {*}, m} ^ {M} \pi_ {\theta} (i)\right) \\ \geqslant \frac {1}{1 - \gamma} \left(\left(2 \pi_ {\theta} (m ^ {*}) + \sum_ {i \neq m ^ {*}, m} ^ {M} \pi_ {\theta} (i)\right) \frac {\Delta}{M} - \sum_ {i \neq m ^ {*}, m} ^ {M} \pi_ {\theta} (i)\right). \\ \end{array}
$$

Observe that,  $\sum_{i\neq m^{*},m}^{M}\pi_{\theta}(i) = 1 - \pi (m^{*}) - \pi (m)$ . Using this and rearranging we get,

$$
\frac {d V ^ {\pi_ {\theta}}}{d \theta (m ^ {*})} - \frac {d V ^ {\pi_ {\theta}}}{d \theta (m)} \geqslant \frac {1}{1 - \gamma} \left(\pi (m ^ {*}) \left(1 + \frac {\Delta}{M}\right) - \left(1 - \frac {\Delta}{M}\right) + \pi (m) \left(1 - \frac {\Delta}{M}\right)\right) \geqslant \frac {1}{1 - \gamma} \pi (m) \left(1 - \frac {\Delta}{M}\right) \geqslant 0.
$$

The last inequality follows because  $\theta \in S_3$  and the choice of  $C$ . This completes the proof of Claim 3.

Claim 4. There exists a finite  $t_0$ , such that  $\theta_{t_0} \in S_3$ .

Proof. The proof of this claim relies on the asymptotic convergence result of (Agarwal et al., 2020a). We note that their convergence result holds for our choice of  $\eta = \frac{2(1 - \gamma)}{5}$ . As noted in (Mei et al., 2020), the choice of  $\eta$  is used to justify the gradient ascent lemma I.10. Hence we have  $\pi_{\theta_t}(m^*) \to 1$  as  $t \to \infty$ . Therefore, there exists a finite  $t_0$  such that  $\pi_{\theta_{t_0}}(m^*) \geqslant C$ , and hence  $\theta_{t_0} \in \mathcal{S}_3$ .

This completes the proof that there exists a  $t_0$  such that  $\inf_{t\geqslant 1}\pi_{\theta_t}(m^*) = \min_{1\leqslant t\leqslant t_0}\pi_{\theta_t}(m^*)$ : once  $\theta_t\in \mathcal{S}_3$ , by Claim 3 we have  $\theta_t\in \mathcal{S}_1$ ; further, by Claim 2,  $\theta_t\in \mathcal{S}_1$  for all  $t\geqslant t_0$ , and  $\pi_{\theta_t}(m^*)$  is non-decreasing after  $t_0$ .

With uniform initialization,  $\theta_{1}(m^{*}) = \frac{1}{M} = \theta_{1}(m)$  for all  $m \neq m^{*}$ . Hence  $\pi_{\theta_1}(m^*) \geqslant \pi_{\theta_1}(m)$  for all  $m \neq m^{*}$ . This implies  $\theta_{1} \in \mathcal{S}_{2}$ , which implies  $\theta_{1} \in \mathcal{S}_{1}$ . As established in Claim 2,  $\mathcal{S}_{1}$  remains invariant under gradient ascent updates, implying  $t_0 = 1$ . Hence we have that  $\inf_{t \geqslant 1} \pi_{\theta_t}(m^*) = \pi_{\theta_1}(m^*) = 1 / M$ , completing the proofs of Theorem H.4 and Corollary H.2.
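To close this subsection, the guarantee of Theorem H.4 can be observed in simulation. The sketch below runs exact-gradient softmax PG on a made-up instance (the values of  $\gamma$  and  $\mathfrak{r}$  are illustrative) and checks the  $\frac{5 M^2}{(1-\gamma)t}$  bound at every step:

```python
import numpy as np

# Exact-gradient softmax policy gradient on the bandit-over-bandits problem,
# checking V^{pi*} - V^{pi_{theta_t}} <= 5 M^2 / ((1-gamma) t) (Theorem H.4).
gamma = 0.9
r_frak = np.array([0.3, 0.5, 0.8, 0.6])   # mean reward of each controller
M = len(r_frak)
eta = 2 * (1 - gamma) / 5
theta = np.full(M, 1.0 / M)                # uniform initialization

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

v_star = r_frak.max() / (1 - gamma)
deltas, bounds = [], []
for t in range(1, 2001):
    pi = softmax(theta)
    deltas.append(v_star - pi @ r_frak / (1 - gamma))
    bounds.append(5 * M ** 2 / ((1 - gamma) * t))
    theta = theta + eta * pi * (r_frak - pi @ r_frak) / (1 - gamma)  # eq. (12)

assert all(d <= b for d, b in zip(deltas, bounds))
assert deltas[-1] < deltas[0]              # the sub-optimality gap shrinks
```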

# H.2. Proofs for MABs with noisy gradients

When value gradients are unavailable, we follow a direct policy gradient algorithm instead of the softmax projection. The full pseudo-code is provided here in Algorithm 4. At each round  $t \geqslant 1$ , the learning rate  $\eta$  is chosen separately for each controller  $m$  to be  $\alpha \pi_t(m)^2$ , for some  $\alpha \in (0,1)$ , to ensure that we remain inside the simplex. To justify its name as a policy gradient algorithm, observe that in order to minimize regret, we need to solve the following optimization problem:

$$
\min  _ {\pi \in \mathcal {P} ([ M ])} \sum_ {m = 1} ^ {M} \pi (m) \left(\mathfrak {r} _ {\mu} \left(m ^ {*}\right) - \mathfrak {r} _ {\mu} (m)\right).
$$

Taking the gradient directly with respect to the parameters  $\pi(m)$  gives the update rule of the policy gradient algorithm. The other changes in the update step (eq (18)) stem from the fact that the true means of the arms are unavailable, and from the use of importance sampling.

We have the following result.

Theorem H.6. With  $\alpha$  chosen to be less than  $\frac{\Delta_{min}}{\mathfrak{r}_{m^*}^\mu - \Delta_{min}}$ ,  $(\pi_t)$  is a Markov process with  $\pi_t(m^*) \to 1$  as  $t \to \infty$ , a.s. Further, the regret up to any time  $T$  is bounded as

$$
\mathcal {R} (T) \leqslant \frac {1}{1 - \gamma} \sum_ {m \neq m ^ {*}} \frac {\Delta_ {m}}{\alpha \Delta_ {m i n} ^ {2}} \log T + C,
$$

where  $C\coloneqq \frac{1}{1 - \gamma}\sum_{t\geqslant 1}\mathbb{P}\left\{\pi_t(m^*)\leqslant \frac{1}{2}\right\} < \infty$ .

We make a couple of remarks before providing the full proof of Theorem H.6.

Remark H.7. The "cost" of not knowing the true gradient appears as the dependence on  $\Delta_{min}$  in the regret, which is absent when the true gradient is available (see Theorem H.4 and Corollary H.2). The dependence on  $\Delta_{min}$ , as is well known from the work of (Lai & Robbins, 1985), is unavoidable.

Remark H.8. The dependence of  $\alpha$  on  $\Delta_{min}$  can be removed by a more sophisticated choice of learning rate, at the cost of an extra  $\log T$  factor in the regret (Denisov & Walton, 2020).

Algorithm 4 Projection-free Policy Gradient (for MABs)  
Input: learning rate  $\eta$  (at round  $t$ , for controller  $m$ , set  $\eta = \alpha \pi_t(m)^2$  with  $\alpha \in (0,1)$ )  
Initialize each  $\pi_1(m) = \frac{1}{M}$ , for all  $m \in [M]$ .  
for  $t = 1$  to  $T$  do  
\[
m_{*}(t) \gets \operatorname*{argmax}_{m \in [M]} \pi_{t}(m)
\]  
Choose controller  $m_{t} \sim \pi_{t}$ .  
Play action  $a_{t} \sim K_{m_{t}}$ .  
Receive reward  $R_{m_t}$  by pulling arm  $a_{t}$ .  
Update  $\forall m \in [M], m \neq m_{*}(t)$ :

$$
\pi_ {t + 1} (m) = \pi_ {t} (m) + \eta \left(\frac {R _ {m _ {t}} \mathbb {I} _ {\{m _ {t} = m\}}}{\pi_ {t} (m)} - \frac {R _ {m _ {t}} \mathbb {I} _ {\{m _ {t} = m _ {*} (t)\}}}{\pi_ {t} \left(m _ {*} (t)\right)}\right) \tag {18}
$$

$$
\text {Set } \pi_ {t + 1} (m _ {*} (t)) = 1 - \sum_ {m \neq m _ {*} (t)} \pi_ {t + 1} (m).
$$

end for
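A minimal simulation sketch of Algorithm 4 follows (the Bernoulli instance, the seed, and the horizon are all illustrative; the step size is  $\alpha\pi_t(m)^2$  as discussed above, which keeps the iterates inside the simplex):

```python
import numpy as np

# Sketch of Algorithm 4 on a toy two-controller Bernoulli instance.
rng = np.random.default_rng(0)
mu = np.array([0.2, 0.9])                  # arm means
K = np.array([[0.9, 0.1],                  # controller 0 mostly pulls arm 0
              [0.1, 0.9]])                 # controller 1 mostly pulls arm 1
r_frak = K @ mu                            # mean reward per controller; m* = 1
M = len(r_frak)
alpha = 0.2                                # < Delta_min / (r(m*) - Delta_min) here
pi = np.full(M, 1.0 / M)

for t in range(20_000):
    leader = int(np.argmax(pi))            # m_*(t)
    m_t = rng.choice(M, p=pi)              # choose a controller
    a_t = rng.choice(len(mu), p=K[m_t])    # play an action from it
    reward = float(rng.random() < mu[a_t])  # Bernoulli reward
    for m in range(M):
        if m == leader:
            continue
        g = reward * (m_t == m) / pi[m] - reward * (m_t == leader) / pi[leader]
        pi[m] += alpha * pi[m] ** 2 * g    # eq. (18) with eta = alpha * pi_t(m)^2
    pi[leader] = 1.0 - (pi.sum() - pi[leader])
# by Theorem H.6, pi[1] (= pi(m*)) should drift towards 1
```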

Proof. The proof is an extension of that of Theorem 1 of (Denisov & Walton, 2020) to our setting. The proof is divided into three main parts. In the first part we show that the recurrence time of the process  $\{\pi_t(m^*)\}_{t \geqslant 1}$  is almost surely finite. Next we bound the expected time taken by the process  $\pi_t(m^*)$  to reach 1. Finally we show that, almost surely,  $\lim_{t \to \infty} \pi_t(m^*) = 1$ ; in other words, the process  $\{\pi_t(m^*)\}_{t \geqslant 1}$  is transient. We use all these facts to show a regret bound.

Recall  $m_{*}(t) \coloneqq \operatorname*{argmax}_{m \in [M]} \pi_{t}(m)$ . We start by defining the following quantity which will be useful for the analysis of algorithm 4.

Let  $\tau := \min \left\{t \geqslant 1 : \pi_t(m^*) > \frac{1}{2}\right\}$ .

Next, let  $S \coloneqq \left\{\pi \in \mathcal{P}([M]): \frac{1 - \alpha}{2} \leqslant \pi(m^*) < \frac{1}{2}\right\}$ .

In addition, we define for any  $a > 1$ ,  $\mathcal{S}_a \coloneqq \left\{\pi \in \mathcal{P}([M]) : \frac{1 - \alpha}{a} \leqslant \pi(m^*) < \frac{1}{a}\right\}$ . Observe that if  $\pi_t(m^*) \geqslant 1/a$  and  $\pi_{t+1}(m^*) < 1/a$ , then  $\pi_{t+1} \in \mathcal{S}_a$ . This fact follows from the update step of Algorithm 4, with the learning rate  $\eta = \alpha \pi_t(m)^2$  for every  $m \neq m_*(t)$ .

Lemma H.9. For  $\alpha > 0$  such that  $\alpha < \frac{\Delta_{min}}{\mathfrak{r}(m^{*}) - \Delta_{min}}$ , we have that

$$
\sup _ {\pi \in \mathcal {S}} \mathbb {E} \left[ \tau \mid \pi_ {1} = \pi \right] <   \infty .
$$

Proof. The proof here is for completeness. We first make note of the following useful result: For a sequence of positive real numbers  $\{a_n\}_{n\geqslant 1}$  such that the following condition is met:

$$
a_{n+1} \leqslant a_n - b\, a_n^2,
$$

for some  $b > 0$ , the following is always true:

$$
a_n \leqslant \frac{a_1}{1 + a_1 b (n - 1)}.
$$

This inequality follows by rearranging and observing that  $a_{n}$  is a non-increasing sequence. A complete proof can be found in (Denisov & Walton, 2020, Appendix A.1). Returning to the proof of the lemma, we proceed by showing that the sequence  $1 / \pi_t(m^*) - ct$  is a supermartingale for some  $c > 0$ . Write  $\Delta \coloneqq \Delta_{min}$  for ease of notation. Note that if the condition on  $\alpha$  holds then there exists an  $\varepsilon > 0$  such that  $(1 + \varepsilon)(1 + \alpha) < \mathfrak{r}^{*} / (\mathfrak{r}^{*} - \Delta)$ , where  $\mathfrak{r}^{*} \coloneqq \mathfrak{r}(m^{*})$ . We choose  $c$  to be

$$
c := \frac{\alpha\, \mathfrak{r}^*}{1 + \alpha} - \alpha (\mathfrak{r}^* - \Delta)(1 + \varepsilon) > 0.
$$

Next, let  $x$  be greater than  $M$  and satisfy:

$$
\frac {x}{x - \alpha M} \leqslant 1 + \varepsilon .
$$

Let  $\xi_{x}:= \min \{t\geqslant 1:\pi_{t}(m^{*}) > 1 / x\}$ . Since for  $t = 1,\dots ,\xi_{x} - 1$  we have  $m_{*}(t)\neq m^{*}$  (recall  $x > M$ ), the update step gives  $\pi_{t + 1}(m^{*}) = (1 + \alpha)\pi_{t}(m^{*})$  w.p.  $\pi_t(m^*)\mathfrak{r}^*$  and  $\pi_{t + 1}(m^{*}) = \pi_{t}(m^{*}) - \alpha \pi_{t}(m^{*})^{2} / \pi_{t}(m_{*})$  w.p.  $\pi_t(m_*)\mathfrak{r}_*(t)$ , where  $\mathfrak{r}_*(t)\coloneqq \mathfrak{r}(m_*(t))$ .

Let  $y(t)\coloneqq 1 / \pi_t(m^*)$ , then we observe by a short calculation that,

$$
y(t+1) = \left\{ \begin{array}{ll} y(t) - \frac{\alpha}{1+\alpha}\, y(t), & \text{w.p. } \frac{\mathfrak{r}^*}{y(t)} \\ y(t) + \alpha \frac{y(t)}{\pi_t(m_*(t))\, y(t) - \alpha}, & \text{w.p. } \pi_t(m_*)\, \mathfrak{r}_*(t) \\ y(t), & \text{otherwise.} \end{array} \right.
$$

We see that,

$$
\begin{array}{l} \mathbb {E} \left[ y (t + 1) \mid H (t) \right] - y (t) \\ = \frac {\mathfrak {r} ^ {*}}{y (t)}. (y (t) - \frac {\alpha}{1 + \alpha} y (t)) + \pi_ {t} (m _ {*}) \mathfrak {r} _ {*} (t). (y (t) + \alpha \frac {y (t)}{\pi_ {t} (m _ {*} (t)) y (t) - \alpha}) - y (t) (\frac {\mathfrak {r} ^ {*}}{y (t)} + \pi_ {t} (m _ {*}) \mathfrak {r} _ {*} (t)) \\ \leqslant \alpha (\mathfrak {r} ^ {*} - \Delta) (1 + \varepsilon) - \frac {\alpha \mathfrak {r} ^ {*}}{1 + \alpha} = - c. \\ \end{array}
$$

The inequality holds because  $\mathfrak{r}_*(t) \leqslant \mathfrak{r}^* - \Delta$ ,  $\pi_t(m_*) > 1 / M$ , and  $y(t) \geqslant x$  for  $t < \xi_x$ , so that  $\frac{\pi_t(m_*)\, y(t)}{\pi_t(m_*)\, y(t) - \alpha} \leqslant \frac{x}{x - \alpha M} \leqslant 1 + \varepsilon$ . By the Optional Stopping Theorem (Durrett, 2011),

$$
- c\, \mathbb{E}[\xi_x \wedge t] \geqslant \mathbb{E}[y(\xi_x \wedge t)] - \mathbb{E}[y(1)] \geqslant -\frac{x}{1 - \alpha}.
$$

The final inequality holds because  $\pi_1(m^*)\geqslant \frac{1 - \alpha}{x}$ , so that  $y(1) \leqslant \frac{x}{1 - \alpha}$ .

Next, applying the monotone convergence theorem gives that  $\mathbb{E}\left[\xi_x\right] \leqslant \frac{x}{c(1 - \alpha)}$ . Finally, to obtain the result of Lemma H.9, we refer the reader to (Denisov & Walton, 2020, Appendix A.2); it follows from standard Markov chain arguments.

Next we define an embedded Markov Chain  $\{p(s), s \in \mathbb{Z}_+\}$  as follows. First let  $\sigma(k) \coloneqq \min \left\{t \geqslant \tau(k) : \pi_t(m^*) < \frac{1}{2}\right\}$  and  $\tau(k) \coloneqq \min \left\{t \geqslant \sigma(k - 1) : \pi_t(m^*) \geqslant \frac{1}{2}\right\}$ . Note that within the region  $[\tau(k), \sigma(k))$ ,  $\pi_t(m^*) \geqslant 1/2$  and in  $[\sigma(k), \tau(k + 1))$ ,  $\pi_t(m^*) < 1/2$ . We next analyze the rate at which  $\pi_t(m^*)$  approaches 1. Define

$$
p(s) := \pi_{t_s}(m^*), \quad \text{where } t_s = s + \sum_{i=0}^{k} (\tau(i+1) - \sigma(i)) \quad \text{for } s \in \left[ \sum_{i=0}^{k} (\sigma(i) - \tau(i)),\ \sum_{i=0}^{k+1} (\sigma(i) - \tau(i)) \right).
$$

Also let,

$$
\sigma_ {s} := \min  \left\{t > 0: \pi_ {t + t _ {s}} \left(m ^ {*}\right) > 1 / 2 \right\}
$$

and,

$$
\tau_ {s} := \min  \left\{t > \sigma_ {s}: \pi_ {t + t _ {s}} \left(m ^ {*}\right) \leqslant 1 / 2 \right\}
$$

Lemma H.10. The process  $\{p(s)\}_{s\geqslant 1}$ , is a submartingale. Further,  $p(s)\to 1$ , as  $s\to \infty$ . Finally,

$$
\mathbb{E}\left[p(s)\right] \geqslant 1 - \frac{1}{1 + \alpha \frac{\Delta^{2}}{\left(\sum_{m^{\prime}\neq m^{*}}\Delta_{m^{\prime}}\right)}\, s}.
$$

Proof. We first observe that,

$$
p(s+1) = \left\{ \begin{array}{ll} \pi_{t_s + 1}(m^*), & \text{if } \pi_{t_s + 1}(m^*) \geqslant 1/2 \\ \pi_{t_s + \tau_s}(m^*), & \text{if } \pi_{t_s + 1}(m^*) < 1/2. \end{array} \right.
$$

Since  $\pi_{t_s + \tau_s}(m^*)\geqslant 1 / 2$  we have that,

$$
p(s+1) \geqslant \pi_{t_s + 1}(m^*) \quad \text{and} \quad p(s) = \pi_{t_s}(m^*).
$$

Since at times  $t_s, \pi_{t_s}(m^*) > 1/2$ , we know that  $m^*$  is the leading arm. Thus by the update step, for all  $m \neq m^*$ ,

$$
\pi_ {t _ {s} + 1} (m) = \pi_ {t _ {s}} (m) + \alpha \pi_ {t _ {s}} (m) ^ {2} \left[ \frac {\mathbb {I} _ {m} R _ {m} (t _ {s})}{\pi_ {t _ {s}} (m)} - \frac {\mathbb {I} _ {m ^ {*}} R _ {m ^ {*}} (t _ {s})}{\pi_ {t _ {s}} (m ^ {*})} \right].
$$

Taking expectations on both sides,

$$
\mathbb {E} \left[ \pi_ {t _ {s} + 1} (m) \mid H (t _ {s}) \right] - \pi_ {t _ {s}} (m) = \alpha \pi_ {t _ {s}} (m) ^ {2} (\mathfrak {r} _ {m} - \mathfrak {r} _ {m ^ {*}}) = - \alpha \Delta_ {m} \pi_ {t _ {s}} (m) ^ {2}.
$$

Summing over all  $m \neq m^*$ :

$$
- \mathbb {E} \left[ \pi_ {t _ {s} + 1} (m ^ {*}) \mid H (t _ {s}) \right] + \pi_ {t _ {s}} (m ^ {*}) = - \alpha \sum_ {m \neq m ^ {*}} \Delta_ {m} \pi_ {t _ {s}} (m) ^ {2}.
$$

By Jensen's inequality,

$$
\begin{array}{l} \sum_ {m \neq m ^ {*}} \Delta_ {m} \pi_ {t _ {s}} (m) ^ {2} = \left(\sum_ {m ^ {\prime} \neq m ^ {*}} \Delta_ {m ^ {\prime}}\right) \sum_ {m \neq m ^ {*}} \frac {\Delta_ {m}}{\left(\sum_ {m ^ {\prime} \neq m ^ {*}} \Delta_ {m ^ {\prime}}\right)} \pi_ {t _ {s}} (m) ^ {2} \\ \geqslant \left(\sum_ {m ^ {\prime} \neq m ^ {*}} \Delta_ {m ^ {\prime}}\right) \left(\sum_ {m \neq m ^ {*}} \frac {\Delta_ {m} \pi_ {t _ {s}} (m)}{\left(\sum_ {m ^ {\prime} \neq m ^ {*}} \Delta_ {m ^ {\prime}}\right)}\right) ^ {2} \\ \geqslant \left(\sum_ {m ^ {\prime} \neq m ^ {*}} \Delta_ {m ^ {\prime}}\right) \frac {\Delta^ {2} \left(\sum_ {m \neq m ^ {*}} \pi_ {t _ {s}} (m)\right) ^ {2}}{\left(\sum_ {m ^ {\prime} \neq m ^ {*}} \Delta_ {m ^ {\prime}}\right) ^ {2}} \\ = \frac {\Delta^ {2} \left(1 - \pi_ {t _ {s}} (m ^ {*})\right) ^ {2}}{\left(\sum_ {m ^ {\prime} \neq m ^ {*}} \Delta_ {m ^ {\prime}}\right)}. \\ \end{array}
$$

Hence we get,

$$
p (s) - \mathbb {E} \left[ p (s + 1) \mid H (t _ {s}) \right] \leqslant - \alpha \frac {\Delta^ {2} (1 - p (s)) ^ {2}}{\left(\sum_ {m ^ {\prime} \neq m ^ {*}} \Delta_ {m ^ {\prime}}\right)} \Rightarrow \mathbb {E} \left[ p (s + 1) \mid H (t _ {s}) \right] \geqslant p (s) + \alpha \frac {\Delta^ {2} (1 - p (s)) ^ {2}}{\left(\sum_ {m ^ {\prime} \neq m ^ {*}} \Delta_ {m ^ {\prime}}\right)}.
$$

This implies immediately that  $\{p(s)\}_{s\geqslant 1}$  is a submartingale.

Since  $\{p(s)\}$  is non-negative and bounded by 1, by the Martingale Convergence Theorem,  $\lim_{s\to \infty}p(s)$  exists. We will now show that the limit is 1. Clearly, it is sufficient to show that  $\limsup_{s \to \infty} p(s) = 1$ . For  $a > 2$ , let

$$
\varphi_ {a} := \min  \left\{s \geqslant 1: p (s) \geqslant \frac {a - 1}{a} \right\}.
$$

As is shown in (Denisov & Walton, 2020), it is sufficient to show that  $\varphi_{a} < \infty$  with probability 1, because one can then define a sequence of such stopping times for increasing  $a$ , each finite w.p. 1, which implies that  $p(s) \to 1$ . By the previous display, we have

$$
\mathbb {E} \left[ p (s + 1) \mid H (t _ {s}) \right] - p (s) \geqslant \alpha \frac {\Delta^ {2}}{\left(\sum_ {m ^ {\prime} \neq m ^ {*}} \Delta_ {m ^ {\prime}}\right) a ^ {2}}
$$

as long as  $p(s) \leqslant \frac{a - 1}{a}$ . Hence, applying the Optional Stopping Theorem and rearranging, we get,

$$
\mathbb{E}\left[\varphi_a\right] \leqslant \lim_{s \to \infty} \mathbb{E}\left[\varphi_a \wedge s\right] \leqslant \frac{\left(\sum_{m^{\prime} \neq m^{*}} \Delta_{m^{\prime}}\right) a^2}{\alpha \Delta^2} \left(1 - \mathbb{E}\left[p(1)\right]\right) < \infty.
$$

Since  $\varphi_{a}$  is a non-negative random variable with finite expectation,  $\varphi_{a} < \infty$  a.s. Let  $q(s) = 1 - p(s)$ . We have:

$$
\mathbb{E}\left[q(s+1)\right] - \mathbb{E}\left[q(s)\right] \leqslant -\alpha \frac{\Delta^2 \left(\mathbb{E}\left[q(s)\right]\right)^2}{\left(\sum_{m^{\prime} \neq m^{*}} \Delta_{m^{\prime}}\right)}.
$$

By the useful result stated at the beginning of the proof of Lemma H.9, we get,

$$
\mathbb{E}[q(s)] \leqslant \frac{\mathbb{E}[q(1)]}{1 + \alpha \frac{\Delta^2\, \mathbb{E}[q(1)]}{\left(\sum_{m^{\prime} \neq m^{*}} \Delta_{m^{\prime}}\right)}\, s} \leqslant \frac{1}{1 + \alpha \frac{\Delta^2}{\left(\sum_{m^{\prime} \neq m^{*}} \Delta_{m^{\prime}}\right)}\, s}.
$$

This completes the proof of the lemma.
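The auxiliary recursion bound applied above (if $a_{n+1} \leqslant a_n - b\, a_n^2$ with $b\, a_1 < 1$, then $a_n \leqslant a_1 / (1 + a_1 b (n-1))$) can be checked numerically; this is our own sanity check with arbitrary constants, not part of the proof.

```python
# Sanity check (ours) of the auxiliary inequality used in the proofs above:
# if a_{n+1} <= a_n - b * a_n^2 (with b * a_1 < 1), then
# a_n <= a_1 / (1 + a_1 * b * (n - 1)).
b = 0.3
a = 0.9                        # a_1, chosen so that b * a_1 < 1
seq = [a]
for _ in range(200):
    a = a - b * a * a          # extreme case: the recursion holds with equality
    seq.append(a)
for n, a_n in enumerate(seq, start=1):
    assert a_n <= seq[0] / (1 + seq[0] * b * (n - 1)) + 1e-12
```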

Finally we provide a lemma tying together the results above. We refer the reader to (Denisov & Walton, 2020, Appendix A.5) for its proof.

Lemma H.11.

$$
\sum_ {t \geqslant 1} \mathbb {P} \left[ \pi_ {t} \left(m ^ {*}\right) <   1 / 2 \right] <   \infty .
$$

Also, with probability 1,  $\pi_t(m^*) \to 1$ , as  $t \to \infty$ .

Proof of regret bound: Since  $\mathfrak{r}^* - \mathfrak{r}(m) \leqslant 1$ , by the definition of regret (see eq. (11)) we have

$$
\mathcal {R} (T) = \mathbb {E} \left[ \frac {1}{1 - \gamma} \sum_ {t = 1} ^ {T} \left(\sum_ {m = 1} ^ {M} \pi^ {*} (m) \mathfrak {r} _ {m} - \pi_ {t} (m) \mathfrak {r} _ {m}\right) \right].
$$

Recalling that  $\pi^{*} = e_{m^{*}}$ , we have:

$$
\begin{array}{l} \mathcal {R} (T) = \frac {1}{1 - \gamma} \mathbb {E} \left[ \sum_ {t = 1} ^ {T} \left(\sum_ {m = 1} ^ {M} (\pi^ {*} (m) \mathfrak {r} _ {m} - \pi_ {t} (m) \mathfrak {r} _ {m})\right) \right] \\ = \frac {1}{1 - \gamma} \mathbb {E} \left[ \sum_ {m = 1} ^ {M} \left(\sum_ {t = 1} ^ {T} (\pi^ {*} (m) \mathfrak {r} _ {m} - \pi_ {t} (m) \mathfrak {r} _ {m})\right) \right] \\ = \frac {1}{1 - \gamma} \mathbb {E} \left[ \sum_ {t = 1} ^ {T} \left(\mathfrak {r} ^ {*} - \sum_ {m = 1} ^ {M} \pi_ {t} (m) \mathfrak {r} _ {m}\right) \right] \\ = \frac {1}{1 - \gamma} \mathbb {E} \left[ \left(\sum_ {t = 1} ^ {T} \mathfrak {r} ^ {*} - \sum_ {t = 1} ^ {T} \sum_ {m = 1} ^ {M} \pi_ {t} (m) \mathfrak {r} _ {m}\right) \right] \\ = \frac {1}{1 - \gamma} \mathbb {E} \left[ \left(\sum_ {t = 1} ^ {T} \mathfrak {r} ^ {*} (1 - \pi_ {t} (m ^ {*})) - \sum_ {t = 1} ^ {T} \sum_ {m \neq m ^ {*}} \pi_ {t} (m) \mathfrak {r} _ {m}\right) \right] \\ = \frac {1}{1 - \gamma} \mathbb {E} \left[ \left(\sum_ {t = 1} ^ {T} \sum_ {m \neq m ^ {*}} \mathfrak {r} ^ {*} \pi_ {t} (m) - \sum_ {t = 1} ^ {T} \sum_ {m \neq m ^ {*}} \pi_ {t} (m) \mathfrak {r} _ {m}\right) \right] \\ = \frac {1}{1 - \gamma} \sum_ {m \neq m ^ {*}} \left(\mathfrak {r} ^ {*} - \mathfrak {r} _ {m}\right) \mathbb {E} \left[ \sum_ {t = 1} ^ {T} \pi_ {t} (m) \right]. \\ \end{array}
$$

Hence we have,

$$
\begin{array}{l} \mathcal {R} (T) = \frac {1}{1 - \gamma} \sum_ {m \neq m ^ {*}} (\mathfrak {r} ^ {*} - \mathfrak {r} _ {m}) \mathbb {E} \left[ \sum_ {t = 1} ^ {T} \pi_ {t} (m) \right] \\ \leqslant \frac {1}{1 - \gamma} \sum_ {m \neq m ^ {*}} \mathbb {E} \left[ \sum_ {t = 1} ^ {T} \pi_ {t} (m) \right] \\ = \frac {1}{1 - \gamma} \mathbb {E} \left[ \sum_ {t = 1} ^ {T} \left(1 - \pi_ {t} \left(m ^ {*}\right)\right) \right] \\ \end{array}
$$

We analyze the following term:

$$
\begin{array}{l} \mathbb {E} \left[ \sum_ {t = 1} ^ {T} \left(1 - \pi_ {t} \left(m ^ {*}\right)\right) \right] = \mathbb {E} \left[ \sum_ {t = 1} ^ {T} \left(1 - \pi_ {t} \left(m ^ {*}\right)\right) \mathbb {I} \left\{\pi_ {t} \left(m ^ {*}\right) \geqslant 1 / 2 \right\} \right] + \mathbb {E} \left[ \sum_ {t = 1} ^ {T} \left(1 - \pi_ {t} \left(m ^ {*}\right)\right) \mathbb {I} \left\{\pi_ {t} \left(m ^ {*}\right) <   1 / 2 \right\} \right] \\ = \mathbb {E} \left[ \sum_ {t = 1} ^ {T} \left(1 - \pi_ {t} \left(m ^ {*}\right)\right) \mathbb {I} \left\{\pi_ {t} \left(m ^ {*}\right) \geqslant 1 / 2 \right\} \right] + C _ {1}. \\ \end{array}
$$

where,  $C_1 \coloneqq \sum_{t=1}^{\infty} \mathbb{P}\left[\pi_t(m^*) < 1/2\right] < \infty$  by Lemma H.11. Next we observe that,

$$
\begin{array}{l} \mathbb{E}\left[\sum_{t=1}^{T} \left(1 - \pi_t(m^*)\right) \mathbb{I}\left\{\pi_t(m^*) \geqslant 1/2\right\}\right] = \mathbb{E}\left[\sum_{s=1}^{T} q(s)\, \mathbb{I}\left\{\pi_{t_s}(m^*) \geqslant 1/2\right\}\right] \leqslant \mathbb{E}\left[\sum_{s=1}^{T} q(s)\right] \\ \leqslant \sum_{s=1}^{T} \frac{1}{1 + \alpha \frac{\Delta^2}{\left(\sum_{m^{\prime} \neq m^{*}} \Delta_{m^{\prime}}\right)}\, s} \leqslant \sum_{s=1}^{T} \frac{\left(\sum_{m^{\prime} \neq m^{*}} \Delta_{m^{\prime}}\right)}{\alpha \Delta^2\, s} \\ \leqslant \frac{\left(\sum_{m^{\prime} \neq m^{*}} \Delta_{m^{\prime}}\right)}{\alpha \Delta^2} \log T. \\ \end{array}
$$

Putting things together, we get,

$$
\begin{array}{l} \mathcal {R} (T) \leqslant \frac {1}{1 - \gamma} \left(\frac {\left(\sum_ {m ^ {\prime} \neq m ^ {*}} \Delta_ {m ^ {\prime}}\right)}{\alpha \Delta^ {2}} \log T + C _ {1}\right) \\ = \frac {1}{1 - \gamma} \left(\frac {\left(\sum_ {m ^ {\prime} \neq m ^ {*}} \Delta_ {m ^ {\prime}}\right)}{\alpha \Delta^ {2}} \log T\right) + C. \\ \end{array}
$$

This completes the proof of Theorem H.6.
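The $\log T$ growth of the cumulative suboptimality can be illustrated on the expected dynamics $q_{s+1} = q_s - c\, q_s^2$ from Lemma H.10; this is our own numerical illustration, with an arbitrary constant $c$ standing in for $\alpha \Delta^2 / \sum_{m' \neq m^*} \Delta_{m'}$.

```python
import math

# Illustration (ours): iterate the expected dynamics q_{s+1} = q_s - c * q_s^2
# and check that the partial sums of q_s grow at most logarithmically in T,
# mirroring the O(log T) regret bound above.
c, q = 0.5, 0.5
T = 10_000
total = 0.0
for s in range(1, T + 1):
    total += q
    q = q - c * q * q
# since q_s <= q_1 / (1 + q_1 * c * (s - 1)), the sum is O((1/c) * log T)
assert total <= (2.0 / c) * (1.0 + math.log(T))
```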


# I. Proofs for MDPs

First we recall the policy gradient theorem.

Theorem I.1 (Policy Gradient Theorem (Sutton et al., 2000)).

$$
\frac {\partial}{\partial \theta} V ^ {\pi_ {\theta}} (\mu) = \frac {1}{1 - \gamma} \sum_ {s \in \mathcal {S}} d _ {\mu} ^ {\pi_ {\theta}} (s) \sum_ {a \in \mathcal {A}} \frac {\partial \pi_ {\theta} (a | s)}{\partial \theta} Q ^ {\pi_ {\theta}} (s, a).
$$

Let  $s \in \mathcal{S}$  and  $m \in [M]$ . Let  $\tilde{Q}^{\pi_{\theta}}(s, m) \coloneqq \sum_{a \in \mathcal{A}} K_m(s, a) Q^{\pi_{\theta}}(s, a)$ . Also let  $\tilde{A}(s, m) \coloneqq \tilde{Q}(s, m) - V(s)$ .

Lemma I.2 (Gradient Simplification). The softmax policy gradient with respect to the parameter  $\theta \in \mathbb{R}^M$  is  $\frac{\partial}{\partial\theta_m} V^{\pi_\theta}(\mu) = \frac{1}{1 - \gamma}\sum_{s\in S}d_{\mu}^{\pi_\theta}(s)\pi_\theta (m)\tilde{A} (s,m)$ , where  $\tilde{A} (s,m)\coloneqq \tilde{Q} (s,m) - V(s)$  and  $\tilde{Q} (s,m)\coloneqq \sum_{a\in \mathcal{A}}K_m(s,a)Q^{\pi_\theta}(s,a)$ , and  $d_{\mu}^{\pi_\theta}(.)$  is the discounted state visitation measure starting with an initial distribution  $\mu$  and following policy  $\pi_{\theta}$ .

The interpretation of  $\tilde{A}(s, m)$  is the advantage of following controller  $m$  at state  $s$  and then following the policy  $\pi_{\theta}$  for all time versus following  $\pi_{\theta}$  always. As mentioned in section 4, we proceed by proving smoothness of the  $V^{\pi}$  function over the space  $\mathbb{R}^{M}$ .

Proof. From the policy gradient theorem I.1, we have:

$$
\begin{array}{l} \frac{\partial}{\partial \theta_{m^{\prime}}} V^{\pi_\theta}(\mu) = \frac{1}{1-\gamma} \sum_{s \in \mathcal{S}} d_\mu^{\pi_\theta}(s) \sum_{a \in \mathcal{A}} \frac{\partial \pi_\theta(a | s)}{\partial \theta_{m^{\prime}}}\, Q^{\pi_\theta}(s, a) \\ = \frac{1}{1-\gamma} \sum_{s \in \mathcal{S}} d_\mu^{\pi_\theta}(s) \sum_{a \in \mathcal{A}} \frac{\partial}{\partial \theta_{m^{\prime}}} \left( \sum_{m=1}^{M} \pi_\theta(m) K_m(s, a) \right) Q^{\pi_\theta}(s, a) \\ = \frac{1}{1-\gamma} \sum_{s \in \mathcal{S}} d_\mu^{\pi_\theta}(s) \sum_{m=1}^{M} \sum_{a \in \mathcal{A}} \left( \frac{\partial}{\partial \theta_{m^{\prime}}} \pi_\theta(m) \right) K_m(s, a)\, Q(s, a) \\ = \frac{1}{1-\gamma} \sum_{s \in \mathcal{S}} d_\mu^{\pi_\theta}(s) \sum_{a \in \mathcal{A}} \pi_{m^{\prime}} \left( K_{m^{\prime}}(s, a) - \sum_{m=1}^{M} \pi_m K_m(s, a) \right) Q(s, a) \\ = \frac{1}{1-\gamma} \sum_{s \in \mathcal{S}} d_\mu^{\pi_\theta}(s)\, \pi_{m^{\prime}} \left[ \sum_{a \in \mathcal{A}} K_{m^{\prime}}(s, a) Q(s, a) - \sum_{a \in \mathcal{A}} \sum_{m=1}^{M} \pi_m K_m(s, a) Q(s, a) \right] \\ = \frac{1}{1-\gamma} \sum_{s \in \mathcal{S}} d_\mu^{\pi_\theta}(s)\, \pi_{m^{\prime}} \left[ \tilde{Q}(s, m^{\prime}) - V(s) \right] \\ = \frac{1}{1-\gamma} \sum_{s \in \mathcal{S}} d_\mu^{\pi_\theta}(s)\, \pi_{m^{\prime}}\, \tilde{A}^{\pi_\theta}(s, m^{\prime}). \\ \end{array}
$$
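Lemma I.2 can be verified numerically in the degenerate single-state case, where the visitation measure is trivial and $V(\theta) = \langle \mathrm{softmax}(\theta), Kr \rangle / (1-\gamma)$. The finite-difference check below is our own illustration, with invented names, not the paper's code.

```python
import numpy as np

# Finite-difference check (ours) of Lemma I.2 in the single-state case:
# dV/dtheta_m = pi_theta(m) * A_tilde(m) / (1 - gamma), where
# A_tilde(m) = (c_m + gamma*V) - V = c_m - (1 - gamma)*V = c_m - <pi, c>.
rng = np.random.default_rng(0)
M, A, gamma = 4, 3, 0.9
K = rng.random((M, A)); K /= K.sum(axis=1, keepdims=True)  # controllers K_m(a)
r = rng.random(A)                                          # rewards r(a)

def softmax(theta):
    p = np.exp(theta - theta.max())
    return p / p.sum()

def value(theta):
    return (softmax(theta) @ (K @ r)) / (1.0 - gamma)

def grad(theta):
    pi, c = softmax(theta), K @ r
    return pi * (c - pi @ c) / (1.0 - gamma)   # pi(m) * A_tilde(m) / (1 - gamma)

theta = rng.standard_normal(M)
eps = 1e-5
g_fd = np.array([(value(theta + eps * np.eye(M)[m]) - value(theta - eps * np.eye(M)[m]))
                 / (2 * eps) for m in range(M)])
assert np.allclose(grad(theta), g_fd, atol=1e-6)
```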

Lemma I.3.  $V^{\pi_{\theta}}(\mu)$  is  $\frac{7\gamma^2 + 4\gamma + 5}{2(1 - \gamma)^3}$ -smooth.

Proof. The proof uses ideas from (Agarwal et al., 2020a) and (Mei et al., 2020). Let  $\theta_{\alpha} = \theta + \alpha u$ , where  $u \in \mathbb{R}^{M}$ ,  $\alpha \in \mathbb{R}$ . For any  $s \in S$ ,

$$
\sum_ {a} \left| \frac {\partial \pi_ {\theta_ {\alpha}} (a | s)}{\partial \alpha} \Big | _ {\alpha = 0} \right| = \sum_ {a} \left| \left\langle \frac {\partial \pi_ {\theta_ {\alpha}} (a | s)}{\partial \theta_ {\alpha}} \Big | _ {\alpha = 0}, \frac {\partial \theta_ {\alpha}}{\partial \alpha} \right\rangle \right| = \sum_ {a} \left| \left\langle \frac {\partial \pi_ {\theta_ {\alpha}} (a | s)}{\partial \theta_ {\alpha}} \Big | _ {\alpha = 0}, u \right\rangle \right|
$$


$$
\begin{array}{l} = \sum_a \left| \sum_{m^{\prime\prime}=1}^{M} \sum_{m=1}^{M} \pi_{\theta_{m^{\prime\prime}}} \left( \mathbb{I}_{m m^{\prime\prime}} - \pi_{\theta_m} \right) K_m(s, a)\, u(m^{\prime\prime}) \right| \\ = \sum_a \left| \sum_{m^{\prime\prime}=1}^{M} \pi_{\theta_{m^{\prime\prime}}} \left( K_{m^{\prime\prime}}(s, a)\, u(m^{\prime\prime}) - \sum_{m=1}^{M} \pi_{\theta_m} K_m(s, a)\, u(m^{\prime\prime}) \right) \right| \\ \leqslant \sum_a \sum_{m^{\prime\prime}=1}^{M} \pi_{\theta_{m^{\prime\prime}}} K_{m^{\prime\prime}}(s, a)\, |u(m^{\prime\prime})| + \sum_a \sum_{m^{\prime\prime}=1}^{M} \sum_{m=1}^{M} \pi_{\theta_{m^{\prime\prime}}} \pi_{\theta_m} K_m(s, a)\, |u(m^{\prime\prime})| \\ = \sum_{m^{\prime\prime}=1}^{M} \pi_{\theta_{m^{\prime\prime}}} |u(m^{\prime\prime})| \underbrace{\sum_a K_{m^{\prime\prime}}(s, a)}_{=1} + \sum_{m^{\prime\prime}=1}^{M} \sum_{m=1}^{M} \pi_{\theta_{m^{\prime\prime}}} \pi_{\theta_m} |u(m^{\prime\prime})| \underbrace{\sum_a K_m(s, a)}_{=1} \\ = 2 \sum_{m^{\prime\prime}=1}^{M} \pi_{\theta_{m^{\prime\prime}}} |u(m^{\prime\prime})| \leqslant 2 \|u\|_2. \\ \end{array}
$$
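This first-derivative bound can also be probed numerically; the following is our own check on a random instance (single state, random controllers), not part of the proof.

```python
import numpy as np

# Numerical check (ours) that sum_a |d/d alpha pi_{theta + alpha*u}(a|s)| <= 2*||u||_2
# for the mixture pi_theta(.|s) = sum_m softmax(theta)_m * K_m(s, .).
rng = np.random.default_rng(3)
M, A = 5, 4
K = rng.random((M, A)); K /= K.sum(axis=1, keepdims=True)

def action_probs(theta):
    p = np.exp(theta - theta.max()); p /= p.sum()
    return p @ K                       # distribution over actions at the state

theta = rng.standard_normal(M)
u = rng.standard_normal(M)
eps = 1e-5
deriv = (action_probs(theta + eps * u) - action_probs(theta - eps * u)) / (2 * eps)
assert np.abs(deriv).sum() <= 2 * np.linalg.norm(u) + 1e-8
```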

Next we bound the second derivative.

$$
\sum_a \left| \frac{\partial^2 \pi_{\theta_\alpha}(a \mid s)}{\partial \alpha^2} \Big|_{\alpha=0} \right| = \sum_a \left| \left\langle \frac{\partial}{\partial \theta_\alpha} \frac{\partial \pi_{\theta_\alpha}(a \mid s)}{\partial \alpha} \Big|_{\alpha=0}, u \right\rangle \right| = \sum_a \left| \left\langle \frac{\partial^2 \pi_{\theta_\alpha}(a \mid s)}{\partial \theta_\alpha^2} \Big|_{\alpha=0}\, u, u \right\rangle \right|.
$$

Let  $H^{a,\theta} \coloneqq \frac{\partial^2 \pi_\theta(a|s)}{\partial \theta^2} \in \mathbb{R}^{M\times M}$ . We have,

$$
\begin{array}{l} H _ {i, j} ^ {a, \theta} = \frac {\partial}{\partial \theta_ {j}} \left(\sum_ {m = 1} ^ {M} \pi_ {\theta_ {i}} \left(\mathbb {I} _ {m i} - \pi_ {\theta_ {m}}\right) K _ {m} (s, a)\right) \\ = \frac {\partial}{\partial \theta_ {j}} \left(\pi_ {\theta_ {i}} K _ {i} (s, a) - \sum_ {m = 1} ^ {M} \pi_ {\theta_ {i}} \pi_ {\theta_ {m}} K _ {m} (s, a)\right) \\ = \pi_ {\theta_ {j}} \left(\mathbb {I} _ {i j} - \pi_ {\theta_ {i}}\right) K _ {i} (s, a) - \sum_ {m = 1} ^ {M} K _ {m} (s, a) \frac {\partial \pi_ {\theta_ {i}} \pi_ {\theta_ {m}}}{\partial \theta_ {j}} \\ = \pi_ {j} \left(\mathbb {I} _ {i j} - \pi_ {i}\right) K _ {i} (s, a) - \sum_ {m = 1} ^ {M} K _ {m} (s, a) \left(\pi_ {j} \left(\mathbb {I} _ {i j} - \pi_ {i}\right) \pi_ {m} + \pi_ {i} \pi_ {j} \left(\mathbb {I} _ {m j} - \pi_ {m}\right)\right) \\ = \pi_ {j} \left(\left(\mathbb {I} _ {i j} - \pi_ {i}\right) K _ {i} (s, a) - \sum_ {m = 1} ^ {M} \pi_ {m} \left(\mathbb {I} _ {i j} - \pi_ {i}\right) K _ {m} (s, a) - \sum_ {m = 1} ^ {M} \pi_ {i} \left(\mathbb {I} _ {m j} - \pi_ {m}\right) K _ {m} (s, a)\right). \\ \end{array}
$$

Plugging this into the second derivative, we get,

$$
\begin{array}{l} \left| \left\langle \frac{\partial^2}{\partial \theta^2} \pi_\theta(a | s)\, u, u \right\rangle \right| = \left| \sum_{j=1}^{M} \sum_{i=1}^{M} H_{i,j}^{a,\theta} u_i u_j \right| \\ = \left| \sum_{j=1}^{M} \sum_{i=1}^{M} \pi_j \left( \left( \mathbb{I}_{ij} - \pi_i \right) K_i(s, a) - \sum_{m=1}^{M} \pi_m \left( \mathbb{I}_{ij} - \pi_i \right) K_m(s, a) - \sum_{m=1}^{M} \pi_i \left( \mathbb{I}_{mj} - \pi_m \right) K_m(s, a) \right) u_i u_j \right| \\ = \left| \sum_{i=1}^{M} \pi_i K_i(s, a) u_i^2 - 2 \sum_{i=1}^{M} \sum_{j=1}^{M} \pi_i \pi_j K_i(s, a) u_i u_j - \sum_{i=1}^{M} \sum_{m=1}^{M} \pi_i \pi_m K_m(s, a) u_i^2 + 2 \sum_{i=1}^{M} \sum_{j=1}^{M} \sum_{m=1}^{M} \pi_i \pi_j \pi_m K_m(s, a) u_i u_j \right| \\ = \left| \sum_{i=1}^{M} \pi_i u_i^2 \left( K_i(s, a) - \sum_{m=1}^{M} \pi_m K_m(s, a) \right) - 2 \sum_{i=1}^{M} \pi_i u_i \sum_{j=1}^{M} \pi_j u_j \left( K_i(s, a) - \sum_{m=1}^{M} \pi_m K_m(s, a) \right) \right| \\ \leqslant \sum_{i=1}^{M} \pi_i u_i^2 \underbrace{\left| K_i(s, a) - \sum_{m=1}^{M} \pi_m K_m(s, a) \right|}_{\leqslant 1} + 2 \sum_{i=1}^{M} \pi_i |u_i| \sum_{j=1}^{M} \pi_j |u_j| \underbrace{\left| K_i(s, a) - \sum_{m=1}^{M} \pi_m K_m(s, a) \right|}_{\leqslant 1} \\ \leqslant \|u\|_2^2 + 2 \sum_{i=1}^{M} \pi_i |u_i| \sum_{j=1}^{M} \pi_j |u_j| \leqslant 3 \|u\|_2^2. \\ \end{array}
$$

The rest of the proof is similar to (Mei et al., 2020); we include it for completeness. Define  $P(\alpha)\in \mathbb{R}^{S\times S}$ , where  $\forall (s,s^{\prime})$ ,

$$
\left[ P(\alpha) \right]_{(s, s^{\prime})} = \sum_{a \in \mathcal{A}} \pi_{\theta_\alpha}(a \mid s) \cdot \mathrm{P}(s^{\prime} | s, a).
$$

The derivative w.r.t.  $\alpha$  is,

$$
\left[ \frac {\partial}{\partial \alpha} P (\alpha) \Big | _ {\alpha = 0} \right] _ {(s, s ^ {\prime})} = \sum_ {a \in \mathcal {A}} \left[ \frac {\partial}{\partial \alpha} \pi_ {\theta_ {\alpha}} (a \mid s) \Big | _ {\alpha = 0} \right]. \mathrm {P} (s ^ {\prime} | s, a).
$$

For any vector  $x\in \mathbb{R}^S$

$$
\left[ \frac {\partial}{\partial \alpha} P (\alpha) \Big | _ {\alpha = 0} x \right] _ {(s)} = \sum_ {s ^ {\prime} \in \mathcal {S}} \sum_ {a \in \mathcal {A}} \left[ \frac {\partial}{\partial \alpha} \pi_ {\theta_ {\alpha}} (a \mid s) \Big | _ {\alpha = 0} \right]. \mathrm {P} (s ^ {\prime} | s, a). x (s ^ {\prime}).
$$

The  $l_{\infty}$  norm can be upper-bounded as,

$$
\begin{array}{l} \left\| \frac{\partial}{\partial \alpha} P(\alpha) \Big|_{\alpha=0}\, x \right\|_\infty = \max_{s \in \mathcal{S}} \left| \sum_{s^{\prime} \in \mathcal{S}} \sum_{a \in \mathcal{A}} \left[ \frac{\partial}{\partial \alpha} \pi_{\theta_\alpha}(a \mid s) \Big|_{\alpha=0} \right] \mathrm{P}(s^{\prime} | s, a)\, x(s^{\prime}) \right| \\ \leqslant \max_{s \in \mathcal{S}} \sum_{s^{\prime} \in \mathcal{S}} \sum_{a \in \mathcal{A}} \left| \frac{\partial}{\partial \alpha} \pi_{\theta_\alpha}(a | s) \Big|_{\alpha=0} \right| \mathrm{P}(s^{\prime} | s, a)\, \|x\|_\infty \\ \leqslant 2 \|u\|_2 \|x\|_\infty. \\ \end{array}
$$

Now we find the second derivative,

$$
\left[ \frac{\partial^2 P(\alpha)}{\partial \alpha^2} \Big|_{\alpha=0} \right]_{(s, s^{\prime})} = \sum_{a \in \mathcal{A}} \left[ \frac{\partial^2 \pi_{\theta_\alpha}(a | s)}{\partial \alpha^2} \Big|_{\alpha=0} \right] \mathrm{P}(s^{\prime} | s, a),
$$

taking the  $l_{\infty}$  norm,

$$
\begin{array}{l} \left\| \frac{\partial^2 P(\alpha)}{\partial \alpha^2} \Big|_{\alpha=0}\, x \right\|_\infty = \max_s \left| \sum_{s^{\prime} \in \mathcal{S}} \sum_{a \in \mathcal{A}} \left[ \frac{\partial^2 \pi_{\theta_\alpha}(a | s)}{\partial \alpha^2} \Big|_{\alpha=0} \right] \mathrm{P}(s^{\prime} | s, a)\, x(s^{\prime}) \right| \\ \leqslant \max_s \sum_{s^{\prime} \in \mathcal{S}} \sum_{a \in \mathcal{A}} \left| \frac{\partial^2 \pi_{\theta_\alpha}(a | s)}{\partial \alpha^2} \Big|_{\alpha=0} \right| \mathrm{P}(s^{\prime} | s, a)\, \|x\|_\infty \leqslant 3 \|u\|_2 \|x\|_\infty. \\ \end{array}
$$

Next we observe that the value function of  $\pi_{\theta_{\alpha}}$  satisfies the Bellman equation,

$$
V ^ {\pi_ {\theta_ {\alpha}}} \left(s\right) = \underbrace {\sum_ {a \in \mathcal {A}} \pi_ {\theta_ {\alpha}} (a | s) r (s , a)} _ {r _ {\theta_ {\alpha}}} + \gamma \sum_ {a \in \mathcal {A}} \pi_ {\theta_ {\alpha}} (a | s) \sum_ {s ^ {\prime} \in \mathcal {S}} \mathrm {P} \left(s ^ {\prime} | s, a\right) V ^ {\pi_ {\theta_ {\alpha}}} \left(s ^ {\prime}\right).
$$

In matrix form,

$$
\begin{array}{l} V ^ {\pi_ {\theta_ {\alpha}}} = r _ {\theta_ {\alpha}} + \gamma P (\alpha) V ^ {\pi_ {\theta_ {\alpha}}} \\ \Rightarrow (I d - \gamma P (\alpha)) V ^ {\pi_ {\theta_ {\alpha}}} = r _ {\theta_ {\alpha}} \\ V ^ {\pi_ {\theta_ {\alpha}}} = (I d - \gamma P (\alpha)) ^ {- 1} r _ {\theta_ {\alpha}}. \\ \end{array}
$$

Let  $M(\alpha) \coloneqq (Id - \gamma P(\alpha))^{-1} = \sum_{t=0}^{\infty} \gamma^{t}[P(\alpha)]^{t}$ . Also, observe that

$$
\begin{array}{l} \mathbf {1} = \frac {1}{1 - \gamma} (I d - \gamma P (\alpha)) \mathbf {1} \Longrightarrow M (\alpha) \mathbf {1} = \frac {1}{1 - \gamma} \mathbf {1}. \\ \Rightarrow \forall i \| [ M (\alpha) ] _ {i,:} \| _ {1} = \frac {1}{1 - \gamma} \\ \end{array}
$$

where  $[M(\alpha)]_{i,:}$  is the  $i^{th}$  row of  $M(\alpha)$ . Hence for any vector  $x\in \mathbb{R}^S$ ,  $\| M(\alpha)x\|_{\infty}\leqslant \frac{1}{1 - \gamma}\| x\|_{\infty}$ .
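The row-sum identity for $M(\alpha)$, and the resulting $l_\infty$ contraction, are easy to confirm numerically; the snippet below is our own check with a random row-stochastic matrix standing in for $P(\alpha)$.

```python
import numpy as np

# Numerical check (ours) that the resolvent M = (I - gamma * P)^{-1} of a
# row-stochastic P has every row summing to 1 / (1 - gamma), which gives
# ||M x||_inf <= ||x||_inf / (1 - gamma).
rng = np.random.default_rng(1)
S, gamma = 5, 0.8
P = rng.random((S, S)); P /= P.sum(axis=1, keepdims=True)   # row-stochastic
Mmat = np.linalg.inv(np.eye(S) - gamma * P)
assert np.allclose(Mmat.sum(axis=1), 1.0 / (1.0 - gamma))
x = rng.standard_normal(S)
assert np.abs(Mmat @ x).max() <= np.abs(x).max() / (1.0 - gamma) + 1e-9
```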

By assumption I.6, we have  $\| r_{\theta_{\alpha}} \|_{\infty} = \max_s |r_{\theta_{\alpha}}(s)| \leqslant 1$ . Next we find the derivative of  $r_{\theta_{\alpha}}$  w.r.t  $\alpha$ .

$$
\begin{array}{l} \left| \frac {\partial r _ {\theta_ {\alpha}} (s)}{\partial \alpha} \right| = \left| \left(\frac {\partial r _ {\theta_ {\alpha}} (s)}{\partial \theta_ {\alpha}}\right) ^ {T} \frac {\partial \theta_ {\alpha}}{\partial \alpha} \right| \\ \leqslant \left| \sum_ {m ^ {\prime \prime} = 1} ^ {M} \sum_ {m = 1} ^ {M} \sum_ {a \in \mathcal {A}} \pi_ {\theta_ {\alpha}} (m ^ {\prime \prime}) (\mathbb {I} _ {m m ^ {\prime \prime}} - \pi_ {\theta_ {\alpha}} (m)) K _ {m} (s, a) r (s, a) u (m ^ {\prime \prime}) \right| \\ = \left| \sum_ {m ^ {\prime \prime} = 1} ^ {M} \sum_ {a \in \mathcal {A}} \pi_ {\theta_ {\alpha}} (m ^ {\prime \prime}) K _ {m ^ {\prime \prime}} (s, a) r (s, a) u (m ^ {\prime \prime}) - \sum_ {m ^ {\prime \prime} = 1} ^ {M} \sum_ {m = 1} ^ {M} \sum_ {a \in \mathcal {A}} \pi_ {\theta_ {\alpha}} (m ^ {\prime \prime}) \pi_ {\theta_ {\alpha}} (m) K _ {m} (s, a) r (s, a) u (m ^ {\prime \prime}) \right. \\ \leqslant \left| \sum_ {m ^ {\prime \prime} = 1} ^ {M} \sum_ {a \in \mathcal {A}} \pi_ {\theta_ {\alpha}} (m ^ {\prime \prime}) K _ {m ^ {\prime \prime}} (s, a) r (s, a) - \sum_ {m ^ {\prime \prime} = 1} ^ {M} \sum_ {m = 1} ^ {M} \sum_ {a \in \mathcal {A}} \pi_ {\theta_ {\alpha}} (m ^ {\prime \prime}) \pi_ {\theta_ {\alpha}} (m) K _ {m} (s, a) r (s, a) \right| \| u \| _ {\infty} \leqslant \| u \| _ {2}. \\ \end{array}
$$

Similarly, we can upper-bound the second derivative,

$$
\begin{array}{l} \left\| \frac{\partial^2 r_{\theta_\alpha}}{\partial \alpha^2} \right\|_\infty = \max_s \left| \frac{\partial^2 r_{\theta_\alpha}(s)}{\partial \alpha^2} \right| \\ = \max_s \left| \left\langle \frac{\partial}{\partial \theta_\alpha} \left\{ \frac{\partial r_{\theta_\alpha}(s)}{\partial \alpha} \right\}, \frac{\partial \theta_\alpha}{\partial \alpha} \right\rangle \right| \\ = \max_s \left| \left\langle \frac{\partial^2 r_{\theta_\alpha}(s)}{\partial \theta_\alpha^2}\, \frac{\partial \theta_\alpha}{\partial \alpha}, \frac{\partial \theta_\alpha}{\partial \alpha} \right\rangle \right| \leqslant \frac{5}{2} \|u\|_2^2. \\ \end{array}
$$

Next, the derivative of the value function w.r.t  $\alpha$  is given by,

$$
\frac {\partial V ^ {\pi_ {\theta_ {\alpha}}} (s)}{\partial \alpha} = \gamma e _ {s} ^ {\mathrm {T}} M (\alpha) \frac {\partial P (\alpha)}{\partial \alpha} M (\alpha) r _ {\theta_ {\alpha}} + e _ {s} ^ {\mathrm {T}} M (\alpha) \frac {\partial r _ {\theta_ {\alpha}}}{\partial \alpha}.
$$

And the second derivative,

$$
\begin{array}{l} \frac {\partial^ {2} V ^ {\pi_ {\theta_ {\alpha}}} (s)}{\partial \alpha^ {2}} = \underbrace {2 \gamma^ {2} e _ {s} ^ {\mathrm {T}} M (\alpha) \frac {\partial P (\alpha)}{\partial \alpha} M (\alpha) \frac {\partial P (\alpha)}{\partial \alpha} M (\alpha) r _ {\theta_ {\alpha}}} _ {T 1} + \underbrace {\gamma e _ {s} ^ {\mathrm {T}} M (\alpha) \frac {\partial^ {2} P (\alpha)}{\partial \alpha^ {2}} M (\alpha) r _ {\theta_ {\alpha}}} _ {T 2} \\ + \underbrace {2 \gamma e _ {s} ^ {\mathrm {T}} M (\alpha) \frac {\partial P (\alpha)}{\partial \alpha} M (\alpha) \frac {\partial r _ {\theta_ {\alpha}}}{\partial \alpha}} _ {T 3} + \underbrace {e _ {s} ^ {\mathrm {T}} M (\alpha) \frac {\partial^ {2} r _ {\theta_ {\alpha}}}{\partial \alpha^ {2}}} _ {T 4}. \\ \end{array}
$$

We use the bounds derived above to bound each of the terms in the above display. The calculations are the same as those for Lemma 7 in (Mei et al., 2020), except for the particular values of the constants. Hence we directly state the final bounds and refer to (Mei et al., 2020) for the detailed but elementary calculations.

$$
| T 1 | \leqslant \frac {4}{(1 - \gamma) ^ {3}} \| u \| _ {2} ^ {2}
$$

$$
| T 2 | \leqslant \frac {3}{(1 - \gamma) ^ {2}} \| u \| _ {2} ^ {2}
$$

$$
| T 3 | \leqslant \frac {2}{(1 - \gamma) ^ {2}} \| u \| _ {2} ^ {2}
$$

$$
\left| T 4 \right| \leqslant \frac {5 / 2}{(1 - \gamma)} \| u \| _ {2} ^ {2}.
$$

Combining the above bounds we get,

$$
\begin{array}{l} \left| \frac {\partial^ {2} V ^ {\pi_ {\theta_ {\alpha}}} (s)}{\partial \alpha^ {2}} \Big | _ {\alpha = 0} \right| \leqslant \left(\frac {8 \gamma^ {2}}{(1 - \gamma) ^ {3}} + \frac {3 \gamma}{(1 - \gamma) ^ {2}} + \frac {4 \gamma}{(1 - \gamma) ^ {2}} + \frac {5 / 2}{(1 - \gamma)}\right) \| u \| _ {2} ^ {2} \\ = \frac {7 \gamma^ {2} + 4 \gamma + 5}{2 (1 - \gamma) ^ {3}} \| u \| _ {2} ^ {2}. \\ \end{array}
$$

Finally, let  $y \in \mathbb{R}^M$  and fix a  $\theta \in \mathbb{R}^M$ :

$$
\begin{array}{l} \left| y ^ {\mathrm {T}} \frac {\partial^ {2} V ^ {\pi_ {\theta}} (s)}{\partial \theta^ {2}} y \right| = \left| \left(\frac {y}{\| y \| _ {2}}\right) ^ {\mathrm {T}} \frac {\partial^ {2} V ^ {\pi_ {\theta}} (s)}{\partial \theta^ {2}} \frac {y}{\| y \| _ {2}} \right| \cdot \| y \| _ {2} ^ {2} \\ \leqslant \max_ {\| u \| _ {2} = 1} \left| \left\langle \frac {\partial^ {2} V ^ {\pi_ {\theta}} (s)}{\partial \theta^ {2}} u, u \right\rangle \right| \cdot \| y \| _ {2} ^ {2} \\ = \max_ {\| u \| _ {2} = 1} \left| \left\langle \frac {\partial^ {2} V ^ {\pi_ {\theta_ {\alpha}}} (s)}{\partial \theta_ {\alpha} ^ {2}} \Big | _ {\alpha = 0} \frac {\partial \theta_ {\alpha}}{\partial \alpha}, \frac {\partial \theta_ {\alpha}}{\partial \alpha} \right\rangle \right| \cdot \| y \| _ {2} ^ {2} \\ = \max_ {\| u \| _ {2} = 1} \left| \frac {\partial^ {2} V ^ {\pi_ {\theta_ {\alpha}}} (s)}{\partial \alpha^ {2}} \Big | _ {\alpha = 0} \right| \cdot \| y \| _ {2} ^ {2} \\ \leqslant \frac {7 \gamma^ {2} + 4 \gamma + 5}{2 (1 - \gamma) ^ {3}} \| y \| _ {2} ^ {2}. \\ \end{array}
$$

Let  $\theta_{\xi} \coloneqq \theta + \xi (\theta' - \theta)$  where  $\xi \in [0,1]$ . By Taylor's theorem  $\forall s, \theta, \theta'$ ,

$$
\begin{array}{l} \left| V ^ {\pi_ {\theta^ {\prime}}} (s) - V ^ {\pi_ {\theta}} (s) - \left\langle \frac {\partial V ^ {\pi_ {\theta}} (s)}{\partial \theta}, \theta^ {\prime} - \theta \right\rangle \right| = \frac {1}{2} \left| (\theta^ {\prime} - \theta) ^ {\mathrm {T}} \frac {\partial^ {2} V ^ {\pi_ {\theta_ {\xi}}} (s)}{\partial \theta_ {\xi} ^ {2}} (\theta^ {\prime} - \theta) \right| \\ \leqslant \frac {7 \gamma^ {2} + 4 \gamma + 5}{4 (1 - \gamma) ^ {3}} \| \theta^ {\prime} - \theta \| _ {2} ^ {2}. \\ \end{array}
$$

Since  $V^{\pi_{\theta}}(s)$  is  $\frac{7\gamma^2 + 4\gamma + 5}{2(1 - \gamma)^3}$ -smooth for every  $s$ ,  $V^{\pi_{\theta}}(\mu)$  is also  $\frac{7\gamma^2 + 4\gamma + 5}{2(1 - \gamma)^3}$ -smooth.
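As a sanity check, the smoothness claim can be verified numerically on a small synthetic instance. The sketch below is illustrative only: the tiny MDP, the base controllers, and all helper names are assumptions made for the check, not part of the paper's setup. It builds the improper mixture policy, evaluates  $V^{\pi_{\theta}}(\mu)$  in closed form, and checks the quadratic Taylor bound with  $\beta = \frac{7\gamma^2 + 4\gamma + 5}{2(1-\gamma)^3}$ .

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, M, gamma = 3, 2, 4, 0.9

# Random finite MDP with rewards in [0, 1] (Assumption I.6).
P = rng.random((S, A, S)); P /= P.sum(axis=2, keepdims=True)   # P[s, a, s']
r = rng.random((S, A))                                         # r[s, a]
K = rng.random((M, S, A)); K /= K.sum(axis=2, keepdims=True)   # base controllers
mu = np.ones(S) / S                                            # start distribution

def value(theta):
    pi = np.exp(theta - theta.max()); pi /= pi.sum()           # softmax mixture weights
    Kmix = np.einsum('m,msa->sa', pi, K)                       # improper policy
    P_th = np.einsum('sa,sat->st', Kmix, P)                    # induced state transitions
    r_th = (Kmix * r).sum(axis=1)
    v = np.linalg.solve(np.eye(S) - gamma * P_th, r_th)        # V = (I - gamma P)^{-1} r
    return mu @ v

def grad(theta, eps=1e-6):
    # Central-difference gradient of theta -> V^{pi_theta}(mu).
    g = np.zeros(M)
    for i in range(M):
        e = np.zeros(M); e[i] = eps
        g[i] = (value(theta + e) - value(theta - e)) / (2 * eps)
    return g

beta = (7 * gamma**2 + 4 * gamma + 5) / (2 * (1 - gamma)**3)
for _ in range(100):
    th, d = rng.normal(size=M), rng.normal(size=M) * 0.5
    lhs = abs(value(th + d) - value(th) - grad(th) @ d)
    assert lhs <= beta / 2 * (d @ d) + 1e-6                    # smoothness bound
```

The bound is very loose for  $\gamma = 0.9$  (the constant is on the order of  $10^3$ ), so the check passes with a wide margin; it is a consistency test, not a tightness test.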

Lemma I.4 (Value Difference Lemma-1). For any two policies  $\pi$  and  $\pi'$ , and for any state  $s \in \mathcal{S}$ , the following holds.

$$
V ^ {\pi^ {\prime}} (s) - V ^ {\pi} (s) = \frac {1}{1 - \gamma} \sum_ {s ^ {\prime} \in \mathcal {S}} d _ {s} ^ {\pi^ {\prime}} (s ^ {\prime}) \sum_ {m = 1} ^ {M} \pi_ {m} ^ {\prime} \tilde {A} (s ^ {\prime}, m).
$$

Proof.

$$
\begin{array}{l} V ^ {\pi^ {\prime}} (s) - V ^ {\pi} (s) = \sum_ {m = 1} ^ {M} \pi_ {m} ^ {\prime} \tilde {Q} ^ {\prime} (s, m) - \sum_ {m = 1} ^ {M} \pi_ {m} \tilde {Q} (s, m) \\ = \sum_ {m = 1} ^ {M} \pi_ {m} ^ {\prime} \left(\tilde {Q} ^ {\prime} (s, m) - \tilde {Q} (s, m)\right) + \sum_ {m = 1} ^ {M} \left(\pi_ {m} ^ {\prime} - \pi_ {m}\right) \tilde {Q} (s, m) \\ = \sum_ {m = 1} ^ {M} (\pi_ {m} ^ {\prime} - \pi_ {m}) \tilde {Q} (s, m) + \gamma \sum_ {m = 1} ^ {M} \pi_ {m} ^ {\prime} \sum_ {a \in \mathcal {A}} K _ {m} (s, a) \sum_ {s ^ {\prime} \in \mathcal {S}} \mathbb {P} (s ^ {\prime} | s, a) \left[ V ^ {\pi^ {\prime}} (s ^ {\prime}) - V ^ {\pi} (s ^ {\prime}) \right] \\ = \frac {1}{1 - \gamma} \sum_ {s ^ {\prime} \in \mathcal {S}} d _ {s} ^ {\pi^ {\prime}} (s ^ {\prime}) \sum_ {m ^ {\prime} = 1} ^ {M} \left(\pi_ {m ^ {\prime}} ^ {\prime} - \pi_ {m ^ {\prime}}\right) \tilde {Q} \left(s ^ {\prime}, m ^ {\prime}\right) \\ = \frac {1}{1 - \gamma} \sum_ {s ^ {\prime} \in \mathcal {S}} d _ {s} ^ {\pi^ {\prime}} (s ^ {\prime}) \sum_ {m ^ {\prime} = 1} ^ {M} \pi_ {m ^ {\prime}} ^ {\prime} \left(\tilde {Q} (s ^ {\prime}, m ^ {\prime}) - V (s ^ {\prime})\right) \\ = \frac {1}{1 - \gamma} \sum_ {s ^ {\prime} \in \mathcal {S}} d _ {s} ^ {\pi^ {\prime}} (s ^ {\prime}) \sum_ {m ^ {\prime} = 1} ^ {M} \pi_ {m ^ {\prime}} ^ {\prime} \tilde {A} (s ^ {\prime}, m ^ {\prime}). \\ \end{array}
$$
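This identity is exact and can be checked numerically on a small synthetic instance. The sketch below is illustrative only (the tiny MDP, controllers, and helper names are assumptions, not from the paper): the improper-learning problem over  $M$  controllers is itself an MDP with "actions"  $m$ , so  $\tilde{Q}^{\pi}(s,m)$  and  $d_s^{\pi'}$  have closed forms.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, M, gamma = 3, 2, 4, 0.9

P = rng.random((S, A, S)); P /= P.sum(axis=2, keepdims=True)   # P[s, a, s']
r = rng.random((S, A))
K = rng.random((M, S, A)); K /= K.sum(axis=2, keepdims=True)   # base controllers

def evaluate(pi):
    """Return V (S,), Qtilde (S, M), and the state-transition matrix of mixture pi."""
    Kmix = np.einsum('m,msa->sa', pi, K)
    P_pi = np.einsum('sa,sat->st', Kmix, P)
    r_pi = (Kmix * r).sum(axis=1)
    v = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)
    # Qtilde(s, m): play controller m for one step, then follow the mixture pi.
    Q = np.einsum('msa,sa->ms', K, r) + gamma * np.einsum('msa,sat,t->ms', K, P, v)
    return v, Q.T, P_pi

pi = rng.dirichlet(np.ones(M))        # mixture pi
pi2 = rng.dirichlet(np.ones(M))       # mixture pi'
v, Q, _ = evaluate(pi)
v2, _, P2 = evaluate(pi2)
A_tilde = Q - v[:, None]              # Atilde(s, m) = Qtilde(s, m) - V(s)

for s in range(S):
    # d_s^{pi'}(s') = (1 - gamma) e_s^T (I - gamma P')^{-1}
    d = (1 - gamma) * np.linalg.solve((np.eye(S) - gamma * P2).T, np.eye(S)[s])
    rhs = d @ (A_tilde @ pi2) / (1 - gamma)
    assert abs((v2[s] - v[s]) - rhs) < 1e-8   # Lemma I.4, verified per start state
```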

Lemma I.5 (Value Difference Lemma-2). For any two policies  $\pi$  and  $\pi'$  and any state  $s \in \mathcal{S}$ , the following holds.

$$
V ^ {\pi^ {\prime}} (s) - V ^ {\pi} (s) = \frac {1}{1 - \gamma} \sum_ {s ^ {\prime} \in \mathcal {S}} d _ {s} ^ {\pi} (s ^ {\prime}) \sum_ {m = 1} ^ {M} (\pi_ {m} ^ {\prime} - \pi_ {m}) \tilde {Q} ^ {\pi^ {\prime}} (s ^ {\prime}, m).
$$


Proof. We will use  $\tilde{Q}$  for  $\tilde{Q}^{\pi}$  and  $\tilde{Q}'$  for  $\tilde{Q}^{\pi'}$  as a shorthand.

$$
\begin{array}{l} V ^ {\pi^ {\prime}} (s) - V ^ {\pi} (s) = \sum_ {m = 1} ^ {M} \pi_ {m} ^ {\prime} \tilde {Q} ^ {\prime} (s, m) - \sum_ {m = 1} ^ {M} \pi_ {m} \tilde {Q} (s, m) \\ = \sum_ {m = 1} ^ {M} \left(\pi_ {m} ^ {\prime} - \pi_ {m}\right) \tilde {Q} ^ {\prime} (s, m) + \sum_ {m = 1} ^ {M} \pi_ {m} \left(\tilde {Q} ^ {\prime} (s, m) - \tilde {Q} (s, m)\right) \\ = \sum_ {m = 1} ^ {M} \left(\pi_ {m} ^ {\prime} - \pi_ {m}\right) \tilde {Q} ^ {\prime} (s, m) + \gamma \sum_ {m = 1} ^ {M} \pi_ {m} \sum_ {a \in \mathcal {A}} K _ {m} (s, a) \sum_ {s ^ {\prime} \in \mathcal {S}} \mathbb {P} \left(s ^ {\prime} \mid s, a\right) \left[ V ^ {\prime} \left(s ^ {\prime}\right) - V \left(s ^ {\prime}\right) \right] \\ = \frac {1}{1 - \gamma} \sum_ {s ^ {\prime} \in \mathcal {S}} d _ {s} ^ {\pi} (s ^ {\prime}) \sum_ {m = 1} ^ {M} (\pi_ {m} ^ {\prime} - \pi_ {m}) \tilde {Q} ^ {\prime} (s ^ {\prime}, m). \\ \end{array}
$$

Assumption I.6. The reward satisfies  $r(s,a)\in [0,1]$  for all pairs  $(s,a)\in \mathcal{S}\times \mathcal{A}$ .

Assumption I.7. Let  $\pi^{*} := \operatorname*{argmax}_{\pi \in \mathcal{P}_{M}} V^{\pi}(s_{0})$ . We make the following assumption.

$$
\mathbb {E} _ {m \sim \pi^ {*}} \left[ Q ^ {\pi_ {\theta}} (s, m) \right] - V ^ {\pi_ {\theta}} (s) \geqslant 0, \forall s \in S, \forall \pi_ {\theta} \in \Pi .
$$

Let the best controller be a point in the  $M$ -simplex, i.e.,  $K^{*} := \sum_{m=1}^{M} \pi_{m}^{*} K_{m}$ .

Lemma I.8 (NUL1).  $\left\| \frac{\partial}{\partial \theta} V^{\pi_{\theta}}(\mu) \right\|_2 \geqslant \frac{1}{\sqrt{M}} \left( \min_{m: \pi_{m}^* > 0} \pi_{\theta_m} \right) \times \left\| \frac{d_{\rho}^{\pi^*}}{d_{\mu}^{\pi_\theta}} \right\|_\infty^{-1} \times [V^*(\rho) - V^{\pi_{\theta}}(\rho)]$ .

Proof.

$$
\begin{array}{l} \left\| \frac {\partial}{\partial \theta} V ^ {\pi_ {\theta}} (\mu) \right\| _ {2} = \left(\sum_ {m = 1} ^ {M} \left(\frac {\partial V ^ {\pi_ {\theta}} (\mu)}{\partial \theta_ {m}}\right) ^ {2}\right) ^ {1 / 2} \\ \geqslant \frac {1}{\sqrt {M}} \sum_ {m = 1} ^ {M} \left| \frac {\partial V ^ {\pi_ {\theta}} (\mu)}{\partial \theta_ {m}} \right| \quad \text {(Cauchy-Schwarz)} \\ = \frac {1}{\sqrt {M}} \sum_ {m = 1} ^ {M} \frac {1}{1 - \gamma} \left| \sum_ {s \in \mathcal {S}} d _ {\mu} ^ {\pi_ {\theta}} (s) \pi_ {m} \tilde {A} (s, m) \right| \quad \text {(Lemma I.2)} \\ \geqslant \frac {1}{\sqrt {M}} \sum_ {m = 1} ^ {M} \frac {\pi_ {m} ^ {*} \pi_ {m}}{1 - \gamma} \left| \sum_ {s \in \mathcal {S}} d _ {\mu} ^ {\pi_ {\theta}} (s) \tilde {A} (s, m) \right| \\ \geqslant \left(\min_ {m: \pi_ {m} ^ {*} > 0} \pi_ {\theta_ {m}}\right) \frac {1}{\sqrt {M}} \sum_ {m = 1} ^ {M} \frac {\pi_ {m} ^ {*}}{1 - \gamma} \left| \sum_ {s \in \mathcal {S}} d _ {\mu} ^ {\pi_ {\theta}} (s) \tilde {A} (s, m) \right| \\ \geqslant \left(\min_ {m: \pi_ {m} ^ {*} > 0} \pi_ {\theta_ {m}}\right) \frac {1}{\sqrt {M}} \left| \sum_ {m = 1} ^ {M} \frac {\pi_ {m} ^ {*}}{1 - \gamma} \sum_ {s \in \mathcal {S}} d _ {\mu} ^ {\pi_ {\theta}} (s) \tilde {A} (s, m) \right| \\ \end{array}
$$

$$
\begin{array}{l} = \left(\min_ {m: \pi_ {m} ^ {*} > 0} \pi_ {\theta_ {m}}\right) \frac {1}{\sqrt {M}} \left| \sum_ {s \in \mathcal {S}} d _ {\mu} ^ {\pi_ {\theta}} (s) \sum_ {m = 1} ^ {M} \frac {\pi_ {m} ^ {*}}{1 - \gamma} \tilde {A} (s, m) \right| \\ = \left(\min_ {m: \pi_ {m} ^ {*} > 0} \pi_ {\theta_ {m}}\right) \frac {1}{\sqrt {M}} \sum_ {s \in \mathcal {S}} d _ {\mu} ^ {\pi_ {\theta}} (s) \sum_ {m = 1} ^ {M} \frac {\pi_ {m} ^ {*}}{1 - \gamma} \tilde {A} (s, m) \quad \text {(Assumption I.7)} \\ \geqslant \frac {1}{\sqrt {M}} \frac {1}{1 - \gamma} \left(\min_ {m: \pi_ {m} ^ {*} > 0} \pi_ {\theta_ {m}}\right) \left\| \frac {d _ {\rho} ^ {\pi^ {*}}}{d _ {\mu} ^ {\pi_ {\theta}}} \right\| _ {\infty} ^ {- 1} \sum_ {s \in \mathcal {S}} d _ {\rho} ^ {\pi^ {*}} (s) \sum_ {m = 1} ^ {M} \pi_ {m} ^ {*} \tilde {A} (s, m) \\ = \frac {1}{\sqrt {M}} \left(\min_ {m: \pi_ {m} ^ {*} > 0} \pi_ {\theta_ {m}}\right) \left\| \frac {d _ {\rho} ^ {\pi^ {*}}}{d _ {\mu} ^ {\pi_ {\theta}}} \right\| _ {\infty} ^ {- 1} \left[ V ^ {*} (\rho) - V ^ {\pi_ {\theta}} (\rho) \right] \quad \text {(Lemma I.4)}. \\ \end{array}
$$

# I.1. Proof of Theorem 4.2

Lemma I.9 (Modified Policy Gradient Theorem).  $\nabla_{\theta}V^{\pi_{\theta}}(\rho)$  =  $\mathbb{E}_{(s,m)\sim \nu_{\pi_\theta}}[\tilde{Q}^{\pi_\theta}(s,m)\psi_\theta (m)]$  =  $\mathbb{E}_{(s,m)\sim \nu_{\pi_\theta}}[\tilde{A}^{\pi_\theta}(s,m)\psi_\theta (m)],$  where  $\psi_{\theta}(m)\coloneqq \nabla_{\theta}\log (\pi_{\theta}(m))$ .

Let  $\beta := \frac{7\gamma^2 + 4\gamma + 5}{2(1 - \gamma)^3}$  be the smoothness constant from Lemma I.3. We have that,

$$
\begin{array}{l} V ^ {*} (\rho) - V ^ {\pi_ {\theta}} (\rho) = \frac {1}{1 - \gamma} \sum_ {s \in \mathcal {S}} d _ {\rho} ^ {\pi_ {\theta}} (s) \sum_ {m = 1} ^ {M} \left(\pi_ {m} ^ {*} - \pi_ {m}\right) \tilde {Q} ^ {\pi^ {*}} (s, m) \quad \text {(Lemma I.5)} \\ = \frac {1}{1 - \gamma} \sum_ {s \in \mathcal {S}} \frac {d _ {\rho} ^ {\pi_ {\theta}} (s)}{d _ {\mu} ^ {\pi_ {\theta}} (s)} d _ {\mu} ^ {\pi_ {\theta}} (s) \sum_ {m = 1} ^ {M} (\pi_ {m} ^ {*} - \pi_ {m}) \tilde {Q} ^ {\pi^ {*}} (s, m) \\ \leqslant \frac {1}{1 - \gamma} \left\| \frac {1}{d _ {\mu} ^ {\pi_ {\theta}}} \right\| _ {\infty} \sum_ {s \in \mathcal {S}} d _ {\mu} ^ {\pi_ {\theta}} (s) \sum_ {m = 1} ^ {M} \left(\pi_ {m} ^ {*} - \pi_ {m}\right) \tilde {Q} ^ {\pi^ {*}} (s, m) \\ \leqslant \frac {1}{(1 - \gamma) ^ {2}} \left\| \frac {1}{\mu} \right\| _ {\infty} \sum_ {s \in \mathcal {S}} d _ {\mu} ^ {\pi_ {\theta}} (s) \sum_ {m = 1} ^ {M} \left(\pi_ {m} ^ {*} - \pi_ {m}\right) \tilde {Q} ^ {\pi^ {*}} (s, m) \\ = \frac {1}{(1 - \gamma)} \left\| \frac {1}{\mu} \right\| _ {\infty} \left[ V ^ {*} (\mu) - V ^ {\pi_ {\theta}} (\mu) \right] \quad \text {(Lemma I.5)}. \\ \end{array}
$$

Let  $\delta_t \coloneqq V^*(\mu) - V^{\pi_{\theta_t}}(\mu)$ .

$$
\begin{array}{l} \delta_ {t + 1} - \delta_ {t} = V ^ {\pi_ {\theta_ {t}}} (\mu) - V ^ {\pi_ {\theta_ {t + 1}}} (\mu) \\ \leqslant - \frac {1}{2 \beta} \left\| \frac {\partial}{\partial \theta} V ^ {\pi_ {\theta_ {t}}} (\mu) \right\| _ {2} ^ {2} \quad \text {(Lemmas I.3 and I.10)} \\ \leqslant - \frac {1}{2 \beta} \frac {1}{M} \left(\min_ {m: \pi_ {m} ^ {*} > 0} \pi_ {\theta_ {t}} (m)\right) ^ {2} \left\| \frac {d _ {\mu} ^ {\pi^ {*}}}{d _ {\mu} ^ {\pi_ {\theta_ {t}}}} \right\| _ {\infty} ^ {- 2} \delta_ {t} ^ {2} \quad \text {(Lemma I.8 with } \rho = \mu \text {)} \\ \leqslant - \frac {1}{2 \beta} \frac {(1 - \gamma) ^ {2}}{M} \left(\min_ {m: \pi_ {m} ^ {*} > 0} \pi_ {\theta_ {t}} (m)\right) ^ {2} \left\| \frac {d _ {\mu} ^ {\pi^ {*}}}{\mu} \right\| _ {\infty} ^ {- 2} \delta_ {t} ^ {2} \quad \text {(since } d _ {\mu} ^ {\pi_ {\theta_ {t}}} \geqslant (1 - \gamma) \mu \text {)} \\ \leqslant - \frac {1}{2 \beta} \frac {(1 - \gamma) ^ {2}}{M} \left(\min_ {1 \leqslant s \leqslant t} \min_ {m: \pi_ {m} ^ {*} > 0} \pi_ {\theta_ {s}} (m)\right) ^ {2} \left\| \frac {d _ {\mu} ^ {\pi^ {*}}}{\mu} \right\| _ {\infty} ^ {- 2} \delta_ {t} ^ {2} \\ = - \frac {1}{2 \beta} \frac {(1 - \gamma) ^ {2}}{M} \left\| \frac {d _ {\mu} ^ {\pi^ {*}}}{\mu} \right\| _ {\infty} ^ {- 2} c _ {t} ^ {2} \delta_ {t} ^ {2}, \\ \end{array}
$$

where  $c_{t} \coloneqq \min_{1 \leqslant s \leqslant t} \min_{m: \pi_{m}^{*} > 0} \pi_{\theta_{s}}(m)$ . Hence we have that,

$$
\delta_ {t + 1} \leqslant \delta_ {t} - \frac {1}{2 \beta} \frac {(1 - \gamma) ^ {2}}{M} \left\| \frac {d _ {\mu} ^ {\pi^ {*}}}{\mu} \right\| _ {\infty} ^ {- 2} c _ {t} ^ {2} \delta_ {t} ^ {2}. \tag {19}
$$

The rest of the proof follows from an induction argument over  $t \geqslant 1$ .

For ease of notation, let  $\varphi_t \coloneqq \frac{2\beta M}{c_t^2(1 - \gamma)^2} \left\| \frac{d_\mu^{\pi^*}}{\mu} \right\|_\infty^2$ . We need to show that  $\delta_t \leqslant \frac{\varphi_t}{t}$  for all  $t \geqslant 1$ .

Base case: Since  $\delta_t \leqslant \frac{1}{1 - \gamma}$  and  $c_t \in (0, 1)$ , the claim holds for all  $t \leqslant \frac{2\beta M}{(1 - \gamma)} \left\| \frac{d_\mu^{\pi^*}}{\mu} \right\|_\infty^2$ ; in particular, it holds at  $t = 1$ .

Induction step: Fix a  $t \geqslant 2$  and assume  $\delta_t \leqslant \frac{\varphi_t}{t}$ .

Let  $g: \mathbb{R} \to \mathbb{R}$  be defined by  $g(x) = x - \frac{1}{\varphi_t} x^2$ . One can easily verify that  $g$  is monotonically increasing on  $\left[0, \frac{\varphi_t}{2}\right]$ , and that  $\delta_t \leqslant \frac{\varphi_t}{t} \leqslant \frac{\varphi_t}{2}$  for  $t \geqslant 2$ . Combining this with equation 19, we have

$$
\begin{array}{l} \delta_ {t + 1} \leqslant \delta_ {t} - \frac {1}{\varphi_ {t}} \delta_ {t} ^ {2} \\ = g \left(\delta_ {t}\right) \\ \leqslant g \left(\frac {\varphi_ {t}}{t}\right) \\ = \frac {\varphi_ {t}}{t} - \frac {\varphi_ {t}}{t ^ {2}} \\ = \varphi_ {t} \left(\frac {1}{t} - \frac {1}{t ^ {2}}\right) \\ \leqslant \varphi_ {t} \left(\frac {1}{t + 1}\right) \\ \leqslant \varphi_ {t + 1} \left(\frac {1}{t + 1}\right), \\ \end{array}
$$

where the last step follows from the fact that  $c_{t + 1} \leqslant c_t$  (a minimum over a larger set cannot be larger), and hence  $\varphi_{t+1} \geqslant \varphi_t$ . This completes the proof.
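The induction above says that any sequence satisfying the recursion in equation 19 decays at an  $O(1/t)$  rate. A minimal numerical sketch (holding  $\varphi$  constant over  $t$ , an assumption made purely for illustration, and using an arbitrary value for it) simulates the worst case of the recursion and checks the claimed bound at every step:

```python
phi = 50.0    # stand-in for the constant phi_t (illustrative value, >= delta_1)
delta = 1.0   # delta_1 <= phi / 1 holds at t = 1
for t in range(1, 100000):
    assert delta <= phi / t             # the claimed O(1/t) bound
    delta = delta - delta**2 / phi      # worst case of the recursion (19)
```

Running with any  $\varphi \geqslant \delta_1$  passes; the key fact, mirrored in the proof, is that  $x \mapsto x - x^2/\varphi$  is increasing on  $[0, \varphi/2]$ , so the worst-case trajectory dominates all admissible ones.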

Lemma I.10. Let  $f: \mathbb{R}^M \to \mathbb{R}$  be  $\beta$ -smooth. Then one step of gradient ascent with learning rate  $\frac{1}{\beta}$ , i.e.,  $x' = x + \frac{1}{\beta}\frac{df(x)}{dx}$ , guarantees for all  $x \in \mathbb{R}^M$ :

$$
f (x) - f (x ^ {\prime}) \leqslant - \frac {1}{2 \beta} \left\| \frac {d f (x)}{d x} \right\| _ {2} ^ {2}.
$$

Proof.

$$
\begin{array}{l} f (x) - f \left(x ^ {\prime}\right) \leqslant - \left\langle \frac {\partial f (x)}{\partial x}, x ^ {\prime} - x \right\rangle + \frac {\beta}{2} \| x ^ {\prime} - x \| _ {2} ^ {2} \\ = - \frac {1}{\beta} \left\| \frac {d f (x)}{d x} \right\| _ {2} ^ {2} + \frac {\beta}{2} \frac {1}{\beta^ {2}} \left\| \frac {d f (x)}{d x} \right\| _ {2} ^ {2} \\ = - \frac {1}{2 \beta} \left\| \frac {d f (x)}{d x} \right\| _ {2} ^ {2}. \\ \end{array}
$$

□
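Lemma I.10 is easy to check numerically on a concave quadratic, which is  $\beta$ -smooth with  $\beta$  equal to the largest eigenvalue of the negated Hessian. The instance below is an illustrative assumption, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5
A = rng.normal(size=(d, d))
Q = A @ A.T + np.eye(d)                  # positive definite Hessian of -f
beta = np.linalg.eigvalsh(Q).max()       # smoothness constant of f
b = rng.normal(size=d)

f = lambda x: -0.5 * x @ Q @ x + b @ x   # beta-smooth concave objective
grad_f = lambda x: -Q @ x + b

for _ in range(100):
    x = rng.normal(size=d)
    x_next = x + grad_f(x) / beta        # one gradient-ascent step, lr = 1/beta
    g2 = grad_f(x) @ grad_f(x)
    assert f(x) - f(x_next) <= -g2 / (2 * beta) + 1e-9   # Lemma I.10
```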

# J. Proofs for (Natural) Actor-critic based improper learning

We will begin with some useful lemmas.

Lemma J.1. For any  $\theta, \theta' \in \mathbb{R}^M$ , we have  $\| \psi_{\theta}(m) - \psi_{\theta'}(m) \|_2 \leqslant \| \theta - \theta' \|_2$ .

Proof. Recall,  $\psi_{\theta}(m) \coloneqq \nabla_{\theta} \log \pi_{\theta}(m)$ . Fix  $m' \in [M]$ ,

$$
\begin{array}{l} \frac {\partial \log \pi_ {\theta} (m)}{\partial \theta_ {m ^ {\prime}}} = \frac {\partial \log \left(\frac {e ^ {\theta_ {m}}}{\sum_ {j = 1} ^ {M} e ^ {\theta_ {j}}}\right)}{\partial \theta_ {m ^ {\prime}}} \\ = \frac {\partial}{\partial \theta_ {m ^ {\prime}}} \left(\theta_ {m} - \log \left(\sum_ {j = 1} ^ {M} e ^ {\theta_ {j}}\right)\right) \\ = \mathbb {1} \{m ^ {\prime} = m \} - \frac {e ^ {\theta_ {m ^ {\prime}}}}{\sum_ {j = 1} ^ {M} e ^ {\theta_ {j}}} \\ = \mathbb {1} \{m ^ {\prime} = m \} - \pi_ {\theta} (m ^ {\prime}). \\ \end{array}
$$

$$
\begin{array}{l} \left\| \psi_ {\theta} (m) - \psi_ {\theta^ {\prime}} (m) \right\| _ {2} = \left\| \nabla_ {\theta} \log \pi_ {\theta} (m) - \nabla_ {\theta^ {\prime}} \log \pi_ {\theta^ {\prime}} (m) \right\| _ {2} \\ = \left\| \pi_ {\theta} (\cdot) - \pi_ {\theta^ {\prime}} (\cdot) \right\| _ {2} \\ \leqslant^ {(*)} \| \theta - \theta^ {\prime} \| _ {2}. \\ \end{array}
$$

Here  $(^{*})$  follows from the fact that the softmax function is 1-Lipschitz (Gao & Pavel, 2017).

Lemma J.2. For all  $m \in [M]$  and  $\theta \in \mathbb{R}^M$ ,  $\| \psi_{\theta}(m)\|_2 \leqslant \sqrt{2}$ .

Proof. Write  $\psi_{\theta}(m) = e_m - \pi_{\theta}(\cdot)$ , where  $e_m$  is the  $m$ -th standard basis vector. Then  $\| \psi_{\theta}(m)\|_2^2 = (1 - \pi_{\theta}(m))^2 + \sum_{m' \neq m} \pi_{\theta}(m')^2 \leqslant 1 + 1 = 2$ , where the last inequality follows because the 2-norm of a probability vector is bounded by 1.

Lemma J.3. For all  $\theta, \theta' \in \mathbb{R}^M$ ,  $\| \pi_\theta(.) - \pi_{\theta'}(.)\|_{TV} \leqslant \frac{\sqrt{M}}{2}\|\theta - \theta'\|_2$ .

Proof.

$$
\begin{array}{l} \left\| \pi_ {\theta} (.) - \pi_ {\theta^ {\prime}} (.) \right\| _ {T V} = \frac {1}{2} \left\| \pi_ {\theta} (.) - \pi_ {\theta^ {\prime}} (.) \right\| _ {1} \\ \leqslant \frac {\sqrt {M}}{2} \left\| \pi_ {\theta} (.) - \pi_ {\theta^ {\prime}} (.) \right\| _ {2}. \\ \end{array}
$$

The inequality follows from the relation between the 1-norm and the 2-norm:  $\|x\|_1 \leqslant \sqrt{M}\|x\|_2$  for  $x \in \mathbb{R}^M$ .
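Lemmas J.1, J.2, and J.3 are all statements about the softmax map and can be verified numerically. The sketch below (all names are illustrative) samples random parameter pairs and checks each bound:

```python
import numpy as np

rng = np.random.default_rng(2)
M = 6

def softmax(t):
    z = np.exp(t - t.max())
    return z / z.sum()

def psi(theta, m):
    # grad_theta log pi_theta(m) = 1{m' = m} - pi_theta(m')
    e = np.zeros(M); e[m] = 1.0
    return e - softmax(theta)

for _ in range(200):
    th, th2 = rng.normal(size=M), rng.normal(size=M)
    m = int(rng.integers(M))
    # Lemma J.2: ||psi_theta(m)||_2 <= sqrt(2)
    assert np.linalg.norm(psi(th, m)) <= np.sqrt(2) + 1e-12
    # Lemma J.1: psi is 1-Lipschitz in theta
    assert np.linalg.norm(psi(th, m) - psi(th2, m)) <= np.linalg.norm(th - th2) + 1e-12
    # Lemma J.3: TV distance vs 2-norm of the parameters
    tv = 0.5 * np.abs(softmax(th) - softmax(th2)).sum()
    assert tv <= np.sqrt(M) / 2 * np.linalg.norm(th - th2) + 1e-12
```

The Lipschitz check in the middle is the content of  $(*)$  in Lemma J.1: the Jacobian of softmax is  $\mathrm{diag}(\pi) - \pi\pi^\top$ , whose spectral norm is at most 1.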

Proposition J.4. For any  $\theta, \theta' \in \mathbb{R}^M$ ,

$$
\left\| \nabla V (\theta) - \nabla V \left(\theta^ {\prime}\right) \right\| _ {2} \leqslant \sqrt {M} L _ {V} \| \theta - \theta^ {\prime} \| _ {2},
$$

where  $L_{V} = \frac{2\sqrt{2}C_{\kappa\xi} + 1}{1 - \gamma}$ , and  $C_{\kappa \xi} = \left(1 + \left\lceil \log_{\xi}\frac{1}{\kappa}\right\rceil +\frac{1}{1 - \xi}\right)$ .

Proof. We follow the same steps as in Proposition 1 of (Xu et al., 2020), along with Lemmas J.1, J.2, and J.3 and the fact that the maximum reward is bounded by 1.

We will now restate a useful result from (Xu et al., 2020), about the convergence of the critic parameter  $w_{t}$  to the equilibrium point  $w^{*}$  of the underlying ODE, applied to our setting.

Proposition J.5. Suppose Assumptions 5.2 and 5.3 hold, let  $\beta \leqslant \min \left\{\frac{\Gamma_L}{16}, \frac{8}{\Gamma_L}\right\}$ , and let  $H \geqslant \left(\frac{4}{\Gamma_L} + 2\alpha\right)\left[\frac{1536[1 + (\kappa - 1)\xi]}{(1 - \xi)\Gamma_L}\right]$ . Then we have

$$
\mathbb {E} \left[ \| w _ {T _ {c}} - w ^ {*} \| _ {2} ^ {2} \right] \leqslant \left(1 - \frac {\Gamma_ {L}}{1 6} \alpha\right) ^ {T _ {c}} \| w _ {0} - w ^ {*} \| _ {2} ^ {2} + \left(\frac {4}{\Gamma_ {L}} + 2 \alpha\right) \frac {1 5 3 6 (1 + R _ {w} ^ {2}) [ 1 + (\kappa - 1) \xi ]}{(1 - \xi) H}.
$$

If we further let  $T_{c} \geqslant \frac{16}{\Gamma_{L}\alpha} \log \frac{2\|w_{0} - w^{*}\|_{2}^{2}}{\varepsilon}$  and  $H \geqslant \left(\frac{4}{\Gamma_{L}} + 2\alpha\right) \frac{3072(R_{w}^{2} + 1)[1 + (\kappa - 1)\xi]}{(1 - \xi)\Gamma_{L}\varepsilon}$ , then we have  $\mathbb{E}\left[\|w_{T_c}-w^*\|_2^2\right] \leqslant \varepsilon$  with total sample complexity given by  $T_{c}H = \mathcal{O}\left(\frac{1}{\alpha\varepsilon} \log \frac{1}{\varepsilon}\right)$ .

Proof. The proof follows along similar lines as Thm. 4 in Xu et al. (2020), using  $\left\| \varphi(s)(\gamma \varphi(s') - \varphi(s))^{\top} \right\|_{F} \leqslant (1 + \gamma) \leqslant 2$  and assuming  $\| \varphi(s) \|_2 \leqslant 1$  for all  $s, s' \in \mathcal{S}$ .

# J.1. Actor-critic based improper learning

Proof of Theorem 5.4. Let  $v_{t}(w) \coloneqq \frac{1}{B} \sum_{i=0}^{B-1} \mathcal{E}(s_{t,i}, m_{t,i}, s_{t,i+1}) \psi_{\theta_{t}}(m_{t,i})$  and  $A_{w}(s,m) \coloneqq \mathbb{E}_{\bar{P}}[\mathcal{E}(s,m,s')|(s,m)]$  and  $g(w,\theta) \coloneqq \mathbb{E}_{\nu_{\theta}}[A_{w}(s,m)\psi_{\theta}(m)]$  for all  $\theta \in \mathbb{R}^{M}, w \in \mathbb{R}^{d}, s \in S, m \in [M]$ . Using Prop J.4 we get,

$$
\begin{array}{l} V (\theta_ {t + 1}) \geqslant V (\theta_ {t}) + \langle \nabla_ {\theta} V (\theta_ {t}), \theta_ {t + 1} - \theta_ {t} \rangle - \frac {\sqrt {M} L _ {V}}{2} \| \theta_ {t + 1} - \theta_ {t} \| _ {2} ^ {2} \\ = V (\theta_ {t}) + \alpha \left\langle \nabla_ {\theta} V (\theta_ {t}), v _ {t} (w _ {t}) - \nabla_ {\theta} V (\theta_ {t}) + \nabla_ {\theta} V (\theta_ {t}) \right\rangle - \frac {\sqrt {M} L _ {V} \alpha^ {2}}{2} \| v _ {t} (w _ {t}) \| _ {2} ^ {2} \\ = V \left(\theta_ {t}\right) + \alpha \| \nabla_ {\theta} V \left(\theta_ {t}\right) \| _ {2} ^ {2} \\ + \alpha \left\langle \nabla_ {\theta} V (\theta_ {t}), v _ {t} (w _ {t}) - \nabla_ {\theta} V (\theta_ {t}) \right\rangle - \frac {\sqrt {M} L _ {V} \alpha^ {2}}{2} \left\| v _ {t} (w _ {t}) \right\| _ {2} ^ {2} \\ \geqslant V \left(\theta_ {t}\right) + \left(\frac {1}{2} \alpha - \sqrt {M} L _ {V} \alpha^ {2}\right) \left\| \nabla_ {\theta} V \left(\theta_ {t}\right) \right\| _ {2} ^ {2} - \left(\frac {1}{2} \alpha + \sqrt {M} L _ {V} \alpha^ {2}\right) \left\| v _ {t} \left(w _ {t}\right) - \nabla_ {\theta} V \left(\theta_ {t}\right) \right\| _ {2} ^ {2} \\ \end{array}
$$

Taking expectations and rearranging, we have

$$
\begin{array}{l} \left(\frac {1}{2} \alpha - \sqrt {M} L _ {V} \alpha^ {2}\right) \mathbb {E} \left[ \| \nabla_ {\theta} V (\theta_ {t}) \| _ {2} ^ {2} \mid \mathcal {F} _ {t} \right] \\ \leqslant \mathbb {E} \left[ V (\theta_ {t + 1}) | \mathcal {F} _ {t} \right] - V (\theta_ {t}) + \left(\frac {1}{2} \alpha + \sqrt {M} L _ {V} \alpha^ {2}\right) \mathbb {E} \left[ \| v _ {t} (w _ {t}) - \nabla_ {\theta} V (\theta_ {t}) \| _ {2} ^ {2} | \mathcal {F} _ {t} \right]. \\ \end{array}
$$

Next we upper-bound  $\mathbb{E}\left[\| v_t(w_t) - \nabla_\theta V(\theta_t)\| _2^2 \mid \mathcal{F}_t\right]$ .

$$
\begin{array}{l} \left\| v _ {t} \left(w _ {t}\right) - \nabla_ {\theta} V \left(\theta_ {t}\right) \right\| _ {2} ^ {2} \\ \leqslant 3 \left\| v _ {t} \left(w _ {t}\right) - v _ {t} \left(w _ {\theta_ {t}} ^ {*}\right) \right\| _ {2} ^ {2} + 3 \left\| v _ {t} \left(w _ {\theta_ {t}} ^ {*}\right) - g \left(w _ {\theta_ {t}} ^ {*}\right) \right\| _ {2} ^ {2} + 3 \left\| g \left(w _ {\theta_ {t}} ^ {*}\right) - \nabla_ {\theta} V \left(\theta_ {t}\right) \right\| _ {2} ^ {2}. \\ \end{array}
$$

$$
\begin{array}{l} \left\| v _ {t} \left(w _ {t}\right) - v _ {t} \left(w _ {\theta_ {t}} ^ {*}\right) \right\| _ {2} ^ {2} = \left\| \frac {1}{B} \sum_ {i = 0} ^ {B - 1} \left[ \mathcal {E} _ {w _ {t}} \left(s _ {t, i}, m _ {t, i}, s _ {t, i + 1}\right) - \mathcal {E} _ {w _ {\theta_ {t}} ^ {*}} \left(s _ {t, i}, m _ {t, i}, s _ {t, i + 1}\right) \right] \psi_ {\theta_ {t}} \left(m _ {t, i}\right) \right\| _ {2} ^ {2} \\ \leqslant \frac {1}{B} \sum_ {i = 0} ^ {B - 1} \left\| \left[ \mathcal {E} _ {w _ {t}} \left(s _ {t, i}, m _ {t, i}, s _ {t, i + 1}\right) - \mathcal {E} _ {w _ {\theta_ {t}} ^ {*}} \left(s _ {t, i}, m _ {t, i}, s _ {t, i + 1}\right) \right] \psi_ {\theta_ {t}} \left(m _ {t, i}\right) \right\| _ {2} ^ {2} \\ \leqslant \frac {2}{B} \sum_ {i = 0} ^ {B - 1} \left| \mathcal {E} _ {w _ {t}} \left(s _ {t, i}, m _ {t, i}, s _ {t, i + 1}\right) - \mathcal {E} _ {w _ {\theta_ {t}} ^ {*}} \left(s _ {t, i}, m _ {t, i}, s _ {t, i + 1}\right) \right| ^ {2} \\ = \frac {2}{B} \sum_ {i = 0} ^ {B - 1} \left| \left(\gamma \varphi \left(s _ {t, i + 1}\right) - \varphi \left(s _ {t, i}\right)\right) ^ {\top} \left(w _ {t} - w _ {\theta_ {t}} ^ {*}\right) \right| ^ {2} \\ \leqslant \frac {8}{B} \sum_ {i = 0} ^ {B - 1} \left\| w _ {t} - w _ {\theta_ {t}} ^ {*} \right\| _ {2} ^ {2} = 8 \left\| w _ {t} - w _ {\theta_ {t}} ^ {*} \right\| _ {2} ^ {2}. \\ \end{array}
$$

Next we have,

$$
\begin{array}{l} \left\| g (w _ {\theta_ {t}} ^ {*}) - \nabla_ {\theta} V (\theta_ {t}) \right\| _ {2} ^ {2} = \left\| \mathbb {E} _ {\nu_ {\theta_ {t}}} [ A _ {w _ {\theta_ {t}} ^ {*}} (s, m) \psi_ {\theta_ {t}} (m) ] - \mathbb {E} _ {\nu_ {\theta_ {t}}} [ A _ {\pi_ {\theta_ {t}}} (s, m) \psi_ {\theta_ {t}} (m) ] \right\| _ {2} ^ {2} \\ \leqslant 2 \mathbb {E} _ {\nu_ {\theta_ {t}}} \left| A _ {w _ {\theta_ {t}} ^ {*}} (s, m) - A _ {\pi_ {\theta_ {t}}} (s, m) \right| ^ {2} \\ = 2 \mathbb {E} _ {\nu_ {\theta_ {t}}} \left[ \left| \gamma \mathbb {E} \left[ V _ {w _ {\theta_ {t}} ^ {*}} (s ^ {\prime}) - V _ {\pi_ {\theta_ {t}}} (s ^ {\prime}) \mid s, m \right] + V _ {\pi_ {\theta_ {t}}} (s) - V _ {w _ {\theta_ {t}} ^ {*}} (s) \right| ^ {2} \right] \\ \leqslant 8 \Delta_ {c r i t i c}. \\ \end{array}
$$

Finally, we bound the remaining term  $\left\| v_{t}(w_{\theta_{t}}^{*}) - g(w_{\theta_{t}}^{*}) \right\|_{2}^{2}$ . Using Assumption 5.2, we have

$$
\left\| v _ {t} (w _ {\theta_ {t}} ^ {*}) - g (w _ {\theta_ {t}} ^ {*}) \right\| _ {2} ^ {2} \leqslant \mathbb {E} \left[ \left\| \frac {1}{B} \sum_ {i = 0} ^ {B - 1} \mathcal {E} _ {w _ {\theta_ {t}} ^ {*}} (s _ {t, i}, m _ {t, i}, s _ {t, i + 1}) \psi_ {\theta_ {t}} (m _ {t, i}) - \mathbb {E} _ {\nu_ {\theta_ {t}}} [ A _ {w _ {\theta_ {t}} ^ {*}} (s, m) \psi_ {\theta_ {t}} (m) ] \right\| _ {2} ^ {2} | \mathcal {F} _ {t} \right].
$$

We now proceed in a similar manner as in (Xu et al., 2020) (eq. 24 to eq. 26); using Lemma J.2, we have

$$
\mathbb {E} \left[ \left\| v _ {t} (w _ {\theta_ {t}} ^ {*}) - g (w _ {\theta_ {t}} ^ {*}) \right\| _ {2} ^ {2} | \mathcal {F} _ {t} \right] \leqslant \frac {3 2 (1 + R _ {w}) ^ {2} [ 1 + (\kappa - 1) \xi ]}{B (1 - \xi)}.
$$

Putting these bounds together, we have

$$
\mathbb {E} \left[ \left\| v _ {t} (w _ {t}) - \nabla_ {\theta} V (\theta_ {t}) \right\| _ {2} ^ {2} \mid \mathcal {F} _ {t} \right] \leqslant \frac {96 (1 + R _ {w}) ^ {2} [ 1 + (\kappa - 1) \xi ]}{B (1 - \xi)} + 24 \mathbb {E} \left[ \left\| w _ {t} - w _ {\theta_ {t}} ^ {*} \right\| _ {2} ^ {2} \right] + 24 \Delta_ {c r i t i c}.
$$

Hence we get,

$$
\begin{array}{l} \left(\frac {1}{2} \alpha - \sqrt {M} L _ {V} \alpha^ {2}\right) \mathbb {E} \left[ \| \nabla_ {\theta} V (\theta_ {t}) \| _ {2} ^ {2} \right] \\ \leqslant \mathbb {E} \left[ V \left(\theta_ {t + 1}\right) \right] - \mathbb {E} \left[ V \left(\theta_ {t}\right) \right] \\ + \left(\frac {1}{2} \alpha + \sqrt {M} L _ {V} \alpha^ {2}\right) \left(\frac {9 6 (1 + R _ {w}) ^ {2} [ 1 + (\kappa - 1) \xi ]}{B (1 - \xi)} + 2 4 \mathbb {E} \left[ \left\| (w _ {t} - w _ {\theta_ {t}} ^ {*}) \right\| _ {2} ^ {2} \right] + 2 4 \Delta_ {c r i t i c}\right). \\ \end{array}
$$

Setting  $\alpha = \frac{1}{4L_V\sqrt{M}}$  above, we get

$$
\begin{array}{l} \left(\frac {1}{1 6 L _ {V} \sqrt {M}}\right) \mathbb {E} \left[ \| \nabla_ {\theta} V (\theta_ {t}) \| _ {2} ^ {2} \right] \leqslant \mathbb {E} \left[ V \left(\theta_ {t + 1}\right) \right] - \mathbb {E} \left[ V \left(\theta_ {t}\right) \right] \\ + \left(\frac {1}{4 L _ {V} \sqrt {M}}\right) \left(\frac {9 6 (1 + R _ {w}) ^ {2} [ 1 + (\kappa - 1) \xi ]}{B (1 - \xi)} + 2 4 \mathbb {E} \left[ \big \| (w _ {t} - w _ {\theta_ {t}} ^ {*}) \big \| _ {2} ^ {2} \right] + 2 4 \Delta_ {c r i t i c}\right). \\ \end{array}
$$

This simplifies to

$$
\begin{array}{l} \mathbb {E} \left[ \left\| \nabla_ {\theta} V (\theta_ {t}) \right\| _ {2} ^ {2} \right] \\ \leqslant 1 6 L _ {V} \sqrt {M} \left(\mathbb {E} \left[ V (\theta_ {t + 1}) \right] - \mathbb {E} \left[ V (\theta_ {t}) \right]\right) + \frac {3 8 4 (1 + R _ {w}) ^ {2} [ 1 + (\kappa - 1) \xi ]}{B (1 - \xi)} + 9 6 \mathbb {E} \left[ \left\| (w _ {t} - w _ {\theta_ {t}} ^ {*}) \right\| _ {2} ^ {2} \right] + 9 6 \Delta_ {c r i t i c}. \\ \end{array}
$$

Taking the summation over  $t = 0,1,2,\ldots ,T - 1$  and dividing by  $T$ ,

$$
\begin{array}{l} \mathbb {E} \left[ \left\| \nabla_ {\theta} V (\theta_ {\widehat {T}}) \right\| _ {2} ^ {2} \right] \\ = \frac {1}{T} \sum_ {t = 0} ^ {T - 1} \mathbb {E} \left[ \| \nabla_ {\theta} V (\theta_ {t}) \| _ {2} ^ {2} \right] \\ \leqslant \frac {1 6 L _ {V} \sqrt {M} \left(\mathbb {E} \left[ V (\theta_ {T}) \right] - \mathbb {E} \left[ V (\theta_ {0}) \right]\right)}{T} + \frac {3 8 4 (1 + R _ {w}) ^ {2} [ 1 + (\kappa - 1) \xi ]}{B (1 - \xi)} + 9 6 \frac {1}{T} \sum_ {t = 0} ^ {T - 1} \mathbb {E} \left[ \left\| (w _ {t} - w _ {\theta_ {t}} ^ {*}) \right\| _ {2} ^ {2} \right] + 9 6 \Delta_ {c r i t i c} \\ \leqslant \frac {1 6 L _ {V} \sqrt {M}}{(1 - \gamma) T} + \frac {3 8 4 (1 + R _ {w}) ^ {2} [ 1 + (\kappa - 1) \xi ]}{B (1 - \xi)} + 9 6 \frac {1}{T} \sum_ {t = 0} ^ {T - 1} \mathbb {E} \left[ \left\| (w _ {t} - w _ {\theta_ {t}} ^ {*}) \right\| _ {2} ^ {2} \right] + 9 6 \Delta_ {c r i t i c} \\ \end{array}
$$

We now let  $B \geqslant \frac{1152(1 + R_w)^2[1 + (\kappa - 1)\xi]}{(1 - \xi)\varepsilon}$ ,  $\mathbb{E}\left[\left\| w_t - w_{\theta_t}^*\right\|_2^2\right] \leqslant \frac{\varepsilon}{288}$  and  $T \geqslant \frac{48L_V\sqrt{M}}{(1 - \gamma)\varepsilon}$ ; then we have

$$
\mathbb {E} \left[ \left\| \nabla_ {\theta} V (\theta_ {\widehat {T}}) \right\| _ {2} ^ {2} \right] \leqslant \varepsilon + \mathcal {O} (\Delta_ {\text {c r i t i c}}).
$$

This leads to the final sample complexity of  $(B + HT_{c})T = \left(\frac{1}{\varepsilon} +\frac{\sqrt{M}}{\varepsilon}\log \frac{1}{\varepsilon}\right)\left(\frac{\sqrt{M}}{(1 - \gamma)^{2}\varepsilon}\right) = \mathcal{O}\left(\frac{M}{(1 - \gamma)^{2}\varepsilon^{2}}\log \frac{1}{\varepsilon}\right)$ .

# J.2. Natural-actor-critic based improper learning

# J.2.1. PROOF OF THEOREM 5.5

Proof. We first show that the natural actor-critic improper learner converges to a stationary point. We then show convergence to the global optimum, which is the part that differs from (Xu et al., 2020).

Let  $v_{t}(w) \coloneqq \frac{1}{B} \sum_{i=0}^{B-1} \mathcal{E}_{w}(s_{t,i}, m_{t,i}, s_{t,i+1}) \psi_{\theta_{t}}(m_{t,i}), \quad A_{w}(s,m) \coloneqq \mathbb{E}_{\tilde{P}}[\mathcal{E}(s,m,s')|s,m]$  and  $g(w,\theta) \coloneqq \mathbb{E}_{\nu_{\theta}}[A_{w}(s,m)\psi_{\theta}(m)]$  for  $w \in \mathbb{R}^d$  and  $\theta \in \mathbb{R}^M$ . Also let  $u_{t}(w) \coloneqq [F_{t}(\theta_{t}) + \lambda I]^{-1}\left[\frac{1}{B} \sum_{i=0}^{B-1} \mathcal{E}_{w}(s_{t,i}, m_{t,i}, s_{t,i+1}) \psi_{\theta_{t}}(m_{t,i})\right] = [F_{t}(\theta_{t}) + \lambda I]^{-1}v_{t}(w)$ .

Recalling Prop. J.4, we have the following lemma.

Lemma J.6. Assume  $\sup_{s\in S}\| \varphi (s)\| _2\leqslant 1$ . Under Assumptions 5.2 and 5.3, with the step-size chosen as  $\alpha = \frac{\lambda^2}{2\sqrt{M}L_V(1 + \lambda)}$ , we have

$$
\begin{array}{l} \mathbb {E} [ \| \nabla_ {\theta} V (\theta_ {\hat {T}}) \| _ {2} ^ {2} ] = \frac {1}{T} \sum_ {t = 0} ^ {T - 1} \mathbb {E} [ \| \nabla_ {\theta} V (\theta_ {t}) \| _ {2} ^ {2} ] \\ \leqslant \frac {1 6 \sqrt {M} L _ {V} (1 + \lambda) ^ {2}}{\lambda^ {2}} \frac {\mathbb {E} [ V (\theta_ {T}) ] - V (\theta_ {0})}{T} + \frac {1 0 8}{\lambda^ {2}} [ 2 (1 + \lambda) ^ {2} + \lambda^ {2} ] \frac {\sum_ {t = 0} ^ {T - 1} \mathbb {E} \left[ \| w _ {t} - w _ {\theta_ {t}} ^ {*} \| _ {2} ^ {2} \right]}{T} \\ + [ 2 (1 + \lambda) ^ {2} + \lambda^ {2} ] \left(\frac {3 2}{\lambda^ {4} (1 - \gamma) ^ {2}} + \frac {4 3 2 (1 + 2 R _ {w}) ^ {2}}{\lambda^ {2}}\right) \frac {1 + (\kappa - 1) \xi}{(1 - \xi) B} + \frac {2 1 6}{\lambda^ {2}} [ 2 (1 + \lambda) ^ {2} + \lambda^ {2} ] \Delta_ {c r i t i c}. \\ \end{array}
$$

Proof. The proof is similar to the first part of the proof of Thm. 6 in (Xu et al., 2020) and to that of Thm. 5.4, along with using Prop. J.4 and Lemmas J.1, J.2 and J.3.

We now move to proving the global optimality of the natural actor-critic based improper learner. Let  $KL(\cdot, \cdot)$  be the KL-divergence between two distributions. We denote  $\mathsf{D}(\theta) \coloneqq \mathsf{KL}(\pi^*, \pi_\theta)$ ,  $u_{\theta_t}^\lambda \coloneqq (F(\theta_t) + \lambda I)^{-1}\nabla_\theta V(\theta_t)$  and  $u_{\theta_t}^\dagger \coloneqq F(\theta_t)^\dagger \nabla_\theta V(\theta_t)$ . We see that

$$
\begin{array}{l} \mathrm{D}(\theta_t) - \mathrm{D}(\theta_{t+1}) \\ = \sum_{m=1}^{M} \pi^*(m) \left[\log \pi_{\theta_{t+1}}(m) - \log \pi_{\theta_t}(m)\right] \\ \stackrel{(i)}{=} \sum_{s \in \mathcal{S}} d_\rho^{\pi^*}(s) \sum_{m=1}^{M} \pi^*(m) \left[\log \pi_{\theta_{t+1}}(m) - \log \pi_{\theta_t}(m)\right] \\ = \mathbb{E}_{\nu_{\pi^*}}\left[\log \pi_{\theta_{t+1}}(m) - \log \pi_{\theta_t}(m)\right] \\ \stackrel{(ii)}{\geqslant} \mathbb{E}_{\nu_{\pi^*}}\left[\nabla_\theta \log \pi_{\theta_t}(m)\right]^\top (\theta_{t+1} - \theta_t) - \frac{\|\theta_{t+1} - \theta_t\|_2^2}{2} \\ = \mathbb{E}_{\nu_{\pi^*}}\left[\psi_{\theta_t}(m)\right]^\top (\theta_{t+1} - \theta_t) - \frac{\|\theta_{t+1} - \theta_t\|_2^2}{2} \\ = \alpha\, \mathbb{E}_{\nu_{\pi^*}}\left[\psi_{\theta_t}(m)\right]^\top u_t(w_t) - \frac{\alpha^2 \|u_t(w_t)\|_2^2}{2} \\ = \alpha\, \mathbb{E}_{\nu_{\pi^*}}\left[\psi_{\theta_t}(m)\right]^\top u_{\theta_t}^\lambda + \alpha\, \mathbb{E}_{\nu_{\pi^*}}\left[\psi_{\theta_t}(m)\right]^\top \left(u_t(w_t) - u_{\theta_t}^\lambda\right) - \frac{\alpha^2 \|u_t(w_t)\|_2^2}{2} \\ = \alpha\, \mathbb{E}_{\nu_{\pi^*}}\left[\psi_{\theta_t}(m)\right]^\top u_{\theta_t}^\dagger + \alpha\, \mathbb{E}_{\nu_{\pi^*}}\left[\psi_{\theta_t}(m)\right]^\top \left(u_{\theta_t}^\lambda - u_{\theta_t}^\dagger\right) + \alpha\, \mathbb{E}_{\nu_{\pi^*}}\left[\psi_{\theta_t}(m)\right]^\top \left(u_t(w_t) - u_{\theta_t}^\lambda\right) - \frac{\alpha^2 \|u_t(w_t)\|_2^2}{2} \\ = \alpha\, \mathbb{E}_{\nu_{\pi^*}}\left[A_{\pi_{\theta_t}}(s,m)\right] + \alpha\, \mathbb{E}_{\nu_{\pi^*}}\left[\psi_{\theta_t}(m)^\top u_{\theta_t}^\dagger - A_{\pi_{\theta_t}}(s,m)\right] + \alpha\, \mathbb{E}_{\nu_{\pi^*}}\left[\psi_{\theta_t}(m)\right]^\top \left(u_{\theta_t}^\lambda - u_{\theta_t}^\dagger\right) \\ \quad + \alpha\, \mathbb{E}_{\nu_{\pi^*}}\left[\psi_{\theta_t}(m)\right]^\top \left(u_t(w_t) - u_{\theta_t}^\lambda\right) - \frac{\alpha^2 \|u_t(w_t)\|_2^2}{2} \\ \stackrel{(iii)}{=} (1-\gamma)\left(V(\pi^*) - V(\pi_{\theta_t})\right) + \alpha\, \mathbb{E}_{\nu_{\pi^*}}\left[\psi_{\theta_t}(m)^\top u_{\theta_t}^\dagger - A_{\pi_{\theta_t}}(s,m)\right] + \alpha\, \mathbb{E}_{\nu_{\pi^*}}\left[\psi_{\theta_t}(m)\right]^\top \left(u_{\theta_t}^\lambda - u_{\theta_t}^\dagger\right) \\ \quad + \alpha\, \mathbb{E}_{\nu_{\pi^*}}\left[\psi_{\theta_t}(m)\right]^\top \left(u_t(w_t) - u_{\theta_t}^\lambda\right) - \frac{\alpha^2 \|u_t(w_t)\|_2^2}{2} \\ \geqslant (1-\gamma)\left(V(\pi^*) - V(\pi_{\theta_t})\right) - \alpha \sqrt{\mathbb{E}_{\nu_{\pi^*}}\left[\left[\psi_{\theta_t}(m)^\top u_{\theta_t}^\dagger - A_{\pi_{\theta_t}}(s,m)\right]^2\right]} + \alpha\, \mathbb{E}_{\nu_{\pi^*}}\left[\psi_{\theta_t}(m)\right]^\top \left(u_{\theta_t}^\lambda - u_{\theta_t}^\dagger\right) \\ \quad + \alpha\, \mathbb{E}_{\nu_{\pi^*}}\left[\psi_{\theta_t}(m)\right]^\top \left(u_t(w_t) - u_{\theta_t}^\lambda\right) - \frac{\alpha^2 \|u_t(w_t)\|_2^2}{2} \\ \stackrel{(iv)}{\geqslant} (1-\gamma)\left(V(\pi^*) - V(\pi_{\theta_t})\right) - \sqrt{\left\|\frac{\nu_{\pi^*}}{\nu_{\pi_{\theta_t}}}\right\|_\infty}\, \alpha \sqrt{\mathbb{E}_{\nu_{\pi_{\theta_t}}}\left[\left[\psi_{\theta_t}(m)^\top u_{\theta_t}^\dagger - A_{\pi_{\theta_t}}(s,m)\right]^2\right]} + \alpha\, \mathbb{E}_{\nu_{\pi^*}}\left[\psi_{\theta_t}(m)\right]^\top \left(u_{\theta_t}^\lambda - u_{\theta_t}^\dagger\right) \\ \quad + \alpha\, \mathbb{E}_{\nu_{\pi^*}}\left[\psi_{\theta_t}(m)\right]^\top \left(u_t(w_t) - u_{\theta_t}^\lambda\right) - \frac{\alpha^2 \|u_t(w_t)\|_2^2}{2} \\ \stackrel{(v)}{\geqslant} (1-\gamma)\left(V(\pi^*) - V(\pi_{\theta_t})\right) - \sqrt{\frac{1}{1-\gamma}\left\|\frac{\nu_{\pi^*}}{\nu_{\pi_{\theta_0}}}\right\|_\infty}\, \alpha \sqrt{\mathbb{E}_{\nu_{\pi_{\theta_t}}}\left[\left[\psi_{\theta_t}(m)^\top u_{\theta_t}^\dagger - A_{\pi_{\theta_t}}(s,m)\right]^2\right]} \\ \quad + \alpha\, \mathbb{E}_{\nu_{\pi^*}}\left[\psi_{\theta_t}(m)\right]^\top \left(u_{\theta_t}^\lambda - u_{\theta_t}^\dagger\right) + \alpha\, \mathbb{E}_{\nu_{\pi^*}}\left[\psi_{\theta_t}(m)\right]^\top \left(u_t(w_t) - u_{\theta_t}^\lambda\right) - \frac{\alpha^2 \|u_t(w_t)\|_2^2}{2} \\ \stackrel{(vi)}{\geqslant} (1-\gamma)\left(V(\pi^*) - V(\pi_{\theta_t})\right) - \sqrt{\frac{1}{1-\gamma}\left\|\frac{\nu_{\pi^*}}{\nu_{\pi_{\theta_0}}}\right\|_\infty}\, \alpha \sqrt{\mathbb{E}_{\nu_{\pi_{\theta_t}}}\left[\left[\psi_{\theta_t}(m)^\top u_{\theta_t}^\dagger - A_{\pi_{\theta_t}}(s,m)\right]^2\right]} - \alpha C_{soft} \lambda \\ \quad - 2\alpha \left\|u_t(w_t) - u_{\theta_t}^\lambda\right\|_2 - \frac{\alpha^2 \|u_t(w_t)\|_2^2}{2}, \end{array}
$$

where (i) follows by taking an extra expectation without changing the inner summand, (ii) follows by Lemma J.1 and Lemma 5 in (Xu et al., 2020), (iii) follows by the value difference lemma (Lemma I.4), (iv) follows by defining  $\left\| \frac{\nu_{\pi^*}}{\nu_{\pi_{\theta_t}}} \right\|_\infty \coloneqq \max_{s,m} \frac{\nu_{\pi^*}(s,m)}{\nu_{\pi_{\theta_t}}(s,m)}$ , (v) follows because  $\nu_{\pi_{\theta_t}}(s,m) \geqslant (1 - \gamma)\nu_{\pi_{\theta_0}}(s,m)$ , and (vi) follows by Lemma 6 in (Xu et al., 2020) and Lemma J.2.

Next, we denote the actor error as  $\Delta_{actor} \coloneqq \max_{\theta \in \mathbb{R}^M} \min_{w \in \mathbb{R}^d} \mathbb{E}_{\nu_{\pi_\theta}} \left[[\psi_\theta(m)^\top w - A_{\pi_\theta}(s, m)]^2\right]$ . Continuing the chain of inequalities above, we have

$$
\begin{array}{l} \mathrm{D}(\theta_t) - \mathrm{D}(\theta_{t+1}) \\ \geqslant (1-\gamma)\left(V(\pi^*) - V(\pi_{\theta_t})\right) - \sqrt{\frac{1}{1-\gamma}\left\|\frac{\nu_{\pi^*}}{\nu_{\pi_{\theta_0}}}\right\|_\infty}\, \alpha \sqrt{\Delta_{actor}} - \alpha C_{soft} \lambda \\ \quad - 2\alpha \left\|u_t(w_t) - u_{\theta_t}^\lambda\right\|_2 - \frac{\alpha^2 \|u_t(w_t)\|_2^2}{2} \\ \geqslant (1-\gamma)\left(V(\pi^*) - V(\pi_{\theta_t})\right) - \sqrt{\frac{1}{1-\gamma}\left\|\frac{\nu_{\pi^*}}{\nu_{\pi_{\theta_0}}}\right\|_\infty}\, \alpha \sqrt{\Delta_{actor}} - \alpha C_{soft} \lambda \\ \quad - 2\alpha \left\|u_t(w_t) - u_{\theta_t}^\lambda\right\|_2 - \frac{\alpha^2 \left\|u_t(w_t) - u_{\theta_t}^\lambda\right\|_2^2}{2} - \frac{\alpha^2}{\lambda^2} \left\|\nabla_\theta V(\theta_t)\right\|_2^2. \end{array}
$$

Rearranging, dividing by  $(1 - \gamma)\alpha$ , and taking expectations on both sides, we get

$$
\begin{array}{l} V(\pi^*) - \mathbb{E}\left[V(\pi_{\theta_t})\right] \\ \leqslant \frac{\mathbb{E}[\mathrm{D}(\theta_t)] - \mathbb{E}[\mathrm{D}(\theta_{t+1})]}{(1-\gamma)\alpha} + \frac{2\sqrt{\mathbb{E}\left[\|u_t(w_t) - u_{\theta_t}^\lambda\|_2^2\right]}}{1-\gamma} + \frac{\alpha\, \mathbb{E}\left[\|u_t(w_t) - u_{\theta_t}^\lambda\|_2^2\right]}{2(1-\gamma)} \\ \quad + \frac{\alpha}{\lambda^2(1-\gamma)} \mathbb{E}\left[\|\nabla_\theta V(\theta_t)\|_2^2\right] + \sqrt{\frac{1}{(1-\gamma)^3}\left\|\frac{\nu_{\pi^*}}{\nu_{\pi_{\theta_0}}}\right\|_\infty} \sqrt{\Delta_{actor}} + \frac{C_{soft}\lambda}{1-\gamma}. \end{array}
$$

Next we use the same argument as in eq. (33) and Lemma 2 of Xu et al. (2020) to bound the second term:

$$
\mathbb{E}\left[\left\|u_t(w_t) - u_{\theta_t}^\lambda\right\|_2^2\right] \leqslant \frac{C}{B} + \frac{108\, \mathbb{E}\left[\left\|w_t - w_{\theta_t}^*\right\|_2^2\right]}{\lambda^2} + \frac{216\, \Delta_{critic}}{\lambda^2},
$$

where  $C \coloneqq \frac{18}{\lambda^2} \cdot \frac{24(1 + 2R_w)^2[1 + (\kappa - 1)\xi]}{B(1 - \xi)} + \frac{4}{\lambda^4(1 - \gamma)^2} \cdot \frac{8[1 + (\kappa - 1)\xi]}{(1 - \xi)B}$ . Using this bound, together with  $\sqrt{a + b} \leqslant \sqrt{a} + \sqrt{b}$  for positive  $a, b$ , we have

$$
\begin{array}{l} V(\pi^*) - \mathbb{E}\left[V(\pi_{\theta_t})\right] \\ \leqslant \frac{\mathbb{E}[\mathrm{D}(\theta_t)] - \mathbb{E}[\mathrm{D}(\theta_{t+1})]}{(1-\gamma)\alpha} + \frac{2}{1-\gamma}\left(\sqrt{\frac{C}{B}} + 11\sqrt{\frac{\mathbb{E}\left[\left\|w_t - w_{\theta_t}^*\right\|_2^2\right]}{\lambda^2}} + 15\sqrt{\frac{\Delta_{critic}}{\lambda^2}}\right) \\ \quad + \frac{\alpha}{2(1-\gamma)}\left(\frac{C}{B} + 108\, \frac{\mathbb{E}\left[\left\|w_t - w_{\theta_t}^*\right\|_2^2\right]}{\lambda^2} + 216\, \frac{\Delta_{critic}}{\lambda^2}\right) \\ \quad + \frac{\alpha}{\lambda^2(1-\gamma)} \mathbb{E}\left[\|\nabla_\theta V(\theta_t)\|_2^2\right] + \sqrt{\frac{1}{(1-\gamma)^3}\left\|\frac{\nu_{\pi^*}}{\nu_{\pi_{\theta_0}}}\right\|_\infty} \sqrt{\Delta_{actor}} + \frac{C_{soft}\lambda}{1-\gamma}. \end{array}
$$

Summing over all  $t = 0,1,\ldots ,T - 1$  and then dividing by  $T$  we get,

$$
\begin{array}{l} V(\pi^*) - \frac{1}{T}\sum_{t=0}^{T-1} \mathbb{E}\left[V(\pi_{\theta_t})\right] \\ \leqslant \frac{\mathrm{D}(\theta_0) - \mathbb{E}[\mathrm{D}(\theta_T)]}{(1-\gamma)\alpha T} + \frac{2}{1-\gamma}\left(\sqrt{\frac{C}{B}} + 15\sqrt{\frac{\Delta_{critic}}{\lambda^2}}\right) + \frac{22}{(1-\gamma)T} \sum_{t=0}^{T-1} \sqrt{\frac{\mathbb{E}\left[\left\|w_t - w_{\theta_t}^*\right\|_2^2\right]}{\lambda^2}} \\ \quad + \frac{\alpha}{2(1-\gamma)}\left(\frac{C}{B} + 216\, \frac{\Delta_{critic}}{\lambda^2}\right) + \frac{54\alpha}{(1-\gamma)T} \sum_{t=0}^{T-1} \frac{\mathbb{E}\left[\left\|w_t - w_{\theta_t}^*\right\|_2^2\right]}{\lambda^2} \\ \quad + \frac{\alpha}{\lambda^2(1-\gamma)T} \sum_{t=0}^{T-1} \mathbb{E}\left[\|\nabla_\theta V(\theta_t)\|_2^2\right] + \sqrt{\frac{1}{(1-\gamma)^3}\left\|\frac{\nu_{\pi^*}}{\nu_{\pi_{\theta_0}}}\right\|_\infty} \sqrt{\Delta_{actor}} + \frac{C_{soft}\lambda}{1-\gamma}. \end{array}
$$

Substituting the step-size  $\alpha \leqslant \frac{\lambda^2}{2\sqrt{M} L_V (1 + \lambda)}$ , we get

$$
\begin{array}{l} V(\pi^*) - \frac{1}{T}\sum_{t=0}^{T-1} \mathbb{E}\left[V(\pi_{\theta_t})\right] \\ \leqslant C_1 \frac{\sqrt{M}}{T} + \frac{C_2}{\sqrt{B}} + C_3 \sqrt{\Delta_{critic}} + \frac{C_4}{T} \sum_{t=0}^{T-1} \sqrt{\mathbb{E}\left[\left\|w_t - w_{\theta_t}^*\right\|_2^2\right]} \\ \quad + \frac{C_5}{B} + C_6 \sqrt{\Delta_{critic}} + \frac{C_7}{T} \sum_{t=0}^{T-1} \mathbb{E}\left[\left\|w_t - w_{\theta_t}^*\right\|_2^2\right] + \frac{C_8}{T} \sum_{t=0}^{T-1} \mathbb{E}\left[\left\|\nabla_\theta V(\theta_t)\right\|_2^2\right] \\ \quad + \sqrt{\frac{1}{(1-\gamma)^3}\left\|\frac{\nu_{\pi^*}}{\nu_{\pi_{\theta_0}}}\right\|_\infty} \sqrt{\Delta_{actor}} + C_9 \lambda. \end{array}
$$

Letting  $T = \mathcal{O}\left(\frac{\sqrt{M}}{(1 - \gamma)^2\varepsilon}\right)$  and  $B = \mathcal{O}\left(\frac{1}{(1 - \gamma)^2\varepsilon^2}\right)$ , we have  $\mathbb{E}\left[\| \nabla_{\theta}V(\theta_t)\| _2^2\right]\leqslant \varepsilon^2$  and

$$
V(\pi^*) - \frac{1}{T}\sum_{t=0}^{T-1} \mathbb{E}\left[V(\pi_{\theta_t})\right] \leqslant \varepsilon + \mathcal{O}\left(\sqrt{\frac{\Delta_{actor}}{(1-\gamma)^3}}\right) + \mathcal{O}\left(\Delta_{critic}\right) + \mathcal{O}(\lambda).
$$

This leads to the total sample complexity as

$$
(B + H T _ {c}) T = \mathcal {O} \left(\left(\frac {1}{(1 - \gamma) ^ {2} \varepsilon^ {2}} + \frac {\sqrt {M}}{\varepsilon^ {2}} \log \frac {1}{\varepsilon}\right) \frac {\sqrt {M}}{(1 - \gamma) ^ {2} \varepsilon}\right) = \mathcal {O} \left(\frac {M}{(1 - \gamma) ^ {4} \varepsilon^ {3}} \log \frac {1}{\varepsilon}\right).
$$

# K. Simulation Details

In this section we describe the details of the simulations in Sec. 6. Recall that, since neither value functions nor value gradients are available in closed form, we modify SoftMax PG (Algorithm 1) to make it generally implementable using a combination of (1) rollouts to estimate the value function of the current (improper) policy and (2) a stochastic approximation-based approach to estimate its value gradient.

Softmax PG with Gradient Estimation (SPGE, Algorithm 5) and its gradient estimation subroutine GradEst (Algorithm 6) are shown below.

![](images/a7b686472ee8937eca99c7816a642a03e64fdb08a022cdceb0f3331fd32f0188.jpg)  
Figure 7: A chain MDP with 10 states.

# Algorithm 5 Softmax PG with Gradient Estimation (SPGE)

1: Input: learning rate  $\eta > 0$ , perturbation parameter  $\alpha > 0$ , Initial state distribution  $\mu$  
2: Initialize each  $\theta_{m}^{1} = 1$ , for all  $m \in [M]$ ,  $s_{1} \sim \mu$  
3: for  $t = 1$  to  $T$  do  
4: Choose controller  $m_t \sim \pi_t$ .  
5: Play action  $a_{t} \sim K_{m_{t}}(s_{t},:)$ .  
6: Observe  $s_{t+1} \sim \mathsf{P}(.|s_t, a_t)$ .  
7:  $\nabla_{\theta^t} \widehat{V^{\pi_{\theta_t}}}(\mu) = \operatorname{GradEst}(\theta_t, \alpha, \mu)$  
8: Update:  $\theta^{t + 1} = \theta^t + \eta \cdot \nabla_{\theta^t} \widehat{V^{\pi_{\theta_t}}}(\mu)$ .  
9: end for

# Algorithm 6 GradEst (subroutine for SPGE)

1: Input: Policy parameters  $\theta$ , parameter  $\alpha > 0$ , Initial state distribution  $\mu$ .  
2: for  $i = 1$  to #runs do  
3:  $u^i\sim Unif(\mathbb{S}^{M - 1}).$  
4:  $\theta_{\alpha} = \theta + \alpha .u^{i}$  
5:  $\pi_{\alpha} = \mathrm{softmax}(\theta_{\alpha})$  
6: for  $l = 1$  to #rollouts do  
7: Generate trajectory  $(s_0, a_0, r_0, s_1, a_1, r_1, \ldots, s_{\mathsf{lt}}, a_{\mathsf{lt}}, r_{\mathsf{lt}})$  of length  $\mathsf{lt}$  using the policy  $\pi_{\alpha}$ , with  $s_0 \sim \mu$ .  
8:  $\mathrm{reward}(l) = \sum_{j=0}^{\mathsf{lt}} \gamma^j r_j$   
9: end for  
10:  $\mathrm{mr}(i) = \mathrm{mean}(\mathrm{reward})$   
11: end for  
12: GradValue =  $\frac{1}{\#runs} \sum_{i=1}^{\#runs} mr(i).u^{i} \cdot \frac{M}{\alpha}$ .  
13: Return: GradValue.
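For concreteness, the two procedures above can be sketched in code. The sketch below is a minimal, self-contained rendering of SPGE/GradEst, with two simplifications that we flag as assumptions: rollout-based return estimates are replaced by a made-up linear value function `pi @ v` (so the example runs without an environment), and we use the two-point variant of the random-direction estimate for variance reduction, whereas Algorithm 6 uses the one-point form.

```python
import numpy as np

def softmax(theta):
    z = np.exp(theta - theta.max())
    return z / z.sum()

def grad_est(value_fn, theta, alpha, n_runs, rng):
    """Zeroth-order estimate of the gradient of value_fn(softmax(theta)).

    Mirrors GradEst (Algorithm 6): perturb theta along random unit
    directions u ~ Unif(S^{M-1}) and average the directional estimates.
    Two-point (antithetic) variant, used here for variance reduction.
    """
    M = theta.shape[0]
    g = np.zeros(M)
    for _ in range(n_runs):
        u = rng.normal(size=M)
        u /= np.linalg.norm(u)                      # u ~ Unif(S^{M-1})
        vp = value_fn(softmax(theta + alpha * u))   # stands in for rollouts
        vm = value_fn(softmax(theta - alpha * u))
        g += M * (vp - vm) / (2 * alpha) * u
    return g / n_runs

def spge(value_fn, M, eta, alpha, n_iters, n_runs, seed=0):
    """SPGE (Algorithm 5): gradient ascent on theta with estimated gradients."""
    rng = np.random.default_rng(seed)
    theta = np.ones(M)                              # theta_m^1 = 1 for all m
    for _ in range(n_iters):
        theta = theta + eta * grad_est(value_fn, theta, alpha, n_runs, rng)
    return softmax(theta)

# Hypothetical controller values: an improper mixture pi earns pi . v.
v = np.array([1.0, 0.2, 0.1])
pi_final = spge(lambda pi: float(pi @ v), M=3, eta=0.5,
                alpha=0.05, n_iters=200, n_runs=20)
print(pi_final)  # mass concentrates on the best controller (index 0)
```

With a linear stand-in value the optimum sits at a corner of the simplex, so the mixture weights should concentrate on the first controller; in the paper's experiments the rollout returns play the role of `value_fn`.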

Next we report some extra simulations we performed under different environments.

# K.1. State Dependent controllers - Chain MDP

We consider a linear chain MDP as shown in Figure 7. As evident from the figure,  $|S| = 10$  and the learner has only two actions available,  $\mathcal{A} = \{\text{left}, \text{right}\}$ ; hence the name 'chain'. The numbers on the arrows represent the reward obtained with the transition. The initial state is  $s_1$  and  $s_{10}$  is the terminal state. Let us define 2 base controllers,  $K_1$  and  $K_2$ , as follows.

$$
K_{1}(\mathsf{left} \mid s_{j}) = \left\{ \begin{array}{ll} 1, & j \in [9] \setminus \{5\} \\ 0.1, & j = 5 \\ 0, & j = 10. \end{array} \right.
$$

$$
K_{2}(\mathsf{left} \mid s_{j}) = \left\{ \begin{array}{ll} 1, & j \in [9] \setminus \{6\} \\ 0.1, & j = 6 \\ 0, & j = 10. \end{array} \right.
$$

and obviously  $K_{i}(\mathsf{right} \mid s_j) = 1 - K_{i}(\mathsf{left} \mid s_j)$  for  $i = 1,2$ . An improper mixture of the two controllers, i.e.,  $(K_{1} + K_{2}) / 2$ , is optimal in this case. We show that our policy gradient indeed converges to the 'correct' combination; see Figure 8. Here we provide an elementary calculation supporting our claim that the mixture  $K_{\mathrm{mix}} \coloneqq (K_1 + K_2) / 2$  is indeed better than applying  $K_{1}$  or  $K_{2}$  for all time. We first analyze the value function under  $K_{i}, i = 1,2$  (the two are equal by the symmetry of the problem and the probability values described).

$$
V ^ {K _ {i}} (s _ {1}) = \mathbb {E} \left[ \sum_ {t \geqslant 0} \gamma^ {t} r _ {t} (a _ {t}, s _ {t}) \right]
$$

![](images/f1741123f8a12745c199b5f15b56583d7092846f2697782fa97529526dd62d99.jpg)  
Figure 8: Softmax PG algorithm applied to the linear chain MDP with various randomly chosen initial distributions. The plot shows the probability of choosing controller  $K_{1}$ , averaged over #trials.

$$
\begin{array}{l} = 0.1 \times \gamma^{9} + 0.1 \times 0.9 \times 0.1 \times \gamma^{11} + 0.1 \times 0.9 \times 0.1 \times 0.9 \times 0.1 \times \gamma^{13} + \cdots \\ = 0.1 \times \gamma^{9} \left(1 + \left(0.1 \times 0.9\, \gamma^{2}\right) + \left(0.1 \times 0.9\, \gamma^{2}\right)^{2} + \cdots \right) = \frac{0.1 \times \gamma^{9}}{1 - 0.1 \times 0.9 \times \gamma^{2}}. \end{array}
$$

We will next analyze the value if a true mixture controller i.e.,  $K_{\mathrm{mix}}$  is applied to the above MDP. The analysis is a little more intricate than the above. We make use of the following key observations, which are elementary but crucial.

1. Let Paths be the set of all sequence of states starting from  $s_1$ , which terminate at  $s_{10}$  which can be generated under the policy  $K_{\mathrm{mix}}$ . Observe that

$$
V^{K_{\mathrm{mix}}}(s_1) = \sum_{\underline{p} \in \mathrm{Paths}} \gamma^{\operatorname{length}(\underline{p})}\, \mathbb{P}[\underline{p}] \cdot 1. \tag{20}
$$

Recall that reward obtained from the transition  $s_9 \rightarrow s_{10}$  is 1.

2. Number of distinct paths with exactly  $n$  loops:  $2^{n}$  
3. Discounted probability contribution of each such distinct path with  $n$  cycles:

$$
\begin{array}{l} = \underbrace{(0.55 \times 0.45) \times (0.55 \times 0.45) \times \cdots \times (0.55 \times 0.45)}_{n\ \text{times}} \times 0.55 \times 0.55 \times \gamma^{9 + 2n} \\ = (0.55)^{2} \times \gamma^{9} \left(0.55 \times 0.45 \times \gamma^{2}\right)^{n}. \end{array}
$$

4. Finally, we put everything together to get:

$$
\begin{array}{l} V^{K_{\mathrm{mix}}}(s_1) = \sum_{n=0}^{\infty} 2^{n} \times (0.55)^{2} \times \gamma^{9} \times \left(0.55 \times 0.45 \times \gamma^{2}\right)^{n} \\ = \frac{(0.55)^{2} \times \gamma^{9}}{1 - 2 \times 0.55 \times 0.45 \times \gamma^{2}} > V^{K_{i}}(s_1). \end{array}
$$

This shows that a mixture performs better than the constituent controllers. The plot shown in Fig. 8 shows the Softmax PG algorithm (even with estimated gradients and value functions) converges to a  $(0.5,0.5)$  mixture correctly.
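The inequality can be sanity-checked numerically from the two closed forms derived above. The particular values of  $\gamma$  below are for illustration only; both geometric series converge for any  $\gamma \in (0,1)$ , and the ordering holds throughout.

```python
# Closed-form values from the chain-MDP derivation above.
def v_single(gamma):
    # V^{K_i}(s_1) = 0.1 * gamma^9 / (1 - 0.1*0.9*gamma^2)
    return 0.1 * gamma**9 / (1 - 0.1 * 0.9 * gamma**2)

def v_mix(gamma):
    # V^{K_mix}(s_1) = 0.55^2 * gamma^9 / (1 - 2*0.55*0.45*gamma^2)
    return 0.55**2 * gamma**9 / (1 - 2 * 0.55 * 0.45 * gamma**2)

# The mixture dominates the base controllers for every discount factor.
for gamma in (0.5, 0.9, 0.99):
    assert v_mix(gamma) > v_single(gamma)

print(v_single(0.9), v_mix(0.9))
```

At  $\gamma = 0.9$  (the discount used in the simulations) this gives roughly 0.042 for a single base controller versus roughly 0.196 for the mixture, a factor of more than four.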

![](images/fbd718f60902d8b194a9eb3335544659304386bfc52ad07287471b15bf68577f.jpg)  
(a) Arrival rate:  $(\lambda_1, \lambda_2)$  = (0.49, 0.49)

![](images/5ce67b93ea5100967793b84e970b987c4f9f0a862ca1a04d5d59c71d038ad6a3.jpg)  
(b) Arrival rate:  $(\lambda_1, \lambda_2)$  = (0.49, 0.49)

![](images/957be6ba3cee6b73fdf8510554b9b0463e8fe37e2f2ea1e957ab18ea96be1546.jpg)  
(c) Arrival rate:  $(\lambda_1, \lambda_2) = (0.3, 0.4)$

![](images/260ef7d125f562e92fe50ffb71e56e5087ffe37fcbe04d05e4ff606bfa0cb651.jpg)  
(d) (Estimated) Value functions for case with the two base policies and Longest Queue First ("LQF")  
Figure 9: The Softmax policy gradient algorithm converges to the best mixture policy.

![](images/714ed466e2cbd0be1f671d11d478498742416bab7767a4d9c1742be89df40857.jpg)  
(e) Case with 3 experts: Always Queue 1, Always Queue 2 and LQF.

# K.2. Stationary Bernoulli Queues

We study two different settings: (1) one where the optimal policy is a strictly improper combination of the available controllers, and (2) one where it is at a corner point, i.e., one of the available controllers is itself optimal. Our simulations show that in both cases, PG converges to the correct controller distribution.

Recall the example that we discussed in Sec. 2.2. We consider the case with Bernoulli arrivals with rates  $\lambda = [\lambda_1, \lambda_2]$  and are given two base/atomic controllers  $\{K_1, K_2\}$ , where controller  $K_i$  serves Queue  $i$  with probability  $1$ ,  $i = 1, 2$ . As can be seen in Fig. 9(b) when  $\lambda = [0.49, 0.49]$  (equal arrival rates), GradEst converges to an improper mixture policy that serves each queue with probability  $[0.5, 0.5]$ . Note that this strategy will also stabilize the system whereas both the base controllers lead to instability (the queue length of the unserved queue would obviously increase without bound). Figure 9(c), shows that with unequal arrival rates too, GradEst quickly converges to the best policy.
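The instability of the base controllers and the stability of the mixture can be made concrete with a simple mean-drift calculation (a heuristic; a rigorous stability proof would use a Foster-Lyapunov argument). With Bernoulli( $\lambda_i$ ) arrivals and a controller that serves queue  $i$  with probability  $p_i$ , the mean per-slot drift of queue  $i$  is  $\lambda_i - p_i$  while the queue is nonempty:

```python
# Mean per-slot drift (arrivals minus service) of each queue length,
# valid while the queue is nonempty. Negative drift for both queues is
# the usual heuristic for stability.
def drifts(lam, serve_prob):
    return [l - p for l, p in zip(lam, serve_prob)]

lam = (0.49, 0.49)

d_k1 = drifts(lam, (1.0, 0.0))   # base controller K_1: always serve queue 1
d_mix = drifts(lam, (0.5, 0.5))  # improper mixture (K_1 + K_2)/2

print(d_k1)   # queue 2 has positive drift: it grows without bound under K_1
print(d_mix)  # both queues have (slightly) negative drift under the mixture
```

Under either base controller one queue has drift  $+0.49$  per slot, while the mixture gives each queue drift  $0.49 - 0.5 = -0.01$ , matching the stabilization seen in Fig. 9(b).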

Fig. 9(d) shows the evolution of the value function of GradEst (in blue) compared with those of the base controllers (red) and the Longest Queue First policy (LQF) which, as the name suggests, always serves the longest queue in the system (black). LQF, like any policy that always serves a nonempty queue in the system whenever there is one<sup>3</sup>, is known to be optimal in the sense of delay minimization for this system (Mohan et al., 2016). See Sec. K in the Appendix for more details about this experiment.

Finally, Fig. 9(e) shows the result of the second experimental setting with three base controllers, one of which is delay optimal. The first two are  $K_{1}, K_{2}$  as before and the third controller,  $K_{3}$ , is LQF. Notice that  $K_{1}, K_{2}$  are both queue length-agnostic, meaning they could attempt to serve empty queues as well. LQF, on the other hand, always and only serves nonempty queues. Hence, in this case the optimal policy is attained at one of the corner points, i.e.,  $[0,0,1]$ . The plot shows the PG algorithm converging to the correct point on the simplex.

Here, we justify the values of the two policies that each always serve one fixed queue, plotted as straight lines in Figure 9(d). Let us find the value of the policy which always serves queue 1; the calculation for the other expert (serving only queue 2) is similar. Let  $q_{i}(t)$  denote the length of queue  $i$  at time  $t$ . We note that since the expert (policy) always recommends

![](images/df1cf4d68e98d1dd2f5524e6a897c0dd33f37171eddc1ff529db31121cb36637.jpg)  
(a) A basic path-graph interference system with  $N = 4$  communication links.

![](images/efa182756bb31152d823b63ba7c199b6d5822e3e7c850eb597478f58ae238bb2.jpg)  
(b) The associated conflict (interference) graph is a path-graph.  
Figure 10: An example of a path graph network. The interference constraints are such that physically adjacent queues cannot be served simultaneously.

to serve one of the queues, the expected cost suffered in any round  $t$  is  $c_{t} = q_{1}(t) + q_{2}(t) = 0 + t\lambda_{2}$ . Let us start with empty queues at  $t = 0$ .

$$
\begin{array}{l} V^{\mathrm{Expert1}}(\underline{\mathbf{0}}) = \mathbb{E}\left[\sum_{t=0}^{T} \gamma^t c_t \,\Big|\, \mathrm{Expert1}\right] \\ = \sum_{t=0}^{T} \gamma^t\, t\, \lambda_2 \\ \leqslant \lambda_2 \cdot \frac{\gamma}{(1-\gamma)^2}. \end{array}
$$

With the values  $\gamma = 0.9$  and  $\lambda_{2} = 0.49$ , we get  $V^{\mathrm{Expert1}}(\underline{0}) \leqslant 44.1$ , which is in good agreement with the bound shown in the figure.
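The geometric-series bound used above,  $\sum_{t \geqslant 0} \gamma^t t = \gamma/(1-\gamma)^2$ , can be checked numerically with the same  $\gamma$  and  $\lambda_2$  as in the experiment:

```python
# Numerically verify V^{Expert1}(0) = sum_t gamma^t * t * lambda2
# <= lambda2 * gamma / (1 - gamma)^2, the straight line in Fig. 9(d).
gamma, lam2 = 0.9, 0.49

# Truncated series; the terms decay geometrically, so 2000 terms suffice.
series = sum(gamma**t * t * lam2 for t in range(2000))
bound = lam2 * gamma / (1 - gamma) ** 2

print(series, bound)  # both are ~44.1; the bound is tight as T -> infinity
```

The bound evaluates to  $0.49 \times 0.9 / 0.01 = 44.1$ , and the truncated series matches it to numerical precision since the bound is in fact an equality in the infinite-horizon limit.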

# K.3. Details of Path (Interference) Graph Networks

Consider a system of parallel transmitter-receiver pairs as shown in Figure 10(a). Due to the physical arrangement of the Tx-Rx pairs, no two adjacent systems can be served simultaneously because of interference. This type of communication system is commonly referred to as a path graph network (Mohan et al., 2020). Figure 10(b) shows the corresponding conflict graph. Each Tx-Rx pair can be thought of as a queue, and an edge between two queues indicates that they cannot be served simultaneously. The sets of queues that can be served simultaneously are called independent sets in the queueing theory literature. In the figure above, the independent sets are  $\{\emptyset, \{1\}, \{2\}, \{3\}, \{4\}, \{1, 3\}, \{2, 4\}, \{1, 4\}\}$ .
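This list of independent sets is small enough to verify by brute force; the sketch below encodes the conflict graph of Figure 10(b) and enumerates all subsets of queues containing no conflicting pair:

```python
from itertools import combinations

# Conflict (interference) graph of Figure 10(b): the path 1 - 2 - 3 - 4.
edges = {(1, 2), (2, 3), (3, 4)}
nodes = [1, 2, 3, 4]

def independent_sets(nodes, edges):
    """All subsets of nodes that contain no edge of the conflict graph."""
    out = []
    for r in range(len(nodes) + 1):
        for subset in combinations(nodes, r):
            # combinations yields sorted tuples, so checking every ordered
            # pair (a, b) covers each edge stored as (small, large).
            if not any((a, b) in edges for a in subset for b in subset):
                out.append(set(subset))
    return out

ind = independent_sets(nodes, edges)
print(ind)  # 8 sets: {}, {1}, {2}, {3}, {4}, {1,3}, {1,4}, {2,4}
```

These are exactly the eight sets listed above; each one is a feasible simultaneous service schedule for the path graph network.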

Finally, in Table 2, we report the mean delay values of the 5 base controllers used in our simulation in Fig. 2(c), Sec. 6. We see that controller  $K_{2}$ , which was chosen to be MER, indeed has the lowest associated cost, and, as shown in Fig. 2(c), our Softmax PG algorithm (with estimated value functions and gradients) converges to it.

Table 2: Mean Packet Delay Values of Path Graph Network Simulation.  

<table><tr><td>Controller</td><td>Mean delay (# time slots) over 200 trials</td><td>Standard deviation</td></tr><tr><td>K1(MW)</td><td>22.11</td><td>0.63</td></tr><tr><td>K2(MER)</td><td>20.96</td><td>0.65</td></tr><tr><td>K3({1,3})</td><td>80.10</td><td>0.92</td></tr><tr><td>K4({2,4})</td><td>80.22</td><td>0.90</td></tr><tr><td>K5({1,4})</td><td>80.13</td><td>0.91</td></tr></table>

# K.4. Cartpole Experiments

We investigate further the example in our simulation in which the two constituent controllers are  $K_{opt} + \Delta$  and  $K_{opt} - \Delta$ . We use OpenAI Gym to simulate this situation. In Figure 2(b), it was shown that our Softmax PG algorithm (with estimated values and gradients) converged to an improper mixture of the two controllers, i.e.,  $\approx (0.53, 0.47)$ . Let  $K_{conv}$  be the (randomized) controller which chooses  $K_{1}$  with probability 0.53 and  $K_{2}$  with probability 0.47. Recall from Sec. 2.1 that this control law converts the linearized cartpole into an Ergodic Parameter Linear System (EPLS). In Table 3 we report the average number of rounds the pendulum stays upright when each controller is applied for all time, over trajectories of length 500 rounds. The third column displays an interesting feature of our algorithm: over 100 trials, the base controllers fail to stabilize the pendulum in a relatively large number of trials, whereas  $K_{conv}$  succeeds most of the time.

Table 3: A table showing the number of rounds the constituent controllers manage to keep the cartpole upright.  

<table><tr><td>Controller</td><td>Mean number of rounds before the pendulum falls (capped at 500)</td><td>#Trials out of 100 in which the pendulum falls before 500 rounds</td></tr><tr><td>K1(Kopt + Δ)</td><td>403</td><td>38</td></tr><tr><td>K2(Kopt - Δ)</td><td>355</td><td>46</td></tr><tr><td>Kconv</td><td>465</td><td>8</td></tr></table>

We mention here that if one follows  $K^{*}$ , the optimum controller matrix obtained by solving the standard Discrete-time Algebraic Riccati Equation (DARE) (Bertsekas, 2011), the pole does not fall over 100 trials. However, as indicated in Sec. 1, constructing the optimum controller for this system from scratch requires sample complexity exponential in the state dimension (Chen & Hazan, 2020). On the other hand,  $K_{\mathrm{conv}}$  performs very close to the optimum while being sample efficient.

Choice of hyperparameters. In the simulations, we set the learning rate to  $10^{-4}$ ,  $\# \text{runs} = 10$ ,  $\# \text{rollouts} = 10$ ,  $\mathsf{lt} = 30$ , discount factor  $\gamma = 0.9$  and  $\alpha = 1 / \sqrt{\# \text{runs}}$ . All the simulations were run for 20 trials and the results shown are averaged over them. We capped the queue sizes at 1000.

# K.5. Additional simulations for the natural actor-critic based improper learner NACIL

- First, we show a queueing example with 2 queues to be served and two base controllers, similar to those discussed in Sec. 2. However, here the two queues have unequal arrival rates,  $(\lambda_1, \lambda_2) \equiv (0.4, 0.3)$ . We plot in Fig. 11 the probability of choosing each of the two controllers. We see that NACIL converges to the "correct" mixture of the base controllers.  
- Next, we show a simulation on the setting of Sec. K.1, the Chain MDP. We recall that this setting consists of two base controllers  $K_{1}$  and  $K_{2}$ ; however, a  $(1/2, 1/2)$  mixture of the two controllers was shown (analytically) to perform better than either individual one. As the plot in Fig. 12 shows, NACIL identifies the correct combination and follows it.

Choice of hyperparameters. For the queueing-theoretic simulations of Algorithm 2 ACIL, we choose  $\alpha = 10^{-4}$ ,  $\beta = 10^{-3}$ . We choose the identity mapping  $\varphi(s) \equiv s$ , where  $s$  is the current state of the system, an  $N$ -length vector whose  $i^{th}$  entry is the length of the  $i^{th}$  queue.  $\lambda$  was chosen to be 0.1. The other parameters are chosen as  $B = 50$ ,  $H = 30$  and  $T_{c} = 20$ . We use a buffer of size 1000 to keep the states bounded, i.e., if a queue exceeds size 1000, further arrivals are ignored and the queue length is not increased. This keeps  $\| \varphi(s) \|_2$  bounded across time.

# L. Additional Comments

- Comment on the 'simple' experimental settings. The motivating examples may seem "simple" and trainable from scratch given the progress in the field of RL. However, our main point is that there are situations where, for example, one may have trained controllers for a range of environments in simulation, while the real-life environment differs from the simulated ones. We demonstrate that exploiting such pre-learnt controllers via our approach can help generate a better (meta) controller for a new, unseen environment, instead of learning a controller for the new environment from scratch.

![](images/1a9c825a86d83251306bfa46603661d20effb0335909de0b831b0f0240afab28.jpg)  
Figure 11: NACIL algorithm applied to a queueing system with two queues, having arrival rates  $(\lambda_1,\lambda_2)\equiv (0.4,0.3)$ . The plot shows the probability of choosing controllers  $K_{1}$  and  $K_{2}$ , averaged over 20 trials.

![](images/85e76786f8dc8a9c8e0799b7a0c706969c242e98d378b9aa5c81380b33e0c400.jpg)  
Figure 12: NACIL algorithm applied to the linear chain MDP with various randomly chosen initial distributions. The plot shows the probability of choosing controller  $K_{1}$ , averaged over 20 trials.

- On characterizing the performance of the optimal mixture policy. As correctly noticed by the reviewer, the inverted pendulum experiment showed that the optimal mixture policy can vastly outperform the component controllers. Currently, however, we do not provide any theoretical guarantees regarding this, since this depends on the structure of the policy space and the underlying MDP, which is very challenging. We hope to explore this task in our future work.

# M. Discussion

We have considered the problem of using a menu of baseline controllers and combining them using improper probabilistic mixtures to form a superior controller. In many relevant MDP learning settings, we saw that this is indeed possible, and the policy gradient and actor-critic based analyses indicate that this approach may be widely applicable. This work opens up a plethora of avenues. One can consider a richer class of mixtures that can look at the current state and mix accordingly. For example, an attention model can be used to choose which controller to use, or other state-dependent models can be relevant. Another example is to artificially force switching across controllers to occur less frequently than every round. This can help create momentum and allow the controlled process to 'mix' better when using complex controllers.

A few caveats are in order regarding the potential societal impact and consequences of this work. As such, this paper offers a way of combining or 'blending' a given class of decision-making entities in the hope of producing a 'better' one. In this process, the definitions of what constitutes 'optimal' or 'expected' behavior from a policy are likely to be subjective, and may encode biases and attitudes of the system designer(s). More importantly, it is possible that the base policy class (or some elements of it) have undesirable properties to begin with (e.g., bias or insensitivity), which could get amplified in the improper learning process as an unintended outcome. We sound ample caution to practitioners who contemplate adopting this method.

Finally, in the present setting, the base controllers are fixed. It would be interesting to consider adding adaptive, or 'learning' controllers as well as the fixed ones. Including the base controllers can provide baseline performance below which the performance of the learning controllers would not drop.