# Active Multi-Task Representation Learning

Yifang Chen<sup>1</sup> Simon S. Du<sup>1</sup> Kevin Jamieson<sup>1</sup>

# Abstract

To leverage the power of big data from source tasks and overcome the scarcity of the target task samples, representation learning based on multi-task pretraining has become a standard approach in many applications. However, up until now, choosing which source tasks to include in the multi-task learning has been more art than science. In this paper, we give the first formal study on source task sampling by leveraging techniques from active learning. We propose an algorithm that iteratively estimates the relevance of each source task to the target task and samples from each source task based on the estimated relevance. Theoretically, we show that for the linear representation class, to achieve the same error rate, our algorithm can reduce the source task sample complexity by a factor of up to the number of source tasks, compared with naive uniform sampling from all source tasks. We also provide experiments on real-world computer vision datasets to illustrate the effectiveness of our proposed method on both linear and convolutional neural network representation classes. We believe our paper serves as an important initial step in bringing techniques from active learning to representation learning.

# 1. Introduction

Much of the success of deep learning is due to its ability to efficiently learn a map from high-dimensional, highly-structured input like natural images into a dense, relatively low-dimensional representation that captures the semantic information of the input. Multi-task learning leverages the observation that similar tasks may share a common representation to train a single representation to overcome a scarcity

*Equal contribution 1Paul G. Allen School of Computer Science & Engineering, University of Washington. Correspondence to: Yifang Chen <yifangc@cs.washington.edu>, Simon S. Du <ssdu@cs.washington.edu>, Kevin Jamieson <jamieson@cs.washington.edu>.

Proceedings of the  $39^{th}$  International Conference on Machine Learning, Baltimore, Maryland, USA, PMLR 162, 2022. Copyright 2022 by the author(s).

of data for any one task. In particular, given only a small amount of data for a target task, but copious amounts of data from source tasks, the source tasks can be used to learn a high-quality low-dimensional representation, and the target task just needs to learn the map from this low-dimensional representation to its target-specific output. This paradigm has been used with great success in natural language processing domains GPT-2 (Radford et al.), GPT-3 (Brown et al., 2020), Bert (Devlin et al., 2018), as well as vision domains CLIP (Radford et al., 2021).

This paper makes the observation that not all tasks are equally helpful for learning a representation, and a priori, it can be unclear which tasks will be best suited to maximize performance on the target task. For example, modern datasets like CIFAR-10, ImageNet, and the CLIP dataset were created using a list of search terms and a variety of different sources like search engines, news websites, and Wikipedia. (Krizhevsky, 2009; Deng et al., 2009; Radford et al., 2021) Even if more data always leads to better performance, practicalities demand some finite limit on the size of the dataset that will be used for training. Up until now, choosing which source tasks to include in multi-task learning has been an ad hoc process and more art than science. In this paper, we aim to formalize the process of prioritizing source tasks for representation learning by formulating it as an active learning problem.

Specifically, we aim to achieve a target accuracy on a target task by requesting as little total data from source tasks as possible. For example, if a target task was to generate captions for images in a particular domain where few examples existed, each source task could be represented as a search term into Wikipedia from which (image, caption) pairs are returned. By sampling moderate numbers of (image, caption) pairs resulting from each search term (task), we can determine which tasks result in the best performance on the target task and increase the rate at which examples from those terms are sampled. By quickly identifying which source tasks are useful for the target task and sampling only from those, we can reduce the overall number of examples to train over, potentially saving time and money. Moreover, prioritizing relevant tasks in training, in contrast to uniformly weighting them, even has the potential to improve performance, as demonstrated in (Chen et al., 2021).

From the theoretical perspective, Tripuraneni et al. (2020; 2021); Du et al. (2020) study few-shot learning via multi-task representation learning and give generalization guarantees showing that such representation learning can greatly reduce the target sample complexity. However, all of those works consider only uniform sampling from each source task, and their proofs rest on benign diversity assumptions on the source tasks as well as some common assumptions shared between the target and source tasks.

In this paper, we initiate the systematic study on using active learning to sample from source tasks. We aim to achieve the following two goals:

1. If there is a fixed budget on the source task data to use during training, we would like to select sources that maximize the accuracy of the target task relative to naive uniform sampling from all source tasks. Equivalently, to achieve a given error rate, we want to reduce the amount of required source data. In this way, we can reduce the computation, because training complexity generally scales with the amount of data used, especially when the user has limited computing resources (e.g., a finite number of GPUs).  
2. Given a target task, we want to output a relevance score for each source task, which can be useful in at least two ways. First, the scores indicate which source tasks are helpful for the target task and inform future task or feature selection (sometimes the task itself can be regarded as a latent feature). Second, the scores help the user decide which tasks to sample more from, in order to further improve the target task accuracy.

# 1.1. Our contributions

In our paper, given a single target task and  $M$  source tasks, we propose a novel quantity  $\nu^{*}\in \mathbb{R}^{M}$  that characterizes the relevance of each source task to the target task (cf. Defn 3.1). We design an active learning algorithm which can take any representation function class as input. The algorithm iteratively estimates  $\nu^{*}$  and samples data from each source task based on the estimated  $\nu^{*}$ . The specific contributions are summarized below:

- In Section 3, we give the definition of  $\nu^{*}$ . As a warm up, we prove that when the representation function class is linear and  $\nu^{*}$  is known, if we sample data from source tasks according to the given  $\nu^{*}$ , the sample complexity of the source tasks scales with the sparsity of  $\nu^{*} \in \mathbb{R}^{M}$  (the  $m$ -th task is relevant if  $\nu_{m}^{*} \neq 0$ ). This can save up to a factor of  $M$ , the number of source tasks, compared with the naive uniform sampling from all source tasks.  
- In Section 4, we drop the assumption of knowing  $\nu^{*}$

and describe our active learning algorithm that iteratively samples examples from tasks to estimate  $\nu^{*}$  from data. We prove that when the representation function class is linear, our algorithm never performs worse than uniform sampling, and achieves a sample complexity nearly as good as when  $\nu^{*}$  is known. The key technical innovation here is to have a trade-off on less related source tasks between saving sample complexity and collecting sufficient informative data for estimating  $\nu^{*}$ .

- In Section 5, we empirically demonstrate the effectiveness of our active learning algorithm by testing it on the corrupted MNIST dataset with both linear and convolutional neural network (CNN) representation function classes. The experiments show our algorithm gains substantial improvements compared to the non-adaptive algorithm on both models. Furthermore, we also observe that our algorithm generally outputs higher relevance scores for source tasks that are semantically similar to the target task.

# 1.2. Related work

There are many existing works on provable non-adaptive representation learning under various assumptions. Tripuraneni et al. (2020; 2021); Du et al. (2020); Thekumparampil et al. (2021); Collins et al. (2021); Xu & Tewari (2021) assume there exists an underlying representation shared across all tasks. (Notice that some works focus on learning a representation function for any possible target task, instead of learning a model for a specific target task as is the case in our work.) In particular, Tripuraneni et al. (2020); Thekumparampil et al. (2021) assume a low-dimension linear representation. Furthermore, they assume the covariance matrix of all input features is the identity and the linear representation model is orthonormal. Du et al. (2020); Collins et al. (2021) study a similar setting but lift the identity covariance and orthonormal assumptions. Both lines of work obtain similar conclusions. We will discuss our results in the context of these two settings in Section 2.

Going beyond the linear representation, Du et al. (2020) generalize their bound to a 2-layer ReLU network, and Tripuraneni et al. (2021) further consider general representation classes with linear predictors. More recent work has studied fine-tuning in both theoretical and empirical contexts (Shachaf et al., 2021; Chua et al., 2021; Chen et al., 2021). We leave extending our theoretical analysis to more general representation function classes as future work. Beyond the generalization perspective, Tripuraneni et al. (2021); Thekumparampil et al. (2021); Collins et al. (2021) propose computationally efficient algorithms for solving the non-convex empirical risk minimization problem that arises in representation learning, including the method-of-moments (MoM) algorithm and alternating minimization. Incorporating these efficient algorithms into our framework would also be a possible direction in the future.

Chen et al. (2021) also consider learning a weighting over tasks. However, their motivation is quite different: they work under the hypothesis that some tasks are not only irrelevant, but even harmful to include in the training of a representation. Thus, during training they aim to down-weight potentially harmful source tasks and up-weight those source tasks most relevant to the target task. The critical difference between their work and ours is that they assume a pass over the complete datasets from all tasks is feasible, whereas we assume it is not (e.g., where each task is represented by a search term to Wikipedia or Google). In our paper, their setting would amount to being able to solve for  $\nu^{*}$  for free, the equivalent of the "known  $\nu^{*}$ " setting of our warm-up section. In contrast, our main contribution is an active learning algorithm that ideally only looks at a vanishing fraction of the data from all the sources to train a representation.

Some empirical multi-task representation learning and transfer learning works have motivations similar to ours. For example, Yao et al. (2021) use a heuristic retriever method to select a subset of target-related NLP source tasks and show that training on a small subset of source tasks can achieve similar performance to large-scale training. Zamir et al. (2018) propose a transfer learning approach based on learning the underlying structure among visual tasks, which they call Taskonomy, and obtain substantial experimental improvements.

Many classification, regression, and even optimization tasks may fall under the umbrella term active learning (Settles, 2009). We use it in this paper to emphasize that a priori, it is unknown which source tasks are relevant to the target task. We overcome this challenge by iterating the closed-loop learning paradigm of 1) collect a small amount of data, 2) make inferences about task relevancy, and 3) leverage these inferences to return to 1) with a more informed strategy for data collection.

# 2. Preliminaries

In this section, we formally describe our problem setup which will be helpful for our theoretical development.

Problem setup. Suppose we have  $M$  source tasks and one target task, which we denote as task  $M + 1$ . Each task  $m \in [M + 1]$  is associated with a joint distribution  $\mu_{m}$  over  $\mathcal{X} \times \mathcal{Y}$ , where  $\mathcal{X} \subseteq \mathbb{R}^{d}$  is the input space and  $\mathcal{Y} \subseteq \mathbb{R}$  is the output space. We assume there exists an underlying representation function  $\phi^{*}: \mathcal{X} \to \mathcal{Z}$  that maps the input to some feature space  $\mathcal{Z} \subseteq \mathbb{R}^{K}$  where  $K \ll d$ . We restrict the representation function to be in some function

class  $\Phi$ , e.g., linear functions, convolutional nets, etc. We also assume each task's predictor is a linear map from feature space to output space, represented by  $w_{m}^{*}\in \mathbb{R}^{K}$ . Specifically, we assume that for each task  $m\in [M + 1]$ , an i.i.d. sample  $(x,y)\sim \mu_{m}$  satisfies  $y = \phi^{*}(x)^{\top}w_{m}^{*} + z$ , where  $z\sim \mathcal{N}(0,\sigma^2)$ . Lastly, we also impose a regularity condition: for all  $m$ , the distribution of  $x$  when  $(x,y)\sim \mu_{m}$  is 1-sub-Gaussian.

During the learning process, we assume that we have only a small, fixed amount of data  $\{x_{M + 1}^i,y_{M + 1}^i\}_{i\in [n_{M + 1} ]}$  drawn i.i.d. from the target task distribution  $\mu_{M + 1}$ . On the other hand, at any point during learning we assume we can obtain an i.i.d. sample from any source task  $m\in [M]$  without limit. This setting aligns with our main motivation for active representation learning where we usually have a limited sample budget for the target task but nearly unlimited access to large-scale source tasks (such as (image,caption) example pairs returned by a search engine from a task keyword).

Our goal is to use as few total samples from the source tasks as possible to learn a representation and linear predictor  $\phi, w_{M+1}$  that minimizes the excess risk on the target task defined as

$$
\operatorname {E R} _ {M + 1} (\phi , w) = L _ {M + 1} (\phi , w) - L _ {M + 1} \left(\phi^ {*}, w _ {M + 1} ^ {*}\right)
$$

where  $L_{M + 1}(\phi ,w) = \mathbb{E}_{(x,y)\sim \mu_{M + 1}}\left[(\langle \phi (x),w\rangle -y)^2\right].$

Our theoretical study focuses on the linear representation function class, which is studied in (Du et al., 2020; Tripuraneni et al., 2020; 2021; Thekumparampil et al., 2021).

Definition 2.1 (low-dimension linear representation).  $\Phi = \{x\to B^{\top}x\mid B\in \mathbb{R}^{d\times K}\}$ . We denote the true underlying representation function as  $B^{*}$ . Without loss of generality, we assume  $\mathbb{E}_{\mu_m}[xx^\top ]$  are equal for all  $m\in [M + 1]$ .

We also make the following assumption which has been used in (Tripuraneni et al., 2020). We note that Theorem 3.2 does not require this assumption, but Theorem E.4 does.

Assumption 2.2 (Benign low-dimension linear representation). We assume  $\mathbb{E}_{\mu_m}[xx^\top] = I$  and  $\Omega(1) \leq \|w_m^*\|_2 \leq R$  for all  $m \in [M+1]$ . We also assume  $B^*$  is not only linear, but also orthonormal.

Notations We denote the  $n_m$  i.i.d. samples collected from source task  $m$  as the input matrix  $X_{m}\in \mathbb{R}^{n_{m}\times d}$ , output vector  $Y_{m}\in \mathbb{R}^{n_{m}}$ , and noise vector  $Z_{m}\in \mathbb{R}^{n_{m}}$ . We then denote the expected and empirical input covariances as  $\Sigma_{m} = \mathbb{E}_{(x,y)\sim \mu_{m}}xx^{\top}$  and  $\hat{\Sigma}_m = \frac{1}{n_m} X_m^\top X_m$ . In addition, we denote the collection of  $\{w_{m}\}_{m\in [M]}$  as  $W\in \mathbb{R}^{K\times M}$ . Note that the learning process will be divided into several epochs in our algorithm stated later, so we sometimes add a subscript or superscript  $i$  to these empirical quantities to refer to the data used in epoch  $i$ . Finally, we use  $\widetilde{\mathcal{O}}$  to hide logarithmic factors in  $K,M,d,1 / \varepsilon$ , and  $\sum_{m = 1}^{M}n_{m}$ .

Other data assumptions Based on our large-scale source tasks motivation, we assume  $M \geq K$  and  $\sigma_{\mathrm{min}}(W^{*}) > 0$ , which means the source tasks are diverse enough to cover all relevant representation features of the low-dimension space. This is the standard diversity assumption used in many recent works (Du et al., 2020; Tripuraneni et al., 2020; 2021; Thekumparampil et al., 2021). In addition, we assume  $\sigma \geq \Omega(1)$  to make our main result easier to read. This assumption can be lifted by adding some corner-case analysis.

# 3. Task Relevance  $\nu^{*}$  and More Efficient Sampling with Known  $\nu^{*}$

In this section, we give our key definition of task relevance, based on which, we design a more efficient source task sampling strategy.

Note because  $\sigma_{\mathrm{min}}(W^{*}) > 0$  we can regard  $w_{M + 1}^{*}$  as a linear combination of  $\{w_m^*\}_{m\in [M]}$ .

Definition 3.1.  $\nu^{*}\in \mathbb{R}^{M}$  is defined as

$$
\nu^ {*} = \underset {\nu} {\arg \min } \| \nu \| _ {2} \quad \text {s . t .} \quad W ^ {*} \nu = w _ {M + 1} ^ {*} \tag {1}
$$

where larger  $|\nu^{*}(m)|$  means higher relevance between source task  $m$  and the target task. If  $\nu^{*}$  is known to the learner, intuitively, it makes sense to draw more samples from source tasks that are most relevant.
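Concretely, the minimum-norm program in Eqn. (1) is solved by standard linear algebra routines. Below is a minimal NumPy sketch (the function name and toy values are ours, purely illustrative):

```python
import numpy as np

def task_relevance(W, w_target):
    """Minimum-l2-norm nu solving W @ nu = w_target, as in Defn 3.1.

    For a consistent underdetermined system (M >= K, full row rank),
    np.linalg.lstsq returns exactly this minimum-norm solution.
    """
    nu, *_ = np.linalg.lstsq(W, w_target, rcond=None)
    return nu

# Toy example: K = 2 features, M = 3 source tasks.
# Source task 3 points in the same direction as the target,
# so it should receive the largest relevance score.
W = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])      # columns are w_1, w_2, w_3
w_target = np.array([1.0, 1.0])
nu = task_relevance(W, w_target)     # approx [1/3, 1/3, 2/3]
```

As expected, the largest weight lands on the source whose predictor aligns with the target's, matching the intuition that  $|\nu^{*}(m)|$  measures relevance.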

For each source task  $m \in [M]$ , Line 3 in Alg. 1 draws  $n_m \propto (\nu^*(m))^2$  samples. The algorithm then estimates the shared representation  $\phi : \mathbb{R}^d \to \mathbb{R}^K$  and task-specific linear predictors  $W = [w_1, \dots, w_M]$  by empirical risk minimization across all source tasks, following the standard multi-task representation learning approach.

Below, we give our theoretical guarantee on the sample complexity from the source tasks when  $\nu^{*}$  is known.

Theorem 3.2. Under the low-dimension linear representation setting as defined in Definition 2.1, with probability at least  $1 - \delta$ , our algorithm's output satisfies  $\mathrm{ER}(\hat{B}, \hat{w}_{M + 1}) \leq \varepsilon^2$  whenever the total sampling budget from all sources  $N_{total}$  is at least

$$
\widetilde {\mathcal {O}} \left((K d + K M + \log (1 / \delta)) \sigma^ {2} s ^ {*} \| \nu^ {*} \| _ {2} ^ {2} \varepsilon^ {- 2}\right)
$$

and the number of target samples  $n_{M + 1}$  is at least

$$
\widetilde {\mathcal {O}} \left(\sigma^ {2} \left(K + \log (1 / \delta)\right) \varepsilon^ {- 2}\right)
$$

where  $s^* = \min_{\gamma \in [0,1]} (1 - \gamma) \| \nu^* \|_{0,\gamma} + \gamma M$  and  $\| \nu^* \|_{0,\gamma} := \left| \left\{ m : |\nu^*_m| > \sqrt{\gamma \frac{\|\nu^*\|_2^2}{N_{total}}} \right\} \right|$ .

Note that the number of target samples  $n_{M + 1}$  scales only with the dimension of the feature space  $K$ , and not the input

# Algorithm 1 Multi-task sampling strategy with Known  $\nu^{*}$

1: Input: confidence  $\delta$ , representation function class  $\Phi$ , combinatorial coefficient  $\nu^{*}$ , source-task sampling budget  $N_{\mathrm{total}} \gg M(Kd + \log(M / \delta))$  
2: Initialize the lower bound  $\underline{N} = Kd + \log (M / \delta)$  and number of samples  $n_m = \max \left\{(N_{\mathrm{total}} - M\underline{N})\frac{(\nu^*(m))^2}{\|\nu^*\|_2^2},\underline{N}\right\}$  for all  $m\in [M]$ .  
3: For each task  $m$ , draw  $n_m$  i.i.d. samples from the corresponding offline dataset, denoted  $\{X_{m},Y_{m}\}_{m = 1}^{M}$  
4: Estimate the models as

$$
\hat {\phi}, \hat {W} = \underset {\phi \in \Phi , W = [ w _ {1}, \dots , w _ {M} ]} {\arg \min } \sum_ {m = 1} ^ {M} \| \phi \left(X _ {m}\right) w _ {m} - Y _ {m} \| ^ {2}. \tag {2}
$$

$$
\hat {w} _ {M + 1} = \underset {w} {\arg \min } \left\| \hat {\phi} \left(X _ {M + 1}\right) w - Y _ {M + 1} \right\| ^ {2} \tag {3}
$$

5: Return  $\hat{\phi}$ ,  $\hat{w}_{M+1}$
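The budget allocation in Line 2 of Alg. 1 is straightforward to implement; here is a small sketch under our own naming, where `N_lower` plays the role of  $\underline{N}$ :

```python
import numpy as np

def allocate_samples(nu, N_total, N_lower):
    """Alg. 1, Line 2: spread N_total - M * N_lower across tasks
    proportionally to nu(m)^2, but never let any task's budget
    fall below the floor N_lower."""
    nu = np.asarray(nu, dtype=float)
    M = len(nu)
    prop = (N_total - M * N_lower) * nu**2 / np.sum(nu**2)
    return np.maximum(prop, N_lower)

n = allocate_samples(nu=[0.0, 0.1, 1.0], N_total=10_000, N_lower=100)
# the irrelevant tasks keep only the floor; the relevant one gets the bulk
```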

dimension  $d \gg K$  which would be necessary without multi-task learning. This dependence is known to be optimal (Du et al., 2020). The quantity  $s^*$  characterizes our algorithm's ability to adapt to the approximate sparsity of  $\nu^{*}$ . Noting that  $\sqrt{\frac{\|\nu^{*}\|_2^2}{N_{total}}}$  is roughly on the order of  $\varepsilon$ , taking  $\gamma \approx 1 / M$  suggests that to satisfy  $\mathrm{ER}(\hat{B},\hat{w}_{M + 1}) \leq \varepsilon^2$ , only those source tasks with relevance  $|\nu^{*}(m)| \gtrsim \varepsilon$  are important for learning.
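The quantity  $s^{*}$  is easy to evaluate numerically for a given  $\nu^{*}$ ; the sketch below does a simple grid search over  $\gamma$  (function and variable names are ours):

```python
import numpy as np

def s_star(nu, N_total, grid_size=1001):
    """Approximate s* = min_{gamma in [0,1]} (1-gamma)*||nu||_{0,gamma} + gamma*M,
    where ||nu||_{0,gamma} counts entries of nu whose magnitude exceeds the
    threshold sqrt(gamma * ||nu||_2^2 / N_total)."""
    nu = np.asarray(nu, dtype=float)
    M = len(nu)
    best = np.inf
    for gamma in np.linspace(0.0, 1.0, grid_size):
        thresh = np.sqrt(gamma * np.sum(nu**2) / N_total)
        n0 = np.count_nonzero(np.abs(nu) > thresh)
        best = min(best, (1 - gamma) * n0 + gamma * M)
    return best

# A 1-sparse nu gives s* = 1 (attained at gamma = 0),
# versus the factor M paid by uniform sampling.
```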

For comparison, we rewrite the bound in (Du et al., 2020) in the form of  $\nu^{*}$ .

Theorem 3.3. Under the setting of Definition 2.1, to obtain the same accuracy, the non-adaptive (uniform) sampling of (Du et al., 2020) requires that the total sampling budget from all sources  $N_{\text{total}}$  is at least

$$
\widetilde {\mathcal {O}} \left((K d + K M + \log (1 / \delta)) \sigma^ {2} M \| \nu^ {*} \| _ {2} ^ {2} \varepsilon^ {- 2}\right)
$$

and requires the same amount of target samples as above.

Note the key difference is that the  $s^*$  in Theorem 3.2 is replaced by  $M$  in Theorem 3.3. Below we give a concrete example to show this difference is significant.

Example: Sparse  $\nu^{*}$ . Consider an extreme case where  $w_{m} = e_{m \bmod (K - 1) + 1}$  for all  $m \in [M - 1]$ , and  $w_{M} = w_{M + 1} = e_{K}$ . This suggests that the target task is exactly the same as the source task  $M$  and all the other source tasks are uninformative. It follows that  $\nu^{*}$  is a 1-sparse vector  $e_{M}$  and  $s^{*} = 1$  when  $\gamma = 0$ . We conclude that uniform sampling requires a sample complexity that is  $M$  times larger than that of our non-uniform procedure.
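The sparse example above can be checked numerically: constructing  $W^{*}$  as described and solving the minimum-norm program recovers the 1-sparse  $\nu^{*} = e_{M}$ . This is our own verification sketch, instantiated with  $K = 3$ ,  $M = 5$ :

```python
import numpy as np

K, M = 3, 5
W = np.zeros((K, M))
for m in range(1, M):              # source tasks 1..M-1 (1-indexed as in the text)
    W[m % (K - 1), m - 1] = 1.0    # w_m = e_{m mod (K-1) + 1}
W[K - 1, M - 1] = 1.0              # w_M = e_K
w_target = np.eye(K)[K - 1]        # w_{M+1} = e_K

# minimum-norm solution of W nu = w_target
nu, *_ = np.linalg.lstsq(W, w_target, rcond=None)
# nu comes out as e_M: only source task M is relevant
```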

# 3.1. Proof sketch of Theorem 3.2

We first claim two inequalities that are derived via straightforward modifications of the proofs in Du et al. (2020):

$$
\operatorname {ER} \left(\hat {B}, \hat {w} _ {M + 1}\right) \lesssim \frac {\| P _ {X _ {M + 1} \hat {B}} ^ {\perp} X _ {M + 1} B ^ {*} w _ {M + 1} ^ {*} \| ^ {2}}{n _ {M + 1}} \tag {4}
$$

$$
\frac {\left\| P _ {X _ {M + 1} \hat {B}} ^ {\perp} X _ {M + 1} B ^ {*} \widetilde {W} ^ {*} \right\| _ {F} ^ {2}}{n _ {M + 1}} \lesssim \sigma^ {2} \left(K (M + d) + \log \frac {1}{\delta}\right) \tag {5}
$$

where  $P_A^\perp = I - A(A^\top A)^\dagger A^\top$ ,  $\tilde{\nu}^*(m) = \frac{\nu^*(m)}{\sqrt{n_m}}$ , and  $\widetilde{W}^* = \left[\sqrt{n_1} w_1^*, \sqrt{n_2} w_2^*, \ldots, \sqrt{n_M} w_M^*\right]$ . Using these two results and noting that  $w_{M+1}^* = \widetilde{W}^* \tilde{\nu}^*$ , we have

$$
\begin{array}{l} \operatorname {ER} \left(\hat {B}, \hat {w} _ {M + 1}\right) \stackrel {(4)} {\lesssim} \frac {1}{n _ {M + 1}} \| P _ {X _ {M + 1} \hat {B}} ^ {\perp} X _ {M + 1} B ^ {*} \widetilde {W} ^ {*} \tilde {\nu} ^ {*} \| _ {2} ^ {2} \\ \leq \frac {1}{n _ {M + 1}} \| P _ {X _ {M + 1} \hat {B}} ^ {\perp} X _ {M + 1} B ^ {*} \widetilde {W} ^ {*} \| _ {F} ^ {2} \, \| \tilde {\nu} ^ {*} \| _ {2} ^ {2} \\ \stackrel {(5)} {\lesssim} \sigma^ {2} \left(K (M + d) + \log \tfrac {1}{\delta}\right) \| \tilde {\nu} ^ {*} \| _ {2} ^ {2}. \end{array}
$$

The key step to our analysis is the decomposition of  $\| \tilde{\nu}^{*}\|_{2}^{2}$ . If we denote  $\epsilon^{-2} = \frac{N_{\mathrm{total}}}{\|\nu^{*}\|_{2}^{2}}$ , we have, for any  $\gamma \in [0,1]$

$$
\begin{array}{l} \sum_ {m} \frac {\nu^ {*} (m) ^ {2}}{n _ {m}} \left(\mathbf {1} \{| \nu^ {*} (m) | > \sqrt {\gamma} \epsilon \} + \mathbf {1} \{| \nu^ {*} (m) | \leq \sqrt {\gamma} \epsilon \}\right) \\ \lesssim \sum_ {m} \left(\epsilon^ {2} \mathbf {1} \left\{\left| \nu^ {*} (m) \right| > \sqrt {\gamma} \epsilon \right\} + \gamma \epsilon^ {2} \mathbf {1} \left\{\left| \nu^ {*} (m) \right| \leq \sqrt {\gamma} \epsilon \right\}\right) \\ \end{array}
$$

where the inequality comes from the definition of  $n_m$  and the fact that  $N_{\mathrm{total}} \gg M \underline{N}$ . Substituting the value of  $\epsilon$  and the definition of  $\| \nu^* \|_{0,\gamma}$  yields the desired result.

# 4. Main Algorithm and Theory

In the previous section, we showed the advantage of target-aware source task sampling when the optimal mixing vector  $\nu^{*}$  between source tasks and the target task is known. In practice, however,  $\nu^{*}$  is unknown and needs to be estimated based on the estimation of  $W^{*}$  and  $w_{M + 1}^{*}$ , which are themselves consequences of the unknown representation  $\phi^{*}$ . In this section, we design an algorithm that adaptively samples from source tasks to efficiently learn  $\nu^{*}$  and the prediction function for the target task  $B^{*}w_{M + 1}^{*}$ . The pseudocode for the procedure is found in Alg. 2.

We divide the algorithm into several epochs. At the end of each epoch  $i$ , we obtain estimates  $\hat{\phi}_i, \hat{W}_i$  and  $\hat{w}_{M+1}^i$ , which are then used to calculate the task relevance, denoted  $\hat{\nu}_{i+1}$ . Then in the next epoch  $i+1$ , we sample data based on  $\hat{\nu}_{i+1}$ . The key challenge in this iterative approach is that, if we directly apply the sampling strategy proposed in Section 3, estimation error propagates from round to round because  $\nu^*$  is unknown. To avoid inconsistent estimation, we enforce the condition that each source task is sampled at least  $\beta \epsilon_i^{-1}$  times to guarantee that  $|\hat{\nu}_i(m)|$  is

# Algorithm 2 Active Task Relevance Sampling

1: Input: confidence  $\delta$ , a lower bound of  $\sigma_{\mathrm{min}}(W^{*})$  as  $\underline{\sigma}$ , representation function class  $\Phi$  
2: Initialize  $\hat{\nu}_1 = [1 / M, 1 / M, \dots]$ ,  $\epsilon_i = 2^{-i}$  and  $\{\beta_i\}_{i=1,2,\ldots}$ , which will be specified later

3: for  $i = 1,2,\ldots$  do

4: Set  $n_m^i = \max \left\{\beta_i\hat{\nu}_i^2 (m)\epsilon_i^{-2},\beta_i\epsilon_i^{-1}\right\} .$  
5: For each task  $m$ , draw  $n_m^i$  i.i.d. samples from the corresponding offline dataset, denoted as  $\{X_{m}^{i}, Y_{m}^{i}\}_{m=1}^{M}$  
6: Estimate  $\hat{\phi}^i, \hat{W}_i, \hat{w}_{M+1}^i$  with Eqn. (2) and (3)  
7: Estimate the coefficient as

$$
\hat {\nu} _ {i + 1} = \underset {\nu} {\arg \min } \| \nu \| _ {2} ^ {2} \quad \text {s . t .} \quad \hat {W} _ {i} \nu = \hat {w} _ {M + 1} ^ {i} \tag {6}
$$

8: end for

always  $\sqrt{\epsilon_i}$ -close to  $|c\nu^*(m)|$ , where  $c \in [1/16, 4]$ . We will show why such estimation is enough in our analysis.
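To make the epoch loop concrete, here is a minimal numerical sketch of Alg. 2 for a linear representation on synthetic data. All sizes, the value of  $\beta$ , and the shortcut of estimating  $\hat{B}$  via an SVD of stacked per-task least-squares solutions (rather than the joint ERM of Eqn. (2)) are illustrative assumptions, not the paper's exact procedure; only the budget rule of line 4 and the minimum-norm step of Eqn. (6), computed via the pseudoinverse, follow the pseudocode.

```python
import numpy as np

def fit_ls(X, Y):
    # least-squares estimate of theta_m = B* w_m* from one task's data
    return np.linalg.lstsq(X, Y, rcond=None)[0]

def run_epochs(sample_task, M, K, x_tgt, y_tgt, n_epochs=4, beta=50.0):
    nu_hat = np.full(M, 1.0 / M)                 # step 2: uniform init
    for i in range(1, n_epochs + 1):
        eps = 2.0 ** (-i)
        # step 4: per-task budgets with the beta * eps^{-1} floor
        n = np.maximum(beta * nu_hat**2 / eps**2, beta / eps).astype(int)
        # steps 5-6: draw fresh samples, estimate B_hat via SVD of the
        # stacked per-task least-squares solutions (a linear-phi shortcut)
        thetas = np.column_stack([fit_ls(*sample_task(m, n[m])) for m in range(M)])
        B_hat = np.linalg.svd(thetas, full_matrices=False)[0][:, :K]
        W_hat = B_hat.T @ thetas                 # K x M
        w_hat = np.linalg.lstsq(x_tgt @ B_hat, y_tgt, rcond=None)[0]
        # step 7, Eqn. (6): minimum-norm nu with W_hat nu = w_hat
        nu_hat = np.linalg.pinv(W_hat) @ w_hat
    return B_hat, w_hat, nu_hat

# tiny synthetic instance (sizes are illustrative, not from the paper)
rng = np.random.default_rng(0)
d, K, M = 10, 2, 6
B_star, _ = np.linalg.qr(rng.standard_normal((d, K)))
W_star = rng.standard_normal((K, M))
w_star = W_star @ np.full(M, 1.0 / M)            # feasible target head

def sample_task(m, n_m):
    X = rng.standard_normal((n_m, d))
    return X, X @ (B_star @ W_star[:, m]) + 0.01 * rng.standard_normal(n_m)

x_tgt = rng.standard_normal((200, d))
y_tgt = x_tgt @ (B_star @ w_star) + 0.01 * rng.standard_normal(200)
B_hat, w_hat, nu_hat = run_epochs(sample_task, M, K, x_tgt, y_tgt)
err = np.linalg.norm(B_hat @ w_hat - B_star @ w_star)
```

On this instance the learned predictor  $\hat{B}\hat{w}_{M+1}$  recovers  $B^*w_{M+1}^*$  to within the noise level.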

# 4.1. Theoretical results under linear representation

Here we give a theoretical guarantee for the realizable linear representation function class. Under this setting, we choose  $\beta$  as follows

$$
\beta := \beta_i = 3000 K^2 R^2 \left(K M + K d \log \left(\frac{N_{\text{total}}}{\varepsilon M}\right) + \log \left(\frac{M \log \left(N_{\text{total}}\right)}{\delta / 10}\right)\right) \frac{1}{\underline{\sigma}^6}, \quad \forall i
$$

Theorem 4.1. Suppose we know in advance a lower bound of  $\sigma_{\mathrm{min}}(W^{*})$  denoted as  $\underline{\sigma}$ . Under the benign low-dimension linear representation setting as defined in Assumption 2.2, we have  $\operatorname{ER}(\hat{B}, \hat{w}_{M+1}) \leq \varepsilon^2$  with probability at least  $1 - \delta$  whenever the number of source samples  $N_{\text{total}}$  is at least

$$
\widetilde {\mathcal {O}} \left(\left(K (M + d) + \log \frac {1}{\delta}\right) \sigma^ {2} s ^ {*} \| \nu^ {*} \| _ {2} ^ {2} \varepsilon^ {- 2} + \square \sigma \varepsilon^ {- 1}\right)
$$

where  $\square = \left(MK^{2}dR / \underline{\sigma}^{3}\right)\sqrt{s^{*}}$  and the target task sample complexity  $n_{M + 1}$  is at least

$$
\widetilde {\mathcal {O}} \left(\sigma^ {2} K \varepsilon^ {- 2} + \diamondsuit \sqrt {s ^ {*}} \sigma \varepsilon^ {- 1}\right)
$$

where  $\diamondsuit = \min \left\{\frac{\sqrt{R}}{\underline{\sigma}^2K},\sqrt{K(M + d) + \log\frac{1}{\delta}}\right\}$  and  $s^*$  has been defined in Theorem 3.2.

Discussion. Compared with the known  $\nu^{*}$  case studied in the previous section, in this unknown  $\nu^{*}$  setting our algorithm only requires an additional low-order term  $\square \sigma \varepsilon^{-1}$  to achieve the same objective (under the additional Assumption 2.2). Also, as long as  $\diamondsuit \leq \widetilde{\mathcal{O}} (\sigma K\varepsilon^{-1})$ , our target task sample complexity  $\widetilde{\mathcal{O}} (\sigma^2 K\varepsilon^{-2})$  remains the optimal rate (Du et al., 2020).

Finally, we remark that a limitation of our algorithm is that it requires some prior knowledge of  $\underline{\sigma}$ . However, because it only hits the low-order  $\epsilon^{-1}$  terms, this is unlikely to dominate either of the sample complexities for reasonable values of  $d$ ,  $K$ , and  $M$ .

# 4.2. Proof sketch

Step 1: We first show that the estimated distribution over tasks  $\hat{\nu}_i$  is close to the underlying  $\nu^{*}$ .

Lemma 4.2 (Closeness between  $\hat{\nu}_i$  and  $\nu^{*}$ ). With probability at least  $1 - \delta$ , for any  $i$ , as long as  $n_{M + 1} \geq \frac{2000\epsilon_i^{-1}}{\underline{\sigma}^4}$ , we have

$$
| \hat{\nu}_{i+1}(m) | \in \left\{ \begin{array}{ll} \left[ |\nu^*(m)| / 16, \; 4 |\nu^*(m)| \right] & \text{if } |\nu^*(m)| \geq \sigma \sqrt{\epsilon_i} \\ \left[ 0, \; 4 \sigma \sqrt{\epsilon_i} \right] & \text{if } |\nu^*(m)| \leq \sigma \sqrt{\epsilon_i} \end{array} \right.
$$

Notice that the sample lower bound in the algorithm immediately implies sufficiently good estimation in the next epoch even if  $\hat{\nu}_{i + 1}$  goes to 0.

Proof sketch:

Under Assumption 2.2, by solving Eqn. (3), we can rewrite the optimization problems defining  $\nu^{*}$  and  $\hat{\nu}_i$  in Eqn. (1) and (6) roughly as follows (see the formal definition in the proof of Lemma E.1 in Appendix E)

$$
\hat {\nu} _ {i + 1} = \underset {\nu} {\arg \min} \| \nu \| _ {2} ^ {2}
$$

s.t.  $\sum_{m}\hat{B}_{i}^{\top}\left(B^{*}w_{m}^{*} + \frac{1}{n_{m}^{i}}\left(X_{m}^{i}\right)^{\top}Z_{m}\right)\nu (m)$

$$
= \hat {B} _ {i} ^ {\top} \left(B ^ {*} w _ {M + 1} ^ {*} + \frac {1}{n _ {M + 1} ^ {i}} \left(X _ {M + 1} ^ {i}\right) ^ {\top} Z _ {M + 1}\right),
$$

and  $\nu^{*} = \underset {\nu}{\arg \min}\| \nu \|_{2}^{2}$

s.t.  $\sum_{m}w_{m}^{*}\nu (m) = w_{M + 1}^{*}.$

Solving these two optimization problems gives, up to low-order noise terms,

$$
\begin{array}{l} \nu^*(m) = \left(B^* w_m^*\right)^T \left(B^* W^* \left(B^* W^*\right)^T\right)^+ \left(B^* w_{M+1}^*\right) \\ | \hat{\nu}_{i+1}(m) | \leq 2 \left| (B^* w_m^*)^T (\hat{B}_i \hat{W}_i (\hat{B}_i \hat{W}_i)^T)^+ B^* w_{M+1}^* \right| \\ \end{array}
$$
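Both closed forms are minimum-norm solutions; using the identity  $A^+ = A^\top (AA^\top)^+$ , the entrywise expression for  $\nu^*(m)$  can be checked against numpy's pseudoinverse (a synthetic sketch; the sizes are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, M = 20, 3, 8
B, _ = np.linalg.qr(rng.standard_normal((d, K)))   # orthonormal stand-in for B*
W = rng.standard_normal((K, M))                    # columns are w_1*, ..., w_M*
w_tgt = W @ rng.standard_normal(M)                 # a feasible w_{M+1}*

A = B @ W                                          # d x M, rank K
# minimum-norm nu solving (B W) nu = B w_{M+1}
nu_minnorm = np.linalg.pinv(A) @ (B @ w_tgt)
# entrywise closed form from the text, via A^+ = A^T (A A^T)^+
G_pinv = np.linalg.pinv(A @ A.T)
nu_closed = np.array([(B @ W[:, m]) @ G_pinv @ (B @ w_tgt) for m in range(M)])
```

The two vectors agree to numerical precision, so the entrywise formula and the minimum-norm program are interchangeable.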

It is easy to see that the main difference between these two expressions is  $\left(B^{*}W^{*}(B^{*}W^{*})^{T}\right)^{+}$  and its corresponding empirical estimation. Therefore, by denoting the difference between these two terms as

$$
\Delta = (\hat {B} _ {i} \hat {W} _ {i} (\hat {B} _ {i} \hat {W} _ {i}) ^ {T}) ^ {+} - (B ^ {*} W ^ {*} (B ^ {*} W ^ {*}) ^ {T}) ^ {+},
$$

we can establish the connection between the true and empirical task relevance as

$$
\left| \hat{\nu}_{i+1}(m) \right| - 2 \left| \nu^*(m) \right| \lesssim 2 \left| \left(B^* w_m^*\right)^T \Delta \left(B^* w_{M+1}^*\right) \right| \tag{7}
$$

Now the minimization on source tasks shown in Eqn. (2) ensures that

$$
\left\| B^* W^* - \hat{B}_i \hat{W}_i \right\|_F \leq \sigma \operatorname{poly}(d, M) \sqrt{\epsilon_i}.
$$

This helps us to further bound the  $(\hat{B}_i\hat{W}_i(\hat{B}_i\hat{W}_i)^T) - (B^* W^* (B^* W^*)^T)$  term, which can be regarded as a perturbation on the underlying matrix  $B^{*}W^{*}(B^{*}W^{*})^{T}$ . Then by using the generalized inverse matrix theorem (Kovanic, 1979), we can show that the inverse of the perturbed matrix is close to its original matrix on some low dimension space.

Therefore, we can upper bound Eqn. (7) by  $\sigma \sqrt{\epsilon_i}$ . We repeat the same procedure to upper bound  $\frac{1}{2} |\nu^{*}(m)| - |\hat{\nu}_{i + 1}(m)|$ . Combining these two, we have

$$
| \hat {\nu} _ {i + 1} (m) | \in \left[ \frac {1}{2} | \nu^ {*} (m) | - \frac {7}{1 6} \sigma \sqrt {\epsilon_ {i}}, 2 | \nu^ {*} (m) | + 2 \sigma \sqrt {\epsilon_ {i}} \right]
$$

This directly leads to the result based on whether  $|\nu^{*}(m)| \geq \sigma \sqrt{\epsilon_i}$  or not: for instance, when the inequality holds,  $\frac{1}{2}|\nu^*(m)| - \frac{7}{16}\sigma\sqrt{\epsilon_i} \geq |\nu^*(m)|/16$  and  $2|\nu^*(m)| + 2\sigma\sqrt{\epsilon_i} \leq 4|\nu^*(m)|$ .

Step 2: Now we prove the following two main lemmas on the final accuracy and the total sample complexity.

Define event  $\mathcal{E}$  as the event that, for all epochs, the closeness between  $\hat{\nu}_i$  and  $\nu^{*}$  defined in Lemma 4.2 is satisfied.

Lemma 4.3 (Accuracy on each epoch (informal)). Under  $\mathcal{E}$ , after the epoch  $i$ , we have  $\mathrm{ER}(\hat{B}, \hat{w}_{M+1})$  roughly upper bounded by

$$
\frac {\sigma^ {2}}{\beta} \left(K M + K d + \log \frac {1}{\delta}\right) s _ {i} ^ {*} \epsilon_ {i} ^ {2} + \frac {\sigma^ {2} (K + \log (1 / \delta))}{n _ {M + 1}}
$$

where  $s_i^* = \min_{\gamma \in [0,1]}(1 - \gamma)\| \nu^*\|_{0,\gamma}^i +\gamma M$

and  $\| \nu \|_{0,\gamma}^i \coloneqq |\{m : \nu_m > \sqrt{\gamma} \epsilon_i\}|$ .
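The effective sparsity  $s_i^*$  can be evaluated by a direct grid search over  $\gamma$ ; a small sketch (the grid resolution and the example vector are arbitrary choices of this sketch):

```python
import numpy as np

def effective_sparsity(nu, eps, M, grid=1000):
    """s* = min over gamma in [0,1] of (1 - gamma) * ||nu||_{0,gamma} + gamma * M,
    where ||nu||_{0,gamma} = #{m : |nu(m)| > sqrt(gamma) * eps}."""
    best = np.inf
    for g in np.linspace(0.0, 1.0, grid):
        hard = np.sum(np.abs(nu) > np.sqrt(g) * eps)
        best = min(best, (1 - g) * hard + g * M)
    return best

# two strong tasks and two barely-relevant ones: s* lands near 2, not 4
nu_example = np.array([1.0, 0.8, 0.0, 0.0, 1e-4, 1e-4, 0.0, 0.0])
s_star = effective_sparsity(nu_example, eps=0.01, M=len(nu_example))
```

The near-threshold entries are absorbed into the  $\gamma M$  term rather than counted as hard sparsity, which is exactly the trade-off the definition encodes.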

Proof sketch: As we showed in Section 3, the key for calculating the accuracy is to upper bound  $\sum_{m} \frac{\nu^{*}(m)^{2}}{n_{m}^{i}}$ . Similarly to Section 3, we employ the decomposition

$$
\sum_ {m} \frac {\nu^ {*} (m) ^ {2}}{n _ {m} ^ {i}} (\mathbf {1} \{| \nu^ {*} (m) | > \sqrt {\gamma} \epsilon_ {i} \} + \mathbf {1} \{| \nu^ {*} (m) | \leq \sqrt {\gamma} \epsilon_ {i} \}).
$$

The last sparsity-related term can again be easily upper bounded by  $\mathcal{O}((M - \| \nu^{*}\|_{0,\gamma}^{i})\sigma^{2}\gamma \epsilon_{i}^{2})$ .

Then in order to make a connection between  $n_m^i$  and  $\nu^{*}(m)$  by using Lemma 4.2, we further decompose the first term

as follows and get the upper bound

$$
\begin{array}{l} \sum_m \frac{\nu^*(m)^2}{n_m^i} \mathbf{1}\left\{|\nu^*(m)| > \sigma\sqrt{\epsilon_{i-1}}\right\} + \sum_m \frac{\nu^*(m)^2}{n_m^i} \mathbf{1}\left\{\sqrt{\gamma}\epsilon_i \leq |\nu^*(m)| \leq \sigma\sqrt{\epsilon_{i-1}}\right\} \\ \lesssim \sum_m \frac{\hat{\nu}_i^2(m)}{n_m^i} \mathbf{1}\left\{|\nu^*(m)| > \sigma\sqrt{\epsilon_{i-1}}\right\} + \sum_m \frac{\sigma^2 \epsilon_i}{n_m^i} \mathbf{1}\left\{\sqrt{\gamma}\epsilon_i \leq |\nu^*(m)| \leq \sigma\sqrt{\epsilon_{i-1}}\right\} \\ \leq \sum_m \frac{\epsilon_i^2}{\beta} \mathbf{1}\left\{|\nu^*(m)| > \sigma\sqrt{\epsilon_{i-1}}\right\} + \sum_m \frac{\sigma^2 \epsilon_i^2}{\beta} \mathbf{1}\left\{\sqrt{\gamma}\epsilon_i \leq |\nu^*(m)| \leq \sigma\sqrt{\epsilon_{i-1}}\right\} \\ \leq \mathcal{O}\left(\|\nu^*\|_{0,\gamma}^i (1 + \sigma^2) \epsilon_i^2 / \beta\right) \\ \end{array}
$$

where the second inequality is from the definition of  $n_m^i$ .

Lemma 4.4 (Sample complexity in each epoch (informal)). Under  $\mathcal{E}$ , after epoch  $i$ , we have the total number of training samples from source tasks upper bounded by

$$
\mathcal{O}\left(\beta\left(M \varepsilon^{-1} + \|\nu^*\|_2^2 \varepsilon^{-2}\right)\right) + \text{low-order term} \times \Gamma.
$$

Proof sketch: For any fixed epoch  $i$ , by definition of  $n_m^i$ , we again decompose the summed source tasks based on  $\nu^{*}(m)$  and get the total sample complexity as follows

$$
\begin{array}{l} \sum_{m=1}^{M} \beta \hat{\nu}_i^2(m) \epsilon_i^{-2} \mathbf{1}\left\{\left|\nu^*(m)\right| > \sigma\sqrt{\epsilon_{i-1}}\right\} \\ + \sum_{m=1}^{M} \beta \hat{\nu}_i^2(m) \epsilon_i^{-2} \mathbf{1}\left\{\left|\nu^*(m)\right| \leq \sigma\sqrt{\epsilon_{i-1}}\right\} + M \beta \epsilon_i^{-1} \\ \end{array}
$$

Again, by substituting the value of  $\hat{\nu}_i$  from Lemma 4.2, we can upper bound the second term in terms of  $\nu^{*}$ , and we can also show that the third term is a low-order  $\epsilon^{-1}$  term.

Theorem 4.1 (stated formally as Theorem E.4 in Appendix E) follows by combining the two lemmas.

# 5. Experiments

In this section, we empirically evaluate our active learning algorithm for multi-task learning on tasks derived from the corrupted MNIST dataset (MNIST-C) proposed in Mu & Gilmer (2019). While our theoretical results only hold for linear representations, our experiments demonstrate the effectiveness of our algorithm on neural network representations as well. We show that our proposed algorithm: (1) achieves better performance when using the same amount of source samples as the non-adaptive sampling algorithm, and (2) gradually draws more samples from important source tasks.

# 5.1. Experiment setup

Dataset and problem setting. The MNIST-C dataset is a comprehensive suite of 16 different types of corruptions applied to the MNIST test set. To create source and target tasks, we divide each sub-dataset with a specific corruption into 10 tasks by applying one-hot encoding to the  $0 - 9$  labels. Therefore, we have 160 tasks in total, which we denote as "corruption type + label". For example, brightness_0 denotes the data corrupted by brightness noise and relabeled to  $1/0$  based on whether the image shows the digit 0 or not. We choose a small number of fixed samples from the target task to mimic the scarcity of target task data. On the other hand, we set no budget limitation on source tasks. We compare the performance of our algorithm to the non-adaptive uniform sampling algorithm, where each is given the same number of source samples and the same target task dataset.

Models. We start with the linear representation as defined in our theorem and set  $B \in \mathbb{R}^{784 \times 50}$  (input dimension  $28 \times 28 = 784$ ) and  $w_{m}^{i} \in \mathbb{R}^{50}$ . Note that although MNIST is usually treated as a classification problem with cross-entropy loss, here we model it as a regression problem with  $\ell_{2}$  loss to align with the setting studied in this paper. Moreover, we also test our algorithm with a 2-layer ReLU convolutional neural net (CNN) followed by fully-connected linear layers, where all the source tasks share the same model except the last linear layer, also denoted as  $w_{m}^{i} \in \mathbb{R}^{50}$ .

AL algorithm implementation. We run our algorithm iteratively for 4 epochs. The non-adaptive uniform sampling algorithm is provided with the same amount of source samples. There are some differences between our proposed algorithm and what is implemented here. First, we re-scale some parameters from the theorem to account for potential looseness in our analysis. Moreover, instead of drawing fresh i.i.d. samples in each epoch and discarding the past, in practice we reuse the samples from previous epochs and only draw what is necessary to meet the required number of samples for the current epoch. This introduces some randomness in the total source sample usage. For example, we may only require 100 samples from source task A for the current epoch, but we may have sampled 200 from source task A in the previous epoch. So within a single epoch we always use no more samples than the non-adaptive algorithm. Therefore, in our results shown below, the total number of source samples varies across target tasks. But we argue that this variation is roughly at the same level across tasks and does not affect our conclusions. Please refer to Appendix F.1 for details.
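The reuse rule described above amounts to requesting only the per-task shortfall in each epoch; a minimal sketch (the function and variable names are hypothetical, not from our implementation):

```python
def draws_needed(required, already_drawn):
    """Per-task samples to request this epoch: reuse cached samples from
    earlier epochs and top up only the shortfall."""
    return {task: max(0, need - already_drawn.get(task, 0))
            for task, need in required.items()}

# epoch 3 needs 100 samples of task A but 200 were drawn in epoch 2 -> draw 0
extra = draws_needed({"A": 100, "B": 300}, {"A": 200, "B": 250})
# extra == {"A": 0, "B": 50}
```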

# 5.2. Results

Linear representation. We choose 500 target samples for each target task. After 4 epochs, we use in total around 30000 to 40000 source samples. Our adaptive algorithm frequently outperforms the non-adaptive one, as shown in Figure 1.

![](images/e367dd07e013d0d87811e484334c7be36d15d1c0d7d364265a40bc26a59ff58f.jpg)  
Figure 1. Performance of the adaptive (ada) and the non-adaptive (non-ada) algorithm on the linear representation. Left: The prediction difference (in %) between ada and non-ada for all target tasks; larger is better. The y-axis denotes the noise type and the x-axis denotes the binarized label, with each grid cell representing a target task; e.g., the cell at the top left corner stands for target task brightness_0. In summary, the adaptive algorithm achieves  $1.1\%$  higher average accuracy than the non-adaptive one and achieves the same or better accuracy in 136 out of 160 tasks. Middle: Histogram summary of incorrect predictions (left is better). There is a clear shift towards the left for the adaptive algorithm. Right: Sampling distribution for the target task glass_blur_2, showing the number of samples drawn from each source task at the beginning of epoch 3 by the adaptive algorithm. The samples clearly concentrate on several X_2 source tasks, which matches our intuition that all "2 vs. others" tasks should have a closer connection with the glass_blur_2 target task.

![](images/965ceb0ce5718a834862d3824115f09a15a9e76d250bb90b4932f4db68bf128d.jpg)  
Figure 2. Performance of the adaptive (ada) and the non-adaptive (non-ada) algorithm on the Convnet. Left: The prediction difference (%) between ada and non-ada for all target tasks. (See Figure 1 for an explanation of the notation.) In summary, the adaptive algorithm achieves  $0.68\%$  higher average accuracy than the non-adaptive one and achieves the same or better accuracy in 133 out of 160 tasks. Middle: Histogram summary of incorrect predictions (left is better). There is a clear shift towards the left for the adaptive algorithm. Although the average performance improvement is smaller than with the linear representation, the relative improvement is still significant given the already strong baseline performance (most prediction errors are below  $6\%$ , while in the linear case most are above  $6\%$ ). Right: Sample distribution for the target task glass_blur_2. A large portion of samples again concentrates on several X_2 source tasks, which matches our intuition that all "2 vs. others" tasks should have a closer connection with the glass_blur_2 target task. But the overall sample distribution is more spread out compared to the one on the linear representation.

For those cases where gains are not observed, we conjecture that those tasks violate our realizability assumptions more than the others. We provide a more detailed discussion and supporting results for those failure cases in Appendix F.2.2. Next, we investigate the sample distribution at the beginning of epoch 3. We show the result for glass_blur_2 as a representative case, with more examples in Appendix F.3. From the figure, we can clearly see that the samples concentrate on the more target-related source tasks.

Convnet. We choose 200 target samples for each target task. After 4 epochs, we use in total around 30000 to 40000 source samples. Our adaptive algorithm again frequently outperforms the non-adaptive one, as shown in Figure 2. Next, we again investigate the sample distribution at the beginning of epoch 3 and show a representative result (more examples in Appendix F.3.2). First of all, a large number of source samples again concentrate on "2 vs. others" tasks. The reader may notice that some other source tasks also contribute a relatively large number of samples. This is a typical phenomenon in our experiments on convnets that is seldom observed with the linear representation. It might be due to the greater expressive power of the CNN, which captures some non-intuitive relationships between some source tasks and the target task. Or it might simply be due to estimation error, since our algorithm is theoretically justified only for the realizable linear representation. We provide more discussion in Appendix F.3.1.

# 6. Conclusion and future work

Our paper takes an important initial step towards bringing techniques from active learning to representation learning. There are many future directions. From the theoretical perspective, it is natural to analyze fine-tuned models or even more general models like neural nets, as mentioned in the related work section. From the empirical perspective, our next step is to modify and apply the algorithm to more complicated CV or NLP datasets and further analyze its performance. Finally, it would also be interesting to combine our task-wise active learning with instance-wise active learning.

# Acknowledgements

YC wants to thank Lei Chen for discussions about the experiments. SSD acknowledges funding from NSF Awards IIS-2110170 and DMS-2134106.

# References

Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al. Language models are few-shot learners. arXiv preprint arXiv:2005.14165, 2020.  
Chen, S., Crammer, K., He, H., Roth, D., and Su, W. J. Weighted training for cross-task learning, 2021.  
Chua, K., Lei, Q., and Lee, J. D. How fine-tuning allows for effective meta-learning, 2021.  
Collins, L., Hassani, H., Mokhtari, A., and Shakkottai, S. Exploiting shared representations for personalized federated learning. arXiv preprint arXiv:2102.07078, 2021.  
Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248-255, 2009. doi: 10.1109/CVPR.2009.5206848.  
Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.  
Du, S. S., Hu, W., Kakade, S. M., Lee, J. D., and Lei, Q. Few-shot learning via learning the representation, provably. In International Conference on Learning Representations, 2020.  
Kovanic, P. On the pseudoinverse of a sum of symmetric matrices with applications to estimation. Kybernetika, 15(5):341-348, 1979. URL http://eudml.org/doc/28097.  
Krizhevsky, A. Learning multiple layers of features from tiny images. Technical report, 2009.  
Mu, N. and Gilmer, J. Mnist-c: A robustness benchmark for computer vision. arXiv preprint arXiv:1906.02337, 2019.  
Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al. Language models are unsupervised multitask learners. OpenAI blog, 2019.  
Radford, A., Kim, J. W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al. Learning transferable visual models from natural language supervision. arXiv preprint arXiv:2103.00020, 2021.

Settles, B. Active learning literature survey. 2009.  
Shachaf, G., Brutzkus, A., and Globerson, A. A theoretical analysis of fine-tuning with linear teachers. Advances in Neural Information Processing Systems, 34, 2021.  
Thekumparampil, K. K., Jain, P., Netrapalli, P., and Oh, S. Sample efficient linear meta-learning by alternating minimization. arXiv preprint arXiv:2105.08306, 2021.  
Tripuraneni, N., Jordan, M., and Jin, C. On the theory of transfer learning: The importance of task diversity. Advances in Neural Information Processing Systems, 33, 2020.  
Tripuraneni, N., Jin, C., and Jordan, M. Provable meta-learning of linear representations. In International Conference on Machine Learning, pp. 10434-10443. PMLR, 2021.  
Xu, Z. and Tewari, A. Representation learning beyond linear prediction functions. arXiv preprint arXiv:2105.14989, 2021.  
Yao, X., Zheng, Y., Yang, X., and Yang, Z. Nlp from scratch without large-scale pretraining: A simple and efficient framework, 2021.  
Zamir, A. R., Sax, A., Shen, W., Guibas, L. J., Malik, J., and Savarese, S. Taskonomy: Disentangling task transfer learning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3712-3722, 2018.

# A. Appendix structure

In Appendix B, we define the notation commonly used in the following analysis. In Appendix C, we define three high-probability events and prove three claims as variants of original results in Du et al. (2020). These events and claims are used widely in the subsequent theoretical analysis. Then we give formal proofs of Theorems 3.2 and 3.3 in Appendix D and a formal proof of Theorem E.4 in Appendix E. Finally, we show more comprehensive experimental results in Appendix F.

# B. Notation

- Define  $\hat{\Delta} = \hat{B}\hat{W} - B^{*}W^{*}$  and correspondingly define  $\hat{\Delta}^i = \hat{B}_i\hat{W}_i - B^{*}W^{*}$  if the algorithm is divided into epochs.  
- Define  $\hat{\Delta}_m = \hat{B}\hat{w}_m - B^* w_m^*$  and correspondingly define  $\hat{\Delta}_m^i = \hat{B}_i\hat{w}_m^i - B^* w_m^*$ .  
- Restate  $P_A^\perp = I - A(A^\top A)^\dagger A^\top$  
- Restate that  $\hat{\Sigma}_m = (X_m)^\top X_m$  and correspondingly  $\hat{\Sigma}_m^i = (X_m^i)^\top X_m^i$  if the algorithm is divided into epochs.  
- Restate that  $\Sigma_{m} = \mathbb{E}\left[\hat{\Sigma}_{m}\right]$  and correspondingly  $\Sigma_{m}^{i} = \mathbb{E}\left[\hat{\Sigma}_{m}^{i}\right]$ .  
- Define  $\kappa = \frac{\lambda_{max}(\Sigma)}{\lambda_{min}(\Sigma)}$ , recall we assume all  $\Sigma_m = \Sigma$ . Note that in the analysis for adaptive algorithm, we assume identity covariance so  $\kappa = 1$ .  
- For convenience, we write  $\sum_{m=1}^{M}$  as  $\sum_{m}$ .  
- If the algorithm is divided into epochs, we denote the total number of epoch as  $\Gamma$ .
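As a quick sanity check of the restated notation,  $P_A^\perp = I - A(A^\top A)^\dagger A^\top$  is the orthogonal projector onto the complement of  $\mathrm{col}(A)$ ; a small numpy sketch (sizes arbitrary) verifying its two defining properties:

```python
import numpy as np

def perp_projector(A):
    """P_A^perp = I - A (A^T A)^+ A^T: orthogonal projector onto col(A)^perp."""
    return np.eye(A.shape[0]) - A @ np.linalg.pinv(A.T @ A) @ A.T

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 4))
P = perp_projector(A)

assert np.allclose(P @ A, 0)   # annihilates the column space of A
assert np.allclose(P @ P, P)   # idempotent, as a projector must be
```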

# C. Commonly used claims and definitions

# C.1. Co-variance concentration guarantees

We define the following guarantees on the feature covariance concentration that has been used in all proofs below.

$$
\mathcal{E}_{\text{source}} = \left\{0.9 \Sigma_m \leq \hat{\Sigma}_m \leq 1.1 \Sigma_m, \forall m \in [M]\right\}
$$

$$
\mathcal{E}_{\text{target1}} = \left\{0.9 B_1^{\top} B_2 \leq B_1^{\top} \hat{\Sigma}_{M+1} B_2 \leq 1.1 B_1^{\top} B_2, \text{ for any orthonormal } B_1, B_2 \in \mathbb{R}^{d \times K} \mid \Sigma_{M+1} = I\right\}
$$

$$
\mathcal{E}_{\text{target2}} = \left\{0.9 \Sigma_{M+1} \leq B^{\top} \hat{\Sigma}_{M+1} B \leq 1.1 \Sigma_{M+1}, \text{ for any } B \in \mathbb{R}^{d \times K}\right\}
$$

By Claim A.1 in Du et al. (2020), we know that, as long as  $n_m \gg d + \log(M / \delta), \forall m \in [M + 1]$

$$
\operatorname{Prob}\left(\mathcal{E}_{\text{source}}\right) \geq 1 - \frac{\delta}{10},
$$

Moreover, as long as  $n_m \gg K + \log(1 / \delta)$ ,

$$
\operatorname{Prob}\left(\mathcal{E}_{\text{target1}}\right) \geq 1 - \frac{\delta}{10}
$$

$$
\operatorname{Prob}\left(\mathcal{E}_{\text{target2}}\right) \geq 1 - \frac{\delta}{10}.
$$

Correspondingly, if the algorithm is divided into epochs where each epoch we draw new set of data, then we define

$$
\mathcal{E}_{\text{source}}^i = \left\{0.9 \Sigma_m \leq \hat{\Sigma}_m^i \leq 1.1 \Sigma_m, \forall m \in [M]\right\}
$$

Again, as long as  $n_m^i\gg d + \log (M\Gamma /\delta),\forall m\in [M],\forall i\in [\Gamma ]$  , we have

$$
\operatorname{Prob}\left(\bigcap_{i \in [\Gamma]} \mathcal{E}_{\text{source}}^i\right) \geq 1 - \frac{\delta}{5}
$$

Notice that  $\mathcal{E}_{\mathrm{target1}}$  will only be used in analyzing the main active learning Algorithm 2, while the other two are used in both Algorithms 1 and 2.
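These events can be illustrated numerically: for isotropic Gaussian features with  $n \gg d$ , the normalized empirical covariance concentrates inside the 0.9/1.1 multiplicative band (a synthetic sketch, not a proof of the constants in Claim A.1):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 100_000, 10                  # n far above d + log(M / delta)
X = rng.standard_normal((n, d))     # features with Sigma_m = I
Sigma_hat = X.T @ X / n             # normalized empirical covariance
eigs = np.linalg.eigvalsh(Sigma_hat)
# E_source holds here: all eigenvalues sit inside the [0.9, 1.1] band
```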

# C.2. Claims guarantees for unequal sample numbers from source tasks

Here we restate two claims and one result (not stated as a claim) from Du et al. (2020), and prove that they still hold when the numbers of samples drawn from the source tasks are not equal, as long as the general low-dimension linear representation setting of Definition 2.1 is satisfied. No benign setting like Assumption 2.2 is required.

# Algorithm 3 General sample procedure

1: For each task  $m$ , draw  $n_m$  i.i.d samples from the corresponding offline dataset denoted as  $\{X_m, Y_m\}_{m=1}^M$ .  
2: Estimate the models as

$$
\hat {\phi}, \hat {W} = \underset {\phi \in \Phi , \hat {W} = [ \hat {w} _ {1}, \hat {w} _ {2}, \dots ]} {\arg \min } \sum_ {m = 1} ^ {M} \| \phi (X _ {m}) \hat {w} _ {m} - Y _ {m} \| ^ {2}
$$

$$
\hat{w}_{M+1} = \underset{w}{\arg\min} \left\| \hat{\phi}(X_{M+1}) w - Y_{M+1} \right\|^2
$$

Specifically, consider the above procedure, we show that the following holds for any  $\{n_m\}_{m=1}^M$ .

Claim C.1 (Modified version of Claim A.3 in (Du et al., 2020)). Given  $\mathcal{E}_{\mathrm{source}}$ , with probability at least  $1 - \delta / 10$

$$
\sum_ {m} \| X _ {m} \hat {\Delta} _ {m} \| _ {2} ^ {2} \leq \sigma^ {2} \left(K M + K d \log (\kappa (\sum_ {m} n _ {m}) / M) + \log (1 / \delta)\right)
$$

Proof. We follow nearly the same steps as the proof in (Du et al., 2020), so some details are skipped and we only focus on the main steps that require modification. Also we directly borrow some notations including  $\overline{V},\mathcal{N},r$  from the original proof and will restate some of them here for clarity.

Notation restatement. Since  $\mathrm{rank}(\hat{\Delta})\leq 2K$ , we can write  $\hat{\Delta} = VR = [V\pmb{r}_1,\dots ,V\pmb{r}_M]$  where  $V\in \mathcal{O}_{d,2K}$  and  $R = [\pmb {r}_1,\dots ,\pmb {r}_M]\in \mathbb{R}^{2K\times M}$ . Here  $\mathcal{O}_{d_1,d_2}$  $(d_{1}\geq d_{2})$  is the set of orthonormal  $d_{1}\times d_{2}$  matrices (i.e., matrices whose columns are orthonormal). For each  $m\in [M]$  we further write  $X_{m}V = U_{m}Q_{m}$  where  $U_{m}\in \mathcal{O}_{n_{m},2K}$  and  $Q_{m}\in \mathbb{R}^{2K\times 2K}$ . To cover all possible  $V$ , we use an  $\epsilon$ -net argument: there exists an  $\epsilon$ -net  $\mathcal{N}_{\epsilon}$  of  $\mathcal{O}_{d,2K}$  in Frobenius norm such that  $\mathcal{N}_{\epsilon}\subset \mathcal{O}_{d,2K}$  and  $|\mathcal{N}_{\epsilon}|\leq \left(\frac{6\sqrt{2K}}{\epsilon}\right)^{2Kd}$ , and we let  $\overline{V}$  denote the element of  $\mathcal{N}_{\epsilon}$  closest to  $V$ . (Please refer to the original proof for why such an  $\epsilon$ -net exists.) Now we briefly state the proofs.

Step 1:

$$
\sum_ {m} \| X _ {m} (\hat {B} \hat {w} _ {m} - B ^ {*} w _ {m} ^ {*}) \| _ {2} ^ {2} \leq \sum_ {m = 1} ^ {M} \langle Z _ {m}, X _ {m} \overline {{V}} _ {m} r _ {m} \rangle + \sum_ {m = 1} ^ {M} \langle Z _ {m}, X _ {m} (V - \overline {{V}} _ {m}) r _ {m} \rangle
$$

This comes from the first three lines of step 4 in original proof. For the first term, with probability  $1 - \delta / 10$ , by using standard tail bound for  $\chi^2$  random variables and the  $\epsilon$ -net argument (details in eqn.(28) in original proof), we have it upper bounded by

$$
\sigma \sqrt {K M + \log (| \mathcal {N} _ {\epsilon} | / \delta)} \sqrt {\sum_ {m} ^ {M} \| X _ {m} V r _ {m} \| _ {2} ^ {2}} + \sigma \sqrt {K M + \log (| \mathcal {N} _ {\epsilon} | / \delta)} \sqrt {\sum_ {m} ^ {M} \| X _ {m} (\overline {{V}} - V) r _ {m} \| _ {2} ^ {2}}
$$

And for the second term, since  $\sigma^{-2}\sum_{m}\| Z_{m}\|^{2}\sim \chi^{2}\left(\sum_{m = 1}^{M}n_{m}\right)$ , again by using standard tail bound (details in eqn.(29) in original proof), we have that with high probability  $1 - \delta /20$ , it is upper bounded by

$$
\sum_ {m = 1} ^ {M} \langle Z _ {m}, X _ {m} (V - \bar {V} _ {m}) r _ {m} \rangle \lesssim \sigma \sqrt {\sum_ {m = 1} ^ {M} n _ {m} + \log (1 / \delta)} \sqrt {\sum_ {m} \| X _ {m} (V - \bar {V} _ {m}) r _ {m} \| _ {2} ^ {2}}
$$

Step 2: Now we can further bound the term  $\sqrt{\sum_{m}\|X_{m}(V - \overline{V}_{m})r_{m}\|_{2}^{2}}$  by showing

$$
\begin{array}{l} \sum_ {m} \| X _ {m} (V - \overline {{V}} _ {m}) r _ {m} \| _ {2} ^ {2} \leq \sum_ {m} \| X _ {m} \| _ {F} ^ {2} \| V - \overline {{V}} \| _ {F} ^ {2} \| r _ {m} \| _ {2} ^ {2} \\ \leq 1. 1 \bar {\lambda} \sum_ {m} n _ {m} \| V - \bar {V} \| _ {F} ^ {2} \| r _ {m} \| _ {2} ^ {2} \\ \leq 1. 1 \bar {\lambda} \epsilon^ {2} \sum_ {m} n _ {m} \| r _ {m} \| _ {2} ^ {2} \\ \leq 1. 1 \overline {{\lambda}} \epsilon^ {2} \sum_ {m} n _ {m} \| \hat {\Delta} _ {m} \| _ {2} ^ {2} \\ \leq 1. 1 \kappa \epsilon^ {2} \sum_ {m} \| X _ {m} (\hat {\Delta} _ {m}) \| _ {2} ^ {2} \\ \end{array}
$$

Note this proof is the combination of step 2 and step 3 in the original proof. The only difference here is  $n_m$  is different for each  $m$  so you need to be more careful on those upper and lower bounds.

Step 3: Finally, we again use the self-bounding techniques. Recall that we have

$$
\sum_m \| X_m \hat{\Delta}_m \|_2^2 \leq \sqrt{\sum_{m=1}^M \| Z_m \|_2^2} \sqrt{\sum_{m=1}^M \| X_m \hat{\Delta}_m \|_2^2}
$$

By rearranging this inequality and using the tail bound on the distribution of  $Z_{m}$ , we have, with high probability,

$$
\sum_ {m} \| X _ {m} (\hat {\Delta} _ {m}) \| _ {2} ^ {2} \leq \sigma^ {2} \left(\sum_ {m = 1} ^ {M} n _ {m} + \log (1 / \delta)\right)
$$

Step 4: Now, substituting these bounds into the inequality from Step 1, we have

$$
\sum_ {m} \| X _ {m} \hat {\Delta} _ {m} \| _ {2} ^ {2} \leq \sigma \sqrt {K M + \log (| \mathcal {N} _ {\epsilon} | / \delta)} \sqrt {\sum_ {m} \| X _ {m} \hat {\Delta} _ {m} \| _ {2} ^ {2}} + 2 \epsilon \sigma^ {2} \left(\sum_ {m} n _ {m} + \log (1 / \delta)\right)
$$

Then, rearranging the inequality and choosing a proper  $\epsilon$ , we get the result.
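The rearrangement step is the elementary quadratic fact that  $S \leq A\sqrt{S} + C$  (with  $A, C \geq 0$ ) implies  $S \leq A^2 + 2C$ , by solving the quadratic in  $\sqrt{S}$  and using  $(a+b)^2 \leq 2a^2 + 2b^2$ . A brute-force numerical check of this fact (random values, purely illustrative):

```python
import numpy as np

# Largest S satisfying S <= A*sqrt(S) + C is the positive root of the quadratic
# in sqrt(S); verify that this root is always at most A**2 + 2*C.
rng = np.random.default_rng(1)
for _ in range(1000):
    A, C = rng.uniform(0, 10, size=2)
    s_max = ((A + np.sqrt(A ** 2 + 4 * C)) / 2) ** 2
    assert s_max <= A ** 2 + 2 * C + 1e-9
```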


Claim C.2 (Modified version of Claim A.4 in (Du et al., 2020)). Given  $\mathcal{E}_{\mathrm{source}}$  and  $\mathcal{E}_{\mathrm{target2}}$ , with probability at least  $1 - \delta / 10$ ,

$$
\| P _ {X _ {M + 1} \hat {B}} ^ {\perp} X _ {M + 1} B ^ {*} \widetilde {W} ^ {*} \| _ {F} ^ {2} \leq 1. 3 n _ {M + 1} \sigma^ {2} \left(K T + K d \log \left(\left(\kappa \sum_ {m} n _ {m}\right) / M\right) + \log \frac {1}{\delta}\right)
$$

where  $\widetilde{W}^{*} = W^{*}\sqrt{\operatorname{diag}([n_{1},n_{2},\ldots,n_{M}])}$ .

Proof. The proof is almost the same as the first part of the previous proof, except that we do not need to extract  $n_m$ .

$$
\begin{array}{l} \sum_ {m} \| X _ {m} (\hat {B} \hat {w} _ {m} - B ^ {*} w _ {m} ^ {*}) \| _ {2} ^ {2} \geq \sum_ {m} \| P _ {X _ {m} \hat {B}} ^ {\perp} X _ {m} B ^ {*} w _ {m} ^ {*} \| ^ {2} \\ \geq 0. 9 \sum_ {m} n _ {m} \| P _ {\Sigma_ {m} \hat {B}} ^ {\perp} \Sigma_ {m} B ^ {*} w _ {m} ^ {*} \| ^ {2} \\ = 0. 9 \sum_ {m} n _ {m} \| P _ {\Sigma_ {M + 1} \hat {B}} ^ {\perp} \Sigma_ {M + 1} B ^ {*} w _ {m} ^ {*} \| ^ {2} \\ = 0. 9 \left\| P _ {\Sigma_ {M + 1} \hat {B}} ^ {\perp} \Sigma_ {M + 1} B ^ {*} \tilde {W} ^ {*} \right\| ^ {2} \\ \geq \frac {0 . 9}{1 . 1} \frac {1}{n _ {M + 1}} \| P _ {X _ {M + 1} \hat {B}} ^ {\perp} X _ {M + 1} B ^ {*} \widetilde {W} ^ {*} \| _ {F} ^ {2} \\ \end{array}
$$

where the first and second inequalities are the same as in the original proof. The third equality comes from our assumption that all  $\Sigma_{m}$  are the same, and the fourth equality is just another form of the same quantity. The last inequality holds for the same reason as the second inequality, which can again be found in the original proof.
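The fourth equality above absorbs the per-task weights  $n_m$  into  $\widetilde{W}^*$ ; the underlying linear-algebra identity can be checked numerically (hypothetical dimensions, with a random matrix standing in for the projected operator):

```python
import numpy as np

# For any matrix A and Wtilde = W @ sqrt(diag(n)):
#   sum_m n_m * ||A w_m||_2^2 == ||A Wtilde||_F^2.
rng = np.random.default_rng(5)
d, K, M = 7, 3, 5
A = rng.normal(size=(d, K))                    # stands in for P^perp Sigma B*
W = rng.normal(size=(K, M))
n = rng.integers(5, 50, size=M).astype(float)  # task sample sizes n_m
Wtilde = W @ np.diag(np.sqrt(n))
lhs = sum(n[m] * np.linalg.norm(A @ W[:, m]) ** 2 for m in range(M))
assert np.isclose(lhs, np.linalg.norm(A @ Wtilde, 'fro') ** 2)
```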

Now by using Claim C.1 as an upper bound, we get our desired result.

Basically, this claim is just another way to write Claim A.4 in (Du et al., 2020). Here we combine the  $n_m$  with  $W^*$ , while in the original proof they extract  $n_m$  and lower bound  $W^*$  by its minimum singular value, since in their case all  $n_m$  are the same.

Claim C.3. Given  $\mathcal{E}_{\mathrm{source}}$  and  $\mathcal{E}_{\mathrm{target2}}$ , with probability at least  $1 - \delta / 5$ ,

$$
\operatorname {E R} \left(\hat {B}, \hat {\boldsymbol {w}} _ {M + 1}\right) \leq \frac {1}{n _ {M + 1}} \left\| P _ {X _ {M + 1} \hat {B}} ^ {\perp} X _ {M + 1} B ^ {*} \boldsymbol {w} _ {M + 1} ^ {*} \right\| _ {F} ^ {2} + \sigma^ {2} \frac {K + \log (1 / \delta)}{n _ {M + 1}}.
$$

Proof. This bound comes exactly from the corresponding part of the proof of Theorem 4.1; nothing needs to change.

# D. Analysis for Warm-up

# D.1. Proof for Theorem 3.2

Suppose events  $\mathcal{E}_{\mathrm{source}}$  and  $\mathcal{E}_{\mathrm{target2}}$  hold; then we have with probability at least  $1 - \frac{3\delta}{10}$ ,

$$
\begin{array}{l} \mathbf {E R} (\hat {B}, \hat {w} _ {M + 1}) \leq \frac {\| P _ {X _ {M + 1} \hat {B}} ^ {\perp} X _ {M + 1} B ^ {*} w _ {M + 1} ^ {*} \| ^ {2}}{n _ {M + 1}} + \sigma^ {2} \frac {K + \log (1 / \delta)}{n _ {M + 1}} \\ = \frac {1}{n _ {M + 1}} \| P _ {X _ {M + 1} \hat {B}} ^ {\perp} X _ {M + 1} B ^ {*} \widetilde {W} ^ {*} \tilde {\nu} ^ {*} \| _ {2} ^ {2} + \sigma^ {2} \frac {K + \log (1 / \delta)}{n _ {M + 1}} \\ \leq \frac {1}{n _ {M + 1}} \| P _ {X _ {M + 1} \hat {B}} ^ {\perp} X _ {M + 1} B ^ {*} \widetilde {W} ^ {*} \| _ {F} ^ {2} \| \tilde {\nu} ^ {*} \| _ {2} ^ {2} + \sigma^ {2} \frac {K + \log (1 / \delta)}{n _ {M + 1}} \\ \leq 1.3 \sigma^ {2} \left(K T + K d \log ((\kappa \sum_ {m} n _ {m}) / M) + \log \frac {1}{\delta}\right) \| \tilde {\nu} ^ {*} \| _ {2} ^ {2} + \sigma^ {2} \frac {K + \log (1 / \delta)}{n _ {M + 1}} \\ \end{array}
$$

where  $\tilde{\nu}^{*}(m) = \frac{\nu^{*}(m)}{\sqrt{n_{m}}}$ . Here the first inequality comes from Claim C.3 and the last inequality comes from Claim C.2; using both claims, the bound holds with probability at least  $1 - \frac{\delta}{10} -\frac{\delta}{5}$ . The third inequality comes from Hölder's inequality.

The key step of our analysis is to decompose and upper bound  $\| \tilde{\nu}^{*}\|_{2}^{2}$ . Denote  $\epsilon^{-2} = \frac{N_{\mathrm{total}}}{\|\nu^{*}\|_{2}^{2}}$ ; then, for any  $\gamma \in [0,1]$ ,

$$
\begin{array}{l} \sum_ {m} \frac {\nu^ {*} (m) ^ {2}}{n _ {m}} \left(\mathbf {1} \left\{\left| \nu^ {*} (m) \right| > \sqrt {\gamma} \epsilon \right\} + \mathbf {1} \left\{\left| \nu^ {*} (m) \right| \leq \sqrt {\gamma} \epsilon \right\}\right) \lesssim \sum_ {m} \left(\epsilon^ {2} \mathbf {1} \left\{\left| \nu^ {*} (m) \right| > \sqrt {\gamma} \epsilon \right\} + \gamma \epsilon^ {2} \mathbf {1} \left\{\left| \nu^ {*} (m) \right| \leq \sqrt {\gamma} \epsilon \right\}\right) \\ \leq \| \nu \| _ {0, \gamma} \epsilon^ {2} + (M - \| \nu \| _ {0, \gamma}) \gamma \epsilon^ {2} \\ = (1 - \gamma) \| \nu \| _ {0, \gamma} \epsilon^ {2} + M \gamma \epsilon^ {2} \\ \leq \frac {\left\| \nu^ {*} \right\| _ {2} ^ {2}}{N _ {\text {t o t a l}}} ((1 - \gamma) \| \nu \| _ {0, \gamma} + \gamma M) \\ \end{array}
$$

where the first inequality comes from  $n_m \geq \frac{1}{2} (\nu^*(m))^2 \epsilon^{-2}$ .
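This decomposition can be checked numerically; the sketch below (hypothetical  $\nu^*$  and  $N_{\mathrm{total}}$ , with an explicit constant 2 standing in for the hidden constant in  $\lesssim$ ) verifies the bound over a grid of  $\gamma$ :

```python
import numpy as np

# With eps^2 = ||nu||_2^2 / N_total and n_m >= max(nu(m)^2 / (2 eps^2), 1),
# check: sum_m nu(m)^2 / n_m <= 2 * ((1 - g) * s_g + g * M) * eps^2
# for every g in [0, 1], where s_g = #{m : |nu(m)| > sqrt(g) * eps}.
rng = np.random.default_rng(2)
M = 20
nu = rng.normal(size=M)
N_total = 10_000
eps2 = np.sum(nu ** 2) / N_total               # eps^2
n = np.maximum(nu ** 2 / (2 * eps2), 1.0)      # allocation meeting the assumption
lhs = np.sum(nu ** 2 / n)
for g in np.linspace(0.0, 1.0, 11):
    s_g = np.sum(np.abs(nu) > np.sqrt(g * eps2))
    assert lhs <= 2 * ((1 - g) * s_g + g * M) * eps2 + 1e-12
```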

Finally, combining this with the probabilities of  $\mathcal{E}_{\mathrm{source}}$  and  $\mathcal{E}_{\mathrm{target2}}$ , we finish the bound.

# D.2. Proof for Theorem 3.3

By the same procedure as before, we again get

$$
\operatorname {E R} (\hat {B}, \hat {w} _ {M + 1}) \leq \sigma^ {2} \left(K T + K d \log (\kappa (\sum_ {m} n _ {m}) / M) + \log \frac {1}{\delta}\right) \| \tilde {\nu} ^ {*} \| _ {2} ^ {2} + \sigma^ {2} \frac {K + \log (1 / \delta)}{n _ {M + 1}}
$$

Due to uniform sampling, we have all  $n_m = N_{\mathrm{total}} / M$ , which means

$$
\| \tilde {\nu} ^ {*} \| _ {2} ^ {2} = \| \nu^ {*} \| _ {2} ^ {2} \frac {M}{N _ {\mathrm {t o t a l}}}.
$$

Then we get the result by direct calculation.
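The identity above is immediate; a one-line numerical check (hypothetical values):

```python
import numpy as np

# With uniform sampling, n_m = N_total / M for every m, so
# ||nu_tilde||_2^2 = sum_m nu(m)^2 / n_m = ||nu||_2^2 * M / N_total.
rng = np.random.default_rng(3)
M, N_total = 10, 5000
nu = rng.normal(size=M)
n = np.full(M, N_total / M)
nu_tilde_sq = np.sum(nu ** 2 / n)
assert np.isclose(nu_tilde_sq, np.sum(nu ** 2) * M / N_total)
```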

# E. Analysis for Theorem E.4

# E.1. Main analysis

Step 1: We first show that the estimated distribution over tasks  $\hat{\nu}_i$  is close to the actual distribution  $\nu^{*}$  for any fixed  $i$ . Notice that Assumption 2.2 is necessary for the proofs in this part.

Lemma E.1 (Closeness between  $\hat{\nu}_i$  and  $\nu^{*}$ ). Under Assumption 2.2, given  $\mathcal{E}_{source}^i$  and  $\mathcal{E}_{target1}$ , for any  $i, m$ , as long as  $n_{M+1} \geq \frac{2000\epsilon_i^{-1}}{\sigma^4}$ , we have with probability at least  $1 - \delta / (10M\Gamma)$ ,

$$
\left| \hat {\nu} _ {i + 1} (m) \right| \in \left\{ \begin{array}{l l} \left[ \left| \nu^ {*} (m) \right| / 16, \; 4 \left| \nu^ {*} (m) \right| \right] & \mathrm{if~} \left| \nu^ {*} (m) \right| \geq \sigma \sqrt {\epsilon_ {i}} \\ \left[ 0, \; 4 \sigma \sqrt {\epsilon_ {i}} \right] & \mathrm{if~} \left| \nu^ {*} (m) \right| \leq \sigma \sqrt {\epsilon_ {i}} \end{array} \right. \tag {8}
$$

We define this conditional event as

$$
\mathcal {E} _ {\mathrm {relevance}} ^ {i, m} = \left\{ \text {Eqn. (8) holds} \mid \mathcal {E} _ {\mathrm {source}} ^ {i}, \mathcal {E} _ {\mathrm {target1}} \right\}
$$

Proof. By the definition of  $\nu^{*}$  and Lemma E.9, we have the following optimization problems,

$$
\hat {\nu} _ {i + 1} = \underset {\nu} {\arg \min} \| \nu \| _ {2} ^ {2}
$$

s.t.  $\sum_{m}\alpha_{m}^{i}\left(\hat{\Sigma}_{m}^{i}B^{*}w_{m}^{*} + \frac{1}{n_{m}^{i}}\left(X_{m}^{i}\right)^{\top}Z_{m}\right)\nu (m) = \alpha_{M + 1}^{i}\left(\hat{\Sigma}_{M+1}^{i}B^{*}w_{M + 1}^{*} + \frac{1}{n_{M + 1}^{i}}\left(X_{M + 1}^{i}\right)^{\top}Z_{M + 1}\right)$

$$
\nu^{*} = \operatorname *{arg  min}_{\nu}\| \nu \|_{2}^{2}
$$

s.t.  $\sum_{m}w_{m}^{*}\nu (m) = w_{M + 1}^{*}$

where  $\alpha_{m}^{i} = \left(\hat{B}_{i}^{\top}\hat{\Sigma}_{m}^{i}\hat{B}_{i}\right)^{-1}\hat{B}_{i}^{\top}$

Now we are ready to show that  $\hat{\nu}_{i + 1}$  is close to  $\nu^{*}$  by comparing their closed-form solutions.

First, by using Lemma E.8, which is based on the standard KKT conditions, it is easy to get a closed-form solution of  $\nu^{*}$ : for any  $m$ ,

$$
\begin{array}{l} \nu^ {*} (m) = \left(w _ {m} ^ {*}\right) ^ {\top} \left(W ^ {*} \left(W ^ {*}\right) ^ {\top}\right) ^ {- 1} w _ {M + 1} ^ {*} \\ = \left(B ^ {*} w _ {m} ^ {*}\right) ^ {\top} \left(B ^ {*} W ^ {*} \left(B ^ {*} W ^ {*}\right) ^ {\top}\right) ^ {+} \left(B ^ {*} w _ {M + 1} ^ {*}\right) \\ \end{array}
$$

where the second equality comes from the fact that  $B^{*}$  is orthonormal. And,
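The equality of the two closed forms when  $B^*$  has orthonormal columns can be verified numerically (hypothetical dimensions, random data):

```python
import numpy as np

# Check: w_m^T (W W^T)^{-1} w_tgt == (B w_m)^T (B W (B W)^T)^+ (B w_tgt)
# whenever B has orthonormal columns (B^T B = I) and W has full row rank.
rng = np.random.default_rng(4)
d, K, M = 12, 3, 6
B, _ = np.linalg.qr(rng.normal(size=(d, K)))   # orthonormal columns
W = rng.normal(size=(K, M))                    # rank K almost surely (M >= K)
w_tgt = rng.normal(size=K)                     # stand-in for w_{M+1}^*
for m in range(M):
    v1 = W[:, m] @ np.linalg.inv(W @ W.T) @ w_tgt
    v2 = (B @ W[:, m]) @ np.linalg.pinv(B @ W @ (B @ W).T) @ (B @ w_tgt)
    assert np.isclose(v1, v2)
```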

$$
\begin{array}{l} | \hat {\nu} _ {i + 1} (m) | = \left| \left( (\hat {\Sigma} _ {m} ^ {i} B ^ {*} w _ {m} ^ {*}) ^ {\top} + \frac {1}{n _ {m} ^ {i}} (Z _ {m} ^ {i}) ^ {\top} X _ {m} ^ {i} \right) \alpha_ {m} ^ {\top} (\hat {W} _ {i} \hat {W} _ {i} ^ {\top}) ^ {\dagger} \alpha_ {M + 1} \left( \hat {\Sigma} _ {M + 1} ^ {i} B ^ {*} w _ {M + 1} ^ {*} + \frac {1}{n _ {M + 1} ^ {i}} (X _ {M + 1} ^ {i}) ^ {\top} Z _ {M + 1} ^ {i} \right) \right| \\ \leq 1.7 \left| (B ^ {*} w _ {m} ^ {*}) ^ {\top} \hat {B} _ {i} (\hat {B} _ {i} ^ {\top} \hat {B} _ {i}) ^ {- 1} (\hat {W} _ {i} \hat {W} _ {i} ^ {\top}) ^ {\dagger} (\hat {B} _ {i} ^ {\top} \hat {B} _ {i}) ^ {- 1} \hat {B} _ {i} ^ {\top} B ^ {*} w _ {M + 1} ^ {*} \right| \\ \quad + 1.3 \left| \frac {1}{n _ {m} ^ {i}} (Z _ {m} ^ {i}) ^ {\top} X _ {m} ^ {i} \hat {B} _ {i} (\hat {B} _ {i} ^ {\top} \hat {B} _ {i}) ^ {- 1} (\hat {W} _ {i} \hat {W} _ {i} ^ {\top}) ^ {\dagger} (\hat {B} _ {i} ^ {\top} \hat {B} _ {i}) ^ {- 1} \hat {B} _ {i} ^ {\top} B ^ {*} w _ {M + 1} ^ {*} \right| \\ \quad + 1.3 \left| (B ^ {*} w _ {m} ^ {*}) ^ {\top} \hat {B} _ {i} (\hat {B} _ {i} ^ {\top} \hat {B} _ {i}) ^ {- 1} (\hat {W} _ {i} \hat {W} _ {i} ^ {\top}) ^ {\dagger} (\hat {B} _ {i} ^ {\top} \hat {B} _ {i}) ^ {- 1} \hat {B} _ {i} ^ {\top} \frac {1}{n _ {M + 1} ^ {i}} (X _ {M + 1} ^ {i}) ^ {\top} Z _ {M + 1} ^ {i} \right| \\ \quad + \left| \frac {1}{n _ {m} ^ {i}} (Z _ {m} ^ {i}) ^ {\top} X _ {m} ^ {i} \hat {B} _ {i} (\hat {B} _ {i} ^ {\top} \hat {B} _ {i}) ^ {- 1} (\hat {W} _ {i} \hat {W} _ {i} ^ {\top}) ^ {\dagger} (\hat {B} _ {i} ^ {\top} \hat {B} _ {i}) ^ {- 1} \hat {B} _ {i} ^ {\top} \frac {1}{n _ {M + 1} ^ {i}} (X _ {M + 1} ^ {i}) ^ {\top} Z _ {M + 1} ^ {i} \right| \\ \leq 1.7 \left| (B ^ {*} w _ {m} ^ {*}) ^ {\top} \hat {B} _ {i} (\hat {W} _ {i} \hat {W} _ {i} ^ {\top}) ^ {\dagger} \hat {B} _ {i} ^ {\top} B ^ {*} w _ {M + 1} ^ {*} \right| \\ \quad + 1.3 \left| \frac {1}{n _ {m} ^ {i}} (Z _ {m} ^ {i}) ^ {\top} X _ {m} ^ {i} \hat {B} _ {i} (\hat {W} _ {i} \hat {W} _ {i} ^ {\top}) ^ {\dagger} \hat {B} _ {i} ^ {\top} B ^ {*} w _ {M + 1} ^ {*} \right| \\ \quad + 1.3 \left| (B ^ {*} w _ {m} ^ {*}) ^ {\top} \hat {B} _ {i} (\hat {W} _ {i} \hat {W} _ {i} ^ {\top}) ^ {\dagger} \hat {B} _ {i} ^ {\top} \frac {1}{n _ {M + 1} ^ {i}} (X _ {M + 1} ^ {i}) ^ {\top} Z _ {M + 1} ^ {i} \right| \\ \quad + \left| \frac {1}{n _ {m} ^ {i}} (Z _ {m} ^ {i}) ^ {\top} X _ {m} ^ {i} \hat {B} _ {i} (\hat {W} _ {i} \hat {W} _ {i} ^ {\top}) ^ {\dagger} \hat {B} _ {i} ^ {\top} \frac {1}{n _ {M + 1} ^ {i}} (X _ {M + 1} ^ {i}) ^ {\top} Z _ {M + 1} ^ {i} \right| \\ \leq 2 \left| (B ^ {*} w _ {m} ^ {*}) ^ {\top} \left( \hat {B} _ {i} \hat {W} _ {i} (\hat {B} _ {i} \hat {W} _ {i}) ^ {\top} \right) ^ {\dagger} B ^ {*} w _ {M + 1} ^ {*} \right| + \mathrm{noise\ term}(m) \\ \end{array}
$$

The first inequality comes from the definition of  $\alpha_{m}$  and the events  $\mathcal{E}_{\mathrm{source}}$ ,  $\mathcal{E}_{\mathrm{target1}}$ . The second inequality comes from the fact that  $\hat{B}_i$  is always an orthonormal matrix. Notice that in practice this is not required. Finally, we denote the last three terms in the second inequality by  $\mathrm{noise\ term}(m)$ ; these are low-order terms, as we will show later.

Sub-step 1 (Analyze the non-noise term): The difference between  $|\hat{\nu}_{i + 1}(m)|$  and  $2|\nu^{*}(m)|$  satisfies

$$
\begin{array}{l} | \hat {\nu} _ {i + 1} (m) | - 2 | \nu^ {*} (m) | - \mathrm{noise\ term}(m) \\ \leq 2 \left| \left(B ^ {*} w _ {m} ^ {*}\right) ^ {\top} \left(\hat {B} _ {i} \hat {W} _ {i} \left(\hat {B} _ {i} \hat {W} _ {i}\right) ^ {\top}\right) ^ {\dagger} B ^ {*} w _ {M + 1} ^ {*} \right| - 2 \left| \left(B ^ {*} w _ {m} ^ {*}\right) ^ {\top} \left(B ^ {*} W ^ {*} \left(B ^ {*} W ^ {*}\right) ^ {\top}\right) ^ {\dagger} \left(B ^ {*} w _ {M + 1} ^ {*}\right) \right| \\ \leq 2 \left| \left(B ^ {*} w _ {m} ^ {*}\right) ^ {\top} \left( \left(\hat {B} _ {i} \hat {W} _ {i} (\hat {B} _ {i} \hat {W} _ {i}) ^ {\top}\right) ^ {\dagger} - \left(B ^ {*} W ^ {*} (B ^ {*} W ^ {*}) ^ {\top}\right) ^ {\dagger} \right) \left(B ^ {*} w _ {M + 1} ^ {*}\right) \right| \\ \leq 2 \left| \left(B ^ {*} w _ {m} ^ {*}\right) ^ {\top} \left(B ^ {*} W ^ {*} \left(B ^ {*} W ^ {*}\right) ^ {\top}\right) ^ {\dagger} \left(\hat {\Delta} ^ {i} (\hat {\Delta} ^ {i}) ^ {\top} + \hat {\Delta} ^ {i} \left(B ^ {*} W ^ {*}\right) ^ {\top} + \left(B ^ {*} W ^ {*}\right) (\hat {\Delta} ^ {i}) ^ {\top}\right) \left(B ^ {*} W ^ {*} \left(B ^ {*} W ^ {*}\right) ^ {\top}\right) ^ {\dagger} \left(B ^ {*} w _ {M + 1} ^ {*}\right) \right| \\ \leq 2 \| B ^ {*} w _ {m} ^ {*} \| _ {2} \| B ^ {*} w _ {M + 1} ^ {*} \| _ {2} \left\| \left(B ^ {*} W ^ {*} (B ^ {*} W ^ {*}) ^ {\top}\right) ^ {\dagger} \left(\hat {\Delta} ^ {i} (\hat {\Delta} ^ {i}) ^ {\top} + \hat {\Delta} ^ {i} (B ^ {*} W ^ {*}) ^ {\top} + (B ^ {*} W ^ {*}) (\hat {\Delta} ^ {i}) ^ {\top}\right) \left(B ^ {*} W ^ {*} (B ^ {*} W ^ {*}) ^ {\top}\right) ^ {\dagger} \right\| _ {F} \\ \leq 2 R \left\| \left(B ^ {*} W ^ {*} (B ^ {*} W ^ {*}) ^ {\top}\right) ^ {\dagger} \left(\hat {\Delta} ^ {i} (\hat {\Delta} ^ {i}) ^ {\top} + \hat {\Delta} ^ {i} (B ^ {*} W ^ {*}) ^ {\top} + (B ^ {*} W ^ {*}) (\hat {\Delta} ^ {i}) ^ {\top}\right) \left(B ^ {*} W ^ {*} (B ^ {*} W ^ {*}) ^ {\top}\right) ^ {\dagger} \right\| _ {F} \\ \leq \sigma \sqrt {\epsilon_ {i}} + \mathrm{noise\ term}(m) \\ \end{array}
$$

where the second inequality comes from the triangle inequality, the third inequality holds with probability at least  $1 - \delta$  by a perturbation bound for generalized inverses of matrices (see Lemma E.10 for details), and the fifth inequality comes from the fact that  $\| B^{*}w_{m}^{*}\|_{2}\leq \| w_{m}^{*}\|_{2}\leq R$ . Finally, by using the assumptions on  $B^{*},W^{*}$  as well as the multi-task model estimation error  $\hat{\Delta}^i$ , we can apply Lemma E.12 to get the last inequality.

By the same reasoning, we get, for the other direction,

$$
0.5 \left| \nu^ {*} (m) \right| - \left| \hat {\nu} _ {i + 1} (m) \right| - \mathrm{noise\ term}(m) \leq \sigma \sqrt {\epsilon_ {i}} / 4
$$

Combining these two, we have

$$
\left| \hat {\nu} _ {i + 1} (m) \right| \in \left[ 0.5 \left| \nu^ {*} (m) \right| - \sigma \sqrt {\epsilon_ {i}} / 4 - 1.5\, \mathrm{noise\ term}(m) \quad , \quad 2 \left| \nu^ {*} (m) \right| + \sigma \sqrt {\epsilon_ {i}} + 1.5\, \mathrm{noise\ term}(m) \right]
$$

Sub-step 2 (Analyze the noise term): Now let us deal with  $\mathrm{noise\ term}(m)$ ; we restate it below for convenience:

$$
\begin{array}{l} 1. 3 \left| \frac {1}{n _ {m} ^ {i}} (Z _ {m} ^ {i}) ^ {\top} X _ {m} ^ {i} \hat {B} _ {i} (\hat {W} _ {i} \hat {W} _ {i} ^ {\top}) ^ {\dagger} \hat {B} _ {i} ^ {\top} B ^ {*} w _ {M + 1} ^ {*} \right| + 1. 3 \left| (B ^ {*} w _ {m} ^ {*}) ^ {\top} \hat {B} _ {i} (\hat {W} _ {i} \hat {W} _ {i} ^ {\top}) ^ {\dagger} \hat {B} _ {i} ^ {\top} \frac {1}{n _ {M + 1} ^ {i}} (X _ {M + 1} ^ {i}) ^ {\top} Z _ {M + 1} ^ {i} \right| \\ + \left| \frac {1}{n _ {m} ^ {i}} (Z _ {m} ^ {i}) ^ {\top} X _ {m} ^ {i} \hat {B} _ {i} (\hat {W} _ {i} \hat {W} _ {i} ^ {\top}) ^ {\dagger} \hat {B} _ {i} ^ {\top} \frac {1}{n _ {M + 1} ^ {i}} (X _ {M + 1} ^ {i}) ^ {\top} Z _ {M + 1} ^ {i} \right|. \\ \end{array}
$$

By the assumption on  $B^{*},W^{*}$ , it is easy to see that with probability at least  $1 - \delta^{\prime}$ , where  $\delta^{\prime} = \delta /(10\Gamma M)$ ,

$$
\begin{array}{l} \left| \frac {1}{n _ {m} ^ {i}} (Z _ {m} ^ {i}) ^ {\top} X _ {m} ^ {i} (\hat {B} _ {i} \hat {W} _ {i} (\hat {B} _ {i} \hat {W} _ {i}) ^ {\top}) ^ {\dagger} B ^ {*} w _ {M + 1} ^ {*} \right| \\ \leq \left| \frac {1}{\lambda_ {\min} (B ^ {*} W ^ {*} (B ^ {*} W ^ {*}) ^ {\top})} \frac {1}{n _ {m} ^ {i}} (Z _ {m} ^ {i}) ^ {\top} X _ {m} ^ {i} B ^ {*} w _ {M + 1} ^ {*} \right| \\ \leq \frac {\sigma}{n _ {m} ^ {i} \lambda_ {\min} (B ^ {*} W ^ {*} (B ^ {*} W ^ {*}) ^ {\top})} \sqrt {(w _ {M + 1} ^ {*}) ^ {\top} (B ^ {*}) ^ {\top} (X _ {m} ^ {i}) ^ {\top} X _ {m} ^ {i} B ^ {*} w _ {M + 1} ^ {*}} \sqrt {\log (1 / \delta^ {\prime})} \\ \leq \frac {2 . 2 \sigma \| w _ {M + 1} ^ {*} \| _ {2} \sqrt {\log (1 / \delta^ {\prime})}}{\sqrt {n _ {m} ^ {i}} \lambda_ {\min} (B ^ {*} W ^ {*} (B ^ {*} W ^ {*}) ^ {\top})} \\ \leq \sqrt {\epsilon_ {i} / \beta} \frac {2 . 2 \sigma \sqrt {R \log (1 / \delta^ {\prime})}}{\lambda_ {\mathrm {m i n}} (B ^ {*} W ^ {*} (B ^ {*} W ^ {*}) ^ {\top})} \\ \end{array}
$$

where the first inequality comes from Lemma E.6, the second inequality comes from the Chernoff bound, and the last inequality comes from the definition  $n_m^i = \max \{\beta \hat{\nu}_i^2(m)\epsilon_i^{-2},\beta \epsilon_i^{-1},\underline{N}\}$ . Note that we choose  $\beta = 3000K^2 R^2 (KM + Kd\log (1 / \varepsilon M) + \log (M\Gamma /\delta))/ \underline{\sigma}^6$ . Therefore, the above can be upper bounded by  $\sigma\sqrt{\epsilon_i} /24$ .

By a similar argument and the assumption that  $n_{M + 1} \geq \frac{3000R\epsilon_i^{-1}}{\underline{\sigma}^4}$ , we can also show that with probability at least  $1 - \delta'$ ,

$$
\begin{array}{l} \left| \left(B ^ {*} w _ {m} ^ {*}\right) ^ {\top} \hat {B} _ {i} (\hat {W} _ {i} \hat {W} _ {i} ^ {\top}) ^ {\dagger} \hat {B} _ {i} ^ {\top} \frac {1}{n _ {M + 1} ^ {i}} \left(X _ {M + 1} ^ {i}\right) ^ {\top} Z _ {M + 1} ^ {i} \right| \leq \frac {2 . 2 \sigma \| w _ {m} ^ {*} \| _ {2} \sqrt {\log (1 / \delta^ {\prime})}}{\sqrt {n _ {M + 1}} \lambda_ {\min} \left(B ^ {*} W ^ {*} \left(B ^ {*} W ^ {*}\right) ^ {\top}\right)} \\ \leq \sigma \sqrt {\epsilon_ {i}} / 2 4 \\ \end{array}
$$

Finally, we have that

$$
\begin{array}{l} \left| \frac {1}{n _ {m} ^ {i}} \left(Z _ {m} ^ {i}\right) ^ {\top} X _ {m} ^ {i} \hat {B} _ {i} \left(\hat {W} _ {i} \hat {W} _ {i} ^ {\top}\right) ^ {\dagger} \hat {B} _ {i} ^ {\top} \frac {1}{n _ {M + 1} ^ {i}} \left(X _ {M + 1} ^ {i}\right) ^ {\top} Z _ {M + 1} ^ {i} \right| \leq \frac {2 . 2 \sigma^ {2} \sqrt {\epsilon_ {i} / \beta} \| w _ {m} ^ {*} \| _ {2} \| w _ {M + 1} ^ {*} \| _ {2} \log (1 / \delta^ {\prime})}{\sqrt {n _ {M + 1}} \lambda_ {\min } \left(B ^ {*} W ^ {*} \left(B ^ {*} W ^ {*}\right) ^ {\top}\right)} \\ \leq \sigma \sqrt {\epsilon_ {i}} / 2 4 \\ \end{array}
$$

So overall we have  $\mathrm{noise\ term}(m)\leq \sigma \sqrt{\epsilon_i} /8$ .

Sub-step 3 (Combine the non-noise term and the noise term):

Now when  $|\nu^{*}(m)| \geq \sigma \sqrt{\epsilon_{i}}$ , combining the above results shows that  $|\hat{\nu}_{i + 1}(m)| \in [|\nu^{*}(m)| / 16, 4|\nu^{*}(m)|]$ .

On the other hand, if  $|\nu^{*}(m)|\leq \sigma \sqrt{\epsilon_{i}}$ , then we directly have  $|\hat{\nu}_{i + 1}(m)|\in [0,4\sigma \sqrt{\epsilon_i}]$ .

Step 2: Now we are ready to prove the following two main lemmas on the final accuracy and the total sample complexity.

Lemma E.2 (Accuracy on each epoch). Given  $\mathcal{E}_{\text{source}}, \mathcal{E}_{\text{target1}}, \mathcal{E}_{\text{target2}}$  and  $\mathcal{E}_{\text{relevance}}^{i,m}$  for all  $m$ , after epoch  $i$ , with probability at least  $1 - \delta / (10M\Gamma)$ , we have  $\mathrm{ER}(\hat{B}, \hat{w}_{M+1})$  upper bounded by

$$
\frac {\sigma^ {2}}{\beta} \left(K M + K d + \log \frac {1}{\delta}\right) s _ {i} ^ {*} \epsilon_ {i} ^ {2} + \sigma^ {2} \frac {(K + \log (1 / \delta))}{n _ {M + 1}}
$$

where  $s_i^* = \min_{\gamma \in [0,1]}(1 - \gamma)\| \nu^*\|_{0,\gamma}^i +\gamma M$  and  $\| \nu \|_{0,\gamma}^{i}\coloneqq |\{m:\nu_{m} > \sqrt{\gamma}\epsilon_{i}\} |.$
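The minimization over  $\gamma$  in the definition of  $s_i^*$  can be evaluated on a grid; a small sketch with a hypothetical  $\nu^*$  and  $\epsilon_i$  (purely illustrative, not part of the proof):

```python
import numpy as np

# Evaluate s_i* = min over gamma in [0,1] of (1 - gamma) * ||nu||_{0,gamma}^i + gamma * M,
# where ||nu||_{0,gamma}^i counts entries with |nu(m)| > sqrt(gamma) * eps_i.
rng = np.random.default_rng(8)
M, eps_i = 15, 0.3
nu = rng.normal(size=M)
gammas = np.linspace(0.0, 1.0, 101)
vals = [(1 - g) * np.sum(np.abs(nu) > np.sqrt(g) * eps_i) + g * M for g in gammas]
s_star = min(vals)
assert 0 <= s_star <= M   # gamma = 1 always gives the value M, so the min is at most M
```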

Proof. The first step is the same as in the proof of Theorem 3.2 in Appendix D.1. Suppose events  $\mathcal{E}_{\mathrm{source}}^i$  and  $\mathcal{E}_{\mathrm{target2}}$  hold; then we have with probability at least  $1 - \frac{3\delta}{10\Gamma M}$ ,

$$
\mathsf {E R} (\hat {B}, \hat {w} _ {M + 1}) \leq 1. 3 \sigma^ {2} \left(K T + K d \log ((\sum_ {m} n _ {m}) / M) + \log \frac {1}{\delta}\right) \| \tilde {\nu} ^ {*} \| _ {2} ^ {2} + \sigma^ {2} \frac {K + \log (1 / \delta)}{n _ {M + 1}}
$$

where  $\tilde{\nu}^{*}(m) = \nu^{*}(m) / \sqrt{n_{m}}$

Now for any  $\gamma \in [0,1]$ , given  $\mathcal{E}_{\mathrm{relevance}}^{i,m}$ , we are going to bound  $\sum_{m} \frac{\nu^{*}(m)^{2}}{n_{m}^{i}}$  as

$$
\begin{array}{l} \sum_ {m} \frac {\nu^ {*} (m) ^ {2}}{n _ {m} ^ {i}} \leq \sum_ {m} \frac {\nu^ {*} (m) ^ {2}}{n _ {m} ^ {i}} \mathbf {1} \{| \nu^ {*} (m) | > \sigma \sqrt {\epsilon_ {i - 1}} \} + \sum_ {m} \frac {\nu^ {*} (m) ^ {2}}{n _ {m} ^ {i}} \mathbf {1} \{\sqrt {\gamma} \epsilon_ {i - 1} \leq | \nu^ {*} (m) | \leq \sigma \sqrt {\epsilon_ {i - 1}} \} \\ \quad + \sum_ {m} \frac {\nu^ {*} (m) ^ {2}}{n _ {m} ^ {i}} \mathbf {1} \{| \nu^ {*} (m) | \leq \sqrt {\gamma} \epsilon_ {i} \} \\ \leq \sum_ {m} \frac {2 5 6 \hat {\nu} _ {i} ^ {2} (m)}{n _ {m} ^ {i}} \mathbf {1} \{| \nu^ {*} (m) | > \sigma \sqrt {\epsilon_ {i - 1}} \} + \sum_ {m} \frac {\sigma^ {2} \epsilon_ {i - 1}}{n _ {m} ^ {i}} \mathbf {1} \{\sqrt {\gamma} \epsilon_ {i - 1} \leq | \nu^ {*} (m) | \leq \sigma \sqrt {\epsilon_ {i - 1}} \} \\ \quad + \gamma \epsilon_ {i} ^ {2} / \beta \sum_ {m} \mathbf {1} \{| \nu^ {*} (m) | \leq \sqrt {\gamma} \epsilon_ {i} \} \\ \leq \mathcal {O} \left(\sum_ {m} \epsilon_ {i} ^ {2} / \beta \, \mathbf {1} \{| \nu^ {*} (m) | > \sigma \sqrt {\epsilon_ {i - 1}} \}\right) + \mathcal {O} \left(\sum_ {m} \sigma^ {2} \epsilon_ {i} ^ {2} / \beta \, \mathbf {1} \{\sqrt {\gamma} \epsilon_ {i - 1} \leq | \nu^ {*} (m) | \leq \sigma \sqrt {\epsilon_ {i - 1}} \}\right) \\ \quad + (M - \| \nu \| _ {0, \gamma} ^ {i}) \gamma \epsilon_ {i} ^ {2} / \beta \\ \leq \mathcal {O} \left(\sum_ {m} \epsilon_ {i} ^ {2} / \beta \, \mathbf {1} \{| \nu^ {*} (m) | > \sigma \sqrt {\gamma} \epsilon_ {i - 1} \}\right) + (M - \| \nu \| _ {0, \gamma} ^ {i}) \gamma \epsilon_ {i} ^ {2} / \beta \\ \leq \left(\left(1 - \gamma\right) \| \nu \| _ {0, \gamma} ^ {i} + \gamma M\right) \epsilon_ {i} ^ {2} / \beta \\ \end{array}
$$


Lemma E.3 (Sample complexity on each epoch). Given  $\mathcal{E}_{\mathrm{source}}$ ,  $\mathcal{E}_{\mathrm{target1}}$  and  $\mathcal{E}_{\mathrm{relevance}}^{i,m}$  for all  $m$ , the total number of source samples used in epoch  $i$  is

$$
\mathcal {O} \left(\beta \left(M \epsilon_ {i} ^ {- 1} + \| \nu^ {*} \| _ {2} ^ {2} \epsilon_ {i} ^ {- 2}\right)\right)
$$

Proof. Given  $\mathcal{E}_{\mathrm{relevance}}^{i,m}$ , we can get the sample complexity for epoch  $i$  as follows.

$$
\begin{array}{l} \sum_ {m} ^ {M} n _ {m} ^ {i} = \sum_ {m} ^ {M} \max \{\beta \hat {\nu} _ {i} (m) ^ {2} \epsilon_ {i} ^ {- 2}, \beta \epsilon_ {i} ^ {- 1}, \underline {{N}} \} \\ \leq \sum_ {m} ^ {M} \beta \hat {\nu} _ {i} ^ {2} (m) \epsilon_ {i} ^ {- 2} + \sum_ {m} ^ {M} \beta \epsilon_ {i} ^ {- 1} \\ \leq \sum_ {m} ^ {M} \beta \hat {\nu} _ {i} ^ {2} (m) \epsilon_ {i} ^ {- 2} \mathbf {1} \left\{\left| \nu^ {*} (m) \right| > \sigma \sqrt {\epsilon_ {i - 1}} \right\} + \sum_ {m} ^ {M} \beta \hat {\nu} _ {i} ^ {2} (m) \epsilon_ {i} ^ {- 2} \mathbf {1} \left\{\left| \nu^ {*} (m) \right| \leq \sigma \sqrt {\epsilon_ {i - 1}} \right\} + \sum_ {m} ^ {M} \beta \epsilon_ {i} ^ {- 1} \\ \leq \sum_ {m} ^ {M} \beta (4 \nu^ {*} (m)) ^ {2} \epsilon_ {i} ^ {- 2} \mathbf {1} \{| \nu^ {*} (m) | > \sigma \sqrt {\epsilon_ {i - 1}} \} + \sum_ {m} ^ {M} \beta (4 \sigma \sqrt {\epsilon_ {i - 1}}) ^ {2} \epsilon_ {i} ^ {- 2} \mathbf {1} \{| \nu^ {*} (m) | \leq \sigma \sqrt {\epsilon_ {i - 1}} \} + \sum_ {m} ^ {M} \beta \epsilon_ {i} ^ {- 1} \\ = \mathcal {O} \left(\beta \left(M \epsilon_ {i} ^ {- 1} + \| \nu^ {*} \| _ {2} ^ {2} \epsilon_ {i} ^ {- 2}\right)\right) \\ \end{array}
$$


Theorem E.4. Suppose we know in advance a lower bound of  $\sigma_{\mathrm{min}}(W^{*})$ , denoted as  $\underline{\sigma}$ . Under the benign low-dimension linear representation setting defined in Assumption 2.2, we have  $\mathrm{ER}(\hat{B},\hat{w}_{M + 1})\leq \varepsilon^2$  with probability at least  $1 - \delta$  whenever the number of source samples  $N_{total}$  is at least

$$
\widetilde {\mathcal {O}} \left(\left(K (M + d) + \log \frac {1}{\delta}\right) \sigma^ {2} s ^ {*} \| \nu^ {*} \| _ {2} ^ {2} \varepsilon^ {- 2} + \square \sigma \varepsilon^ {- 1}\right)
$$

where  $\square = \left(MK^{2}dR / \underline{\sigma}^{3}\right)\sqrt{s^{*}}$  and the target task sample complexity  $n_{M + 1}$  is at least

$$
\widetilde {\mathcal {O}} \left(\sigma^ {2} K \varepsilon^ {- 2} + \diamondsuit \sqrt {s ^ {*}} \sigma \varepsilon^ {- 1}\right)
$$

where  $\diamondsuit = \min \left\{\frac{\sqrt{R}}{\sigma^2K},\sqrt{K(M + d) + \log\frac{1}{\delta}}\right\}$  and  $s^*$  has been defined in Theorem 3.2.

Proof. Given  $\mathcal{E}_{\mathrm{source}}$ ,  $\mathcal{E}_{\mathrm{target1}}$ ,  $\mathcal{E}_{\mathrm{target2}}$  and  $\mathcal{E}_{\mathrm{relevance}}^{i,m}$ , by Lemma E.2 we have the final accuracy from the last epoch as

$$
\mathrm {E R} _ {M + 1} (\hat {B}, \hat {w} _ {M + 1}) \leq \sigma^ {2} \left(K M + K d \log ((\sum_ {m} n _ {m}) / M) + \log \frac {1}{\delta}\right) s _ {\Gamma} ^ {*} \epsilon_ {\Gamma} ^ {2} / \beta + \sigma^ {2} \frac {(K + \log (1 / \delta))}{n _ {M + 1}}
$$

Denote the final accuracy of the first term as  $\varepsilon^2$ . So we can write  $\epsilon_{\Gamma}$  as

$$
\varepsilon / \left(\sigma \sqrt {K M + K d \log ((\sum_ {m} n _ {m}) / M) + \log \frac {1}{\delta}} \sqrt {s _ {\Gamma} ^ {*} / \beta}\right)
$$

By applying Lemma E.3, the required total source sample complexity is

$$
\begin{array}{l} \sum_ {i = 1} ^ {\Gamma} \beta (M \epsilon_ {i} ^ {- 1} + \| \nu^ {*} \| _ {2} ^ {2} \epsilon_ {i} ^ {- 2}) \leq 2 \beta (M \epsilon_ {\Gamma} ^ {- 1} + 2 \| \nu^ {*} \| _ {2} ^ {2} \epsilon_ {\Gamma} ^ {- 2}) \\ = \beta M \sigma \sqrt {K M + K d \log ((\sum_ {m} n _ {m}) / M) + \log \frac {1}{\delta}} \sqrt {s _ {\Gamma} ^ {*} / \beta} \, \varepsilon^ {- 1} \\ \quad + \beta \| \nu^ {*} \| _ {2} ^ {2} \sigma^ {2} \left(K M + K d \log ((\sum_ {m} n _ {m}) / M) + \log \frac {1}{\delta}\right) s _ {\Gamma} ^ {*} \varepsilon^ {- 2} / \beta \\ = \sqrt {\beta} M \sqrt {s _ {\Gamma} ^ {*}} \sigma \sqrt {K M + K d \log ((\sum_ {m} n _ {m}) / M) + \log \frac {1}{\delta}} \, \varepsilon^ {- 1} \\ \quad + s _ {\Gamma} ^ {*} \sigma^ {2} \left(K M + K d \log \left(\left(\sum_ {m} n _ {m}\right) / M\right) + \log \frac {1}{\delta}\right) \| \nu^ {*} \| _ {2} ^ {2} \varepsilon^ {- 2} \\ = \widetilde {\mathcal {O}} \left(\left(M K ^ {2} d + M \sqrt {K d} / \underline {{\sigma}} ^ {2}\right) \sqrt {s _ {\Gamma} ^ {*}} \sigma \varepsilon^ {- 1} + K d s _ {\Gamma} ^ {*} \sigma^ {2} \| \nu^ {*} \| _ {2} ^ {2} \varepsilon^ {- 2}\right) \\ \end{array}
$$
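The first inequality above is the usual geometric-series bound; assuming the epoch schedule  $\epsilon_i = 2^{-i}$  (consistent with  $\Gamma = -\log \epsilon_\Gamma$  below, but an assumption on our part), a quick check:

```python
# With eps_i = 2**(-i), the per-epoch costs form geometric series dominated by
# the last epoch:
#   sum_{i=1}^{Gamma} eps_i**-1 <= 2 * eps_Gamma**-1
#   sum_{i=1}^{Gamma} eps_i**-2 <= 2 * eps_Gamma**-2
Gamma = 12
eps = [2.0 ** (-i) for i in range(1, Gamma + 1)]
assert sum(1 / e for e in eps) <= 2 / eps[-1]
assert sum(1 / e ** 2 for e in eps) <= 2 / eps[-1] ** 2
```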

Also, in order to satisfy the assumption in Lemma E.1, we require  $n_{M + 1}$  to be at least

$$
\begin{array}{l} \frac {\epsilon_ {\Gamma} ^ {- 1}}{\underline {{\sigma}} ^ {2}} = \frac {1}{\underline {{\sigma}} ^ {2}} \sqrt {s _ {\Gamma} ^ {*} / \beta} \sigma \sqrt {K M + K d \log ((\sum_ {m} n _ {m}) / M) + \log \frac {1}{\delta}} \varepsilon^ {- 1} \\ \leq \min \left\{\frac {1}{\underline {{\sigma}} ^ {2} K}, \sqrt {K M + K d \log ((\sum_ {m} n _ {m}) / M) + \log \frac {1}{\delta}} \right\} \sqrt {s _ {\Gamma} ^ {*}} \sigma \varepsilon^ {- 1} \\ \end{array}
$$

Notice that  $\Gamma$  is an algorithm-dependent parameter; therefore, the final step is to bound  $s_{\Gamma}^{*}$  by an algorithm-independent term by writing  $\Gamma$  as

$$
\Gamma = - \log \epsilon_ {\Gamma} \leq \min \left\{\log \sqrt {\frac {N _ {t o t a l}}{\beta \| \nu^ {*} \| _ {2} ^ {2}}}, \log \frac {N _ {t o t a l}}{\beta M} \right\}
$$

So we have

$$
\| \nu \| _ {0, \gamma} ^ {\Gamma} = \left| \left\{m: \nu_ {m} > \sqrt {\gamma} \max \left\{\frac {\beta \| \nu^ {*} \| _ {2} ^ {2}}{N _ {total}}, \frac {\beta M}{N _ {total}} \right\} \right\} \right|
$$

To further simplify this, notice that, for any  $\epsilon' < \epsilon_i$ ,

$$
\| \nu \| _ {0, \gamma} ^ {i} = | \{m: \nu_ {m} > \sqrt {\gamma} \epsilon_ {i} \} | \leq | \{m: \nu_ {m} > \sqrt {\gamma} \epsilon^ {\prime} \} |
$$

So we further have

$$
\| \nu \| _ {0, \gamma} ^ {\Gamma} \leq \left| \left\{m: \nu_ {m} > \sqrt {\gamma} \frac {\| \nu^ {*} \| _ {2} ^ {2}}{N _ {total}} \right\} \right| := \| \nu \| _ {0, \gamma}
$$

Finally, by taking a union bound over  $\mathcal{E}_{\mathrm{source}}$ ,  $\mathcal{E}_{\mathrm{target1}}$ ,  $\mathcal{E}_{\mathrm{target2}}$  and  $\mathcal{E}_{\mathrm{relevance}}^{i,m}$  over all epochs, we conclude that all the lemmas hold with probability at least  $1 - \delta$ .

# E.2. Auxiliary Lemmas

Lemma E.5 (Convergence on estimated model  $\hat{B}_i\hat{W}_i$ ). For any fixed  $i$ , given  $\mathcal{E}_{source}^i$ , we have

$$
\| \hat {\Delta} ^ {i} \| _ {F} ^ {2} \leq 1. 3 \sigma^ {2} \left(K M + K d \log ((\sum_ {m} n _ {m} ^ {i}) / M) + \log \frac {1 0 \Gamma}{\delta}\right) \epsilon_ {i} / \beta
$$

Therefore, when  $\beta = 3000K^2 R^2 (KM + Kd\log (N_{total} / M) + \log (M\Gamma /\delta))/ \underline{\sigma}^6$ ,

we have  $\| \hat{\Delta}^i \| _F^2\leq \frac{\sigma^2\epsilon_i}{4K^2R^2}$ .

Proof. Denote  $\Delta_m$  as the  $m$ -th column of  $\hat{\Delta}^i$ .

$$
\begin{array}{l} \sum_ {m = 1} ^ {M} \left\| X _ {m} \Delta_ {m} \right\| _ {2} ^ {2} = \sum_ {m = 1} ^ {M} \Delta_ {m} ^ {\top} X _ {m} ^ {\top} X _ {m} \Delta_ {m} \\ \geq 0. 9 \sum_ {m = 1} ^ {M} n _ {m} \Delta_ {m} ^ {\top} \Delta_ {m} \\ \geq 0. 9 \min _ {m} n _ {m} ^ {i} \sum_ {m = 1} ^ {M} \| \Delta_ {m} \| _ {2} ^ {2} = 0. 9 \min _ {m} n _ {m} ^ {i} \| \hat {\Delta} ^ {i} \| _ {F} ^ {2} \\ \end{array}
$$

Recalling our definition  $n_m^i = \max \left\{\beta \hat{\nu}_i^2(m)\epsilon_i^{-2}, \beta \epsilon_i^{-1}\right\}$  and using the upper bound derived in Claim C.1, we finish the proof.

Lemma E.6 (minimum singular value guarantee for  $\hat{B}_i\hat{W}_i$ ). For all  $i$ , we can guarantee that

$$
\sigma_ {m i n} (\hat {B} _ {i} \hat {W} _ {i}) \geq \sigma_ {m i n} (W ^ {*}) / 2
$$

Also, because  $M \geq K$ , there is always a feasible solution for  $\hat{\nu}_i$ .

Proof. Because  $B^{*}$  is orthonormal,  $\sigma_{\min}(B^{*}W^{*}) = \sigma_{\min}(W^{*})$ . Also, from Lemma E.5 and Weyl's inequality stated below, we have  $|\sigma_{\min}(\hat{B}_i\hat{W}_i) - \sigma_{\min}(B^* W^*)| \leq \| \hat{B}_i\hat{W}_i - B^* W^*\|_F \leq \frac{\underline{\sigma}}{2} \leq \frac{\sigma_{\min}(W^*)}{2}$ . Combining these two inequalities, we easily get the result.

Theorem E.7 (Weyl's inequality for singular values). Let  $M$  be a  $p \times n$  matrix with  $1 \leq p \leq n$ . Its singular values  $\sigma_k(M)$  are the  $p$  positive eigenvalues of the  $(p + n) \times (p + n)$  Hermitian augmented matrix

$$
\left[ \begin{array}{c c} 0 & M \\ M ^ {*} & 0 \end{array} \right]
$$

Therefore, Weyl's eigenvalue perturbation inequality for Hermitian matrices extends naturally to perturbation of singular values. This result gives the bound for the perturbation in the singular values of a matrix  $M$  due to an additive perturbation  $\Delta$ :

$$
\left| \sigma_ {k} (M + \Delta) - \sigma_ {k} (M) \right| \leq \sigma_ {1} (\Delta) \leq \| \Delta \| _ {F}
$$
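As a sanity check, Weyl's bound above is straightforward to verify numerically. A minimal sketch using NumPy; the matrix sizes and perturbation scale are arbitrary choices of ours:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random matrix M and an additive perturbation Delta.
M = rng.standard_normal((5, 8))
Delta = rng.standard_normal((5, 8)) * 0.1

s_orig = np.linalg.svd(M, compute_uv=False)
s_pert = np.linalg.svd(M + Delta, compute_uv=False)

# Weyl's bound: every singular value moves by at most the spectral
# norm sigma_1(Delta), which is itself at most the Frobenius norm.
spec = np.linalg.norm(Delta, 2)
frob = np.linalg.norm(Delta, "fro")
assert np.all(np.abs(s_pert - s_orig) <= spec + 1e-12)
assert spec <= frob + 1e-12
```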


Lemma E.8. For any two matrices  $M_1 \in \mathbb{R}^{K \times M}$  and  $M_2 \in \mathbb{R}^K$  with  $K \leq M$ , suppose  $\operatorname{rank}(M_1) = K$  and define  $\tilde{\nu}$  as

$$
\operatorname*{arg\,min}_{\nu \in \mathbb{R}^{M}} \| \nu \|_{2}^{2} \quad \text{s.t. } M_{1}\nu = M_{2},
$$

then we have

$$
\tilde {\nu} = M _ {1} ^ {\top} (M _ {1} M _ {1} ^ {\top}) ^ {- 1} M _ {2}
$$

Proof. We prove this using the KKT conditions. The Lagrangian is

$$
L(\nu, \lambda) = \| \nu \|_2^2 + \lambda^\top (M_1 \nu - M_2)
$$

Setting  $0 \in \partial_\nu L$ , we have  $\tilde{\nu} = -M_1^\top \lambda / 2$ . Substituting this into the constraint gives

$$
M_1 M_1^\top \lambda = -2 M_2 \quad \Rightarrow \quad \lambda = -2 \left(M_1 M_1^\top\right)^{-1} M_2
$$

and therefore  $\tilde{\nu} = M_1^\top (M_1M_1^\top)^{-1}M_2$ .
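The closed form above can be cross-checked against NumPy's least-squares routine, which returns the minimum-norm solution for underdetermined systems. A minimal sketch; the dimensions are our own arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
K, M = 3, 6
M1 = rng.standard_normal((K, M))   # rank K almost surely since K < M
M2 = rng.standard_normal(K)

# Closed form from Lemma E.8.
nu = M1.T @ np.linalg.inv(M1 @ M1.T) @ M2

# np.linalg.lstsq returns the minimum-norm solution of M1 @ nu = M2.
nu_lstsq, *_ = np.linalg.lstsq(M1, M2, rcond=None)

assert np.allclose(nu, nu_lstsq)   # same minimizer
assert np.allclose(M1 @ nu, M2)    # feasibility
```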

Lemma E.9 (A closed form expression for  $\hat{\nu}_i$ ). For any epoch  $i$ , given the estimated representation  $\hat{B}_i$ , we have

$$
\hat{\nu}_{i+1} = \underset{\nu}{\arg\min} \, \| \nu \|_2^2
$$

$$
\text{s.t. } \sum_{m} \alpha_m^i \left(\hat{\Sigma}_m^i B^* w_m^* + \frac{1}{n_m^i} \left(X_m^i\right)^\top Z_m\right) \nu(m) = \alpha_{M+1}^i \left(\hat{\Sigma}_{M+1}^i B^* w_{M+1}^* + \frac{1}{n_{M+1}^i} \left(X_{M+1}^i\right)^\top Z_{M+1}\right)
$$

where  $\alpha_{m}^{i} = \left(\hat{B}_{i}^{\top}\hat{\Sigma}_{m}^{i}\hat{B}_{i}\right)^{-1}\hat{B}_{i}^{\top}$

Proof. For any epoch  $i$  and its estimated representation  $\hat{B}_i$ , the least squares solution gives

$$
\begin{aligned}
\hat{w}_m^i &= \underset{w}{\arg\min} \left\| X_m^i \hat{B}_i w - Y_m \right\|_2 \\
&= \left( \left(X_m^i \hat{B}_i\right)^\top X_m^i \hat{B}_i \right)^{-1} \left(X_m^i \hat{B}_i\right)^\top Y_m \\
&= \left( \left(X_m^i \hat{B}_i\right)^\top X_m^i \hat{B}_i \right)^{-1} \left(X_m^i \hat{B}_i\right)^\top X_m^i B^* w_m^* + \left( \left(X_m^i \hat{B}_i\right)^\top X_m^i \hat{B}_i \right)^{-1} \left(X_m^i \hat{B}_i\right)^\top Z_m \\
&= \underbrace{\left(\hat{B}_i^\top \hat{\Sigma}_m^i \hat{B}_i\right)^{-1} \hat{B}_i^\top}_{\alpha_m^i \in \mathbb{R}^{K \times d}} \hat{\Sigma}_m^i B^* w_m^* + \underbrace{\left(\hat{B}_i^\top \hat{\Sigma}_m^i \hat{B}_i\right)^{-1} \hat{B}_i^\top}_{\alpha_m^i} \frac{1}{n_m^i} \left(X_m^i\right)^\top Z_m
\end{aligned}
$$

Therefore, combining this with the optimization problem above, we have

$$
\hat{W}_i \nu = \sum_{m} \alpha_m^i \left(\hat{\Sigma}_m^i B^* w_m^* + \frac{1}{n_m^i} \left(X_m^i\right)^\top Z_m\right) \nu(m)
$$

$$
\hat{w}_{M+1}^i = \alpha_{M+1}^i \left(\hat{\Sigma}_{M+1}^i B^* w_{M+1}^* + \frac{1}{n_{M+1}^i} \left(X_{M+1}^i\right)^\top Z_{M+1}\right)
$$

Recall the definition of  $\hat{\nu}_{i + 1}$  as

$$
\min \| \nu \|_2^2 \quad \text{s.t. } \hat{W}_i \nu = \hat{w}_{M+1}^i
$$

Therefore we get the closed form by replacing  $\hat{W}_i\nu$  and  $\hat{w}_{M+1}^i$  with the expressions calculated above.
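The decomposition of $\hat{w}_m^i$ above is an exact algebraic identity, which can be spot-checked numerically with $\hat{\Sigma}_m^i = \frac{1}{n}(X_m^i)^\top X_m^i$. A minimal sketch; all dimensions and the noise scale are our own arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, K = 200, 10, 3
X = rng.standard_normal((n, d))                        # design X_m^i
B_star = np.linalg.qr(rng.standard_normal((d, K)))[0]  # orthonormal B*
w_star = rng.standard_normal(K)
Z = rng.standard_normal(n) * 0.1                       # noise Z_m
Y = X @ B_star @ w_star + Z

B_hat = np.linalg.qr(rng.standard_normal((d, K)))[0]   # some estimate of B*

# Direct least squares in the learned representation.
w_direct, *_ = np.linalg.lstsq(X @ B_hat, Y, rcond=None)

# Decomposition from Lemma E.9: alpha (Sigma B* w* + X^T Z / n).
Sigma = X.T @ X / n
alpha = np.linalg.inv(B_hat.T @ Sigma @ B_hat) @ B_hat.T
w_decomp = alpha @ (Sigma @ B_star @ w_star + X.T @ Z / n)

assert np.allclose(w_direct, w_decomp)
```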

Lemma E.10 (Difference of the inverse covariance matrix). For any fixed  $i$ ,  $m$  and any matrices  $M_1, M_2, M_3, M_4$  of compatible dimensions,

$$
\begin{aligned}
&\left| M_1 \left( \left( \hat{B}_i \hat{W}_i (\hat{B}_i \hat{W}_i)^\top \right)^\dagger - \left( B^* W^* (B^* W^*)^\top \right)^\dagger \right) B^* M_2 \right| \\
&\quad \leq \| M_1 \| \left\| \left( B^* W^* (B^* W^*)^\top \right)^\dagger \left( \hat{\Delta}^i (\hat{\Delta}^i)^\top + \hat{\Delta}^i (B^* W^*)^\top + (B^* W^*) (\hat{\Delta}^i)^\top \right) \left( B^* W^* (B^* W^*)^\top \right)^\dagger \right\| \| B^* M_2 \| \\
&\left| M_3 (B^*)^\top \left( \left( \hat{B}_i \hat{W}_i (\hat{B}_i \hat{W}_i)^\top \right)^\dagger - \left( B^* W^* (B^* W^*)^\top \right)^\dagger \right) M_4 \right| \\
&\quad \leq \| M_3 (B^*)^\top \| \left\| \left( B^* W^* (B^* W^*)^\top \right)^\dagger \left( \hat{\Delta}^i (\hat{\Delta}^i)^\top + \hat{\Delta}^i (B^* W^*)^\top + (B^* W^*) (\hat{\Delta}^i)^\top \right) \left( B^* W^* (B^* W^*)^\top \right)^\dagger \right\| \| M_4 \|
\end{aligned}
$$

Proof. First, we relate the two pseudo-inverse terms:

$$
\begin{aligned}
&\left( \hat{B}_i \hat{W}_i \hat{W}_i^\top \hat{B}_i^\top \right)^\dagger - \left( B^* W^* (B^* W^*)^\top \right)^\dagger \\
&\quad = \left( \left( B^* W^* + \hat{\Delta}^i \right) \left( B^* W^* + \hat{\Delta}^i \right)^\top \right)^\dagger - \left( B^* W^* (B^* W^*)^\top \right)^\dagger \\
&\quad = \left( B^* W^* (B^* W^*)^\top + \left( \hat{\Delta}^i (\hat{\Delta}^i)^\top + \hat{\Delta}^i (B^* W^*)^\top + (B^* W^*) (\hat{\Delta}^i)^\top \right) \right)^\dagger - \left( B^* W^* (B^* W^*)^\top \right)^\dagger
\end{aligned}
$$

In order to connect the first pseudo-inverse with the second, we use the generalized matrix inverse theorem stated below.

Theorem E.11 (Theorem from (Kovanic, 1979)). If  $V$  is an  $n \times n$  symmetric matrix and  $X$  is an arbitrary real  $n \times q$  matrix, then

$$
(V + X X ^ {\top}) ^ {\dagger} = V ^ {\dagger} - V ^ {\dagger} X (I + X ^ {\top} V ^ {\dagger} X) ^ {- 1} X ^ {\top} V ^ {\dagger} + ((X _ {\bot}) ^ {\dagger}) ^ {\top} X _ {\bot} ^ {\dagger}
$$

where  $X_{\perp} = (I - VV^{\dagger})X$

It is easy to see that we can take  $V \coloneqq B^{*}W^{*}(B^{*}W^{*})^{\top}$  and decompose  $\left(\hat{\Delta}^i(\hat{\Delta}^i)^{\top} + \hat{\Delta}^i(B^{*}W^{*})^{\top} + (B^{*}W^{*})(\hat{\Delta}^i)^{\top}\right)$  into some  $XX^{\top}$ . Therefore, we can write the difference above as

$$
- V ^ {\dagger} X (I + X ^ {\top} V ^ {\dagger} X) ^ {- 1} X ^ {\top} V ^ {\dagger} + ((X _ {\bot}) ^ {\dagger}) ^ {\top} X _ {\bot} ^ {\dagger}
$$

Next we show that  $((X_{\perp})^{\dagger})^{\top}X_{\perp}^{\dagger}B^{*} = 0$  and  $(B^{*})^{\top}((X_{\perp})^{\dagger})^{\top}X_{\perp}^{\dagger} = 0$ . Let  $UDQ^{\top}$  be the singular value decomposition of  $B^{*}W^{*}$ . So we have

$$
V V ^ {\dagger} = U D ^ {2} U ^ {\top} \left(U D ^ {2} U ^ {\top}\right) ^ {\dagger} = U U ^ {\top},
$$

and therefore, because the columns of  $B^{*}$  lie in the same column space as those of  $B^{*}W^{*}$ ,

$$
\begin{aligned}
X_\perp^\dagger B^* &= \left( U_\perp U_\perp^\top X \right)^\dagger B^* \\
&= X^\dagger U_\perp U_\perp^\top B^* = 0.
\end{aligned}
$$

Therefore, we conclude that

$$
\begin{aligned}
&\left| M_1 \left( \left( \hat{B}_i \hat{W}_i (\hat{B}_i \hat{W}_i)^\top \right)^\dagger - \left( B^* W^* (B^* W^*)^\top \right)^\dagger \right) B^* M_2 \right| \\
&\quad \leq \| M_1 \| \, \Big\| \underbrace{\left( B^* W^* (B^* W^*)^\top \right)^\dagger \left( \hat{\Delta}^i (\hat{\Delta}^i)^\top + \hat{\Delta}^i (B^* W^*)^\top + (B^* W^*) (\hat{\Delta}^i)^\top \right) \left( B^* W^* (B^* W^*)^\top \right)^\dagger}_{V^\dagger X X^\top V^\dagger} \Big\| \, \| B^* M_2 \|
\end{aligned}
$$

and similarly for the other inequality.
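The key cancellation in the proof, $X_\perp^\dagger B^* = 0$, relies on the columns of $B^*$ lying in the column space of $B^*W^*$. This can be spot-checked numerically; a minimal sketch, with all shapes our own arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(3)
d, K, M = 8, 3, 5
B_star = np.linalg.qr(rng.standard_normal((d, K)))[0]  # orthonormal B*
W_star = rng.standard_normal((K, M))                   # full row rank a.s.

V = B_star @ W_star @ (B_star @ W_star).T
P = V @ np.linalg.pinv(V)   # V V^dagger = projector onto col(B* W*) = col(B*)

# An arbitrary X; its component X_perp = (I - V V^dagger) X is orthogonal
# to col(B*), so pinv(X_perp) annihilates B*.
X = rng.standard_normal((d, 4))
X_perp = (np.eye(d) - P) @ X

assert np.allclose(P @ B_star, B_star)                   # col(B*) is preserved
assert np.allclose(np.linalg.pinv(X_perp) @ B_star, 0)   # X_perp^dagger B* = 0
```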

Lemma E.12. Given  $\mathcal{E}_{\mathrm{source}}^i$  , for any fixed  $i$  we have

$$
\begin{aligned}
&\left\| \left( B^* W^* (B^* W^*)^\top \right)^\dagger \left( \hat{\Delta}^i (\hat{\Delta}^i)^\top + \hat{\Delta}^i (B^* W^*)^\top + (B^* W^*) (\hat{\Delta}^i)^\top \right) \left( B^* W^* (B^* W^*)^\top \right)^\dagger \right\|_F \\
&\quad \leq 3 \left\| (W^*)^\dagger \right\|_F \left\| \left( W^* (W^*)^\top \right)^\dagger \right\|_F \cdot \sigma \underline{\sigma}^3 \sqrt{\epsilon_i} / 6KR \\
&\quad \leq \sigma \sqrt{\epsilon_i} / 2R
\end{aligned}
$$

Proof. The target term can be upper bounded by

$$
\begin{aligned}
&\left\| \left( B^* W^* (B^* W^*)^\top \right)^\dagger \hat{\Delta}^i (\hat{\Delta}^i)^\top \left( B^* W^* (B^* W^*)^\top \right)^\dagger \right\|_F \\
&\quad + \left\| \left( B^* W^* (B^* W^*)^\top \right)^\dagger B^* W^* (\hat{\Delta}^i)^\top \left( B^* W^* (B^* W^*)^\top \right)^\dagger \right\|_F \\
&\quad + \left\| \left( B^* W^* (B^* W^*)^\top \right)^\dagger \hat{\Delta}^i (B^* W^*)^\top \left( B^* W^* (B^* W^*)^\top \right)^\dagger \right\|_F
\end{aligned}
$$

Before the final bounding, we first derive an upper bound for the following term, which will be used repeatedly:

$$
\begin{aligned}
\left\| (B^* W^*)^\top \left( B^* W^* (B^* W^*)^\top \right)^\dagger \right\|_F &= \left\| \left( B^* W^* (B^* W^*)^\top \right)^\dagger B^* W^* \right\|_F \\
&= \left\| \left( B^* W^* (W^*)^\top (B^*)^\top \right)^\dagger B^* W^* \right\|_F \\
&= \left\| \left( (B^*)^\top \right)^\dagger \left( W^* (W^*)^\top \right)^\dagger (B^*)^\dagger B^* W^* \right\|_F \\
&= \left\| B^* \left( W^* (W^*)^\top \right)^\dagger W^* \right\|_F \\
&= \left\| \left( W^* (W^*)^\top \right)^\dagger W^* \right\|_F \\
&= \left\| \left( (W^*)^\top \right)^\dagger (W^*)^\dagger W^* \right\|_F \\
&= \left\| \left( (W^*)^\dagger W^* \right)^\top (W^*)^\dagger \right\|_F \\
&= \left\| (W^*)^\dagger W^* (W^*)^\dagger \right\|_F \\
&= \left\| (W^*)^\dagger \right\|_F
\end{aligned}
$$

Therefore, we can bound the whole term by

$$
\left\| \left( B^* W^* (B^* W^*)^\top \right)^\dagger \right\|_F^2 \| \hat{\Delta}^i \|_F^2 + 2 \left\| (W^*)^\dagger \right\|_F \| \hat{\Delta}^i \|_F \left\| \left( B^* W^* (B^* W^*)^\top \right)^\dagger \right\|_F \leq 3 \left\| (W^*)^\dagger \right\|_F \left\| \left( W^* (W^*)^\top \right)^\dagger \right\|_F \| \hat{\Delta}^i \|_F
$$

Recall that we have  $\| \hat{\Delta}^i\| _F^2\leq \frac{\sigma^2\underline{\sigma}^6\epsilon_i}{36K^2R^2}$  given  $\mathcal{E}_{\mathrm{source}}^i$ ; therefore, we get the final result.
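The pseudo-inverse identity used above, $\|(B^*W^*)^\top (B^*W^*(B^*W^*)^\top)^\dagger\|_F = \|(W^*)^\dagger\|_F$, can also be verified numerically for a random orthonormal $B^*$ and full-row-rank $W^*$. A minimal sketch; dimensions are arbitrary choices of ours:

```python
import numpy as np

rng = np.random.default_rng(4)
d, K, M = 9, 3, 6
B = np.linalg.qr(rng.standard_normal((d, K)))[0]   # orthonormal B*
W = rng.standard_normal((K, M))                    # full row rank a.s.

BW = B @ W
# Left-hand side: || (B*W*)^T (B*W*(B*W*)^T)^dagger ||_F
lhs = np.linalg.norm(BW.T @ np.linalg.pinv(BW @ BW.T), "fro")
# Right-hand side: || (W*)^dagger ||_F
rhs = np.linalg.norm(np.linalg.pinv(W), "fro")

assert np.isclose(lhs, rhs)
```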


# F. Experiment details

# F.1. Other implementation details

We choose  $\beta_{i} = 1 / \| \nu \|_{2}^{2}$ , which is usually  $\Theta (1)$  in practice. Instead of choosing  $\epsilon_{i} = 2^{-i}$ , we set it to  $1.5^{-i}$  and directly start from  $i = 22$ . It is easy to see that the actual number of samples used in the experiment is similar to choosing  $\beta = \mathrm{poly}(d,K,M)$  and starting from epoch 1 as proposed in the theorem, but our choice makes it easier to adjust parameters and run comparisons.
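For concreteness, the per-task sample sizes $n_m^i = \max\{\beta \hat{\nu}_i^2(m)\epsilon_i^{-2}, \beta\epsilon_i^{-1}\}$ under these parameter choices can be sketched as below. This is our own illustrative helper (the function name and example values are hypothetical), not the released experiment code:

```python
import numpy as np

def epoch_sample_sizes(nu: np.ndarray, i: int) -> np.ndarray:
    """Per-source-task sample sizes for epoch i (illustrative sketch).

    Uses epsilon_i = 1.5^{-i} starting from i = 22 and
    beta = 1 / ||nu||_2^2, as in our experiments.
    """
    eps = 1.5 ** (-i)
    beta = 1.0 / float(nu @ nu)
    # n_m^i = max(beta * nu_i^2(m) * eps^-2, beta * eps^-1), rounded up.
    return np.ceil(np.maximum(beta * nu**2 * eps**-2, beta * eps**-1)).astype(int)

# Hypothetical estimated weights over three source tasks.
nu = np.array([0.8, 0.1, 0.05])
n = epoch_sample_sizes(nu, 22)
assert n.shape == nu.shape and np.all(n >= 1)
```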

We run each experiment on each task only once due to limited computational resources, and we acknowledge that repeating each experiment more times would be better. However, since we have 160 target tasks in total, the randomness across tasks means the results are still meaningful.

Moreover, recall the lower bound  $\beta \epsilon_{i}^{-1}$  in our proposed algorithm. In our experiments with the linear model, we find that a small constant number of samples, such as 50 per epoch, is enough to obtain meaningful results. For the convnet, considering the complexity of the model, we still follow this rule.

# F.2. More results and analysis for linear model

# F.2.1. MORE COMPREHENSIVE SUMMARY

![](images/366a1ae11f34faf6c4d1efdf1bd8618f207dde7a1ca36e01236b92649304b2d8.jpg)  
Figure 3. Summary of performance difference for the linear model (restated from Figure 1). left: the prediction difference (in %) between ada and non-ada for all target tasks; right: the incorrect percentage of the non-adaptive algorithm. Note that  $10\%$  is the baseline due to the imbalanced dataset, and larger is worse. Please refer to the main paper for further explanation.

![](images/dfd82ebd021b68f82eb523e8e137f1aabaf6db10453707fb912f7b8e5813482a.jpg)

# F.2.2. WHY OUR ALGORITHM FAILS ON SOME TARGET TASKS?

Here we focus on why our algorithm performs badly on some tasks. Overall, apart from randomness, this is mainly due to the incompatibility between our theoretical assumption (a realizable linear model) and the complicated structure of real data.

To be specific, there might be a subset of source tasks that are informative about the target task under the linear model assumption, while other source tasks are far from satisfying this assumption. Due to model misspecification, those misleading tasks may receive more samples from the adaptive algorithm. We conjecture that this is the case for target tasks like scale_0 and scale_2. To support this argument, we analyze the sample number distribution and the test error across epochs in the next paragraph.

For the scale_0 task, in Figure 4 we observe a non-stationary sample distribution that changes across epochs. Fortunately, the distribution is not extreme; there are still a significant number of samples from the X_0 source tasks. This aligns with the test error in Figure 5, which still decreases gradually, although more slowly than for the non-adaptive algorithm. For scale_2, on the other hand, the sample distribution is even worse: nearly all the samples concentrate on the X_5 source tasks. Thus not only do we fail to sample enough informative data, but we also force the model to fit unrelated data. Such misspecification is reflected in the test error plot. (You may notice the unstable error of the non-adaptive algorithm; we consider this acceptable randomness because we run each target task only once and the target task itself is not easy to learn.)

![](images/3734b1924c2c69a764e98937f880a4f9ea21d1523557102e8a0d624054da84e2.jpg)  
Figure 4. top: sample distribution for target task scale_0; bottom: sample distribution for target task scale_2. We show the sample distribution at epochs 1, 2, and 3.

![](images/c437912d0f99cab7aec8ed4b2a42cdd61bd476f7929956b194ec775d3e785021.jpg)

![](images/63ffc5a31ef5a8e5d1129cc4045cc5eeada15dbdceb8af9da9062aec817a857c.jpg)

![](images/ca413d8e9a7caf379f3a2a591a247e03baeee04281df5a6996415bcf11dcb404.jpg)  
Figure 5. Test error change after each epoch for target tasks scale_0 and scale_2

![](images/cd2ec425725c402ef6f83a168c26c03fbb6b83e6541e3412e7c5d5a9670962a1.jpg)

# F.2.3. MORE GOOD SAMPLE DISTRIBUTION EXAMPLES


![](images/b692f4bb14a585b66ffaf3ebe6e3e9720fa2fbea4e783706618a992abc9fcd4e.jpg)

![](images/3873cb1536afd38704cc53cba07451a98e61699f4c76ebcfbdf08badbafb5dcf.jpg)

![](images/0b6fc4ffe820526fb9ae5f72812e066e63354987736cf0fc75c4786972df1005.jpg)

![](images/9e9dc153bd1a27db00e7238db7b0c21a0c66bde5eeb96c5d5845cffe0d06378f.jpg)

![](images/17a9cddb8e82b227944a59034a1f6c28470cada76954b024f7a061daaebabb7a.jpg)

![](images/51add94fff679ba84c776c2dac2d60dfa77e657b3f5a1a42faeb245c2deda7d1.jpg)

![](images/758181b98a296f391812960de8603a3cf1f3327eb3cd06302c3f061b2e82de78.jpg)

![](images/f02cf68d1e7b8f48059735f13b520e2ad20c5ff03e37eb29ec784f40b083ded9.jpg)

![](images/8f83950b3618af4760d330dd33acee5fe61face18db3f5098cdbcca1fd6bf63b.jpg)

![](images/baabd897d6fbcc1755ca527e2088e01e6b4b747e9a652357061a00e9168fde37.jpg)

![](images/2af059a81ef11d2e821a16a21ee6ec07e07a9a6eb03322201376e6be590e3a09.jpg)

![](images/a69087a7b3f9c11667aa192a53039fa68201d2a2b9fc71b1f62e7b138a072380.jpg)

![](images/7c5f567f05313b382f502d5b112718be2a31b259c8e19ab11aadae3f423ce08a.jpg)

![](images/51721927e8fe67e01919c2e5d903757abbdb7ed972d156d29898282ddd9beb8e.jpg)

![](images/5b7813672e14ede751a5a18c3cfa38e3a715094881029001572bfa26835528e2.jpg)

![](images/48a79ee340e3d4d4e09dc61474f9531c4156396638a1a75c3795418fe1b79aea.jpg)

![](images/c333a1a4d68139acbb2a8ea14d47bfa7baa66ca30d152371e23f766591dd8505.jpg)

![](images/cfc1cfa159d2957eaeae016a867bb203596c672f10448ff3cd443756060d9d8f.jpg)

![](images/207e343eb5a67a5f94f41ffa93bb8df3126fdb3de2253a32064a946f9faea9a3.jpg)

![](images/7bc8cc3791431dd281c5e3ad98b32d87aa35b6080ef79e8d9371e1b72260df26.jpg)

![](images/ec8ce7b665d8b75c4258037c4021b41ebf081642a7a700200f137e337d6242f4.jpg)

![](images/71d90d3dab5554df1efc7c55012534569bbb3fd5043a2317a2992de07e89600c.jpg)  
Figure 6. Good sample distribution. We show the sample distribution at each epoch 1,2,3. From top to bottom: glass_blur_0, glass_blur_1, glass_blur_3, impulse_noise_4, motion_blur_6, motion_blur_7, identity_8, identity_9

![](images/9ab51c56941d49f6bf68c77baffa39763c94574779607231af40316e7cea4e91.jpg)

![](images/5b8f2b4ae8b9e86085c27d4443bf9fa1cbc1bf70e3ad4b79698cd638ebc11186.jpg)

# F.3. More results and analysis for convnet model

The convnet achieves overall better accuracy than the linear model, except for the translate class, as shown in Figure 7. We therefore argue that the convnet's greater expressive power may make it harder to obtain as large an improvement as with the linear model.

![](images/a48d23bef25f9d16819ebc614ef11da4777a4077e1e63d0214618cf06e61fa19.jpg)  
Figure 7. left: summary of performance difference for the conv model (restated from Figure 2); right: the incorrect percentage of the non-adaptive algorithm. Note that  $10\%$  is the baseline due to the imbalanced dataset, and larger is worse. Please refer to the main paper for further explanation.

![](images/1e3906225085c8752f2f4a701639f26ff1e83ab23e3f38f6fbc2cecb16737ef5.jpg)

# F.3.1. WHY OUR ALGORITHM FAILS ON SOME TARGET TASKS?

Here we show scale_5 and shear_9 as representative bad cases. Similar to the linear model, we again observe a non-stationary sample distribution that changes across epochs in Figure 8. For scale_5, at the beginning of epoch 1 the samples fortunately concentrate on the X_5 source tasks, so our adaptive algorithm initially performs better than the non-adaptive one, as shown in Figure 9. Unfortunately, the samples soon diverge to other source tasks, which incurs more test error. For shear_9, although some samples concentrate on the X_9 source tasks, the number of samples on the X_9 source tasks is a decreasing proportion of the total number of source samples, so the algorithm performs worse on this task.

![](images/cf66e055d4bdef63e258600ecec96402c994c2209de2a0c1971e58620c4c5455.jpg)  
Figure 8. top: sample distribution for target task scale_5; bottom: sample distribution for target task shear_9. We show the sample distribution at epochs 1, 2, and 3.

![](images/736d4e9dfd12dbf5406056a9b040f31cd56f082098df68b38c8df59f8078dd57.jpg)

![](images/f6becade4600f739bcc55b06419754d4b3f05a63b498e05d502a1e6d32bb468e.jpg)

![](images/d9ca64f175657c96d1632b29a6a4102febdcdb482df5986f3b8412f0a1c22dce.jpg)  
Figure 9. Test error change after each epoch for target tasks scale_5 and shear_9

![](images/8aafa6beb192617d5198f3f82bfc6fc762925c5c4d721a1910ec6bd77aa96fba.jpg)

# F.3.2. MORE GOOD SAMPLE DISTRIBUTION EXAMPLES

![](images/e712cb0473f99b084e6baa9cc85542db742ff22321f1cdbec99f930e2d6e5807.jpg)

![](images/326f205c34e7cfb2c4bb42b55889f6e1040a287d0bf3017c5f5377db090af28a.jpg)

![](images/a7379769e14546b19f3aba459eb1eca31a1b8b6f8f4fe8acf6707909bdf69eb4.jpg)

![](images/a20934011a6a6d2af34713494bc69acdae99445271abc1d0aa10cc432ef72d14.jpg)

![](images/dcd4c6709ff0b31b5b2808e2dccee07dfa99cb4a0dfdb98816c1f2d0d0d6505e.jpg)

![](images/51e1d66bbb70c31537f7126ee9f833ae59610fdd8c7e180504c4763ad1c1b388.jpg)

![](images/55912778d558c51b447fe9877c6b3b931dcbcdd7dbae1c12920cd59939bfb947.jpg)

![](images/63dba567e7aef011ed034edbd3c349ca5da6e2f74e0b48ff2beb3a4f9a94621e.jpg)

![](images/b0e4f075bfcbe6479c1ee8e4e544b6c30de6705e2aefa0aed0597d40a42858fe.jpg)

![](images/1ed080404a1709fe4bb38f0b0d679d5fabb8e8e7755a7f38c1544bd08af50036.jpg)

![](images/8a6a98359993188776edbaf19c78c78a3cf45b3b574d18f172638ef71f63a79e.jpg)

![](images/669212319f742b29808a04ce5fb9ebbc7353865098e99fe99eca89c5ddc07d7f.jpg)

![](images/42c03ab28041f1c9d20a01749b7ff8f2663852d20119abfafe7910fb14b7d6a6.jpg)

![](images/f387d6d1b60bf2b6437498a7e5e84fe7dc865e78d2f2570b535cec907278a02d.jpg)

![](images/539d3fa8b926480b464cc2ce70b5c0eefc5138e6b158906f173fafb570a837d1.jpg)

![](images/f1a560a19dc475d7d2f418c84d10284d315e62f7613674671b03da28ed3a56a3.jpg)

![](images/3f60bd284a138afd8cb5f4d5f810fb638f8040904a5178c275636c72bde03fe4.jpg)

![](images/836234be2da6fe518d0a02faa6cf15d1d8189070907146c0ffdbab33c15e4ac3.jpg)

![](images/36fac2e66421e61fa8a2cadfa6557295519badcf6a1c5a2b200dc566865de07e.jpg)

![](images/b48475daef0a3cf37054f1dcdf7a38f7905e4d84c22098f48f9ec74f26e10eec.jpg)

![](images/99d4d8b3ee6de0a76a4bbe4f5a91f03c52c64264889edc653d6dabe7c7e03a73.jpg)

![](images/407018894081689e446eca80c4ad5c7aef29763f2939c7475c3e5c992aedfea1.jpg)

![](images/84f3b347acf2a3ab849e49d2e7468514cd6d8c56f84ba290ec5ffdc474c087c7.jpg)

![](images/1d0b206484dec34cbfb174606abc74913560431cb0164e9e712c0b4c78317f92.jpg)

![](images/6b5bde5e0e0a91ac502dfa1620de0f9cf66a84336f4e50f952464dec3703fc69.jpg)  
Figure 10. Good sample distribution. We show the sample distribution at each epoch 1,2,3. From top to bottom: brightness_0, brightness_1, dotted_line_3, dotted_line_4, identity_5, fog_6, glass_blur_7, rotate_8, canny_edge_9

![](images/240d13c552deedbc4deff2d7e8c9168338a19f2f031428366eb2a6856f8b246c.jpg)

![](images/3375fefd5b4e08b75ffd89d8c378f748c36647e87cb3a80e0fbbc7725bd6a2f1.jpg)