Table 1. Newsgroups grouped into four 4-class topic classification tasks.

| TASKS | NEWSGROUPS |
| --- | --- |
| COMP | OS.MS-WINDOWS.MISC, SYS.MAC.HARDWARE, GRAPHICS, WINDOWS.X |
| REC | SPORT.BASEBALL, SPORT.HOCKEY, AUTOS, MOTORCYCLES |
| SCI | CRYPT, ELECTRONICS, MED, SPACE |
| TALK | POLITICS.MIDEAST, RELIGION.MISC, POLITICS.MISC, POLITICS.GUNS |
DVDs, electronics, kitchen appliances, etc. We treat each domain as a binary classification task: reviews with ratings $>3$ are labeled positive, those with ratings $<3$ are labeled negative, and reviews with rating $=3$ are discarded, as their sentiment is ambiguous and difficult to predict. The data are randomly split into $70\%$ training, $20\%$ validation and $10\%$ testing.
Topic Classification ${}^{2}$. We select 16 newsgroups from the 20 Newsgroups dataset, a collection of approximately 20,000 newsgroup documents partitioned (nearly) evenly across 20 newsgroups, and group them into four 4-class classification tasks (shown in Table 1) to evaluate the performance of our algorithm on topic classification. The data are randomly split into ${60}\%$ training, ${20}\%$ validation and ${20}\%$ testing.
# 5.1.2. NETWORK MODEL
We implement our adaptive AMTRL algorithm on the most prevalent deep multi-task representation learning architecture, the hard parameter sharing model (Caruana, 1997). As shown in Figure 1, all tasks have task-specific output layers and share the representation extraction layers.
The shared representation extraction layers are typically built with a feature extraction structure such as Convolutional Neural Networks (CNNs) or Recurrent Neural Networks (RNNs), while the task-specific output layers are typically fully connected layers. In our experiments, either TextCNN (Kim, 2014) or BiLSTM (Hochreiter & Schmidhuber, 1997) is used to build the shared representation extraction layers. The TextCNN module consists of three parallel convolutional layers with kernel sizes of 3, 5, and 7, respectively. The BiLSTM module consists of two bidirectional hidden layers of size 32. The extracted feature representations are concatenated and classified by the task-specific output module, which has one fully connected layer.
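As a concrete illustration, the hard-sharing architecture described above can be sketched in PyTorch as follows. The kernel sizes (3, 5, 7) and the single fully connected layer per task follow the description; the vocabulary size, embedding dimension and filter count are placeholder values, not taken from the paper.

```python
import torch
import torch.nn as nn

class HardSharingMTL(nn.Module):
    """Hard parameter sharing: a shared TextCNN encoder feeding
    one task-specific fully connected output head per task."""
    def __init__(self, vocab_size, embed_dim, num_tasks, num_classes,
                 kernel_sizes=(3, 5, 7), num_filters=32):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # Three parallel convolutional branches (kernel sizes 3, 5, 7).
        self.convs = nn.ModuleList(
            [nn.Conv1d(embed_dim, num_filters, k, padding=k // 2)
             for k in kernel_sizes])
        feat_dim = num_filters * len(kernel_sizes)
        # One fully connected output layer per task.
        self.heads = nn.ModuleList(
            [nn.Linear(feat_dim, num_classes) for _ in range(num_tasks)])

    def encode(self, tokens):
        x = self.embedding(tokens).transpose(1, 2)   # (B, E, L)
        # Max-pool each branch over the sequence, then concatenate.
        feats = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return torch.cat(feats, dim=1)               # (B, feat_dim)

    def forward(self, tokens, task_id):
        return self.heads[task_id](self.encode(tokens))

model = HardSharingMTL(vocab_size=10000, embed_dim=64,
                       num_tasks=4, num_classes=2)
logits = model(torch.randint(0, 10000, (8, 50)), task_id=1)
print(logits.shape)  # torch.Size([8, 2])
```

All tasks update the shared encoder through `encode`, while only the selected head receives gradients for a given batch, which is the essence of hard parameter sharing.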

Figure 3. Evolution of relatedness between tasks during training for sentiment analysis. (a) presents the change in $R_{mean}$ for the original MTRL (Orig MTRL), AAMTRL without the weighting strategy (Uniform AAMTRL) and AAMTRL, respectively. (b) presents the change in $R_{var}$ for the same three models.

Figure 4. Evolution of relatedness between tasks during training for topic classification. (a) presents the change in $R_{mean}$ for Orig MTRL, Uniform AAMTRL and AAMTRL, respectively. (b) presents the change in $R_{var}$ for the same three models.
The adversarial module is built with one fully connected layer whose output size equals the number of tasks. Notably, the adversarial module connects to the shared layers via a gradient reversal layer (Ganin & Lempitsky, 2015). This layer multiplies the gradient by $-1$ during backpropagation, thereby optimizing the adversarial loss function (7).
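The gradient reversal layer can be implemented with a custom autograd function, in the spirit of Ganin & Lempitsky (2015); this is a minimal sketch, not the authors' exact code, and the scaling factor `lambd` is an assumption (commonly used to anneal the reversal strength).

```python
import torch
from torch.autograd import Function

class GradReverse(Function):
    """Identity in the forward pass; multiplies the gradient
    by -lambd in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse the gradient flowing back into the shared layers;
        # lambd itself receives no gradient (None).
        return -ctx.lambd * grad_output, None

x = torch.ones(3, requires_grad=True)
y = GradReverse.apply(x, 1.0)   # forward pass is the identity
y.sum().backward()
print(x.grad)                   # tensor([-1., -1., -1.])
```

Placed between the shared encoder and the task discriminator, this makes the shared layers ascend the discriminator's loss while the discriminator itself descends it.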
# 5.1.3. TRAINING PARAMETERS
We train the deep AAMTRL network model with Algorithm 1, setting $\lambda_0 = 1$, $r_0 = 10$ and $r_{k + 1} = r_k + 2$; $R_0$ is initialized as a matrix of ones. We use the Adam optimizer (Kingma & Ba, 2015) and train for 600 epochs on sentiment analysis and 1200 epochs on topic classification, with a batch size of 256 in both cases. We apply dropout with probability 0.5 to all task-specific output modules. For all experiments, we search over the learning rates $\{10^{-4}, 5 \times 10^{-4}, 10^{-3}, 5 \times 10^{-3}, 10^{-2}, 5 \times 10^{-2}\}$ and choose the model with the highest validation accuracy.
# 5.2. Results and Analysis
# 5.2.1. RELATEDNESS EVOLUTION
To evaluate the performance of the adversarial module of AAMTRL, we record the change in the relatedness matrix during training. In this experiment, the TextCNN module is used to extract representations.
The relatedness matrix is summarized by the mean and variance of $\{R_1, R_2, \dots, R_T\}$, where $R_t$ for $t \in \{1, \dots, T\}$ is defined in (21). Let $R_{mean}$ and $R_{var}$ denote this mean and variance, respectively. The results for sentiment analysis and topic classification are shown in Fig. 3 and Fig. 4, respectively.
$$
R_{t} = \frac{1}{T} \sum_{k = 1}^{T} R_{tk}. \tag{21}
$$
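The two summary statistics follow directly from the relatedness matrix. The sketch below reads (21) literally, i.e. the row average includes the diagonal; whether the paper excludes the diagonal is not stated, so that is an assumption here.

```python
import numpy as np

def summarize_relatedness(R):
    """Summarize a T x T relatedness matrix: R_t is the average of
    row t (eq. 21); return the mean and variance over {R_1,...,R_T}."""
    R = np.asarray(R, dtype=float)
    T = R.shape[0]
    R_t = R.sum(axis=1) / T       # R_t = (1/T) * sum_k R_{tk}
    return R_t.mean(), R_t.var()  # R_mean, R_var

# For the all-ones initialization R_0 (Section 5.1.3), every row
# average is 1, so R_mean = 1 and R_var = 0.
mean, var = summarize_relatedness(np.ones((4, 4)))
print(mean, var)  # 1.0 0.0
```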
The results show the following:
- The proposed AAMTRL is able to enforce the tasks to share an identical distribution in the representation space.
- The weighting strategy accelerates and smooths the convergence of the adversarial module during training.
- The tasks in sentiment analysis initially have a much closer relationship than those in topic classification.

Figure 5. Radar chart of the error rate for each task in sentiment analysis. (a) shows the results for MTRL models with TextCNN-based representation extraction layers. (b) shows the results for MTRL models with BiLSTM-based representation extraction layers.

Figure 6. Radar chart of the error rate for each task in topic classification. (a) shows the results for MTRL models with TextCNN-based representation extraction layers. (b) shows the results for MTRL models with BiLSTM-based representation extraction layers.
# 5.2.2. CLASSIFICATION ACCURACY
We compare our proposed methods with two baselines: (i) Single Task, which solves each task independently, and (ii) Uniform Scaling, which minimizes a uniformly weighted sum of the loss functions. We also compare against two state-of-the-art methods: (i) MGDA, which uses the MGDA-UB method proposed by Sener & Koltun (2018), and (ii) Adversarial MTRL, which uses the original adversarial MTL framework proposed by Liu et al. (2017).
We report the error rate of each task for sentiment analysis and topic classification in Figure 5 and Figure 6, respectively. The exact numbers are given in the supplementary material. The results show the following:
- The proposed AAMTRL outperforms the state-of-the-art methods on sentiment analysis and achieves similar performance on topic classification.
- For topic classification, in which the tasks are not closely related (as shown in Figure 4(a)), MTL strategies do not outperform single-task learning. This shows that the performance of MTL depends on the initial relatedness between tasks.

Figure 7. Change in the relative task-averaged risk as the number of tasks grows.

Figure 8. Variation of the test error for one task (Appeal) when it is learned jointly with different sets of tasks.
# 5.2.3. INFLUENCE OF THE NUMBER OF TASKS
In this section, we investigate the influence of the number of tasks on the task-averaged risk. We define a relative task-averaged risk with respect to single-task learning (STL) in (22).
$$
er_{rel} = \frac{er_{MTL}}{\frac{1}{T} \sum_{t = 1}^{T} er_{STL}^{t}}, \tag{22}
$$
where $er_{MTL}$ is the task-averaged test error of a MTL model, while $er_{STL}^t$ is the test error of the STL model $t$ . The MTL model and the STL models are the best-performing models generated from our experimental setting. The MTL model is trained using our AAMTRL algorithm.
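Computing (22) is a one-line ratio; the sketch below uses hypothetical error rates for illustration only (they are not results from the paper).

```python
import numpy as np

def relative_risk(er_mtl, er_stl):
    """Eq. (22): ratio of the MTL task-averaged test error to the
    average of the per-task STL test errors."""
    return er_mtl / np.mean(er_stl)

# Hypothetical per-task STL errors and MTL task-averaged error:
er_stl = [0.10, 0.12, 0.08, 0.10]
er_mtl = 0.09
print(relative_risk(er_mtl, er_stl))  # ~0.9; MTL beats STL when er_rel < 1
```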
We also carry out an experiment on sentiment analysis, in which the TextCNN module is used to extract representations. Figure 7 presents the change in the relative task-averaged risk as the number of tasks grows. Figure 8 presents how the test error of one task (Appeal) varies when it is learned jointly with different sets of tasks.
The results show the following:
- In AMTRL, an increase in the number of tasks does not decrease the task-averaged error.
- For a specific task in AMTRL, learning with more tasks does not guarantee better performance.
The results verify our analysis in Section 4.1.
# 6. Conclusion
While the empirical performance of AMTRL is attractive, its theoretical mechanism has remained unexplored. To fill this gap, we analyze the task-averaged generalization error bound of AMTRL. Based on this analysis, we propose a novel AMTRL method, named Adaptive AMTRL, designed to improve on existing AMTRL methods. Numerical experiments support our theoretical results and demonstrate the effectiveness of the proposed approach.
# Acknowledgements
This work is supported by the National Natural Science Foundation of China under Grant 61976161.
# References
Ando, R. K. and Zhang, T. A framework for learning predictive structures from multiple tasks and unlabeled data. Journal of Machine Learning Research, 6:1817-1853, 2005.
Blitzer, J., Dredze, M., and Pereira, F. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In ACL, 2007.
Caruana, R. Multitask learning. Machine Learning, 28(1): 41-75, 1997.
Chen, C., Yang, Y., Zhou, J., Li, X., and Bao, F. S. Cross-domain review helpfulness prediction based on convolutional neural networks with auxiliary domain discriminators. In NAACL, pp. 602-607, 2018a.
Chen, Z., Badrinarayanan, V., Lee, C., and Rabinovich, A. Gradnorm: Gradient normalization for adaptive loss balancing in deep multitask networks. In ICML, pp. 793-802, 2018b.
Collobert, R. and Weston, J. A unified architecture for natural language processing: deep neural networks with multitask learning. In ICML, pp. 160-167, 2008.
Dwivedi, K. and Roig, G. Representation similarity analysis for efficient task taxonomy & transfer learning. In CVPR, 2019.
Ganin, Y. and Lempitsky, V. S. Unsupervised domain adaptation by backpropagation. In ICML, pp. 1180-1189, 2015.
Hager, W. W. Dual techniques for constrained optimization. Journal of Optimization Theory and Applications, 55(1): 37-71, 1987.
Hestenes, M. R. Multiplier and gradient methods. Journal of optimization theory and applications, 4(5):303-320, 1969.
Hochreiter, S. and Schmidhuber, J. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.
Kendall, A., Gal, Y., and Cipolla, R. Multi-task learning using uncertainty to weigh losses for scene geometry and semantics. In CVPR, pp. 7482-7491, 2018.
Kim, Y. Convolutional neural networks for sentence classification. In EMNLP, pp. 1746-1751, 2014.
Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. In ICLR, 2015.
Kriegeskorte, N., Mur, M., and Bandettini, P. A. Representational similarity analysis-connecting the branches of systems neuroscience. Frontiers in Systems Neuroscience, 2:4, 2008.
Lin, X., Zhen, H., Li, Z., Zhang, Q., and Kwong, S. Pareto multi-task learning. In NeurIPS, 2019.
Liu, P., Qiu, X., and Huang, X. Adversarial multi-task learning for text classification. In ACL, pp. 1-10, 2017.
Liu, Y., Wang, Z., Jin, H., and Wassell, I. J. Multi-task adversarial network for disentangled feature learning. In CVPR, pp. 3743-3751, 2018.
Mao, Y., Yun, S., Liu, W., and Du, B. Tchebycheff procedure for multi-task text classification. In ACL, 2020.
Maurer, A. A chain rule for the expected suprema of gaussian processes. In ALT, pp. 245-259, 2014.
Maurer, A., Pontil, M., and Romero-Paredes, B. The benefit of multitask representation learning. Journal of Machine Learning Research, 17:81:1-81:32, 2016.
McClure, P. and Kriegeskorte, N. Representational distance learning for deep neural networks. Frontiers in Computational Neuroscience, 10:131, 2016.
Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., Desmaison, A., Kopf, A., Yang, E., DeVito, Z., Raison, M., Tejani, A., Chilamkurthy, S., Steiner, B., Fang, L., Bai, J., and Chintala, S. Pytorch: An imperative style, high-performance deep learning library. In NeurIPS. 2019.
Rockafellar, R. T. Augmented lagrange multiplier functions and duality in nonconvex programming. SIAM Journal on Control, 12(2):268-285, 1974.
Ruder, S. An overview of multi-task learning in deep neural networks. CoRR, abs/1706.05098, 2017.
Sener, O. and Koltun, V. Multi-task learning as multi-objective optimization. In NeurIPS, pp. 525-536, 2018.
Shi, G., Feng, C., Huang, L., Zhang, B., Ji, H., Liao, L., and Huang, H. Genre separation network with adversarial training for cross-genre relation extraction. In EMNLP, pp. 1018-1023, 2018.
Yadav, S., Ekbal, A., Saha, S., Bhattacharyya, P., and Sheth, A. P. Multi-task learning framework for mining crowd intelligence towards clinical treatment. In NAACL, pp. 271-277, 2018.
Yu, J., Qiu, M., Jiang, J., Huang, J., Song, S., Chu, W., and Chen, H. Modelling domain relationships for transfer learning on retrieval-based question answering systems in e-commerce. In WSDM, pp. 682-690, 2018.