# Adaptive Adversarial Multi-task Representation Learning

Yuren Mao$^{1}$, Weiwei Liu$^{2}$, Xuemin Lin$^{1}$

$^{1}$ School of Computer Science and Engineering, University of New South Wales, Australia. $^{2}$ School of Computer Science, Wuhan University, China. Correspondence to: Weiwei Liu.

Proceedings of the $37^{th}$ International Conference on Machine Learning, Online, PMLR 119, 2020. Copyright 2020 by the author(s).

# Abstract

Adversarial Multi-task Representation Learning (AMTRL) methods are capable of boosting the performance of Multi-task Representation Learning (MTRL) models. However, the theoretical mechanism behind AMTRL has been only minimally investigated. To fill this gap, we study the generalization error bound of AMTRL through the lens of Lagrangian duality. Based on this duality, we propose a novel adaptive AMTRL algorithm that improves on the original AMTRL methods. We further conduct extensive experiments that back up our theoretical analysis and validate the superiority of the proposed algorithm.

# 1. Introduction

Multi-task Representation Learning (MTRL), an influential line of research on multi-task learning, learns related tasks simultaneously by sharing a common representation. Compared with learning each task independently, MTRL typically has a lower computational cost and better prediction performance. It has achieved great success in applications ranging from computer vision (Kendall et al., 2018) to natural language processing (Collobert & Weston, 2008).

Recently, adversarial MTRL (AMTRL) methods (Liu et al., 2017; Chen et al., 2018a; Shi et al., 2018; Yu et al., 2018; Liu et al., 2018; Yadav et al., 2018) have been widely utilized in a range of applications. AMTRL methods improve on the original MTRL models by adding an extra adversarial module, i.e., a task discriminator in the representation space. Unfortunately, the theoretical mechanism behind AMTRL methods is still not well understood.

The findings of this paper suggest that AMTRL methods restrict the hypothesis class by enforcing all tasks to share an identical distribution in the representation space. This identical-distribution restriction provides further inductive bias and tightens the task-averaged generalization error bound for MTRL. Based on this restriction, we formulate AMTRL as a constrained optimization problem and propose to solve it using the augmented Lagrangian method.

To quantitatively measure how likely the tasks are to share an identical distribution in the representation space, we propose a pairwise relatedness metric for AMTRL. Based on this metric, a weight adaptation strategy is proposed in order to accelerate the convergence of the adversarial module. Combining the weight adaptation strategy and the augmented Lagrangian method, we present the adaptive AMTRL method.

This paper conducts experiments on two popular multi-task learning applications: sentiment analysis and topic classification. The experimental results verify our theoretical analysis and validate that the proposed algorithm outperforms several state-of-the-art methods.

# 2. Related Work

Adaptive weighting scalarization, which linearly scalarizes the tasks with adaptive weight assignment, is a typical MTRL method. Various adaptive weighting strategies (Kendall et al., 2018; Chen et al., 2018b; Sener & Koltun, 2018; Lin et al., 2019; Mao et al., 2020) have been proposed to balance the regularization between tasks and improve the performance of the original MTRL. By contrast, existing AMTRL methods, for example (Liu et al., 2017; Chen et al., 2018a), adopt only naive uniform scalarization. In this paper, we propose an adaptive weighting strategy for AMTRL based on the augmented Lagrangian (Hestenes, 1969) and a novel task relatedness metric, which builds on representation similarity.
Compared with typical representation-similarity-based task relatedness metrics (Kriegeskorte et al., 2008; McClure & Kriegeskorte, 2016; Dwivedi & Roig, 2019), the proposed metric computes the representation similarity directly from the output of the adversarial module and does not require the extra computation of correlation coefficients, which makes it more efficient for AMTRL.

# 3. Preliminaries

Consider a multi-task representation learning problem with $T$ tasks over an input space $\mathcal{X}$ and a collection of task spaces $\{\mathcal{Y}\}_{t=1}^{T}$. We define the hypothesis class of the problem as $\mathcal{H} = \{\mathcal{F}\}_{t=1}^{T} \circ \mathcal{G}$, where $\mathcal{G} = \{g : \mathcal{X} \to \mathbb{R}^{K}\}$ is the set of representation functions (i.e., the representation hypothesis class) and $K$ is the dimension of the representation space. $\{\mathcal{F}\}_{t=1}^{T} = \{f^{t} : \mathbb{R}^{K} \to \mathcal{Y}\}_{t=1}^{T}$ is the set of predictors (i.e., the prediction hypothesis class), where $f^{t}$ is $\rho$-Lipschitz for all $t \in \{1, \dots, T\}$. The representation $g$ is shared across tasks, while $f^{t}$ is task-specific; thus $\mathcal{H} = \{h = \{f^{t}(g(\cdot))\}_{t=1}^{T} : \mathcal{X} \to \{\mathcal{Y}\}_{t=1}^{T}\}$.

Learning $\mathcal{H}$ is based on the data observed for all tasks. Without loss of generality, we assume that each task has $n$ samples. The data take the form of a multisample $S = \{S_{t}\}_{t=1}^{T}$ with $S_{t} = (\overline{X}_{t}, \overline{Y}_{t})$ and $(\overline{X}_{t}, \overline{Y}_{t}) = \{x_{i}^{t}, y_{i}^{t}\}_{i=1}^{n} \sim \mathcal{D}_{t}^{n}$, where $\mathcal{D}_{t}$ is a probability distribution over $\mathcal{X} \times \mathcal{Y}$. After the representation mapping, $(g(\overline{X}_{t}), \overline{Y}_{t}) \sim \mu_{t}^{n}$, where $\mu_{t}$ is a distribution over $\mathbb{R}^{K}$. The loss function for task $t$ is defined as $l^t: \mathcal{Y} \times \mathcal{Y} \to [0,1]$ and assumed to be 1-Lipschitz.
We define the true risk of a hypothesis $f^t \circ g$ for task $t$ as $\mathcal{L}_{\mathcal{D}_t}(f^t \circ g) = \mathbb{E}_{(x^t, y^t) \sim \mathcal{D}_t}[l^t(f^t(g(x^t)), y^t)]$ and the task-averaged generalization error as $\mathcal{L}_{\mathcal{D}}(h) = \frac{1}{T} \sum_{t=1}^{T} \mathcal{L}_{\mathcal{D}_t}(f^t \circ g)$. Correspondingly, the empirical loss of task $t$ is defined as $\mathcal{L}_{S_t}(f^t \circ g) = \frac{1}{n} \sum_{i=1}^{n} l^t(f^t(g(x_i^t)), y_i^t)$ and the empirical task-averaged error as $\mathcal{L}_S(h) = \frac{1}{T} \sum_{t=1}^{T} \mathcal{L}_{S_t}(f^t \circ g)$. We denote the transpose of a vector/matrix by the superscript $'$ and logarithms to base 2 by $\log$.

Multi-task Representation Learning. Multi-task Representation Learning (MTRL) learns multiple tasks jointly by sharing a representation across tasks. This representation is typically produced by a representation map with the same parameters for each task; in deep neural networks, for example, the common representation is obtained by sharing hidden layers. The original MTRL module in Figure 1 shows a deep MTRL network model utilizing a hard parameter sharing strategy (Ruder, 2017). Under the Empirical Risk Minimization (ERM) paradigm, MTRL minimizes the task-averaged empirical error (1) (Maurer et al., 2016):

$$ \min_{g, f^{1}, \dots, f^{T}} \frac{1}{T} \sum_{t=1}^{T} \mathcal{L}_{S_t}(f^{t} \circ g). \tag{1} $$

![](images/9cff84b869e8b7bcc176f2fc6a0e7eb5bae8c45e0ffde5487f93849b527408c3.jpg)

Figure 1. A deep adversarial MTRL network model.

Theorem 1 (Maurer et al., 2016; Ando & Zhang, 2005) presents an upper bound for the task-averaged generalization error of MTRL.

Theorem 1. For $0 < \delta < 1$, with probability at least $1 - \delta$
in $S$ we have that

$$ \mathcal{L}_{\mathcal{D}}(h) - \mathcal{L}_{S}(h) \leq \frac{c_{1} \rho \, G_{a}(\mathcal{G}(\bar{X}))}{n T} + \frac{c_{2} Q \sup_{g \in \mathcal{G}} \| g(\bar{X}) \|}{n \sqrt{T}} + \sqrt{\frac{9 \ln (2 / \delta)}{2 n T}} \tag{2} $$

where $c_{1}$ and $c_{2}$ are universal constants. $G_{a}(\mathcal{G}(\overline{X}))$ is the Gaussian average defined in (3),

$$ G_{a}(\mathcal{G}(\bar{X})) = \mathbb{E}\left[ \sup_{g \in \mathcal{G}} \sum_{k, t, i} \gamma_{kti} g_{k}(x_{i}^{t}) \mid x_{i}^{t} \right], \tag{3} $$

where the $\gamma_{kti}$ are independent standard normal variables. $\sup_{g\in \mathcal{G}}\| g(\overline{X})\|$ can be computed by (4):

$$ \sup_{g \in \mathcal{G}} \| g(\bar{X}) \| = \sup_{g \in \mathcal{G}} \sqrt{\sum_{k, t, i} g_{k}(x_{i}^{t})^{2}}. \tag{4} $$

$Q$ is the quantity

$$ Q \equiv \sup_{y \neq y' \in \mathbb{R}^{Kn}} \frac{1}{\| y - y' \|} \mathbb{E} \sup_{f \in \mathcal{F}} \sum_{i=1}^{n} \gamma_{i}\left(f(y_{i}) - f(y_{i}')\right), \tag{5} $$

where the $\gamma_{i}$ are independent standard normal variables.

Adversarial Multi-task Representation Learning. Adversarial MTRL (AMTRL) adds an extra task discriminator to the original MTRL model shown in Figure 1. For each training sample, the discriminator attempts to recognize which task the sample belongs to.
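As a concrete illustration (not the paper's code), the hard-parameter-sharing architecture of Figure 1 and the task-averaged objective (1) can be sketched in NumPy. The linear maps `W_shared` and `W_task` and the logistic loss are hypothetical stand-ins for the TextCNN/BiLSTM encoder and task heads used in the experiments, and the logistic loss is unbounded rather than $[0,1]$-valued as assumed in the theory.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n, d, K = 3, 8, 5, 4      # tasks, samples per task, input dim, representation dim K

# Shared representation g: X -> R^K (hard parameter sharing: one map for all tasks)
W_shared = rng.normal(size=(d, K))
# Task-specific predictors f^t: R^K -> binary logit
W_task = rng.normal(size=(T, K))

def g(x):
    # Shared representation map, used by every task
    return np.tanh(x @ W_shared)

def task_loss(t, X_t, y_t):
    # Empirical loss L_{S_t}(f^t . g) of task t (logistic loss as a stand-in)
    logits = g(X_t) @ W_task[t]
    p = 1.0 / (1.0 + np.exp(-logits))
    return -np.mean(y_t * np.log(p) + (1 - y_t) * np.log(1 - p))

# Multisample S = {S_t}: n inputs and binary labels per task
X = [rng.normal(size=(n, d)) for _ in range(T)]
y = [rng.integers(0, 2, size=n) for _ in range(T)]

# Task-averaged empirical error, the quantity minimized in (1)
L_S = np.mean([task_loss(t, X[t], y[t]) for t in range(T)])
```

Minimizing `L_S` over `W_shared` and all `W_task[t]` jointly is exactly the ERM objective (1); only `W_shared` couples the tasks.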
The loss functions of existing adversarial MTRL methods (Liu et al., 2017; Chen et al., 2018a; Shi et al., 2018; Yu et al., 2018; Liu et al., 2018; Yadav et al., 2018) have a common part

$$ \min_{h} L(h, \lambda) = \mathcal{L}_{S}(h) + \lambda \mathcal{L}^{adv}, \tag{6} $$

where $\lambda$ is a hyperparameter and the adversarial term $\mathcal{L}^{adv}$ has the form

$$ \mathcal{L}^{adv} = \max_{\Phi} \frac{1}{n T} \sum_{t=1}^{T} \sum_{i=1}^{n} e_{t} \Phi\left(g\left(x_{i}^{t}\right)\right). \tag{7} $$

$\Phi(\cdot): \mathbb{R}^K \to [0,1]^T$ is a task discriminator that estimates which task a sample belongs to, and $e_t$ is the vector with all components equal to 0, except the $t$-th, which is 1. (6) minimizes the task-averaged empirical risk and enforces the representations of all tasks to share an identical distribution ($\mu_{1} = \mu_{2} = \dots = \mu_{T}$). When all tasks have an identical distribution in the representation space, $\mathcal{L}^{adv} = c$, where $c$ is a discriminator-dependent constant. For the widely used softmax-based discriminator, where $\Phi(g(x_i^t)) = \mathrm{softmax}(W' g(x_i^t) + b)$ with $W \in \mathbb{R}^{K \times T}$, we have $c = \frac{1}{T}$. Without loss of generality, we can set $\mathcal{L}^{adv} \coloneqq \mathcal{L}^{adv} - c$.

# 4. Proposed Methods

# 4.1. Task-averaged Generalization Error Bound

Assuming the representations of all tasks share an identical distribution, Corollary 1 gives the task-averaged generalization error bound for AMTRL.

Corollary 1. Assume $\mu_1 = \mu_2 = \dots = \mu_T$.
For $0 < \delta < 1$, with probability at least $1 - \delta$ in $S$ we have that

$$ \mathcal{L}_{\mathcal{D}}(h) - \mathcal{L}_{S}(h) \leq \frac{c_{1} \rho \, G_{a}\left(\mathcal{G}^{*}(\bar{X}_{1})\right)}{n} + \frac{c_{2} Q \sup_{g \in \mathcal{G}^{*}} \| g(\bar{X}_{1}) \|}{\sqrt{n}} + \sqrt{\frac{9 \ln (2 / \delta)}{2 n T}} \tag{8} $$

where $c_{1}$ and $c_{2}$ are universal constants, while $\mathcal{G}^{*} = \{g \in \mathcal{G} : \mu_1 = \mu_2 = \dots = \mu_T\}$. $G_{a}(\mathcal{G}^{*}(\overline{X}_{1}))$ is the Gaussian average of task 1 defined in (9),

$$ G_{a}\left(\mathcal{G}^{*}\left(\bar{X}_{1}\right)\right) = \mathbb{E}\left[ \sup_{g \in \mathcal{G}^{*}} \sum_{k, i} \gamma_{ki} g_{k}\left(x_{i}^{1}\right) \mid x_{i}^{1} \right], \tag{9} $$

where the $\gamma_{ki}$ are independent standard normal variables. $\sup_{g\in \mathcal{G}^*}\| g(\overline{X}_1)\|$ can be computed by (10):

$$ \sup_{g \in \mathcal{G}^{*}} \| g(\bar{X}_{1}) \| = \sup_{g \in \mathcal{G}^{*}} \sqrt{\sum_{k, i} g_{k}\left(x_{i}^{1}\right)^{2}}. \tag{10} $$

$Q$ is the quantity

$$ Q \equiv \sup_{y \neq y' \in \mathbb{R}^{Kn}} \frac{1}{\| y - y' \|} \mathbb{E} \sup_{f \in \mathcal{F}} \sum_{i=1}^{n} \gamma_{i}\left(f(y_{i}) - f(y_{i}')\right), \tag{11} $$

where the $\gamma_{i}$ are independent standard normal variables.

Proof.
For $\mu_1 = \mu_2 = \dots = \mu_T$,

$$ G_{a}\left(\mathcal{G}^{*}(\bar{X})\right) = \mathbb{E}\left[ \sup_{g \in \mathcal{G}^{*}} \sum_{k, t, i} \gamma_{kti} g_{k}\left(x_{i}^{t}\right) \mid x_{i}^{t} \right] = T \, \mathbb{E}\left[ \sup_{g \in \mathcal{G}^{*}} \sum_{k, i} \gamma_{ki} g_{k}\left(x_{i}^{1}\right) \mid x_{i}^{1} \right] = T \, G_{a}\left(\mathcal{G}^{*}\left(\bar{X}_{1}\right)\right). \tag{12} $$

$$ \sup_{g \in \mathcal{G}^{*}} \| g(\bar{X}) \| = \sqrt{T} \sup_{g \in \mathcal{G}^{*}} \| g(\bar{X}_{1}) \|. \tag{13} $$

Combining (12) and (13) with Theorem 1 concludes the proof.

Remarks:

- The first term of the bound, which can be interpreted as the cost of estimating the representation $g$, is typically of order $\frac{1}{n}$. The second term, which corresponds to the cost of estimating the task-specific predictors, is typically of order $\frac{1}{\sqrt{n}}$. The last term contains the confidence parameter; according to Theorem 3 in (Maurer, 2014), $c_{1}$ and $c_{2}$ are rather large, so the last term typically makes only a small contribution.
- From the properties of the Gaussian average, $T G_{a}(\mathcal{G}^{*}(\overline{X}_{1})) \leq G_{a}(\mathcal{G}(\overline{X}))$ since $\mathcal{G}^{*} \subseteq \mathcal{G}$. Furthermore, we have $\sqrt{T}\sup_{g\in \mathcal{G}^*}\| g(\overline{X}_1)\| \leq \sup_{g\in \mathcal{G}}\| g(\overline{X})\|$. The generalization error bound for AMTRL is therefore tighter than that for MTRL.
- In AMTRL, the number of tasks $T$ has little effect on the generalization error bound; in (8), $T$ appears only in the small confidence term.

# 4.2. Task Relatedness in Representation Space

The above analysis shows that the similarity of the task distributions in the representation space determines the performance of AMTRL. This similarity is a data-dependent notion of between-task relatedness. This paper proposes a novel relatedness metric for AMTRL, based on the task discriminator, to quantitatively measure the similarity.
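The $\sqrt{T}$ scaling (13) behind Corollary 1 can be sanity-checked numerically in the idealized case where every task yields exactly the same representation matrix: stacking $T$ identical copies multiplies the Frobenius norm by $\sqrt{T}$. This is an illustrative sketch, not part of the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
T, n, K = 5, 10, 4   # tasks, samples per task, representation dimension

# g(X_bar_1): representations of task 1's n samples (one K-vector per sample)
G1 = rng.normal(size=(n, K))

# Idealized mu_1 = ... = mu_T: every task produces the same representations,
# so g(X_bar) stacks T identical copies of g(X_bar_1)
G_all = np.vstack([G1] * T)

lhs = np.linalg.norm(G_all)             # ||g(X_bar)||, as in (4)
rhs = np.sqrt(T) * np.linalg.norm(G1)   # sqrt(T) * ||g(X_bar_1)||, as in (13)
```

Here `lhs` and `rhs` agree up to floating-point error, matching (13); the analogous $T$-fold scaling of the Gaussian average (12) follows from the same copy structure inside the expectation.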
Based on the metric, we are able to visualize the relatedness between tasks during training.

Assume that the discriminator $\Phi(\cdot)$ is the Bayes optimal classifier. We propose to measure the relatedness between task $i$ and task $j$ as follows:

$$ R_{ij} = \frac{\Phi_{j}\left(g\left(x^{i}\right)\right) + \Phi_{i}\left(g\left(x^{j}\right)\right)}{\Phi_{i}\left(g\left(x^{i}\right)\right) + \Phi_{j}\left(g\left(x^{j}\right)\right)}, \tag{14} $$

where $x^i$ and $x^j$ are sampled from $\mathcal{D}_i$ and $\mathcal{D}_j$ respectively, so that $g(x^i) \sim \mu_i$ and $g(x^j) \sim \mu_j$. $\Phi_i(\cdot)$ and $\Phi_j(\cdot)$ denote the probability that $\Phi(\cdot)$ assigns the input to task $i$ and task $j$ respectively. $R_{ij} \in [0,1]$ reflects the similarity between $\mu_i$ and $\mu_j$: $R_{ij}$ equals 1 when $\mu_i$ is identical to $\mu_j$ and 0 when $\mu_i$ and $\mu_j$ are completely different. In the Empirical Risk Minimization (ERM) setting, we approximate $R_{ij}$ with (15):

$$ R_{ij} = \min \left\{ \frac{\sum_{n=1}^{N} e_{j} \Phi\left(g\left(x_{n}^{i}\right)\right) + e_{i} \Phi\left(g\left(x_{n}^{j}\right)\right)}{\sum_{n=1}^{N} e_{i} \Phi\left(g\left(x_{n}^{i}\right)\right) + e_{j} \Phi\left(g\left(x_{n}^{j}\right)\right)}, 1 \right\}, \tag{15} $$

![](images/956de731835307b8b11706830e02c1d91533da05e5cc1d2c23e096b3dbf04804.jpg)
![](images/e74bab52209470373c53bdf2a059ae17d9577acd08a21a3443b8818c0bad11ae.jpg)
![](images/d7119e4bdb32e1c78c9a434bef4cdddc1c694fe0ea0069b448f872c2440eb613.jpg)

Figure 2. Performance of the proposed relatedness measure $R_{ij}$ across three two-dimensional Gaussian distributions. (a) Illustration of three tasks with 2-D Gaussian distributions over their representation space; a total of 3000 samples are used in this case.
The means of the Gaussian distributions corresponding to tasks 1, 2 and 3 are $[0.2\alpha, 0]$, $[-0.2\alpha, 0]$ and $[0, 0.2\alpha]$ respectively, and all of them share the same variance-covariance matrix $\Sigma = I$, where $I$ is the $2 \times 2$ identity matrix. (b) The discriminator is a two-layer fully connected network ending with a softmax function. (c) The relatedness $R_{ij}$ between tasks decreases as $\alpha$ increases.

where $e_t$ is the vector with all components equal to 0, except the $t$-th, which is 1. Figure 2 presents the performance of the proposed relatedness metric in a two-dimensional Gaussian case and verifies that the metric is sensitive to variations in the similarity between distributions. We then collect the pairwise values into a relatedness matrix $R$, where

$$ R = \left[ \begin{array}{cccc} R_{11} & R_{12} & \dots & R_{1T} \\ R_{21} & R_{22} & \dots & R_{2T} \\ \vdots & \vdots & \ddots & \vdots \\ R_{T1} & R_{T2} & \dots & R_{TT} \end{array} \right]. \tag{16} $$

# 4.3. Adaptive Adversarial MTRL

Motivated by task relatedness and duality, we present an adaptive AMTRL algorithm with a novel weighting strategy in §4.3.1 and optimize it with the augmented Lagrangian method in §4.3.2.

# 4.3.1. WEIGHT ADAPTATION

Based on the relatedness matrix, we propose a weighting strategy designed to accelerate the convergence of the adversarial module of AMTRL models. Let $\mathbf{w} = (w_{1}, w_{2}, \dots, w_{T})'$ be the weight vector and $\mathbf{1} = (1, 1, \dots, 1)$ be the $T$-dimensional vector with all components equal to 1. The weighted empirical loss of the proposed adaptive AMTRL method is

$$ \mathcal{L}_{S}(h) = \frac{1}{T} \sum_{t=1}^{T} w_{t} \mathcal{L}_{S_{t}}\left(f^{t} \circ g\right), \tag{17} $$

where

$$ \mathbf{w} = \frac{1}{\mathbf{1} R \mathbf{1}'} \mathbf{1} R. \tag{18} $$

Tasks that have a closer relationship with the other tasks in the representation space receive larger weights. This has an intuitive interpretation: the weighting strategy pushes tasks to become more similar in the representation space, which meets the constraint of AMTRL. The experimental results in Section 5.2.1 verify this intuition.

# 4.3.2. AUGMENTED LAGRANGIAN

(6) can be regarded as the Lagrangian dual function of the following equality-constrained optimization problem (Problem 1).

Problem 1.

$$ \min_{h} \mathcal{L}_{S}(h) \quad s.t. \quad \mathcal{L}^{adv} = 0. $$

In existing adversarial MTRL works, $\lambda$ is manually tuned; this process is highly time-consuming and makes it almost impossible to reach the optimal Lagrange multiplier. An adaptive method that chooses $\lambda$ automatically is therefore desirable. Moreover, an MTL problem like Problem 1 is usually non-convex, so the solution obtained from Lagrangian duality is in fact not optimal due to the duality gap (Rockafellar, 1974; Hager, 1987). Accordingly, we propose an augmented Lagrangian-based algorithm that dynamically tunes $\lambda$ and reduces the duality gap. The basic idea behind the augmented Lagrangian involves augmenting the ordinary Lagrangian with a penalty term, which usually has a quadratic form.

Algorithm 1 Adaptive Adversarial MTRL
- Input: $S$
- Initialize $\lambda_0$, $r_0$, $R^0$
- for $q = 0$ to $N$ do
  - $\mathbf{w}^q = \frac{1}{\mathbf{1} R^q \mathbf{1}'} \mathbf{1} R^q$
  - Train the AMTRL model with loss (19)
  - Update $R^{q+1}$ using (15) with $\Phi_q(\cdot)$
  - if the multiplier given by (20) is positive then
    - Update the Lagrange multiplier using (20) to obtain $\lambda_{q+1}$
  - else
    - $\lambda_{q+1} = \lambda_q$
  - end if
  - Choose a new penalty parameter $r_{q+1} > r_q$
- end for

Combining the proposed weighting strategy with the augmented Lagrangian method, the optimization objective of our adaptive AMTRL method is given in (19).
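As an illustrative sketch (with hypothetical, hand-coded discriminator outputs in place of a trained $\Phi$), the empirical relatedness (15), the relatedness matrix (16), and the weight adaptation rule (18) can be computed as follows:

```python
import numpy as np

def relatedness(probs):
    """Relatedness matrix R (16) from discriminator outputs.

    probs[t] is an (n, T) array: row i holds Phi(g(x_i^t)), the discriminator's
    task-membership probabilities for sample i of task t.
    """
    T = len(probs)
    R = np.ones((T, T))          # R_tt = 1 by (15)
    for i in range(T):
        for j in range(T):
            if i == j:
                continue
            # Empirical estimate (15): cross-task mass over own-task mass, capped at 1
            num = probs[i][:, j].sum() + probs[j][:, i].sum()
            den = probs[i][:, i].sum() + probs[j][:, j].sum()
            R[i, j] = min(num / den, 1.0)
    return R

def adapt_weights(R):
    """Weight vector w = (1 R 1')^{-1} 1 R from (18)."""
    col = np.ones(R.shape[0]) @ R   # 1 R: column sums of R
    return col / col.sum()          # normalize by the total mass 1 R 1'

# Hypothetical discriminator outputs for T = 3 tasks, 4 samples each:
# tasks 0 and 1 are hard to tell apart, task 2 is well separated.
probs = [np.tile(base, (4, 1)) for base in
         ([0.45, 0.45, 0.10], [0.45, 0.45, 0.10], [0.05, 0.05, 0.90])]

R = relatedness(probs)
w = adapt_weights(R)
# Tasks 0 and 1, being more related to the rest, receive larger weights than task 2.
```

Note that `w` sums to 1 by construction, so (17) rescales rather than inflates the task-averaged loss.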
$$ \min_{h} \frac{1}{T} \sum_{t=1}^{T} w_{t} \mathcal{L}_{S_{t}}\left(f^{t} \circ g\right) + \lambda \mathcal{L}^{adv} + \frac{r}{2} \left(\mathcal{L}^{adv}\right)^{2}, \tag{19} $$

where $\lambda$ is the Lagrange multiplier and $r > 0$ is the penalty parameter. As $r$ increases, the gap between the value of the primal problem and the value of the dual problem decreases. Following the typical augmented Lagrangian algorithmic framework, $\lambda_{q}$ is updated as

$$ \lambda_{q+1} = \lambda_{q} - r_{q} \mathcal{L}^{adv}, \tag{20} $$

with $r_q$ increasing linearly. The specific procedure of the adaptive AMTRL algorithm is shown in Algorithm 1.

# 5. Experiments

In this section, we perform experimental studies on sentiment analysis and topic classification in order to evaluate the performance of our proposed method and to verify our theoretical analysis. The implementation is based on PyTorch (Paszke et al., 2019); the code can be found in the Supplementary Materials.

# 5.1. Experimental Setup

# 5.1.1. DATASETS

Sentiment Analysis. We evaluate our algorithm on product reviews from Amazon. The dataset (Blitzer et al., 2007) contains product reviews from 14 domains, including books,

Table 1. Data Allocation for Topic Classification Tasks.
| TASKS | NEWSGROUPS |
| --- | --- |
| COMP | OS.MS-WINDOWS.MISC, SYS.MAC.HARDWARE, GRAPHICS, WINDOWS.X |
| REC | SPORT.BASEBALL, SPORT.HOCKEY, AUTOS, MOTORCYCLES |
| SCI | CRYPT, ELECTRONICS, MED, SPACE |
| TALK | POLITICS.MIDEAST, RELIGION.MISC, POLITICS.MISC, POLITICS.GUNS |
DVDs, electronics, kitchen appliances, etc. We consider each domain as a binary classification task. Reviews with ratings $> 3$ are labeled positive and those with ratings $< 3$ are labeled negative; reviews with a rating of 3 are discarded, as their sentiment is ambiguous and difficult to predict. The data are randomly split into $70\%$ training, $10\%$ testing and $20\%$ validation.

Topic Classification. We select 16 newsgroups from the 20 Newsgroups dataset, a collection of approximately 20,000 newsgroup documents partitioned (nearly) evenly across 20 different newsgroups, and formulate them into four 4-class classification tasks (shown in Table 1) to evaluate the performance of our algorithm on topic classification. The data are randomly split into $60\%$ training, $20\%$ testing and $20\%$ validation.

# 5.1.2. NETWORK MODEL

We implement our adaptive AMTRL algorithm on the most prevalent deep multi-task representation learning network model, i.e., the hard parameter sharing network model (Caruana, 1997). As shown in Figure 1, all tasks have task-specific output layers and share the representation extraction layers. The shared representation extraction layers are typically built with a feature extraction structure such as a Convolutional Neural Network (CNN) or a Recurrent Neural Network (RNN), and the task-specific output layers are typically fully connected layers. In our experiments, either TextCNN (Kim, 2014) or BiLSTM (Hochreiter & Schmidhuber, 1997) is used to build the shared representation extraction layers. The TextCNN module consists of three parallel convolutional layers with kernel sizes of 3, 5 and 7 respectively. The BiLSTM module consists of two bi-directional hidden layers of size 32.
The extracted feature representations are then concatenated and classified by the task-specific output module, which has one fully connected layer.

![](images/b4226dbdaf702a708737b1a00b168054d7d78ca509761c5ae09b6a0ff037b282.jpg)
![](images/97f3df35b1cdfff1d4c8c4e95e765e39b7c9908a5e632616d17646f2256ed8e4.jpg)

Figure 3. Evolution of relatedness between tasks during training for sentiment analysis. (a) presents the change in $R_{mean}$ for the original MTRL (Orig MTRL), AAMTRL without the weighting strategy (Uniform AAMTRL) and AAMTRL respectively. (b) presents the change in $R_{var}$ for Orig MTRL, Uniform AAMTRL and AAMTRL respectively.

![](images/a19593f398c6ea7db9a5081c8ecbcbf9b0131a0a1739fada45586c6e2e506dde.jpg)
![](images/f35980cff7f37e8c737eea25fa69faa0db13734ee77b35b4a2c5d8c0daf07ead.jpg)

Figure 4. Evolution of relatedness between tasks during training for topic classification. (a) presents the change in $R_{mean}$ for Orig MTRL, Uniform AAMTRL and AAMTRL respectively. (b) presents the change in $R_{var}$ for Orig MTRL, Uniform AAMTRL and AAMTRL respectively.

The adversarial module is built with one fully connected layer whose output size equals the number of tasks. It is noteworthy that the adversarial module connects to the shared layers via a gradient reversal layer (Ganin & Lempitsky, 2015). The gradient reversal layer multiplies the gradient by $-1$ during backpropagation, which optimizes the adversarial loss function (7).

# 5.1.3. TRAINING PARAMETERS

We train the deep AAMTRL network model with Algorithm 1, setting $\lambda_0 = 1$, $r_0 = 10$ and $r_{q+1} = r_q + 2$; here, $R^0$ is a matrix of ones.
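Under the settings above ($\lambda_0 = 1$, $r_0 = 10$, $r_{q+1} = r_q + 2$), the outer loop of Algorithm 1 can be sketched as follows. The inner training step is replaced by a hypothetical stand-in `train_epoch` that simply returns a shrinking (shifted) adversarial loss, standing in for "train the AMTRL model with loss (19)".

```python
def train_epoch(q):
    # Hypothetical stand-in for training with loss (19): returns the shifted
    # adversarial loss L^adv, here decaying toward 0 as the task distributions
    # in the representation space are pushed together.
    return 0.05 * 0.5 ** q

lam, r = 1.0, 10.0           # lambda_0 = 1, r_0 = 10
history = []
for q in range(6):           # N = 5 outer iterations
    L_adv = train_epoch(q)
    # Multiplier update (20): lambda_{q+1} = lambda_q - r_q * L^adv,
    # accepted only while it stays positive (Algorithm 1).
    candidate = lam - r * L_adv
    if candidate > 0:
        lam = candidate
    history.append((lam, r))
    r += 2                   # linearly increasing penalty: r_{q+1} = r_q + 2
```

The penalty parameter grows every iteration regardless of the multiplier update, which is what shrinks the duality gap over the course of training.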
We use the Adam optimizer (Kingma & Ba, 2015) and train for 600 epochs on sentiment analysis and 1200 epochs on topic classification. The batch size is 256 for both applications. We use dropout with a probability of 0.5 for all task-specific output modules. For all experiments, we search over the set $\{10^{-4}, 5 \times 10^{-4}, 10^{-3}, 5 \times 10^{-3}, 10^{-2}, 5 \times 10^{-2}\}$ of learning rates and choose the model with the highest validation accuracy.

# 5.2. Results and Analysis

# 5.2.1. RELATEDNESS EVOLUTION

To evaluate the performance of the adversarial module of AAMTRL, we record the change in the relatedness matrix during training. In this experiment, the TextCNN module is used to extract the representation. The relatedness matrix is summarized by the mean and variance of $\{R_1, R_2, \dots, R_T\}$, denoted $R_{mean}$ and $R_{var}$ respectively, where $R_t$ for $t \in \{1, \dots, T\}$ is defined in (21):

$$ R_{t} = \frac{1}{T} \sum_{k=1}^{T} R_{tk}. \tag{21} $$

The results for sentiment analysis and topic classification are shown in Fig. 3 and Fig. 4 respectively. They show the following:

- The proposed AAMTRL is able to enforce the tasks to share an identical distribution in the representation space.
- The weighting strategy can accelerate and smooth the convergence of the adversarial module during training.
- The tasks in sentiment analysis initially have a much closer relationship than those in topic classification.

![](images/28ad328132383cb44ea877315a5e94ac9b08db138c19be6788f29fad3819e769.jpg)
![](images/5ace667a8e86257499e49f02560eabeeb0575f7405f9229b5d00c0eddcb170ce.jpg)

Figure 5. Radar chart of the error rate for each task in sentiment analysis. (a) shows the results for MTRL models with TextCNN-based representation extraction layers. (b) shows the results for MTRL models with BiLSTM-based representation extraction layers.

![](images/62151c8c17b89e03ee1f2035ed9f6c932626e0f513e1a56d3d923da45cb9911e.jpg)
![](images/046c3247178dc71becd5d04f04d6e39fbb02d595be7c0df3d99e3d972d58c06d.jpg)

Figure 6. Radar chart of the error rate for each task in topic classification. (a) shows the results for MTRL models with TextCNN-based representation extraction layers. (b) shows the results for MTRL models with BiLSTM-based representation extraction layers.

# 5.2.2. CLASSIFICATION ACCURACY

We compare our proposed method with two baselines, (i) Single Task, which solves the tasks independently, and (ii) Uniform Scaling, which minimizes a uniformly weighted sum of the loss functions, as well as two state-of-the-art methods: (i) MGDA, which uses the MGDA-UB method proposed by Sener & Koltun (2018), and (ii) Adversarial MTRL, which uses the original adversarial MTL framework proposed by Liu et al. (2017). We report the error rate of each task for sentiment analysis and topic classification in Figure 5 and Figure 6 respectively; the exact numbers can be found in the supplementary materials. The results show the following:

- The proposed AAMTRL outperforms the state-of-the-art methods on sentiment analysis and achieves similar performance on topic classification.
- For topic classification, in which the tasks are not closely related (as shown in Figure 4 (a)), the MTL strategies do not outperform single-task learning. This shows that the performance of MTL depends on the initial relatedness between tasks.

![](images/aa8b870c55e1fde62071526a01e4cb6eb7478a5afa6b38a0f9fa7eb8a2f07ba3.jpg)

Figure 7. Change of the relative task-averaged risk with the number of tasks.

![](images/d548f535f1b4023e60e8159cdd52abbeffadc14a1a8aa203773a0add2ecea99e.jpg)

Figure 8. Variation of the test error of the Apparel task when it is learned jointly with different numbers of tasks.

# 5.2.3. INFLUENCE OF THE NUMBER OF TASKS

In this section, we investigate the influence of the number of tasks on the task-averaged risk. We define the relative task-averaged risk with respect to single-task learning (STL) in (22):

$$ er_{rel} = \frac{er_{MTL}}{\frac{1}{T} \sum_{t=1}^{T} er_{STL}^{t}}, \tag{22} $$

where $er_{MTL}$ is the task-averaged test error of an MTL model and $er_{STL}^t$ is the test error of the STL model for task $t$. The MTL and STL models are the best-performing models generated from our experimental setting, and the MTL model is trained with our AAMTRL algorithm. We carry out this experiment on sentiment analysis, with the TextCNN module used to extract the representation. Figure 7 presents the change in the relative task-averaged risk as the number of tasks grows, and Figure 8 presents the variation of the test error of the Apparel task when it is learned jointly with different numbers of tasks. The results show the following:

- In AMTRL, an increase in the number of tasks does not decrease the task-averaged error.
- For a specific task in AMTRL, learning with more tasks does not guarantee better performance.

These results verify our analysis in Section 4.1.

# 6. Conclusion

While the performance of AMTRL is attractive, its theoretical mechanism has been largely unexplored. To fill this gap, we analyzed the task-averaged generalization error bound of AMTRL. Based on this analysis, we proposed a novel AMTRL method, named Adaptive AMTRL, designed to improve on existing AMTRL methods. Numerical experiments support our theoretical results and demonstrate the effectiveness of the proposed approach.

# Acknowledgements

This work is supported by the National Natural Science Foundation of China under Grant 61976161.

# References

Ando, R. K. and Zhang, T. A framework for learning predictive structures from multiple tasks and unlabeled data. Journal of Machine Learning Research, 6:1817-1853, 2005.

Blitzer, J., Dredze, M., and Pereira, F.
Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In ACL, 2007. Caruana, R. Multitask learning. Machine Learning, 28(1): 41-75, 1997. Chen, C., Yang, Y., Zhou, J., Li, X., and Bao, F. S. Cross-domain review helpfulness prediction based on convolutional neural networks with auxiliary domain discriminators. In NAACL, pp. 602-607, 2018a. Chen, Z., Badrinarayanan, V., Lee, C., and Rabinovich, A. Gradnorm: Gradient normalization for adaptive loss balancing in deep multitask networks. In ICML, pp. 793-802, 2018b. Collobert, R. and Weston, J. A unified architecture for natural language processing: deep neural networks with multitask learning. In ICML, pp. 160-167, 2008. Dwivedi, K. and Roig, G. Representation similarity analysis for efficient task taxonomy & transfer learning. In CVPR, 2019. Ganin, Y. and Lempitsky, V. S. Unsupervised domain adaptation by backpropagation. In ICML, pp. 1180-1189, 2015. Hager, W. W. Dual techniques for constrained optimization. Journal of Optimization Theory and Applications, 55(1): 37-71, 1987. Hestenes, M. R. Multiplier and gradient methods. Journal of optimization theory and applications, 4(5):303-320, 1969. Hochreiter, S. and Schmidhuber, J. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997. Kendall, A., Gal, Y., and Cipolla, R. Multi-task learning using uncertainty to weigh losses for scene geometry and semantics. In CVPR, pp. 7482-7491, 2018. Kim, Y. Convolutional neural networks for sentence classification. In EMNLP, pp. 1746-1751, 2014. Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. In ICLR, 2015. Kriegeskorte, N., Mur, M., and Bandettini, P. A. Representational similarity analysis-connecting the branches of systems neuroscience. Frontiers in Systems Neuroscience, 2:4, 2008. Lin, X., Zhen, H., Li, Z., Zhang, Q., and Kwong, S. Pareto multi-task learning. In NeurIPS, 2019. Liu, P., Qiu, X., and Huang, X. 
Adversarial multi-task learning for text classification. In ACL, pp. 1-10, 2017. Liu, Y., Wang, Z., Jin, H., and Wassell, I. J. Multi-task adversarial network for disentangled feature learning. In CVPR, pp. 3743-3751, 2018. Mao, Y., Yun, S., Liu, W., and Du, B. Tchebycheff procedure for multi-task text classification. In ACL, 2020. Maurer, A. A chain rule for the expected suprema of gaussian processes. In ALT, pp. 245-259, 2014. Maurer, A., Pontil, M., and Romero-Paredes, B. The benefit of multitask representation learning. Journal of Machine Learning Research, 17:81:1-81:32, 2016. McClure, P. and Kriegeskorte, N. Representational distance learning for deep neural networks. Frontiers in Computational Neuroscience, 10:131, 2016. Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., Desmaison, A., Kopf, A., Yang, E., DeVito, Z., Raison, M., Tejani, A., Chilamkurthy, S., Steiner, B., Fang, L., Bai, J., and Chintala, S. Pytorch: An imperative style, high-performance deep learning library. In NeurIPS. 2019. Rockafellar, R. T. Augmented lagrange multiplier functions and duality in nonconvex programming. SIAM Journal on Control, 12(2):268-285, 1974. Ruder, S. An overview of multi-task learning in deep neural networks. CoRR, abs/1706.05098, 2017. Sener, O. and Koltun, V. Multi-task learning as multi-objective optimization. In NeurIPS, pp. 525-536, 2018. Shi, G., Feng, C., Huang, L., Zhang, B., Ji, H., Liao, L., and Huang, H. Genre separation network with adversarial training for cross-genre relation extraction. In EMNLP, pp. 1018-1023, 2018. Yadav, S., Ekbal, A., Saha, S., Bhattacharyya, P., and Sheth, A. P. Multi-task learning framework for mining crowd intelligence towards clinical treatment. In *NAACL*, pp. 271-277, 2018. Yu, J., Qiu, M., Jiang, J., Huang, J., Song, S., Chu, W., and Chen, H. Modelling domain relationships for transfer learning on retrieval-based question answering systems in e-commerce. 
In WSDM, pp. 682-690, 2018.