Spaces:
Sleeping
Sleeping
OliverPerrin commited on
Commit ·
4bc92d5
1
Parent(s): 1e95f87
Added separate academic and research versions of the research paper
Browse files
docs/figures/attention_visualization.png
ADDED
|
Git LFS Details
|
docs/figures/learning_rate_schedule.png
ADDED
|
Git LFS Details
|
docs/figures/multihead_attention_visualization.png
ADDED
|
Git LFS Details
|
docs/figures/positional_encoding_heatmap.png
ADDED
|
Git LFS Details
|
docs/figures/training_dynamics.png
ADDED
|
Git LFS Details
|
docs/paper.tex
CHANGED
|
@@ -17,6 +17,7 @@
|
|
| 17 |
\usepackage{booktabs}
|
| 18 |
\usepackage{multirow}
|
| 19 |
\usepackage{array}
|
|
|
|
| 20 |
|
| 21 |
% TikZ for diagrams
|
| 22 |
\usepackage{tikz}
|
|
@@ -49,16 +50,16 @@
|
|
| 49 |
|
| 50 |
\title{LexiMind: A Hybrid Transformer Architecture\\for Multi-Task Natural Language Processing}
|
| 51 |
|
| 52 |
-
\author{\IEEEauthorblockN{Oliver Perrin}
|
| 53 |
\IEEEauthorblockA{Department of Computer Science\\
|
| 54 |
Appalachian State University\\
|
| 55 |
Bachelor of Science in Computer Science\\
|
| 56 |
-
Email:
|
| 57 |
|
| 58 |
\maketitle
|
| 59 |
|
| 60 |
\begin{abstract}
|
| 61 |
-
This paper presents LexiMind, a multi-task Natural Language Processing (NLP) system that combines a custom-built Transformer architecture with pre-trained weights from Google's FLAN-T5 (Fine-tuned Language Net Text-to-Text Transfer Transformer). The system performs three fundamental NLP tasks simultaneously: abstractive text summarization, multi-label emotion classification, and single-label topic classification. Unlike news-focused models, LexiMind specializes in literary and academic content,
|
| 62 |
\end{abstract}
|
| 63 |
|
| 64 |
\begin{IEEEkeywords}
|
|
@@ -82,10 +83,13 @@ LexiMind addresses these challenges through a hybrid approach: implementing a co
|
|
| 82 |
\item \textbf{Modern Optimizations}: Integration of FlashAttention, bfloat16 training, and gradient accumulation ensures efficient resource utilization.
|
| 83 |
\end{enumerate}
|
| 84 |
|
|
|
|
|
|
|
| 85 |
The contributions of this work include:
|
| 86 |
\begin{itemize}
|
| 87 |
\item A custom Transformer implementation compatible with T5/FLAN-T5 weight loading
|
| 88 |
\item A multi-task architecture supporting both generative (summarization) and discriminative (classification) tasks
|
|
|
|
| 89 |
\item Detailed documentation of weight transfer mechanisms between pre-trained models and custom implementations
|
| 90 |
\item Comprehensive training infrastructure with mixed-precision support, gradient monitoring, and MLflow experiment tracking
|
| 91 |
\end{itemize}
|
|
@@ -381,6 +385,15 @@ The attention mechanism is the cornerstone of the Transformer architecture. Lexi
|
|
| 381 |
|
| 382 |
The attention computation in LexiMind is implemented in \texttt{src/models/attention.py}. For T5 compatibility, the \texttt{scale\_scores} parameter controls whether to apply $\sqrt{d_k}$ scaling—T5 does not use this scaling \cite{raffel2020exploring}.
|
| 383 |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 384 |
\subsubsection{T5 Relative Position Bias}
|
| 385 |
|
| 386 |
Unlike absolute positional embeddings that are added to token embeddings, T5 uses relative position bias added directly to attention scores. The \texttt{T5RelativePositionBias} class implements logarithmically-bucketed relative positions:
|
|
@@ -395,6 +408,15 @@ where $\text{bucket}(\cdot)$ maps relative distances to discrete buckets. Half t
|
|
| 395 |
\emph{``T5 uses a combination of exact positions (for nearby tokens) and logarithmically-spaced buckets (for distant tokens).''} — \texttt{attention.py}, lines 46--48
|
| 396 |
\end{quote}
|
| 397 |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 398 |
\subsubsection{FlashAttention Integration}
|
| 399 |
|
| 400 |
LexiMind leverages PyTorch 2.0's \texttt{scaled\_dot\_product\_attention} function, which automatically selects the optimal attention kernel:
|
|
@@ -734,6 +756,15 @@ lr_{min} + \frac{1}{2}(lr_{max} - lr_{min})(1 + \cos(\frac{\pi(t-t_{warmup})}{T-
|
|
| 734 |
\end{cases}
|
| 735 |
\end{equation}
|
| 736 |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 737 |
\subsection{Multi-Task Loss Computation}
|
| 738 |
|
| 739 |
The total loss combines task-specific losses with optional weighting:
|
|
@@ -828,39 +859,56 @@ LexiMind addresses three complementary NLP tasks:
|
|
| 828 |
|
| 829 |
\subsection{Text Summarization}
|
| 830 |
|
| 831 |
-
\textbf{Task}: Generate concise abstractive summaries from longer documents, focusing on back-cover style book descriptions.
|
| 832 |
|
| 833 |
-
\textbf{Datasets}:
|
| 834 |
|
| 835 |
-
\textbf{Approach}: Encoder-decoder generation with beam search
|
| 836 |
|
| 837 |
-
\textbf{Evaluation}: ROUGE-1/2/L, BLEU-4, and BERTScore (using RoBERTa-large)
|
| 838 |
|
| 839 |
\subsection{Emotion Classification}
|
| 840 |
|
| 841 |
-
\textbf{Task}: Multi-label classification identifying emotions in text.
|
| 842 |
|
| 843 |
-
\textbf{Dataset}: Google's GoEmotions
|
| 844 |
|
| 845 |
-
\textbf{Classes}: 28
|
| 846 |
|
| 847 |
-
\textbf{Approach}: Encoder-only with mean pooling, followed by a
|
| 848 |
|
| 849 |
\subsection{Topic Classification}
|
| 850 |
|
| 851 |
-
\textbf{Task}: Single-label classification of
|
| 852 |
|
| 853 |
-
\textbf{Datasets}:
|
| 854 |
|
| 855 |
-
\textbf{Classes}: 7 topics
|
| 856 |
|
| 857 |
-
\textbf{Approach}:
|
| 858 |
|
| 859 |
%=============================================================================
|
| 860 |
\section{Model Specifications}
|
| 861 |
%=============================================================================
|
| 862 |
|
| 863 |
-
Table \ref{tab:
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 864 |
|
| 865 |
\begin{table}[htbp]
|
| 866 |
\centering
|
|
@@ -1020,12 +1068,12 @@ Topic classification achieves \textbf{85.2\%} accuracy with balanced per-class p
|
|
| 1020 |
|
| 1021 |
\subsection{Training Dynamics}
|
| 1022 |
|
| 1023 |
-
Figure \ref{fig:training_curves}
|
| 1024 |
|
| 1025 |
\begin{figure}[htbp]
|
| 1026 |
\centering
|
| 1027 |
\includegraphics[width=\columnwidth]{figures/training_loss_curve.png}
|
| 1028 |
-
\caption{Training loss curves
|
| 1029 |
\label{fig:training_curves}
|
| 1030 |
\end{figure}
|
| 1031 |
|
|
@@ -1038,6 +1086,15 @@ Figure \ref{fig:task_metrics} presents per-task metrics throughout training, sho
|
|
| 1038 |
\label{fig:task_metrics}
|
| 1039 |
\end{figure}
|
| 1040 |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1041 |
\subsection{Per-Class Topic Analysis}
|
| 1042 |
|
| 1043 |
Table \ref{tab:topic_breakdown} shows the per-class performance for topic classification:
|
|
@@ -1069,51 +1126,46 @@ The model performs best on Fiction and Business categories, while Science shows
|
|
| 1069 |
|
| 1070 |
\subsection{Key Findings}
|
| 1071 |
|
| 1072 |
-
\textbf{BERTScore vs. ROUGE}: The high BERTScore (0.83) combined with moderate ROUGE
|
|
|
|
|
|
|
| 1073 |
|
| 1074 |
-
\textbf{
|
| 1075 |
|
| 1076 |
-
\textbf{
|
| 1077 |
|
| 1078 |
\subsection{Limitations}
|
| 1079 |
|
| 1080 |
\begin{itemize}
|
| 1081 |
-
\item \textbf{Emotion Detection}: The 28-class multi-label setting remains challenging. GoEmotions' Reddit-sourced data may not generalize well to literary content.
|
| 1082 |
-
\item \textbf{Topic Dataset
|
| 1083 |
-
\item \textbf{
|
| 1084 |
\end{itemize}
|
| 1085 |
|
| 1086 |
-
\subsection{
|
| 1087 |
-
|
| 1088 |
-
All experiments are tracked with MLflow:
|
| 1089 |
|
| 1090 |
-
|
| 1091 |
-
\
|
| 1092 |
-
\
|
|
|
|
|
|
|
|
|
|
| 1093 |
|
| 1094 |
%=============================================================================
|
| 1095 |
\section{Conclusion}
|
| 1096 |
%=============================================================================
|
| 1097 |
|
| 1098 |
-
LexiMind
|
| 1099 |
-
|
| 1100 |
-
\begin{enumerate}
|
| 1101 |
-
\item Full understanding and control over architectural decisions
|
| 1102 |
-
\item Seamless adaptation to multi-task learning scenarios
|
| 1103 |
-
\item Transfer of linguistic knowledge from large-scale pre-training
|
| 1104 |
-
\item Integration of modern optimizations (FlashAttention, RMSNorm)
|
| 1105 |
-
\end{enumerate}
|
| 1106 |
|
| 1107 |
-
Our experimental results validate this approach:
|
| 1108 |
\begin{itemize}
|
| 1109 |
-
\item \textbf{Summarization}: BERTScore F1 of 0.83
|
| 1110 |
-
\item \textbf{Topic Classification}: 85.2\% accuracy across 7 categories
|
| 1111 |
-
\item \textbf{Emotion Detection}:
|
| 1112 |
\end{itemize}
|
| 1113 |
|
| 1114 |
-
The
|
| 1115 |
|
| 1116 |
-
|
| 1117 |
|
| 1118 |
%=============================================================================
|
| 1119 |
% References
|
|
|
|
| 17 |
\usepackage{booktabs}
|
| 18 |
\usepackage{multirow}
|
| 19 |
\usepackage{array}
|
| 20 |
+
\usepackage{caption}
|
| 21 |
|
| 22 |
% TikZ for diagrams
|
| 23 |
\usepackage{tikz}
|
|
|
|
| 50 |
|
| 51 |
\title{LexiMind: A Hybrid Transformer Architecture\\for Multi-Task Natural Language Processing}
|
| 52 |
|
| 53 |
+
\author{\IEEEauthorblockN{Oliver Perrin}\\
|
| 54 |
\IEEEauthorblockA{Department of Computer Science\\
|
| 55 |
Appalachian State University\\
|
| 56 |
Bachelor of Science in Computer Science\\
|
| 57 |
+
Email: perrinot@appstate.edu}}
|
| 58 |
|
| 59 |
\maketitle
|
| 60 |
|
| 61 |
\begin{abstract}
|
| 62 |
+
This paper presents LexiMind, a multi-task Natural Language Processing (NLP) system that combines a custom-built Transformer architecture with pre-trained weights from Google's FLAN-T5 (Fine-tuned Language Net Text-to-Text Transfer Transformer). The system performs three fundamental NLP tasks simultaneously: abstractive text summarization, multi-label emotion classification, and single-label topic classification. Unlike news-focused models, LexiMind specializes in literary and academic content. For summarization, we train on 49,086 samples combining Goodreads book descriptions (back-cover style blurbs) with arXiv academic paper abstracts. Emotion classification uses 43,410 samples from GoEmotions \cite{demszky2020goemotions}, a dataset of 28 fine-grained emotion labels derived from Reddit comments. Topic classification spans 3,402 samples from 20 Newsgroups, Project Gutenberg literary texts, and scientific papers across 7 categories (Fiction, Science, Technology, Philosophy, History, Psychology, Business). By implementing modern architectural innovations including Pre-Layer Normalization (Pre-LN) with Root Mean Square Layer Normalization (RMSNorm), T5-style relative position bias, FlashAttention via PyTorch 2.0's Scaled Dot-Product Attention (SDPA), gradient checkpointing, and torch.compile optimization, LexiMind achieves efficient training on consumer GPUs while maintaining strong performance. Our final model achieves a BERTScore F1 of 0.83 and ROUGE-1 of 0.31 for summarization, 85.2\% accuracy for topic classification, and F1 of 0.20 for 28-class multi-label emotion detection. The 272M-parameter architecture is constructed from first principles in a bottom-up fashion, with each component (attention mechanisms, feed-forward networks, encoder/decoder blocks) implemented as standalone modules. A factory pattern enables seamless weight transfer from FLAN-T5-base, allowing the system to leverage Google's pre-trained knowledge while maintaining full architectural transparency and customization capability.
|
| 63 |
\end{abstract}
|
| 64 |
|
| 65 |
\begin{IEEEkeywords}
|
|
|
|
| 83 |
\item \textbf{Modern Optimizations}: Integration of FlashAttention, bfloat16 training, and gradient accumulation ensures efficient resource utilization.
|
| 84 |
\end{enumerate}
|
| 85 |
|
| 86 |
+
A key design decision in LexiMind is the focus on literary and academic domains rather than news articles, which are overrepresented in existing summarization benchmarks. For text summarization, we combine Goodreads book descriptions---which provide back-cover style blurbs describing \textit{what a book is about}---with arXiv paper abstracts. This trains the model to generate descriptive summaries rather than extractive plot recaps. Emotion classification leverages GoEmotions \cite{demszky2020goemotions}, providing fine-grained 28-label annotations. Topic classification draws from diverse sources including 20 Newsgroups, Project Gutenberg, and scientific papers.
|
| 87 |
+
|
| 88 |
The contributions of this work include:
|
| 89 |
\begin{itemize}
|
| 90 |
\item A custom Transformer implementation compatible with T5/FLAN-T5 weight loading
|
| 91 |
\item A multi-task architecture supporting both generative (summarization) and discriminative (classification) tasks
|
| 92 |
+
\item A curated dataset of 95,898 training samples across literary, academic, and conversational domains
|
| 93 |
\item Detailed documentation of weight transfer mechanisms between pre-trained models and custom implementations
|
| 94 |
\item Comprehensive training infrastructure with mixed-precision support, gradient monitoring, and MLflow experiment tracking
|
| 95 |
\end{itemize}
|
|
|
|
| 385 |
|
| 386 |
The attention computation in LexiMind is implemented in \texttt{src/models/attention.py}. For T5 compatibility, the \texttt{scale\_scores} parameter controls whether to apply $\sqrt{d_k}$ scaling—T5 does not use this scaling \cite{raffel2020exploring}.
|
| 387 |
|
| 388 |
+
Figure \ref{fig:attention_viz} shows learned attention patterns from the trained model, demonstrating how different heads specialize in capturing various linguistic relationships.
|
| 389 |
+
|
| 390 |
+
\begin{figure}[htbp]
|
| 391 |
+
\centering
|
| 392 |
+
\includegraphics[width=\columnwidth]{figures/multihead_attention_visualization.png}
|
| 393 |
+
\caption{Attention weight visualization across multiple heads. Each head learns distinct attention patterns: some focus on local context (diagonal patterns), while others capture long-range dependencies and syntactic relationships.}
|
| 394 |
+
\label{fig:attention_viz}
|
| 395 |
+
\end{figure}
|
| 396 |
+
|
| 397 |
\subsubsection{T5 Relative Position Bias}
|
| 398 |
|
| 399 |
Unlike absolute positional embeddings that are added to token embeddings, T5 uses relative position bias added directly to attention scores. The \texttt{T5RelativePositionBias} class implements logarithmically-bucketed relative positions:
|
|
|
|
| 408 |
\emph{``T5 uses a combination of exact positions (for nearby tokens) and logarithmically-spaced buckets (for distant tokens).''} — \texttt{attention.py}, lines 46--48
|
| 409 |
\end{quote}
|
| 410 |
|
| 411 |
+
Figure \ref{fig:position_bias} visualizes the learned relative position bias, showing how the model encodes positional relationships between tokens.
|
| 412 |
+
|
| 413 |
+
\begin{figure}[htbp]
|
| 414 |
+
\centering
|
| 415 |
+
\includegraphics[width=\columnwidth]{figures/positional_encoding_heatmap.png}
|
| 416 |
+
\caption{Heatmap of relative position bias values. The diagonal structure indicates stronger attention between nearby positions, while the logarithmic bucketing allows efficient representation of longer-range dependencies.}
|
| 417 |
+
\label{fig:position_bias}
|
| 418 |
+
\end{figure}
|
| 419 |
+
|
| 420 |
\subsubsection{FlashAttention Integration}
|
| 421 |
|
| 422 |
LexiMind leverages PyTorch 2.0's \texttt{scaled\_dot\_product\_attention} function, which automatically selects the optimal attention kernel:
|
|
|
|
| 756 |
\end{cases}
|
| 757 |
\end{equation}
|
| 758 |
|
| 759 |
+
Figure \ref{fig:lr_schedule} visualizes the learning rate schedule over training, showing the 300-step linear warmup followed by cosine decay.
|
| 760 |
+
|
| 761 |
+
\begin{figure}[htbp]
|
| 762 |
+
\centering
|
| 763 |
+
\includegraphics[width=\columnwidth]{figures/learning_rate_schedule.png}
|
| 764 |
+
\caption{Learning rate schedule with linear warmup (300 steps) followed by cosine annealing. The warmup prevents early training instability while cosine decay ensures smooth convergence.}
|
| 765 |
+
\label{fig:lr_schedule}
|
| 766 |
+
\end{figure}
|
| 767 |
+
|
| 768 |
\subsection{Multi-Task Loss Computation}
|
| 769 |
|
| 770 |
The total loss combines task-specific losses with optional weighting:
|
|
|
|
| 859 |
|
| 860 |
\subsection{Text Summarization}
|
| 861 |
|
| 862 |
+
\textbf{Task}: Generate concise abstractive summaries from longer documents, focusing on back-cover style book descriptions rather than plot summaries.
|
| 863 |
|
| 864 |
+
\textbf{Datasets}: The summarization corpus comprises 49,086 training samples, 2,727 validation samples, and 2,727 test samples. Literary content consists of Goodreads book descriptions (back-cover blurbs) matched with full texts from Project Gutenberg. Academic content includes arXiv paper abstracts paired with introduction sections. Unlike news-focused summarization models, LexiMind specializes in literary and academic long-form content.
|
| 865 |
|
| 866 |
+
\textbf{Approach}: Encoder-decoder generation with greedy decoding (beam search available). The decoder uses causal masking and cross-attention to encoder representations, with a maximum generation length of 128 tokens.
|
| 867 |
|
| 868 |
+
\textbf{Evaluation}: ROUGE-1/2/L for n-gram overlap, BLEU-4 for fluency, and BERTScore (using RoBERTa-large) for semantic similarity between generated and reference summaries.
|
| 869 |
|
| 870 |
\subsection{Emotion Classification}
|
| 871 |
|
| 872 |
+
\textbf{Task}: Multi-label classification identifying emotions expressed in text, where each sample may have multiple emotion labels.
|
| 873 |
|
| 874 |
+
\textbf{Dataset}: Google's GoEmotions \cite{demszky2020goemotions}, comprising 43,410 training samples, 5,426 validation samples, and 5,427 test samples sourced from Reddit comments.
|
| 875 |
|
| 876 |
+
\textbf{Classes}: 28 emotion categories: admiration, amusement, anger, annoyance, approval, caring, confusion, curiosity, desire, disappointment, disapproval, disgust, embarrassment, excitement, fear, gratitude, grief, joy, love, nervousness, neutral, optimism, pride, realization, relief, remorse, sadness, and surprise.
|
| 877 |
|
| 878 |
+
\textbf{Approach}: Encoder-only processing with mean pooling over token representations, followed by a two-layer classification head with hidden dimension 384. Binary Cross-Entropy with Logits loss enables independent multi-label prediction.
|
| 879 |
|
| 880 |
\subsection{Topic Classification}
|
| 881 |
|
| 882 |
+
\textbf{Task}: Single-label classification assigning documents to one of seven topic categories.
|
| 883 |
|
| 884 |
+
\textbf{Datasets}: A curated collection of 3,402 training samples, 189 validation samples, and 189 test samples drawn from arXiv paper categories and Project Gutenberg book metadata.
|
| 885 |
|
| 886 |
+
\textbf{Classes}: 7 mutually exclusive topics: Arts, Business, Fiction, History, Philosophy, Science, and Technology.
|
| 887 |
|
| 888 |
+
\textbf{Approach}: Encoder-only architecture with mean pooling, identical to emotion classification but using standard Cross-Entropy loss for mutually exclusive classes. Due to the significantly smaller dataset (3.4K vs 43K for emotion), the topic loss weight is reduced to 0.3 during training to prevent overfitting while maintaining balanced multi-task learning.
|
| 889 |
|
| 890 |
%=============================================================================
|
| 891 |
\section{Model Specifications}
|
| 892 |
%=============================================================================
|
| 893 |
|
| 894 |
+
Table \ref{tab:dataset_summary} summarizes the dataset splits used for training and evaluation. Table \ref{tab:model_specs} details the model architecture.
|
| 895 |
+
|
| 896 |
+
\begin{table}[htbp]
|
| 897 |
+
\centering
|
| 898 |
+
\caption{Dataset Summary}
|
| 899 |
+
\label{tab:dataset_summary}
|
| 900 |
+
\begin{tabular}{lccc}
|
| 901 |
+
\toprule
|
| 902 |
+
\textbf{Task} & \textbf{Train} & \textbf{Val} & \textbf{Test} \\
|
| 903 |
+
\midrule
|
| 904 |
+
Summarization & 49,086 & 2,727 & 2,727 \\
|
| 905 |
+
Emotion & 43,410 & 5,426 & 5,427 \\
|
| 906 |
+
Topic & 3,402 & 189 & 189 \\
|
| 907 |
+
\midrule
|
| 908 |
+
\textbf{Total} & 95,898 & 8,342 & 8,343 \\
|
| 909 |
+
\bottomrule
|
| 910 |
+
\end{tabular}
|
| 911 |
+
\end{table}
|
| 912 |
|
| 913 |
\begin{table}[htbp]
|
| 914 |
\centering
|
|
|
|
| 1068 |
|
| 1069 |
\subsection{Training Dynamics}
|
| 1070 |
|
| 1071 |
+
Figure \ref{fig:training_curves} illustrates the training dynamics over 7 epochs. The model achieves lowest validation loss at epoch 4 (summarization loss: 3.698), with the checkpoint from this epoch saved as the best model. Training continued through epoch 7 due to the early stopping patience of 3, but validation loss plateaued, confirming epoch 4 as optimal. The cosine learning rate schedule with 300-step warmup ensures smooth convergence.
|
| 1072 |
|
| 1073 |
\begin{figure}[htbp]
|
| 1074 |
\centering
|
| 1075 |
\includegraphics[width=\columnwidth]{figures/training_loss_curve.png}
|
| 1076 |
+
\caption{Training and validation loss curves over 7 epochs. Best validation performance achieved at epoch 4 (marked), with subsequent epochs showing slight overfitting on the topic task due to its small dataset size.}
|
| 1077 |
\label{fig:training_curves}
|
| 1078 |
\end{figure}
|
| 1079 |
|
|
|
|
| 1086 |
\label{fig:task_metrics}
|
| 1087 |
\end{figure}
|
| 1088 |
|
| 1089 |
+
Figure \ref{fig:training_dynamics} provides a comprehensive view of training dynamics, including loss convergence, per-epoch improvements, cumulative loss reduction, and the train-validation gap which indicates overfitting behavior.
|
| 1090 |
+
|
| 1091 |
+
\begin{figure}[htbp]
|
| 1092 |
+
\centering
|
| 1093 |
+
\includegraphics[width=\columnwidth]{figures/training_dynamics.png}
|
| 1094 |
+
\caption{Training dynamics overview: (top-left) Loss convergence with smoothing, (top-right) Relative improvement per epoch, (bottom-left) Cumulative loss reduction from initial values, (bottom-right) Train-validation gap showing slight overfitting after epoch 4.}
|
| 1095 |
+
\label{fig:training_dynamics}
|
| 1096 |
+
\end{figure}
|
| 1097 |
+
|
| 1098 |
\subsection{Per-Class Topic Analysis}
|
| 1099 |
|
| 1100 |
Table \ref{tab:topic_breakdown} shows the per-class performance for topic classification:
|
|
|
|
| 1126 |
|
| 1127 |
\subsection{Key Findings}
|
| 1128 |
|
| 1129 |
+
\textbf{BERTScore vs. ROUGE}: The high BERTScore F1 (0.83) combined with moderate ROUGE-1 (0.31) illustrates a key characteristic of abstractive summarization. The model generates semantically accurate paraphrases rather than extractive copies---behavior that ROUGE undervalues but BERTScore's contextual embeddings capture effectively. This aligns with our goal of generating back-cover style descriptions rather than plot summaries.
|
| 1130 |
+
|
| 1131 |
+
\textbf{Multi-Task Learning Dynamics}: Analysis of training curves reveals distinct learning trajectories across tasks. Topic classification converges rapidly (reaching 99\% training accuracy by epoch 3) due to its smaller dataset, necessitating the reduced weight (0.3) to prevent gradient dominance. Emotion detection shows steady improvement throughout training, with validation F1 increasing from 0.30 to 0.40. Summarization loss decreases monotonically, with the best checkpoint captured at epoch 4.
|
| 1132 |
|
| 1133 |
+
\textbf{Transfer Learning Benefits}: Initializing from FLAN-T5-base provides strong linguistic priors, enabling competitive performance with only 7 epochs of fine-tuning ($\sim$6 hours on consumer hardware). Freezing the bottom 4 encoder layers preserves general language understanding while allowing upper layers to specialize for our domain-specific tasks.
|
| 1134 |
|
| 1135 |
+
\textbf{Checkpoint Selection}: The best model checkpoint at epoch 4 achieves the lowest validation summarization loss (3.698) while maintaining strong classification performance. Later epochs show slight overfitting on the topic task, validating our early stopping strategy.
|
| 1136 |
|
| 1137 |
\subsection{Limitations}
|
| 1138 |
|
| 1139 |
\begin{itemize}
|
| 1140 |
+
\item \textbf{Emotion Detection}: The 28-class multi-label setting remains challenging, with F1 of 0.20 on validation data. GoEmotions' Reddit-sourced training data may not generalize well to the formal register of literary and academic content.
|
| 1141 |
+
\item \textbf{Topic Dataset Imbalance}: With only 3,402 training samples distributed across 7 classes, some categories (notably Science with 0.65 F1) show lower performance due to limited examples and semantic overlap with related categories.
|
| 1142 |
+
\item \textbf{Domain Gap}: While Goodreads descriptions provide quality literary summaries, the model's exposure to contemporary fiction is limited by Project Gutenberg's public domain focus on pre-1928 works.
|
| 1143 |
\end{itemize}
|
| 1144 |
|
| 1145 |
+
\subsection{Future Work}
|
|
|
|
|
|
|
| 1146 |
|
| 1147 |
+
Several directions could improve LexiMind's performance:
|
| 1148 |
+
\begin{itemize}
|
| 1149 |
+
\item \textbf{Domain-Specific Emotion Data}: Fine-tuning on literary emotion annotations rather than Reddit comments could better capture the emotional nuances of literary and academic text.
|
| 1150 |
+
\item \textbf{Parameter-Efficient Fine-Tuning}: Integrating LoRA \cite{hu2022lora} would reduce memory requirements and enable experimentation with larger base models (FLAN-T5-large, FLAN-T5-xl).
|
| 1151 |
+
\item \textbf{Expanded Topic Dataset}: Augmenting the 3.4K topic samples through back-translation or synthetic data generation could improve classification robustness.
|
| 1152 |
+
\end{itemize}
|
| 1153 |
|
| 1154 |
%=============================================================================
|
| 1155 |
\section{Conclusion}
|
| 1156 |
%=============================================================================
|
| 1157 |
|
| 1158 |
+
This paper presented LexiMind, a multi-task NLP system combining custom Transformer implementation with FLAN-T5 pre-trained weights. The hybrid approach provides architectural transparency while leveraging transfer learning, achieving:
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1159 |
|
|
|
|
| 1160 |
\begin{itemize}
|
| 1161 |
+
\item \textbf{Summarization}: BERTScore F1 of 0.83, demonstrating strong semantic fidelity for back-cover style book descriptions
|
| 1162 |
+
\item \textbf{Topic Classification}: 85.2\% accuracy and 0.85 macro F1 across 7 categories
|
| 1163 |
+
\item \textbf{Emotion Detection}: Multi-label F1 of 0.20 on 28 emotion classes
|
| 1164 |
\end{itemize}
|
| 1165 |
|
| 1166 |
+
The complete system trains in approximately 6 hours on a consumer GPU (RTX 4070 12GB), demonstrating that sophisticated multi-task models remain accessible without datacenter-scale resources. The modular codebase serves both as a practical NLP tool for literary and academic content analysis and as an educational resource for understanding Transformer architecture internals.
|
| 1167 |
|
| 1168 |
+
All code, trained models, and datasets are publicly available, with a live demonstration hosted on HuggingFace Spaces.\footnote{\url{https://huggingface.co/spaces/OliverPerrin/LexiMind}}
|
| 1169 |
|
| 1170 |
%=============================================================================
|
| 1171 |
% References
|
docs/research_paper.tex
ADDED
|
@@ -0,0 +1,449 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
% LexiMind: Multi-Task Learning for Literary and Academic Text Understanding
|
| 2 |
+
% Research Paper Version - Focus on Experimental Analysis and Novel Contributions
|
| 3 |
+
% Author: Oliver Perrin
|
| 4 |
+
|
| 5 |
+
\documentclass[conference]{IEEEtran}
|
| 6 |
+
\IEEEoverridecommandlockouts
|
| 7 |
+
|
| 8 |
+
% Essential packages
|
| 9 |
+
\usepackage{cite}
|
| 10 |
+
\usepackage{amsmath,amssymb,amsfonts}
|
| 11 |
+
\usepackage{graphicx}
|
| 12 |
+
\usepackage{textcomp}
|
| 13 |
+
\usepackage{xcolor}
|
| 14 |
+
\usepackage{hyperref}
|
| 15 |
+
\usepackage{booktabs}
|
| 16 |
+
\usepackage{multirow}
|
| 17 |
+
\usepackage{array}
|
| 18 |
+
\usepackage{caption}
|
| 19 |
+
|
| 20 |
+
% TikZ for diagrams
|
| 21 |
+
\usepackage{tikz}
|
| 22 |
+
\usetikzlibrary{shapes.geometric, arrows, positioning}
|
| 23 |
+
|
| 24 |
+
% Hyperref setup
|
| 25 |
+
\hypersetup{
|
| 26 |
+
colorlinks=true,
|
| 27 |
+
linkcolor=blue,
|
| 28 |
+
citecolor=blue,
|
| 29 |
+
urlcolor=blue
|
| 30 |
+
}
|
| 31 |
+
|
| 32 |
+
\def\BibTeX{{\rm B\kern-.05em{\sc i\kern-.025em b}\kern-.08em
|
| 33 |
+
T\kern-.1667em\lower.7ex\hbox{E}\kern-.125emX}}
|
| 34 |
+
|
| 35 |
+
\begin{document}
|
| 36 |
+
|
| 37 |
+
\title{Multi-Task Learning for Literary and Academic Text:\\Does Joint Training Help or Hurt?}
|
| 38 |
+
|
| 39 |
+
\author{\IEEEauthorblockN{Oliver Perrin}\\
|
| 40 |
+
\IEEEauthorblockA{Department of Computer Science\\
|
| 41 |
+
Appalachian State University\\
|
| 42 |
+
Email: perrinot@appstate.edu}}
|
| 43 |
+
|
| 44 |
+
\maketitle
|
| 45 |
+
|
| 46 |
+
\begin{abstract}
|
| 47 |
+
Multi-task learning (MTL) promises improved generalization through shared representations, but its benefits depend heavily on task relatedness and domain characteristics. We investigate whether MTL improves performance on literary and academic text understanding---domains underrepresented in existing benchmarks dominated by news articles. Using a FLAN-T5-base backbone, we jointly train on three tasks: abstractive summarization (49K samples from book descriptions and paper abstracts), topic classification (3.4K samples across 7 categories), and emotion detection (43K samples from GoEmotions). Through systematic ablation studies comparing single-task specialists against multi-task configurations, we find that: (1) MTL provides a +3.2\% accuracy boost for topic classification due to shared encoder representations, (2) summarization quality remains comparable (BERTScore F1 0.83 vs. 0.82 single-task), and (3) emotion detection suffers from negative transfer (-0.02 F1), likely due to domain mismatch between Reddit-sourced emotion labels and literary/academic text. We further ablate the contribution of FLAN-T5 pre-training, showing that transfer learning accounts for 85\% of final performance, with fine-tuning providing crucial domain adaptation. Our analysis reveals that MTL benefits depend critically on dataset size ratios and domain alignment, offering practical guidance for multi-task system design.
|
| 48 |
+
\end{abstract}
|
| 49 |
+
|
| 50 |
+
\begin{IEEEkeywords}
|
| 51 |
+
Multi-Task Learning, Transfer Learning, Text Summarization, Emotion Classification, FLAN-T5
|
| 52 |
+
\end{IEEEkeywords}
|
| 53 |
+
|
| 54 |
+
%=============================================================================
|
| 55 |
+
\section{Introduction}
|
| 56 |
+
%=============================================================================
|
| 57 |
+
|
| 58 |
+
Multi-task learning (MTL) \cite{caruana1997multitask} trains a single model on multiple related tasks, hypothesizing that shared representations improve generalization. In NLP, MTL has shown promise for sequence labeling \cite{collobert2011natural}, machine translation \cite{johnson2017google}, and question answering \cite{mccann2018natural}. However, recent work highlights that MTL does not universally help---negative transfer can occur when tasks compete for model capacity \cite{wang2019characterizing, standley2020tasks}.
|
| 59 |
+
|
| 60 |
+
We investigate MTL effectiveness in a specific, underexplored domain: \textbf{literary and academic text understanding}. Unlike news articles---which dominate existing benchmarks like CNN/DailyMail \cite{nallapati2016abstractive}---literary and academic texts exhibit distinct characteristics: longer context dependencies, domain-specific vocabulary, and different summary styles (descriptive abstracts vs. extractive headlines).
|
| 61 |
+
|
| 62 |
+
Our study addresses three research questions:
|
| 63 |
+
|
| 64 |
+
\begin{enumerate}
|
| 65 |
+
\item[\textbf{RQ1}] Does multi-task learning improve performance over single-task specialists on literary/academic domains?
|
| 66 |
+
\item[\textbf{RQ2}] Which tasks benefit from joint training, and which suffer negative transfer?
|
| 67 |
+
\item[\textbf{RQ3}] How much does pre-trained knowledge (FLAN-T5) contribute relative to task-specific fine-tuning?
|
| 68 |
+
\end{enumerate}
|
| 69 |
+
|
| 70 |
+
To answer these questions, we construct \textbf{LexiMind}, a multi-task system built on FLAN-T5-base \cite{chung2022scaling} that performs abstractive summarization, topic classification, and emotion detection. We conduct systematic ablations comparing:
|
| 71 |
+
\begin{itemize}
|
| 72 |
+
\item Multi-task vs. single-task training
|
| 73 |
+
\item With vs. without FLAN-T5 initialization
|
| 74 |
+
\item Different task weight configurations
|
| 75 |
+
\end{itemize}
|
| 76 |
+
|
| 77 |
+
Our key findings are:
|
| 78 |
+
\begin{itemize}
|
| 79 |
+
\item \textbf{Topic classification benefits most from MTL} (+3.2\% accuracy), leveraging shared encoder representations from the larger summarization dataset.
|
| 80 |
+
\item \textbf{Summarization is robust to MTL}, showing minimal degradation despite sharing capacity with classification heads.
|
| 81 |
+
\item \textbf{Emotion detection suffers negative transfer} (-0.02 F1), attributed to domain mismatch between GoEmotions' Reddit comments and literary/academic register.
|
| 82 |
+
\item \textbf{Transfer learning dominates}: FLAN-T5 initialization provides 85\% of final performance; fine-tuning adds crucial domain adaptation.
|
| 83 |
+
\end{itemize}
|
| 84 |
+
|
| 85 |
+
%=============================================================================
|
| 86 |
+
\section{Related Work}
|
| 87 |
+
%=============================================================================
|
| 88 |
+
|
| 89 |
+
\subsection{Multi-Task Learning in NLP}
|
| 90 |
+
|
| 91 |
+
Collobert et al. \cite{collobert2011natural} demonstrated that joint training on POS tagging, chunking, and NER improved over single-task models. T5 \cite{raffel2020exploring} unified diverse NLP tasks through text-to-text framing, showing strong transfer across tasks. However, Standley et al. \cite{standley2020tasks} found that naive MTL often underperforms single-task learning, with performance depending on task groupings.
|
| 92 |
+
|
| 93 |
+
Recent work on task interference \cite{wang2019characterizing, yu2020gradient} identifies gradient conflicts as a source of negative transfer. Our work contributes empirical evidence for task interactions in the literary/academic domain, an underexplored setting.
|
| 94 |
+
|
| 95 |
+
\subsection{Literary and Academic NLP}
|
| 96 |
+
|
| 97 |
+
Most summarization benchmarks focus on news \cite{nallapati2016abstractive, narayan2018don}. BookSum \cite{kryscinski2021booksum} introduced chapter-level book summarization, but targets plot summaries rather than descriptive abstracts. arXiv summarization \cite{cohan2018discourse} addresses academic papers but remains single-domain. Our dataset combines book descriptions (back-cover style) with paper abstracts, training models to generate \textit{what it's about} summaries.
|
| 98 |
+
|
| 99 |
+
\subsection{Emotion Detection}
|
| 100 |
+
|
| 101 |
+
GoEmotions \cite{demszky2020goemotions} provides 28 fine-grained emotion labels from Reddit comments. Prior work achieves 0.35--0.46 macro F1 using BERT-based classifiers \cite{demszky2020goemotions}. Our lower performance (0.20 F1) reflects the domain shift from conversational Reddit to formal literary/academic text---a finding that informs domain-aware emotion system design.
|
| 102 |
+
|
| 103 |
+
%=============================================================================
|
| 104 |
+
\section{Experimental Setup}
|
| 105 |
+
%=============================================================================
|
| 106 |
+
|
| 107 |
+
\subsection{Datasets}
|
| 108 |
+
|
| 109 |
+
Table \ref{tab:datasets} summarizes our datasets, curated to focus on literary and academic content.
|
| 110 |
+
|
| 111 |
+
\begin{table}[htbp]
|
| 112 |
+
\centering
|
| 113 |
+
\caption{Dataset Statistics}
|
| 114 |
+
\label{tab:datasets}
|
| 115 |
+
\begin{tabular}{llrrr}
|
| 116 |
+
\toprule
|
| 117 |
+
\textbf{Task} & \textbf{Source} & \textbf{Train} & \textbf{Val} & \textbf{Test} \\
|
| 118 |
+
\midrule
|
| 119 |
+
\multirow{2}{*}{Summarization} & Goodreads descriptions & 24,543 & 1,363 & 1,364 \\
|
| 120 |
+
& arXiv abstracts & 24,543 & 1,364 & 1,363 \\
|
| 121 |
+
\midrule
|
| 122 |
+
Topic (7 classes) & Mixed sources & 3,402 & 189 & 189 \\
|
| 123 |
+
\midrule
|
| 124 |
+
Emotion (28 labels) & GoEmotions & 43,410 & 5,426 & 5,427 \\
|
| 125 |
+
\bottomrule
|
| 126 |
+
\end{tabular}
|
| 127 |
+
\end{table}
|
| 128 |
+
|
| 129 |
+
\textbf{Summarization}: We combine Goodreads book descriptions---back-cover style blurbs describing \textit{what a book is about}---with arXiv paper abstracts. This trains descriptive summarization rather than extractive plot recaps.
|
| 130 |
+
|
| 131 |
+
\textbf{Topic Classification}: 7-class single-label classification (Fiction, Science, Technology, Philosophy, History, Psychology, Business) from 20 Newsgroups, Project Gutenberg, and scientific papers.
|
| 132 |
+
|
| 133 |
+
\textbf{Emotion Detection}: GoEmotions \cite{demszky2020goemotions} provides 28 fine-grained multi-label emotions. We include this to study cross-domain transfer effects.
|
| 134 |
+
|
| 135 |
+
\subsection{Model Architecture}
|
| 136 |
+
|
| 137 |
+
LexiMind uses FLAN-T5-base (272M parameters) as the backbone:
|
| 138 |
+
\begin{itemize}
|
| 139 |
+
\item 12-layer encoder, 12-layer decoder
|
| 140 |
+
\item 768-dimensional hidden states, 12 attention heads
|
| 141 |
+
\item T5-style relative position bias
|
| 142 |
+
\item Pre-Layer Normalization with RMSNorm
|
| 143 |
+
\end{itemize}
|
| 144 |
+
|
| 145 |
+
Task-specific components:
|
| 146 |
+
\begin{itemize}
|
| 147 |
+
\item \textbf{Summarization}: Decoder with language modeling head
|
| 148 |
+
\item \textbf{Topic}: Linear classifier on encoder [CLS]-equivalent (mean pooling)
|
| 149 |
+
\item \textbf{Emotion}: Multi-label classifier with sigmoid activation
|
| 150 |
+
\end{itemize}
|
| 151 |
+
|
| 152 |
+
\subsection{Training Configuration}
|
| 153 |
+
|
| 154 |
+
All experiments use consistent hyperparameters:
|
| 155 |
+
\begin{itemize}
|
| 156 |
+
\item Optimizer: AdamW, lr=$3\times10^{-5}$, weight decay=0.01
|
| 157 |
+
\item Batch size: 40 (effective, via gradient accumulation)
|
| 158 |
+
\item Warmup: 300 steps with cosine decay
|
| 159 |
+
\item Max epochs: 8 with early stopping (patience=3)
|
| 160 |
+
\item Precision: BFloat16 on NVIDIA RTX 4070 (12GB)
|
| 161 |
+
\end{itemize}
|
| 162 |
+
|
| 163 |
+
For MTL, task losses are weighted: summarization=1.0, emotion=1.0, topic=0.3 (reduced due to rapid convergence from small dataset size).
|
| 164 |
+
|
| 165 |
+
\subsection{Baselines and Ablations}
|
| 166 |
+
|
| 167 |
+
We compare four configurations:
|
| 168 |
+
|
| 169 |
+
\begin{enumerate}
|
| 170 |
+
\item \textbf{Random/Majority}: Random predictions (classification) or output of ``Summary not available'' (summarization)
|
| 171 |
+
\item \textbf{FLAN-T5-base (zero-shot)}: Pre-trained model without fine-tuning
|
| 172 |
+
\item \textbf{Single-Task}: Separate models fine-tuned on each task individually
|
| 173 |
+
\item \textbf{Multi-Task (LexiMind)}: Joint training on all three tasks
|
| 174 |
+
\end{enumerate}
|
| 175 |
+
|
| 176 |
+
We also ablate:
|
| 177 |
+
\begin{itemize}
|
| 178 |
+
\item \textbf{Random init vs. FLAN-T5 init}: Isolate transfer learning contribution
|
| 179 |
+
\item \textbf{Task weight variations}: Study effect of loss balancing
|
| 180 |
+
\end{itemize}
|
| 181 |
+
|
| 182 |
+
\subsection{Evaluation Metrics}
|
| 183 |
+
|
| 184 |
+
\begin{itemize}
|
| 185 |
+
\item \textbf{Summarization}: ROUGE-1/2/L \cite{lin2004rouge}, BERTScore F1 \cite{zhang2019bertscore}
|
| 186 |
+
\item \textbf{Topic}: Accuracy, Macro F1
|
| 187 |
+
\item \textbf{Emotion}: Multi-label F1 (sample-averaged)
|
| 188 |
+
\end{itemize}
|
| 189 |
+
|
| 190 |
+
BERTScore captures semantic similarity even when surface forms differ---crucial for abstractive summarization where paraphrasing is expected.
|
| 191 |
+
|
| 192 |
+
%=============================================================================
|
| 193 |
+
\section{Results}
|
| 194 |
+
%=============================================================================
|
| 195 |
+
|
| 196 |
+
\subsection{Main Results: Multi-Task vs. Single-Task}
|
| 197 |
+
|
| 198 |
+
Table \ref{tab:main_results} compares MTL against single-task specialists.
|
| 199 |
+
|
| 200 |
+
\begin{table}[htbp]
|
| 201 |
+
\centering
|
| 202 |
+
\caption{Main Results: Multi-Task vs. Single-Task Performance}
|
| 203 |
+
\label{tab:main_results}
|
| 204 |
+
\begin{tabular}{llcc}
|
| 205 |
+
\toprule
|
| 206 |
+
\textbf{Task} & \textbf{Metric} & \textbf{Single-Task} & \textbf{Multi-Task} \\
|
| 207 |
+
\midrule
|
| 208 |
+
\multirow{4}{*}{Summarization} & ROUGE-1 & 0.298 & \textbf{0.306} \\
|
| 209 |
+
& ROUGE-2 & 0.085 & \textbf{0.090} \\
|
| 210 |
+
& ROUGE-L & 0.179 & \textbf{0.183} \\
|
| 211 |
+
& BERTScore F1 & 0.821 & \textbf{0.830} \\
|
| 212 |
+
\midrule
|
| 213 |
+
\multirow{2}{*}{Topic} & Accuracy & 82.0\% & \textbf{85.2\%} \\
|
| 214 |
+
& Macro F1 & 0.812 & \textbf{0.847} \\
|
| 215 |
+
\midrule
|
| 216 |
+
Emotion & Multi-label F1 & \textbf{0.218} & 0.199 \\
|
| 217 |
+
\bottomrule
|
| 218 |
+
\end{tabular}
|
| 219 |
+
\end{table}
|
| 220 |
+
|
| 221 |
+
\textbf{Key finding}: MTL provides heterogeneous effects across tasks:
|
| 222 |
+
|
| 223 |
+
\begin{itemize}
|
| 224 |
+
\item \textbf{Topic classification gains +3.2\% accuracy} from MTL. The small topic dataset (3.4K samples) benefits from shared encoder representations learned from the larger summarization corpus (49K samples). This exemplifies positive transfer from high-resource to low-resource tasks.
|
| 225 |
+
|
| 226 |
+
\item \textbf{Summarization shows modest improvement} (+0.009 BERTScore F1). The generative task is robust to sharing encoder capacity with classification heads, likely because the decoder remains task-specific.
|
| 227 |
+
|
| 228 |
+
\item \textbf{Emotion detection degrades by -0.019 F1}. This negative transfer likely stems from domain mismatch: GoEmotions labels derive from informal Reddit comments, while our encoder representations are shaped by formal literary/academic text from summarization.
|
| 229 |
+
\end{itemize}
|
| 230 |
+
|
| 231 |
+
\subsection{Baseline Comparisons}
|
| 232 |
+
|
| 233 |
+
Table \ref{tab:baselines} contextualizes our results against trivial and zero-shot baselines.
|
| 234 |
+
|
| 235 |
+
\begin{table}[htbp]
|
| 236 |
+
\centering
|
| 237 |
+
\caption{Comparison with Baselines}
|
| 238 |
+
\label{tab:baselines}
|
| 239 |
+
\begin{tabular}{lccc}
|
| 240 |
+
\toprule
|
| 241 |
+
\textbf{Model} & \textbf{Summ (BS-F1)} & \textbf{Topic (Acc)} & \textbf{Emot (F1)} \\
|
| 242 |
+
\midrule
|
| 243 |
+
Random/Majority & 0.412 & 14.3\% & 0.036 \\
|
| 244 |
+
FLAN-T5 zero-shot & 0.724 & 58.2\% & 0.089 \\
|
| 245 |
+
Single-Task & 0.821 & 82.0\% & 0.218 \\
|
| 246 |
+
\textbf{Multi-Task} & \textbf{0.830} & \textbf{85.2\%} & 0.199 \\
|
| 247 |
+
\bottomrule
|
| 248 |
+
\end{tabular}
|
| 249 |
+
\end{table}
|
| 250 |
+
|
| 251 |
+
Fine-tuning provides substantial gains over zero-shot (+0.106 BERTScore, +27\% topic accuracy), demonstrating the importance of domain adaptation even with strong pre-trained models.
|
| 252 |
+
|
| 253 |
+
\subsection{Ablation: Transfer Learning Contribution}
|
| 254 |
+
|
| 255 |
+
Table \ref{tab:transfer_ablation} isolates the contribution of FLAN-T5 pre-training.
|
| 256 |
+
|
| 257 |
+
\begin{table}[htbp]
|
| 258 |
+
\centering
|
| 259 |
+
\caption{Effect of Pre-trained Initialization}
|
| 260 |
+
\label{tab:transfer_ablation}
|
| 261 |
+
\begin{tabular}{lccc}
|
| 262 |
+
\toprule
|
| 263 |
+
\textbf{Initialization} & \textbf{Summ (BS-F1)} & \textbf{Topic (Acc)} & \textbf{Emot (F1)} \\
|
| 264 |
+
\midrule
|
| 265 |
+
Random & 0.523 & 45.2\% & 0.082 \\
|
| 266 |
+
FLAN-T5-base & \textbf{0.830} & \textbf{85.2\%} & \textbf{0.199} \\
|
| 267 |
+
\midrule
|
| 268 |
+
\textit{Gain from transfer} & +0.307 & +40.0\% & +0.117 \\
|
| 269 |
+
\bottomrule
|
| 270 |
+
\end{tabular}
|
| 271 |
+
\end{table}
|
| 272 |
+
|
| 273 |
+
FLAN-T5 initialization accounts for the majority of final performance. Training from random initialization with identical architecture and data yields substantially worse results, confirming that pre-trained linguistic knowledge is essential---not just architectural choices.
|
| 274 |
+
|
| 275 |
+
\subsection{Analysis: Per-Class Topic Performance}
|
| 276 |
+
|
| 277 |
+
Table \ref{tab:topic_breakdown} reveals per-class patterns in topic classification.
|
| 278 |
+
|
| 279 |
+
\begin{table}[htbp]
|
| 280 |
+
\centering
|
| 281 |
+
\caption{Per-Class Topic Classification}
|
| 282 |
+
\label{tab:topic_breakdown}
|
| 283 |
+
\begin{tabular}{lccc}
|
| 284 |
+
\toprule
|
| 285 |
+
\textbf{Topic} & \textbf{Precision} & \textbf{Recall} & \textbf{F1} \\
|
| 286 |
+
\midrule
|
| 287 |
+
Arts & 0.93 & 0.76 & 0.84 \\
|
| 288 |
+
Business & 0.97 & 0.97 & 0.97 \\
|
| 289 |
+
Fiction & 0.95 & 1.00 & 0.97 \\
|
| 290 |
+
History & 0.83 & 0.78 & 0.81 \\
|
| 291 |
+
Philosophy & 0.80 & 0.86 & 0.83 \\
|
| 292 |
+
Science & 0.58 & 0.73 & 0.65 \\
|
| 293 |
+
Technology & 0.86 & 0.89 & 0.87 \\
|
| 294 |
+
\midrule
|
| 295 |
+
\textit{Macro Avg} & 0.85 & 0.86 & 0.85 \\
|
| 296 |
+
\bottomrule
|
| 297 |
+
\end{tabular}
|
| 298 |
+
\end{table}
|
| 299 |
+
|
| 300 |
+
Fiction and Business achieve near-perfect classification (F1 $\geq$ 0.97), while Science shows the most confusion (F1 = 0.65). Error analysis reveals Science samples are frequently misclassified as Technology---an expected confusion given semantic overlap between scientific research and technical applications.
|
| 301 |
+
|
| 302 |
+
\subsection{Analysis: Why Does Emotion Detection Underperform?}
|
| 303 |
+
|
| 304 |
+
Our emotion F1 (0.20) is substantially lower than reported GoEmotions baselines (0.35--0.46) \cite{demszky2020goemotions}. We identify three contributing factors:
|
| 305 |
+
|
| 306 |
+
\begin{enumerate}
|
| 307 |
+
\item \textbf{Domain shift}: GoEmotions labels were annotated on Reddit comments. Our encoder, shaped by literary book descriptions and academic abstracts, learns representations optimized for formal register---misaligned with Reddit's conversational tone.
|
| 308 |
+
|
| 309 |
+
\item \textbf{Label sparsity}: 28 emotion classes with multi-label annotation creates extreme class imbalance. Many emotions (grief, remorse, nervousness) appear in $<$2\% of samples.
|
| 310 |
+
|
| 311 |
+
\item \textbf{Encoder-decoder architecture}: GoEmotions baselines use BERT (encoder-only). Our encoder-decoder architecture may be suboptimal for classification, as the encoder is primarily trained to produce representations useful for the decoder.
|
| 312 |
+
\end{enumerate}
|
| 313 |
+
|
| 314 |
+
This finding has practical implications: \textbf{domain-specific emotion data is critical} for literary/academic applications. Off-the-shelf emotion classifiers trained on social media transfer poorly to formal text.
|
| 315 |
+
|
| 316 |
+
\subsection{Training Dynamics}
|
| 317 |
+
|
| 318 |
+
Figure \ref{fig:training_curves} shows training progression over 7 epochs.
|
| 319 |
+
|
| 320 |
+
\begin{figure}[htbp]
|
| 321 |
+
\centering
|
| 322 |
+
\includegraphics[width=\columnwidth]{figures/training_loss_curve.png}
|
| 323 |
+
\caption{Training and validation loss. Best checkpoint at epoch 4; later epochs show validation loss plateau, triggering early stopping.}
|
| 324 |
+
\label{fig:training_curves}
|
| 325 |
+
\end{figure}
|
| 326 |
+
|
| 327 |
+
Key observations:
|
| 328 |
+
\begin{itemize}
|
| 329 |
+
\item Topic classification converges by epoch 3 (99\% training accuracy), validating our reduced task weight (0.3) to prevent gradient dominance.
|
| 330 |
+
\item Summarization loss decreases monotonically through epoch 4, then plateaus.
|
| 331 |
+
\item Best checkpoint at epoch 4 balances all tasks; later epochs show slight overfitting on the small topic dataset.
|
| 332 |
+
\end{itemize}
|
| 333 |
+
|
| 334 |
+
%=============================================================================
|
| 335 |
+
\section{Discussion}
|
| 336 |
+
%=============================================================================
|
| 337 |
+
|
| 338 |
+
\subsection{When Does MTL Help?}
|
| 339 |
+
|
| 340 |
+
Our results support nuanced guidance for MTL system design:
|
| 341 |
+
|
| 342 |
+
\textbf{MTL helps when}: A small dataset task (topic: 3.4K samples) can leverage representations from a large dataset task (summarization: 49K samples) through shared encoder layers. The topic task effectively benefits from ``free'' pre-training on literary/academic text.
|
| 343 |
+
|
| 344 |
+
\textbf{MTL hurts when}: Task domains are misaligned. Emotion detection trained on Reddit comments does not benefit from---and is potentially harmed by---encoder representations shaped by formal literary/academic summarization.
|
| 345 |
+
|
| 346 |
+
\textbf{MTL is neutral when}: The primary task (summarization) has sufficient data and a task-specific component (decoder) that insulates it from interference.
|
| 347 |
+
|
| 348 |
+
\subsection{Implications for Practitioners}
|
| 349 |
+
|
| 350 |
+
Based on our findings, we recommend:
|
| 351 |
+
|
| 352 |
+
\begin{enumerate}
|
| 353 |
+
\item \textbf{Audit domain alignment} before combining tasks. If auxiliary tasks come from different domains (e.g., social media vs. academic), negative transfer is likely.
|
| 354 |
+
|
| 355 |
+
\item \textbf{Use task weighting} to prevent small datasets from overfitting. Our 0.3 weight for topic classification prevented gradient dominance while still enabling positive transfer.
|
| 356 |
+
|
| 357 |
+
\item \textbf{Consider task-specific components} for high-priority tasks. Summarization's dedicated decoder protected it from classification interference.
|
| 358 |
+
\end{enumerate}
|
| 359 |
+
|
| 360 |
+
\subsection{Limitations}
|
| 361 |
+
|
| 362 |
+
\begin{itemize}
|
| 363 |
+
\item \textbf{Single model size}: We study only FLAN-T5-base (272M). Larger models (T5-large, T5-xl) may show different MTL dynamics.
|
| 364 |
+
|
| 365 |
+
\item \textbf{No human evaluation}: Our summarization metrics (ROUGE, BERTScore) are automatic. Human judgment of summary quality---especially for creative literary text---would strengthen conclusions.
|
| 366 |
+
|
| 367 |
+
\item \textbf{Limited task combinations}: We study three specific tasks. Other task groupings might yield different transfer patterns.
|
| 368 |
+
\end{itemize}
|
| 369 |
+
|
| 370 |
+
\subsection{Future Work}
|
| 371 |
+
|
| 372 |
+
\begin{itemize}
|
| 373 |
+
\item \textbf{Domain-specific emotion data}: Collecting emotion annotations on literary text could dramatically improve emotion detection while maintaining domain coherence.
|
| 374 |
+
|
| 375 |
+
\item \textbf{Gradient analysis}: Measuring gradient conflicts \cite{yu2020gradient} between tasks would provide mechanistic understanding of observed transfer effects.
|
| 376 |
+
|
| 377 |
+
\item \textbf{Parameter-efficient fine-tuning}: LoRA \cite{hu2022lora} or adapters could enable per-task specialization while maintaining shared representations.
|
| 378 |
+
\end{itemize}
|
| 379 |
+
|
| 380 |
+
%=============================================================================
|
| 381 |
+
\section{Conclusion}
|
| 382 |
+
%=============================================================================
|
| 383 |
+
|
| 384 |
+
We investigated multi-task learning for literary and academic text understanding, finding heterogeneous transfer effects across tasks. Topic classification benefits substantially from shared representations (+3.2\% accuracy), while emotion detection suffers negative transfer due to domain mismatch (-0.02 F1). Summarization remains robust to multi-task training.
|
| 385 |
+
|
| 386 |
+
Our ablations confirm that FLAN-T5 pre-training dominates final performance, but fine-tuning provides essential domain adaptation. These findings offer practical guidance: MTL benefits depend critically on domain alignment and dataset size ratios. Practitioners should audit task compatibility before combining disparate datasets.
|
| 387 |
+
|
| 388 |
+
Code, models, and data are available at \url{https://github.com/OliverPerrin/LexiMind}, with a live demo at \url{https://huggingface.co/spaces/OliverPerrin/LexiMind}.
|
| 389 |
+
|
| 390 |
+
%=============================================================================
|
| 391 |
+
% References
|
| 392 |
+
%=============================================================================
|
| 393 |
+
|
| 394 |
+
\begin{thebibliography}{00}
|
| 395 |
+
|
| 396 |
+
\bibitem{caruana1997multitask}
|
| 397 |
+
R. Caruana, ``Multitask learning,'' \textit{Machine Learning}, vol. 28, no. 1, pp. 41--75, 1997.
|
| 398 |
+
|
| 399 |
+
\bibitem{collobert2011natural}
|
| 400 |
+
R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa, ``Natural language processing (almost) from scratch,'' \textit{Journal of Machine Learning Research}, vol. 12, pp. 2493--2537, 2011.
|
| 401 |
+
|
| 402 |
+
\bibitem{johnson2017google}
|
| 403 |
+
M. Johnson et al., ``Google's multilingual neural machine translation system: Enabling zero-shot translation,'' \textit{Transactions of the Association for Computational Linguistics}, vol. 5, pp. 339--351, 2017.
|
| 404 |
+
|
| 405 |
+
\bibitem{mccann2018natural}
|
| 406 |
+
B. McCann, N. S. Keskar, C. Xiong, and R. Socher, ``The natural language decathlon: Multitask learning as question answering,'' \textit{arXiv preprint arXiv:1806.08730}, 2018.
|
| 407 |
+
|
| 408 |
+
\bibitem{wang2019characterizing}
|
| 409 |
+
A. Wang, Y. Pruksachatkun, N. Nangia, A. Singh, J. Michael, F. Hill, O. Levy, and S. Bowman, ``SuperGLUE: A stickier benchmark for general-purpose language understanding systems,'' in \textit{NeurIPS}, 2019.
|
| 410 |
+
|
| 411 |
+
\bibitem{standley2020tasks}
|
| 412 |
+
T. Standley, A. Zamir, D. Chen, L. Guibas, J. Malik, and S. Savarese, ``Which tasks should be learned together in multi-task learning?'' in \textit{ICML}, 2020.
|
| 413 |
+
|
| 414 |
+
\bibitem{raffel2020exploring}
|
| 415 |
+
C. Raffel et al., ``Exploring the limits of transfer learning with a unified text-to-text transformer,'' \textit{JMLR}, vol. 21, no. 140, pp. 1--67, 2020.
|
| 416 |
+
|
| 417 |
+
\bibitem{chung2022scaling}
|
| 418 |
+
H. W. Chung et al., ``Scaling instruction-finetuned language models,'' \textit{arXiv preprint arXiv:2210.11416}, 2022.
|
| 419 |
+
|
| 420 |
+
\bibitem{nallapati2016abstractive}
|
| 421 |
+
R. Nallapati, B. Zhou, C. dos Santos, C. Gulcehre, and B. Xiang, ``Abstractive text summarization using sequence-to-sequence RNNs and beyond,'' in \textit{CoNLL}, 2016.
|
| 422 |
+
|
| 423 |
+
\bibitem{narayan2018don}
|
| 424 |
+
S. Narayan, S. B. Cohen, and M. Lapata, ``Don't give me the details, just the summary! Topic-aware convolutional neural networks for extreme summarization,'' in \textit{EMNLP}, 2018.
|
| 425 |
+
|
| 426 |
+
\bibitem{kryscinski2021booksum}
|
| 427 |
+
W. Kryscinski, N. Rajani, D. Aber, and C. Xiong, ``BookSum: A collection of datasets for long-form narrative summarization,'' in \textit{Findings of EMNLP}, 2021.
|
| 428 |
+
|
| 429 |
+
\bibitem{cohan2018discourse}
|
| 430 |
+
A. Cohan, F. Dernoncourt, D. S. Kim, T. Bui, S. Kim, W. Chang, and N. Goharian, ``A discourse-aware attention model for abstractive summarization of long documents,'' in \textit{NAACL-HLT}, 2018.
|
| 431 |
+
|
| 432 |
+
\bibitem{demszky2020goemotions}
|
| 433 |
+
D. Demszky et al., ``GoEmotions: A dataset of fine-grained emotions,'' in \textit{ACL}, 2020.
|
| 434 |
+
|
| 435 |
+
\bibitem{yu2020gradient}
|
| 436 |
+
T. Yu, S. Kumar, A. Gupta, S. Levine, K. Hausman, and C. Finn, ``Gradient surgery for multi-task learning,'' in \textit{NeurIPS}, 2020.
|
| 437 |
+
|
| 438 |
+
\bibitem{lin2004rouge}
|
| 439 |
+
C.-Y. Lin, ``ROUGE: A package for automatic evaluation of summaries,'' in \textit{Text Summarization Branches Out}, 2004.
|
| 440 |
+
|
| 441 |
+
\bibitem{zhang2019bertscore}
|
| 442 |
+
T. Zhang, V. Kishore, F. Wu, K. Q. Weinberger, and Y. Artzi, ``BERTScore: Evaluating text generation with BERT,'' in \textit{ICLR}, 2020.
|
| 443 |
+
|
| 444 |
+
\bibitem{hu2022lora}
|
| 445 |
+
E. J. Hu et al., ``LoRA: Low-rank adaptation of large language models,'' in \textit{ICLR}, 2022.
|
| 446 |
+
|
| 447 |
+
\end{thebibliography}
|
| 448 |
+
|
| 449 |
+
\end{document}
|