OliverPerrin commited on
Commit
1e95f87
·
1 Parent(s): 7e802ad

Full training results & evaluation with BERTScore

Browse files

Training:
- Completed 7 epochs (~6 hours) with early stopping
- Optimized config: batch=10, grad_accum=4, lr=3e-5, topic_weight=0.3
- Froze bottom 4 encoder layers for stable transfer learning

Evaluation Results:
- Summarization: ROUGE-1=0.3064, BERTScore F1=0.8300
- Topic Classification: 85.2% accuracy (7 classes)
- Emotion Detection: F1=0.1987 (28 classes, multi-label)

Paper (docs/paper.tex):
- Added comprehensive experimental results section
- Added training dynamics figures
- Fixed Unicode characters for LaTeX compatibility
- Updated methodology with actual datasets used

New scripts:
- scripts/evaluate.py: Comprehensive evaluation with BERTScore
- Rebuilt discovery dataset with 1000 samples for demo

.gitignore CHANGED
@@ -67,5 +67,4 @@ configs/local/*.png
67
 
68
  # Backup/private files
69
  scripts/demo_gradio_old.py
70
- docs/paper.tex
71
  mlruns.db
 
67
 
68
  # Backup/private files
69
  scripts/demo_gradio_old.py
 
70
  mlruns.db
configs/training/full.yaml CHANGED
@@ -1,54 +1,52 @@
1
  # Full Training Configuration for FLAN-T5-base
2
- # BEST QUALITY - use for final model training
3
- # VRAM Usage: ~9-10GB (12GB available)
4
- # Training time: ~1 hour on RTX 4070 12GB
 
5
  # Use: python scripts/train.py training=full
6
 
7
  dataloader:
8
- batch_size: 10 # Optimal for RTX 4070 12GB
9
  shuffle: true
10
- num_workers: 4
11
  pin_memory: true
12
  persistent_workers: true
13
- prefetch_factor: 2
14
 
15
  optimizer:
16
  name: adamw
17
- lr: 2.0e-5 # Lower LR for best convergence
18
  weight_decay: 0.01 # Standard regularization
19
  eps: 1.0e-6
20
- betas: [0.9, 0.999] # Standard betas
21
 
22
  scheduler:
23
  name: cosine
24
- warmup_steps: 500 # Standard warmup
25
 
26
  trainer:
27
- max_epochs: 8 # Full training
28
  gradient_clip_norm: 1.0
29
- gradient_accumulation_steps: 8 # Larger effective batch: 80 (10*8)
30
  validation_max_length: 128
31
- label_smoothing: 0.1 # Standard smoothing
32
  task_weights:
33
- summarization: 1.0
34
- emotion: 1.5 # Balanced boost
35
- topic: 0.5 # Balanced weight
36
- max_train_samples: 50000
37
- max_val_samples: 3000
38
- early_stopping_patience: 3 # Standard patience
39
- log_grad_norm_frequency: 100
40
 
41
  compile_encoder: true
42
  compile_decoder: true
43
 
44
- # FULL QUALITY SETTINGS
45
- tokenizer_max_length: 512 # Full context for summarization
46
  gradient_checkpointing: true
47
 
48
- # FLAN-T5 has NO learned positional embeddings - only relative position bias
49
- # Disabling this causes repetition loops (model can't track sequence position)
50
  use_relative_position_bias: true
51
 
52
- # Freeze lower encoder layers (0-5) to preserve pretrained knowledge
53
- # Upper layers (6-11) adapt to summarization style
54
- freeze_encoder_layers: 6
 
1
  # Full Training Configuration for FLAN-T5-base
2
+ # OPTIMIZED FOR SPEED + QUALITY
3
+ # Target: Best results for research paper in reasonable time
4
+ # VRAM Usage: ~10GB (12GB available)
5
+ # Training time: ~45-60 minutes on RTX 4070 12GB
6
  # Use: python scripts/train.py training=full
7
 
8
  dataloader:
9
+ batch_size: 10 # Confirmed optimal for RTX 4070 12GB
10
  shuffle: true
11
+ num_workers: 6 # Increased from 4 - better CPU utilization
12
  pin_memory: true
13
  persistent_workers: true
14
+ prefetch_factor: 3 # Increased from 2 - more prefetching
15
 
16
  optimizer:
17
  name: adamw
18
+ lr: 3.0e-5 # Balanced - not too high (instability) not too low (slow)
19
  weight_decay: 0.01 # Standard regularization
20
  eps: 1.0e-6
21
+ betas: [0.9, 0.98] # Slightly faster momentum decay
22
 
23
  scheduler:
24
  name: cosine
25
+ warmup_steps: 300 # ~0.5 epoch warmup (613 steps/epoch)
26
 
27
  trainer:
28
+ max_epochs: 8 # Reduced from 12 - early stopping will catch plateau anyway
29
  gradient_clip_norm: 1.0
30
+ gradient_accumulation_steps: 4 # Reduced from 8 2x faster optimizer steps!
31
  validation_max_length: 128
32
+ label_smoothing: 0.1
33
  task_weights:
34
+ summarization: 1.0 # Main task
35
+ emotion: 1.0 # Equal weight
36
+ topic: 0.3 # LOW - only 3.4k samples, cycles 14x/epoch, risk of overfit
37
+ max_train_samples: null # Use all available data
38
+ max_val_samples: 3000 # Enough for stable metrics
39
+ early_stopping_patience: 3 # Stop quickly when plateauing
40
+ log_grad_norm_frequency: 200
41
 
42
  compile_encoder: true
43
  compile_decoder: true
44
 
45
+ # Quality settings
46
+ tokenizer_max_length: 512 # Full context
47
  gradient_checkpointing: true
48
 
 
 
49
  use_relative_position_bias: true
50
 
51
+ # Freeze fewer layers for better adaptation
52
+ freeze_encoder_layers: 4
 
docs/figures/task_metrics.png ADDED

Git LFS Details

  • SHA256: 63316390a72a3f14e8d0b1ce5ed66a678ef3338938a1bdc4b6fe39c0c5d69d14
  • Pointer size: 131 Bytes
  • Size of remote file: 239 kB
docs/figures/training_loss_curve.png ADDED

Git LFS Details

  • SHA256: e6b53db79cde240ac95894d3cf4158f8167d808144ad9ba26438a2c0a7d438c6
  • Pointer size: 130 Bytes
  • Size of remote file: 52 kB
docs/paper.tex ADDED
@@ -0,0 +1,1162 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ % LexiMind: A Hybrid Transformer Architecture for Multi-Task NLP
2
+ % IEEE Conference Style Paper
3
+ % Author: Oliver Perrin
4
+
5
+ \documentclass[conference]{IEEEtran}
6
+ \IEEEoverridecommandlockouts
7
+
8
+ % Essential packages
9
+ \usepackage{cite}
10
+ \usepackage{amsmath,amssymb,amsfonts}
11
+ \usepackage{algorithmic}
12
+ \usepackage{graphicx}
13
+ \usepackage{textcomp}
14
+ \usepackage{xcolor}
15
+ \usepackage{hyperref}
16
+ \usepackage{listings}
17
+ \usepackage{booktabs}
18
+ \usepackage{multirow}
19
+ \usepackage{array}
20
+
21
+ % TikZ for diagrams
22
+ \usepackage{tikz}
23
+ \usetikzlibrary{shapes.geometric, arrows, positioning, fit, calc, backgrounds, decorations.pathreplacing}
24
+
25
+ % Code listings style
26
+ \lstset{
27
+ basicstyle=\ttfamily\small,
28
+ breaklines=true,
29
+ frame=single,
30
+ language=Python,
31
+ keywordstyle=\color{blue},
32
+ commentstyle=\color{green!50!black},
33
+ stringstyle=\color{red!60!black},
34
+ showstringspaces=false
35
+ }
36
+
37
+ % Hyperref setup
38
+ \hypersetup{
39
+ colorlinks=true,
40
+ linkcolor=blue,
41
+ citecolor=blue,
42
+ urlcolor=blue
43
+ }
44
+
45
+ \def\BibTeX{{\rm B\kern-.05em{\sc i\kern-.025em b}\kern-.08em
46
+ T\kern-.1667em\lower.7ex\hbox{E}\kern-.125emX}}
47
+
48
+ \begin{document}
49
+
50
+ \title{LexiMind: A Hybrid Transformer Architecture\\for Multi-Task Natural Language Processing}
51
+
52
+ \author{\IEEEauthorblockN{Oliver Perrin}
53
+ \IEEEauthorblockA{Department of Computer Science\\
54
+ Appalachian State University\\
55
+ Bachelor of Science in Computer Science\\
56
+ Email: perrinob@appstate.edu}}
57
+
58
+ \maketitle
59
+
60
+ \begin{abstract}
61
+ This paper presents LexiMind, a multi-task Natural Language Processing (NLP) system that combines a custom-built Transformer architecture with pre-trained weights from Google's FLAN-T5 (Fine-tuned Language Net Text-to-Text Transfer Transformer). The system performs three fundamental NLP tasks simultaneously: abstractive text summarization, multi-label emotion classification, and single-label topic classification. Unlike news-focused models, LexiMind specializes in literary and academic content, trained on Goodreads book descriptions matched with Project Gutenberg texts, arXiv academic paper abstracts, and GoEmotions for emotion classification. By implementing modern architectural innovations including Pre-Layer Normalization (Pre-LN) with Root Mean Square Layer Normalization (RMSNorm), T5-style relative position bias, FlashAttention via PyTorch 2.0's Scaled Dot-Product Attention (SDPA), gradient checkpointing, and torch.compile optimization, LexiMind achieves efficient training on consumer GPUs while maintaining strong performance. Our final model achieves a BERTScore F1 of 0.83 for summarization, 85.2\% accuracy for topic classification, and competitive multi-label F1 for emotion detection. The 272M-parameter architecture is constructed from first principles in a bottom-up fashion, with each component (attention mechanisms, feed-forward networks, encoder/decoder blocks) implemented as standalone modules. A factory pattern enables seamless weight transfer from FLAN-T5-base, allowing the system to leverage Google's pre-trained knowledge while maintaining full architectural transparency and customization capability.
62
+ \end{abstract}
63
+
64
+ \begin{IEEEkeywords}
65
+ Transformer, Multi-Task Learning, Natural Language Processing, FLAN-T5, Transfer Learning, Text Summarization, Emotion Classification, Academic Papers, Literary Text
66
+ \end{IEEEkeywords}
67
+
68
+ %=============================================================================
69
+ \section{Introduction}
70
+ %=============================================================================
71
+
72
+ The Transformer architecture \cite{vaswani2017attention} has fundamentally reshaped Natural Language Processing (NLP), establishing itself as the foundation for state-of-the-art models across virtually all language understanding and generation tasks. Building upon this foundation, the T5 (Text-to-Text Transfer Transformer) model \cite{raffel2020exploring} introduced a unified framework that casts all NLP problems as text-to-text transformations. FLAN-T5 (Fine-tuned Language Net) \cite{chung2022scaling} further enhanced T5's capabilities through instruction fine-tuning on over 1,000 diverse tasks.
73
+
74
+ While pre-trained models like FLAN-T5 offer impressive zero-shot and few-shot capabilities, they are often treated as black boxes—their internal mechanisms obscured by framework abstractions. This opacity hinders both understanding and customization. Furthermore, multi-task learning scenarios often require architectural modifications that pre-built models do not easily accommodate.
75
+
76
+ LexiMind addresses these challenges through a hybrid approach: implementing a complete Transformer architecture from scratch while maintaining compatibility with FLAN-T5's pre-trained weights. This provides several key advantages:
77
+
78
+ \begin{enumerate}
79
+ \item \textbf{Architectural Transparency}: Every component—from attention mechanisms to normalization layers—is explicitly implemented and documented.
80
+ \item \textbf{Customization Flexibility}: Task-specific heads and routing logic can be freely modified without framework constraints.
81
+ \item \textbf{Transfer Learning}: FLAN-T5's linguistic knowledge is transferred through careful weight mapping in the factory module.
82
+ \item \textbf{Modern Optimizations}: Integration of FlashAttention, bfloat16 training, and gradient accumulation ensures efficient resource utilization.
83
+ \end{enumerate}
84
+
85
+ The contributions of this work include:
86
+ \begin{itemize}
87
+ \item A custom Transformer implementation compatible with T5/FLAN-T5 weight loading
88
+ \item A multi-task architecture supporting both generative (summarization) and discriminative (classification) tasks
89
+ \item Detailed documentation of weight transfer mechanisms between pre-trained models and custom implementations
90
+ \item Comprehensive training infrastructure with mixed-precision support, gradient monitoring, and MLflow experiment tracking
91
+ \end{itemize}
92
+
93
+ %=============================================================================
94
+ \section{Related Work}
95
+ %=============================================================================
96
+
97
+ \subsection{Transformer Architectures}
98
+
99
+ The original Transformer \cite{vaswani2017attention} introduced the self-attention mechanism, enabling parallel processing of sequences and effective capture of long-range dependencies. The architecture consists of stacked encoder and decoder blocks, each containing Multi-Head Attention (MHA) and position-wise Feed-Forward Networks (FFN).
100
+
101
+ \textbf{Layer Normalization Placement}: The original Transformer applied Layer Normalization \cite{ba2016layer} after residual connections (Post-LN). Subsequent research \cite{xiong2020layer} demonstrated that applying normalization before sublayers (Pre-LN) significantly improves training stability, particularly for deep networks. LexiMind adopts the Pre-LN configuration used by T5 and modern large language models.
102
+
103
+ \textbf{RMSNorm}: Zhang and Sennrich \cite{zhang2019root} proposed Root Mean Square Layer Normalization (RMSNorm), which removes the mean-centering operation of standard LayerNorm while maintaining comparable performance. T5 \cite{raffel2020exploring} adopts this approach, and LexiMind follows suit for compatibility.
104
+
105
+ \subsection{Pre-trained Language Models}
106
+
107
+ \textbf{T5}: Raffel et al. \cite{raffel2020exploring} introduced the T5 model, which frames all NLP tasks as text-to-text problems. T5 uses a Transformer encoder-decoder architecture with several distinctive features: relative position bias instead of absolute positional embeddings, RMSNorm for layer normalization, and a gated feed-forward network.
108
+
109
+ \textbf{FLAN-T5}: Chung et al. \cite{chung2022scaling} enhanced T5 through instruction fine-tuning, creating FLAN-T5. By training on diverse task instructions, FLAN-T5 demonstrates improved zero-shot and few-shot capabilities compared to the original T5.
110
+
111
+ \subsection{Multi-Task Learning}
112
+
113
+ Multi-Task Learning (MTL) \cite{caruana1997multitask} trains a single model on multiple related tasks, promoting parameter sharing and implicit data augmentation. Hard parameter sharing—where lower layers are shared across tasks while task-specific heads branch from shared representations—remains the dominant approach for Transformer-based MTL systems.
114
+
115
+ %=============================================================================
116
+ \section{Architecture}
117
+ %=============================================================================
118
+
119
+ LexiMind implements a complete encoder-decoder Transformer with task-specific heads, constructed using a bottom-up approach where each component is implemented as a standalone module. Figure \ref{fig:architecture} illustrates the high-level system architecture.
120
+
121
+ \begin{figure}[htbp]
122
+ \centering
123
+ \begin{tikzpicture}[
124
+ scale=0.75,
125
+ transform shape,
126
+ box/.style={draw, rectangle, minimum width=2cm, minimum height=0.7cm, align=center, rounded corners=2pt},
127
+ smallbox/.style={draw, rectangle, minimum width=1.4cm, minimum height=0.5cm, align=center, rounded corners=2pt, font=\scriptsize},
128
+ head/.style={draw, rectangle, minimum width=1.5cm, minimum height=0.6cm, align=center, rounded corners=2pt, fill=blue!20},
129
+ arrow/.style={->, >=stealth, thick},
130
+ dashedarrow/.style={->, >=stealth, dashed}
131
+ ]
132
+
133
+ % Input
134
+ \node[box, fill=gray!20] (input) at (0, 0) {Input Text};
135
+
136
+ % Tokenizer
137
+ \node[box, fill=yellow!30] (tokenizer) at (0, 1.2) {Tokenizer\\(SentencePiece)};
138
+
139
+ % Encoder
140
+ \node[box, fill=green!30, minimum height=2cm] (encoder) at (0, 3.2) {Encoder\\$N=12$ layers};
141
+
142
+ % Task routing
143
+ \node[box, fill=orange!30] (router) at (0, 5.2) {Task Router};
144
+
145
+ % Decoder branch
146
+ \node[box, fill=green!30, minimum height=1.5cm] (decoder) at (-2.5, 7) {Decoder\\$N=12$ layers};
147
+ \node[head] (lmhead) at (-2.5, 8.8) {LM Head};
148
+ \node[smallbox, fill=purple!20] (summ) at (-2.5, 9.8) {Summary};
149
+
150
+ % Classification branch
151
+ \node[head] (emotionhead) at (1.2, 7) {Emotion\\Head};
152
+ \node[head] (topichead) at (2.8, 7) {Topic\\Head};
153
+ \node[smallbox, fill=purple!20] (emotion) at (1.2, 8.2) {Emotions\\(28 classes)};
154
+ \node[smallbox, fill=purple!20] (topic) at (2.8, 8.2) {Topics\\(7 classes)};
155
+
156
+ % Arrows
157
+ \draw[arrow] (input) -- (tokenizer);
158
+ \draw[arrow] (tokenizer) -- (encoder);
159
+ \draw[arrow] (encoder) -- (router);
160
+ \draw[arrow] (router) -- (decoder);
161
+ \draw[arrow] (router) -- (emotionhead);
162
+ \draw[arrow] (router) -- (topichead);
163
+ \draw[arrow] (decoder) -- (lmhead);
164
+ \draw[arrow] (lmhead) -- (summ);
165
+ \draw[arrow] (emotionhead) -- (emotion);
166
+ \draw[arrow] (topichead) -- (topic);
167
+
168
+ % Cross-attention arrow
169
+ \draw[dashedarrow] (encoder.west) -- ++(-0.5,0) |- (decoder.south west);
170
+
171
+ % Labels
172
+ \node[font=\tiny, align=center] at (-1.8, 4.5) {Cross\\Attention};
173
+
174
+ \end{tikzpicture}
175
+ \caption{LexiMind system architecture showing the shared encoder, task-specific routing, decoder for generation, and classification heads for discriminative tasks.}
176
+ \label{fig:architecture}
177
+ \end{figure}
178
+
179
+ \subsection{Transformer Block Structure}
180
+
181
+ Figure \ref{fig:transformer_block} presents the internal structure of encoder and decoder blocks, following the Pre-LN configuration from T5 \cite{raffel2020exploring}.
182
+
183
+ \begin{figure}[htbp]
184
+ \centering
185
+ \begin{tikzpicture}[
186
+ scale=0.65,
187
+ transform shape,
188
+ block/.style={draw, rectangle, minimum width=2.5cm, minimum height=0.6cm, align=center, rounded corners=2pt},
189
+ norm/.style={draw, rectangle, minimum width=2.5cm, minimum height=0.5cm, align=center, fill=yellow!30, rounded corners=2pt},
190
+ attn/.style={draw, rectangle, minimum width=2.5cm, minimum height=0.6cm, align=center, fill=blue!25, rounded corners=2pt},
191
+ ffn/.style={draw, rectangle, minimum width=2.5cm, minimum height=0.6cm, align=center, fill=green!25, rounded corners=2pt},
192
+ add/.style={draw, circle, minimum size=0.4cm, fill=red!20, inner sep=0pt, font=\small},
193
+ arrow/.style={->, >=stealth},
194
+ ]
195
+
196
+ % === ENCODER BLOCK ===
197
+ \node[font=\bfseries] at (0, 8) {Encoder Block};
198
+
199
+ % Input
200
+ \node (enc_in) at (0, 7) {};
201
+ \draw[arrow] (0, 6.5) -- (enc_in);
202
+
203
+ % RMSNorm 1
204
+ \node[norm] (enc_norm1) at (0, 6) {RMSNorm};
205
+
206
+ % Self-Attention
207
+ \node[attn] (enc_attn) at (0, 5) {Multi-Head\\Self-Attention};
208
+
209
+ % Add 1
210
+ \node[add] (enc_add1) at (0, 4) {+};
211
+
212
+ % RMSNorm 2
213
+ \node[norm] (enc_norm2) at (0, 3) {RMSNorm};
214
+
215
+ % FFN
216
+ \node[ffn] (enc_ffn) at (0, 2) {Gated FFN\\(GELU)};
217
+
218
+ % Add 2
219
+ \node[add] (enc_add2) at (0, 1) {+};
220
+
221
+ % Output
222
+ \node (enc_out) at (0, 0.3) {};
223
+
224
+ % Connections
225
+ \draw[arrow] (enc_in) -- (enc_norm1);
226
+ \draw[arrow] (enc_norm1) -- (enc_attn);
227
+ \draw[arrow] (enc_attn) -- (enc_add1);
228
+ \draw[arrow] (enc_add1) -- (enc_norm2);
229
+ \draw[arrow] (enc_norm2) -- (enc_ffn);
230
+ \draw[arrow] (enc_ffn) -- (enc_add2);
231
+ \draw[arrow] (enc_add2) -- (enc_out);
232
+
233
+ % Residual connections
234
+ \draw[arrow] (0, 6.5) -- (-1.5, 6.5) -- (-1.5, 4) -- (enc_add1.west);
235
+ \draw[arrow] (enc_add1.east) -- (1.5, 4) -- (1.5, 1) -- (enc_add2.east);
236
+
237
+ % === DECODER BLOCK ===
238
+ \node[font=\bfseries] at (5.5, 8) {Decoder Block};
239
+
240
+ % Input
241
+ \node (dec_in) at (5.5, 7) {};
242
+ \draw[arrow] (5.5, 6.5) -- (dec_in);
243
+
244
+ % RMSNorm 1
245
+ \node[norm] (dec_norm1) at (5.5, 6) {RMSNorm};
246
+
247
+ % Masked Self-Attention
248
+ \node[attn] (dec_attn1) at (5.5, 5) {Masked\\Self-Attention};
249
+
250
+ % Add 1
251
+ \node[add] (dec_add1) at (5.5, 4.2) {+};
252
+
253
+ % RMSNorm 2
254
+ \node[norm] (dec_norm2) at (5.5, 3.4) {RMSNorm};
255
+
256
+ % Cross-Attention
257
+ \node[attn, fill=cyan!25] (dec_attn2) at (5.5, 2.4) {Cross-Attention};
258
+
259
+ % Add 2
260
+ \node[add] (dec_add2) at (5.5, 1.5) {+};
261
+
262
+ % RMSNorm 3
263
+ \node[norm] (dec_norm3) at (5.5, 0.7) {RMSNorm};
264
+
265
+ % FFN
266
+ \node[ffn] (dec_ffn) at (5.5, -0.3) {Gated FFN\\(GELU)};
267
+
268
+ % Add 3
269
+ \node[add] (dec_add3) at (5.5, -1.2) {+};
270
+
271
+ % Connections
272
+ \draw[arrow] (dec_in) -- (dec_norm1);
273
+ \draw[arrow] (dec_norm1) -- (dec_attn1);
274
+ \draw[arrow] (dec_attn1) -- (dec_add1);
275
+ \draw[arrow] (dec_add1) -- (dec_norm2);
276
+ \draw[arrow] (dec_norm2) -- (dec_attn2);
277
+ \draw[arrow] (dec_attn2) -- (dec_add2);
278
+ \draw[arrow] (dec_add2) -- (dec_norm3);
279
+ \draw[arrow] (dec_norm3) -- (dec_ffn);
280
+ \draw[arrow] (dec_ffn) -- (dec_add3);
281
+
282
+ % Encoder memory input
283
+ \node[block, fill=gray!20, minimum width=1.2cm, font=\scriptsize] (memory) at (8, 2.4) {Encoder\\Memory};
284
+ \draw[arrow] (memory) -- (dec_attn2);
285
+
286
+ % Residual connections (simplified)
287
+ \draw[arrow] (5.5, 6.5) -- (4, 6.5) -- (4, 4.2) -- (dec_add1.west);
288
+ \draw[arrow] (dec_add1.east) -- (7, 4.2) -- (7, 1.5) -- (dec_add2.east);
289
+ \draw[arrow] (dec_add2.west) -- (4, 1.5) -- (4, -1.2) -- (dec_add3.west);
290
+
291
+ \end{tikzpicture}
292
+ \caption{Pre-LN Transformer blocks. Left: Encoder block with self-attention and FFN. Right: Decoder block with masked self-attention, cross-attention to encoder memory, and FFN. RMSNorm is applied \emph{before} each sublayer (Pre-LN).}
293
+ \label{fig:transformer_block}
294
+ \end{figure}
295
+
296
+ \subsection{Multi-Head Attention Mechanism}
297
+
298
+ The attention mechanism is the cornerstone of the Transformer architecture. LexiMind implements Multi-Head Attention with support for T5-style relative position bias and FlashAttention optimization. Figure \ref{fig:attention} illustrates the attention computation.
299
+
300
+ \begin{figure}[htbp]
301
+ \centering
302
+ \begin{tikzpicture}[
303
+ scale=0.6,
304
+ transform shape,
305
+ box/.style={draw, rectangle, minimum width=1.5cm, minimum height=0.6cm, align=center, rounded corners=2pt},
306
+ proj/.style={draw, rectangle, minimum width=1.2cm, minimum height=0.5cm, align=center, fill=blue!20, rounded corners=2pt, font=\scriptsize},
307
+ op/.style={draw, rectangle, minimum width=1.2cm, minimum height=0.5cm, align=center, fill=orange!30, rounded corners=2pt, font=\scriptsize},
308
+ arrow/.style={->, >=stealth},
309
+ ]
310
+
311
+ % Input
312
+ \node[box, fill=gray!20] (input) at (0, 0) {Input $X$};
313
+
314
+ % Projections
315
+ \node[proj] (wq) at (-2.5, 1.5) {$W_Q$};
316
+ \node[proj] (wk) at (0, 1.5) {$W_K$};
317
+ \node[proj] (wv) at (2.5, 1.5) {$W_V$};
318
+
319
+ % Q, K, V
320
+ \node[box, fill=green!20] (q) at (-2.5, 2.8) {$Q$};
321
+ \node[box, fill=green!20] (k) at (0, 2.8) {$K$};
322
+ \node[box, fill=green!20] (v) at (2.5, 2.8) {$V$};
323
+
324
+ % Split heads
325
+ \node[op] (split) at (0, 4) {Split $h$ heads};
326
+
327
+ % Attention scores
328
+ \node[op] (matmul1) at (0, 5.2) {$QK^T$};
329
+
330
+ % Position bias
331
+ \node[box, fill=yellow!30, font=\scriptsize] (bias) at (3.5, 5.2) {Relative\\Pos Bias};
332
+
333
+ % Add bias
334
+ \node[op] (add) at (0, 6.2) {$+ B_{rel}$};
335
+
336
+ % Scale (optional)
337
+ \node[op] (scale) at (0, 7.2) {Scale / Mask};
338
+
339
+ % Softmax
340
+ \node[op, fill=red!20] (softmax) at (0, 8.2) {Softmax};
341
+
342
+ % MatMul with V
343
+ \node[op] (matmul2) at (0, 9.2) {$\times V$};
344
+
345
+ % Concat
346
+ \node[op] (concat) at (0, 10.2) {Concat heads};
347
+
348
+ % Output projection
349
+ \node[proj] (wo) at (0, 11.2) {$W_O$};
350
+
351
+ % Output
352
+ \node[box, fill=purple!20] (output) at (0, 12.2) {Output};
353
+
354
+ % Arrows
355
+ \draw[arrow] (input) -- (wq);
356
+ \draw[arrow] (input) -- (wk);
357
+ \draw[arrow] (input) -- (wv);
358
+ \draw[arrow] (wq) -- (q);
359
+ \draw[arrow] (wk) -- (k);
360
+ \draw[arrow] (wv) -- (v);
361
+ \draw[arrow] (q) -- (split);
362
+ \draw[arrow] (k) -- (split);
363
+ \draw[arrow] (v.north) -- ++(0, 0.3) -| (2.5, 9.2) -- (matmul2);
364
+ \draw[arrow] (split) -- (matmul1);
365
+ \draw[arrow] (matmul1) -- (add);
366
+ \draw[arrow] (bias) -- (add);
367
+ \draw[arrow] (add) -- (scale);
368
+ \draw[arrow] (scale) -- (softmax);
369
+ \draw[arrow] (softmax) -- (matmul2);
370
+ \draw[arrow] (matmul2) -- (concat);
371
+ \draw[arrow] (concat) -- (wo);
372
+ \draw[arrow] (wo) -- (output);
373
+
374
+ % Annotations
375
+ \node[font=\tiny, align=left] at (-4.5, 5.5) {T5 does NOT\\scale by $\sqrt{d_k}$};
376
+
377
+ \end{tikzpicture}
378
+ \caption{Multi-Head Attention with T5-style relative position bias. The attention scores are computed as $QK^T + B_{rel}$, where $B_{rel}$ is the learned relative position bias. Unlike standard Transformers, T5 does not scale by $\sqrt{d_k}$.}
379
+ \label{fig:attention}
380
+ \end{figure}
381
+
382
+ The attention computation in LexiMind is implemented in \texttt{src/models/attention.py}. For T5 compatibility, the \texttt{scale\_scores} parameter controls whether to apply $\sqrt{d_k}$ scaling—T5 does not use this scaling \cite{raffel2020exploring}.
383
+
384
+ \subsubsection{T5 Relative Position Bias}
385
+
386
+ Unlike absolute positional embeddings that are added to token embeddings, T5 uses relative position bias added directly to attention scores. The \texttt{T5RelativePositionBias} class implements logarithmically-bucketed relative positions:
387
+
388
+ \begin{equation}
389
+ B_{ij} = \text{Embed}[\text{bucket}(i - j)]
390
+ \end{equation}
391
+
392
+ where $\text{bucket}(\cdot)$ maps relative distances to discrete buckets. Half the buckets encode exact positions for nearby tokens; the remaining half use logarithmic spacing for distant tokens. As documented in the code:
393
+
394
+ \begin{quote}
395
+ \emph{``T5 uses a combination of exact positions (for nearby tokens) and logarithmically-spaced buckets (for distant tokens).''} — \texttt{attention.py}, lines 46--48
396
+ \end{quote}
397
+
398
+ \subsubsection{FlashAttention Integration}
399
+
400
+ LexiMind leverages PyTorch 2.0's \texttt{scaled\_dot\_product\_attention} function, which automatically selects the optimal attention kernel:
401
+
402
+ \begin{quote}
403
+ \emph{``Uses F.scaled\_dot\_product\_attention which automatically selects the best available kernel (FlashAttention v2, Memory-Efficient Attention, or math fallback) based on hardware and input shapes.''} — \texttt{attention.py}, lines 134--137
404
+ \end{quote}
405
+
406
+ This provides O(N) memory complexity instead of O(N²) when FlashAttention is available.
407
+
408
+ \subsection{Feed-Forward Network}
409
+
410
+ Following T5, LexiMind implements a gated feed-forward network with GELU activation:
411
+
412
+ \begin{equation}
413
+ \text{FFN}(x) = (\text{GELU}(xW_g) \odot xW_1) W_2
414
+ \end{equation}
415
+
416
+ where $W_g$ is the gating projection, $W_1$ is the up-projection, $W_2$ is the down-projection, and $\odot$ denotes element-wise multiplication.
417
+
418
+ \subsection{RMSNorm}
419
+
420
+ RMSNorm \cite{zhang2019root} normalizes inputs using only the root mean square:
421
+
422
+ \begin{equation}
423
+ \text{RMSNorm}(x) = \frac{x}{\sqrt{\frac{1}{d}\sum_{i=1}^{d}x_i^2 + \epsilon}} \cdot \gamma
424
+ \end{equation}
425
+
426
+ The implementation in \texttt{src/models/t5\_layer\_norm.py} follows T5's convention, using only a learned scale parameter $\gamma$ with no bias term.
427
+
428
+ %=============================================================================
429
+ \section{Tokenization}
430
+ \label{sec:tokenization}
431
+ %=============================================================================
432
+
433
+ LexiMind wraps HuggingFace's AutoTokenizer with a simplified façade that handles T5-specific conventions. The implementation in \texttt{src/data/tokenization.py} manages special token handling and decoder input preparation.
434
+
435
+ \subsection{T5 Tokenizer Characteristics}
436
+
437
+ T5 uses SentencePiece \cite{kudo2018sentencepiece} with unigram tokenization:
438
+
439
+ \begin{itemize}
440
+ \item \textbf{Vocabulary Size}: 32,128 tokens (padded to multiple of 128 for efficiency)
441
+ \item \textbf{Special Tokens}: \texttt{pad\_token\_id=0}, \texttt{eos\_token\_id=1}
442
+ \item \textbf{No Explicit BOS}: T5 uses the pad token as the decoder start token
443
+ \end{itemize}
444
+
445
+ As noted in the tokenizer implementation:
446
+
447
+ \begin{quote}
448
+ \emph{``T5 uses different special tokens than BART: T5: pad=0, eos=1, no explicit bos (uses pad or eos as decoder start); BART: bos=0, pad=1, eos=2.''} — \texttt{tokenization.py}, lines 42--44
449
+ \end{quote}
450
+
451
+ \subsection{Decoder Input Preparation}
452
+
453
+ For seq2seq training, decoder inputs must be shifted right from labels. The \texttt{prepare\_decoder\_inputs} method handles this:
454
+
455
+ \begin{lstlisting}[caption={Decoder input preparation from tokenization.py}]
456
+ def prepare_decoder_inputs(
457
+ self, labels: torch.Tensor
458
+ ) -> torch.Tensor:
459
+ """Shift decoder labels to create
460
+ input ids prefixed by BOS."""
461
+ bos = self.bos_token_id
462
+ pad = self.pad_token_id
463
+ decoder_inputs = torch.full_like(labels, pad)
464
+ decoder_inputs[:, 0] = bos
465
+ decoder_inputs[:, 1:] = labels[:, :-1]
466
+ return decoder_inputs
467
+ \end{lstlisting}
468
+
469
+ %=============================================================================
470
+ \section{The Factory Module: Weight Transfer from FLAN-T5}
471
+ \label{sec:factory}
472
+ %=============================================================================
473
+
474
+ The \texttt{factory.py} module is central to LexiMind's hybrid approach, providing model construction and weight loading utilities. Figure \ref{fig:factory_flow} illustrates the model construction pipeline.
475
+
476
+ \begin{figure}[htbp]
477
+ \centering
478
+ \begin{tikzpicture}[
479
+ scale=0.7,
480
+ transform shape,
481
+ box/.style={draw, rectangle, minimum width=2.5cm, minimum height=0.7cm, align=center, rounded corners=3pt},
482
+ config/.style={draw, rectangle, minimum width=2cm, minimum height=0.6cm, align=center, fill=yellow!30, rounded corners=2pt, font=\small},
483
+ model/.style={draw, rectangle, minimum width=2.2cm, minimum height=0.6cm, align=center, fill=green!30, rounded corners=2pt, font=\small},
484
+ arrow/.style={->, >=stealth, thick},
485
+ ]
486
+
487
+ % Config loading
488
+ \node[config] (yaml) at (0, 0) {config.yaml};
489
+ \node[box, fill=blue!20] (loadconfig) at (0, 1.3) {load\_model\_config()};
490
+ \node[config] (modelconfig) at (0, 2.6) {ModelConfig};
491
+
492
+ % Model building
493
+ \node[box, fill=blue!20] (build) at (0, 4.2) {build\_multitask\_model()};
494
+
495
+ % Components
496
+ \node[model] (encoder) at (-2.5, 5.8) {Encoder};
497
+ \node[model] (decoder) at (0, 5.8) {Decoder};
498
+ \node[model] (heads) at (2.5, 5.8) {Task Heads};
499
+
500
+ % Weight loading
501
+ \node[box, fill=orange!30] (loadweights) at (-1.2, 7.4) {\_load\_pretrained\_weights()};
502
+
503
+ % FLAN-T5
504
+ \node[box, fill=purple!20] (flant5) at (-4, 7.4) {FLAN-T5\\(HuggingFace)};
505
+
506
+ % Final model
507
+ \node[box, fill=red!20, minimum width=3cm] (mtmodel) at (0, 9) {MultiTaskModel};
508
+
509
+ % Arrows
510
+ \draw[arrow] (yaml) -- (loadconfig);
511
+ \draw[arrow] (loadconfig) -- (modelconfig);
512
+ \draw[arrow] (modelconfig) -- (build);
513
+ \draw[arrow] (build) -- (encoder);
514
+ \draw[arrow] (build) -- (decoder);
515
+ \draw[arrow] (build) -- (heads);
516
+ \draw[arrow] (encoder) -- (loadweights);
517
+ \draw[arrow] (decoder) -- (loadweights);
518
+ \draw[arrow] (flant5) -- (loadweights);
519
+ \draw[arrow] (loadweights) -- (mtmodel);
520
+ \draw[arrow] (heads) -- (mtmodel);
521
+
522
+ \end{tikzpicture}
523
+ \caption{Model construction pipeline in \texttt{factory.py}. Configuration is loaded from YAML, components are instantiated, FLAN-T5 weights are transferred, and the final MultiTaskModel is assembled.}
524
+ \label{fig:factory_flow}
525
+ \end{figure}
526
+
527
+ \subsection{Configuration Management}
528
+
529
+ The \texttt{ModelConfig} dataclass defines all architecture hyperparameters:
530
+
531
+ \begin{lstlisting}[caption={ModelConfig from factory.py}]
532
+ @dataclass
533
+ class ModelConfig:
534
+ d_model: int = 768
535
+ vocab_size: Optional[int] = None
536
+ num_encoder_layers: int = 12
537
+ num_decoder_layers: int = 12
538
+ num_attention_heads: int = 12
539
+ ffn_dim: int = 2048
540
+ dropout: float = 0.1
541
+ use_pretrained: bool = False
542
+ pretrained_model_name: str =
543
+ "google/flan-t5-base"
544
+ activation: str = "gated-gelu"
545
+ use_relative_position_bias: bool = False
546
+ \end{lstlisting}
547
+
548
+ \subsection{Weight Transfer Mechanism}
549
+
550
+ The \texttt{\_load\_pretrained\_weights} function performs careful weight mapping between FLAN-T5 and LexiMind's custom architecture. Key considerations documented in the code:
551
+
552
+ \begin{quote}
553
+ \emph{``T5 architecture compatibility with our custom Transformer: T5 uses Pre-LN (RMSNorm before sublayers) --- matches our design; T5 uses relative position bias instead of absolute embeddings; T5 uses gated FFN (wi\_0, wi\_1, wo); T5 attention has no bias, our attention has bias --- we zero-initialize the bias terms.''} --- \texttt{factory.py}, lines 100--108
554
+ \end{quote}
555
+
556
+ Table \ref{tab:weight_mapping} shows the parameter correspondence:
557
+
558
+ \begin{table}[htbp]
559
+ \centering
560
+ \caption{FLAN-T5 to LexiMind Weight Mapping}
561
+ \label{tab:weight_mapping}
562
+ \begin{tabular}{ll}
563
+ \toprule
564
+ \textbf{FLAN-T5 Parameter} & \textbf{LexiMind Parameter} \\
565
+ \midrule
566
+ \texttt{shared} & \texttt{encoder.embedding} \\
567
+ \texttt{encoder.block.*.SelfAttention.q} & \texttt{encoder.layers.*.self\_attn.W\_Q} \\
568
+ \texttt{encoder.block.*.SelfAttention.k} & \texttt{encoder.layers.*.self\_attn.W\_K} \\
569
+ \texttt{encoder.block.*.SelfAttention.v} & \texttt{encoder.layers.*.self\_attn.W\_V} \\
570
+ \texttt{encoder.block.*.SelfAttention.o} & \texttt{encoder.layers.*.self\_attn.W\_O} \\
571
+ \texttt{*.layer\_norm} & \texttt{*.norm*} \\
572
+ \texttt{*.DenseReluDense.wi\_0} & \texttt{*.ffn.linear\_gate} \\
573
+ \texttt{*.DenseReluDense.wi\_1} & \texttt{*.ffn.linear1} \\
574
+ \texttt{*.DenseReluDense.wo} & \texttt{*.ffn.linear2} \\
575
+ \texttt{lm\_head} & \texttt{decoder.output\_projection} \\
576
+ \bottomrule
577
+ \end{tabular}
578
+ \end{table}
579
+
580
+ \subsection{Vocabulary Size Handling}
581
+
582
+ T5 pads its vocabulary to multiples of 128 for computational efficiency (32,100 → 32,128). LexiMind handles this mismatch:
583
+
584
+ \begin{quote}
585
+ \emph{``Note: T5's vocab is padded to multiple of 128 for efficiency (32100 → 32128). [...] Copy only the tokens that exist in both. Initialize any extra tokens with small random values.''} — \texttt{factory.py}, lines 116--131
586
+ \end{quote}
587
+
588
+ \subsection{Model Assembly}
589
+
590
+ The \texttt{build\_multitask\_model} function assembles the complete system:
591
+
592
+ \begin{lstlisting}[caption={Model assembly from factory.py}]
593
+ model = MultiTaskModel(
594
+ encoder=encoder,
595
+ decoder=decoder,
596
+ decoder_outputs_logits=True
597
+ )
598
+ model.add_head(
599
+ "summarization",
600
+ LMHead(d_model=cfg.d_model,
601
+ vocab_size=vocab_size,
602
+ tie_embedding=decoder.embedding)
603
+ )
604
+ model.add_head(
605
+ "emotion",
606
+ ClassificationHead(
607
+ d_model=cfg.d_model,
608
+ num_labels=28, # GoEmotions
609
+ pooler="mean",
610
+ hidden_dim=cfg.d_model // 2)
611
+ )
612
+ model.add_head(
613
+ "topic",
614
+ ClassificationHead(
615
+ d_model=cfg.d_model,
616
+ num_labels=7, # 7 topic categories
617
+ pooler="mean")
618
+ )
619
+ \end{lstlisting}
620
+
621
+ %=============================================================================
622
+ \section{Multi-Task Model Architecture}
623
+ \label{sec:multitask}
624
+ %=============================================================================
625
+
626
+ The \texttt{MultiTaskModel} class in \texttt{src/models/multitask.py} provides the routing infrastructure for multi-task learning. Figure \ref{fig:multitask_routing} illustrates the task routing mechanism.
627
+
628
+ \begin{figure}[htbp]
629
+ \centering
630
+ \begin{tikzpicture}[
631
+ scale=0.7,
632
+ transform shape,
633
+ box/.style={draw, rectangle, minimum width=2cm, minimum height=0.6cm, align=center, rounded corners=2pt},
634
+ decision/.style={draw, diamond, aspect=2, minimum width=1.5cm, align=center, fill=yellow!30},
635
+ arrow/.style={->, >=stealth, thick},
636
+ ]
637
+
638
+ % Forward call
639
+ \node[box, fill=blue!20] (forward) at (0, 0) {forward(task, inputs)};
640
+
641
+ % Decision
642
+ \node[decision] (taskcheck) at (0, -1.5) {task type?};
643
+
644
+ % Branches
645
+ \node[box, fill=green!20] (encoder) at (-3.5, -3.5) {Encoder\\Only};
646
+ \node[box, fill=green!20] (seq2seq) at (3.5, -3.5) {Encoder\\+ Decoder};
647
+
648
+ % Heads
649
+ \node[box, fill=orange!20] (classhead) at (-3.5, -5) {Classification\\Head};
650
+ \node[box, fill=orange!20] (lmhead) at (3.5, -5) {LM Head};
651
+
652
+ % Tasks
653
+ \node[box, fill=purple!20, font=\scriptsize] (emotion) at (-5, -6.5) {Emotion};
654
+ \node[box, fill=purple!20, font=\scriptsize] (topic) at (-2, -6.5) {Topic};
655
+ \node[box, fill=purple!20, font=\scriptsize] (summ) at (3.5, -6.5) {Summarization};
656
+
657
+ % Arrows
658
+ \draw[arrow] (forward) -- (taskcheck);
659
+ \draw[arrow] (taskcheck) -- node[above, font=\scriptsize] {Classification} (encoder);
660
+ \draw[arrow] (taskcheck) -- node[above, font=\scriptsize] {Generation} (seq2seq);
661
+ \draw[arrow] (encoder) -- (classhead);
662
+ \draw[arrow] (seq2seq) -- (lmhead);
663
+ \draw[arrow] (classhead) -- (emotion);
664
+ \draw[arrow] (classhead) -- (topic);
665
+ \draw[arrow] (lmhead) -- (summ);
666
+
667
+ \end{tikzpicture}
668
+ \caption{Task routing in MultiTaskModel. Classification tasks use encoder-only processing with mean pooling, while generation tasks use the full encoder-decoder pipeline.}
669
+ \label{fig:multitask_routing}
670
+ \end{figure}
671
+
672
+ \subsection{Task-Specific Head Selection}
673
+
674
+ The forward method routes inputs based on head type:
675
+
676
+ \begin{quote}
677
+ \emph{``Encoder-only heads expect encoder outputs [...] LM/seq2seq head: run encoder → decoder → lm head''} — \texttt{multitask.py}, lines 108--148
678
+ \end{quote}
679
+
680
+ \subsection{Classification Head}
681
+
682
+ Classification tasks (emotion, topic) use mean pooling over encoder outputs:
683
+
684
+ \begin{equation}
685
+ h_{cls} = \frac{\sum_{i=1}^{L} h_i \cdot m_i}{\sum_{i=1}^{L} m_i}
686
+ \end{equation}
687
+
688
+ where $m_i$ is the attention mask (1 for valid tokens, 0 for padding). The pooled representation is projected through a linear layer to class logits.
689
+
690
+ %=============================================================================
691
+ \section{Training Pipeline}
692
+ \label{sec:training}
693
+ %=============================================================================
694
+
695
+ The training infrastructure in \texttt{src/training/trainer.py} implements a comprehensive multi-task training loop with modern deep learning practices.
696
+
697
+ \subsection{Training Configuration}
698
+
699
+ The \texttt{TrainerConfig} dataclass encapsulates all hyperparameters:
700
+
701
+ \begin{lstlisting}[caption={TrainerConfig from trainer.py}]
702
+ @dataclass
703
+ class TrainerConfig:
704
+ max_epochs: int = 1
705
+ gradient_clip_norm: float = 1.0
706
+ task_weights: Dict[str, float] | None = None
707
+ label_smoothing: float = 0.0
708
+ gradient_accumulation_steps: int = 1
709
+ scheduler_type: str = "cosine"
710
+ warmup_steps: int = 0
711
+ early_stopping_patience: int | None = None
712
+ gradient_checkpointing: bool = False
713
+ compile_model: bool = False
714
+ \end{lstlisting}
715
+
716
+ \subsection{Mixed-Precision Training}
717
+
718
+ LexiMind uses Automatic Mixed Precision (AMP) with automatic dtype selection:
719
+
720
+ \begin{quote}
721
+ \emph{``AMP setup: bfloat16 for Ampere+ GPUs, float16 otherwise''} — \texttt{trainer.py}, line 102
722
+ \end{quote}
723
+
724
+ BFloat16 provides better numerical stability for training while maintaining the memory and speed benefits of reduced precision.
725
+
726
+ \subsection{Learning Rate Scheduling}
727
+
728
+ A cosine schedule with linear warmup is implemented:
729
+
730
+ \begin{equation}
731
+ lr(t) = \begin{cases}
732
+ lr_{max} \cdot \frac{t}{t_{warmup}} & t < t_{warmup} \\
733
+ lr_{min} + \frac{1}{2}(lr_{max} - lr_{min})(1 + \cos(\frac{\pi(t-t_{warmup})}{T-t_{warmup}})) & t \geq t_{warmup}
734
+ \end{cases}
735
+ \end{equation}
736
+
737
+ \subsection{Multi-Task Loss Computation}
738
+
739
+ The total loss combines task-specific losses with optional weighting:
740
+
741
+ \begin{equation}
742
+ \mathcal{L}_{total} = \sum_{t \in \text{tasks}} \lambda_t \mathcal{L}_t
743
+ \end{equation}
744
+
745
+ \begin{itemize}
746
+ \item \textbf{Summarization}: Cross-entropy with label smoothing and \texttt{ignore\_index=-100}
747
+ \item \textbf{Emotion}: Binary Cross-Entropy with Logits (multi-label)
748
+ \item \textbf{Topic}: Standard Cross-Entropy (single-label)
749
+ \end{itemize}
750
+
751
+ \subsection{Gradient Handling}
752
+
753
+ The trainer includes gradient clipping and early stopping:
754
+
755
+ \begin{quote}
756
+ \emph{``Gradient clipping to prevent exploding gradients [...] Early stopping based on validation loss''} — \texttt{trainer.py}
757
+ \end{quote}
758
+
759
+ \subsection{Training Loop}
760
+
761
+ Figure \ref{fig:training_loop} illustrates the training loop structure.
762
+
763
+ \begin{figure}[htbp]
764
+ \centering
765
+ \begin{tikzpicture}[
766
+ scale=0.65,
767
+ transform shape,
768
+ box/.style={draw, rectangle, minimum width=2.5cm, minimum height=0.5cm, align=center, rounded corners=2pt, font=\small},
769
+ arrow/.style={->, >=stealth},
770
+ ]
771
+
772
+ % Epoch loop
773
+ \node[box, fill=blue!20] (epoch) at (0, 0) {For each epoch};
774
+
775
+ % Batch loop
776
+ \node[box, fill=green!20] (batch) at (0, -1.2) {For each batch};
777
+
778
+ % Task loop
779
+ \node[box, fill=yellow!20] (task) at (0, -2.4) {For each task};
780
+
781
+ % Forward
782
+ \node[box, fill=orange!20] (forward) at (0, -3.6) {Forward + Loss};
783
+
784
+ % AMP context
785
+ \node[box, fill=purple!20] (amp) at (0, -4.8) {AMP autocast};
786
+
787
+ % Backward
788
+ \node[box, fill=red!20] (backward) at (0, -6) {Backward (scaled)};
789
+
790
+ % Accumulate check
791
+ \node[box, fill=cyan!20] (accum) at (0, -7.2) {Accumulation step?};
792
+
793
+ % Optimizer step
794
+ \node[box, fill=gray!20] (optim) at (0, -8.4) {Clip + Step + Zero};
795
+
796
+ % Validation
797
+ \node[box, fill=blue!20] (val) at (3.5, -1.2) {Validation};
798
+
799
+ % Checkpoint
800
+ \node[box, fill=green!20] (ckpt) at (3.5, -2.4) {Checkpoint};
801
+
802
+ % Early stopping
803
+ \node[box, fill=red!20] (early) at (3.5, -3.6) {Early Stop?};
804
+
805
+ % Arrows
806
+ \draw[arrow] (epoch) -- (batch);
807
+ \draw[arrow] (batch) -- (task);
808
+ \draw[arrow] (task) -- (forward);
809
+ \draw[arrow] (forward) -- (amp);
810
+ \draw[arrow] (amp) -- (backward);
811
+ \draw[arrow] (backward) -- (accum);
812
+ \draw[arrow] (accum) -- (optim);
813
+ \draw[arrow] (optim.south) -- ++(0, -0.3) -| ++(-2, 0) |- (task.west);
814
+ \draw[arrow] (epoch.east) -- ++(0.5, 0) |- (val);
815
+ \draw[arrow] (val) -- (ckpt);
816
+ \draw[arrow] (ckpt) -- (early);
817
+
818
+ \end{tikzpicture}
819
+ \caption{Training loop structure showing nested iteration over epochs, batches, and tasks, with gradient accumulation and validation checkpoints.}
820
+ \label{fig:training_loop}
821
+ \end{figure}
822
+
823
+ %=============================================================================
824
+ \section{Tasks and Datasets}
825
+ %=============================================================================
826
+
827
+ LexiMind addresses three complementary NLP tasks:
828
+
829
+ \subsection{Text Summarization}
830
+
831
+ \textbf{Task}: Generate concise abstractive summaries from longer documents, focusing on back-cover style book descriptions.
832
+
833
+ \textbf{Datasets}: A combination of Goodreads book descriptions ($\sim$49K samples) matched with Project Gutenberg full texts for literary summarization, and arXiv academic paper abstracts for technical domain coverage. Unlike news-focused models, LexiMind specializes in literary and academic long-form content understanding.
834
+
835
+ \textbf{Approach}: Encoder-decoder generation with beam search decoding. The decoder uses causal masking and cross-attention to encoder representations.
836
+
837
+ \textbf{Evaluation}: ROUGE-1/2/L, BLEU-4, and BERTScore (using RoBERTa-large) measuring both n-gram overlap and semantic similarity between generated and reference summaries.
838
+
839
+ \subsection{Emotion Classification}
840
+
841
+ \textbf{Task}: Multi-label classification identifying emotions in text.
842
+
843
+ \textbf{Dataset}: Google's GoEmotions (43K Reddit comments)
844
+
845
+ \textbf{Classes}: 28 emotions including admiration, amusement, anger, annoyance, approval, caring, confusion, curiosity, desire, disappointment, disapproval, disgust, embarrassment, excitement, fear, gratitude, grief, joy, love, nervousness, optimism, pride, realization, relief, remorse, sadness, surprise, and neutral.
846
+
847
+ \textbf{Approach}: Encoder-only with mean pooling, followed by a linear projection. Binary Cross-Entropy loss enables multi-label prediction.
848
+
849
+ \subsection{Topic Classification}
850
+
851
+ \textbf{Task}: Single-label classification of document topics.
852
+
853
+ \textbf{Datasets}: arXiv papers and Project Gutenberg books ($\sim$3.4K samples), providing topic classification across academic and literary domains.
854
+
855
+ \textbf{Classes}: 7 topics (Arts, Business, Fiction, History, Philosophy, Science, Technology)
856
+
857
+ \textbf{Approach}: Same encoder-only architecture as emotion classification, but with standard Cross-Entropy loss for mutually exclusive classes. Due to the smaller dataset size, topic weight is reduced during training to prevent overfitting.
858
+
859
+ %=============================================================================
860
+ \section{Model Specifications}
861
+ %=============================================================================
862
+
863
+ Table \ref{tab:model_specs} summarizes LexiMind's architecture, aligned with FLAN-T5-base for weight compatibility.
864
+
865
+ \begin{table}[htbp]
866
+ \centering
867
+ \caption{LexiMind Model Specifications}
868
+ \label{tab:model_specs}
869
+ \begin{tabular}{lc}
870
+ \toprule
871
+ \textbf{Parameter} & \textbf{Value} \\
872
+ \midrule
873
+ Hidden dimension ($d_{model}$) & 768 \\
874
+ FFN dimension ($d_{ff}$) & 2048 \\
875
+ Attention heads & 12 \\
876
+ Head dimension & 64 \\
877
+ Encoder layers & 12 \\
878
+ Decoder layers & 12 \\
879
+ Vocabulary size & 32,128 \\
880
+ Max sequence length & 512 \\
881
+ Dropout & 0.1 \\
882
+ Activation & Gated-GELU \\
883
+ Normalization & RMSNorm (Pre-LN) \\
884
+ Position encoding & Relative bias \\
885
+ \midrule
886
+ Total parameters & $\sim$272M \\
887
+ \bottomrule
888
+ \end{tabular}
889
+ \end{table}
890
+
891
+ %=============================================================================
892
+ \section{Implementation Details}
893
+ %=============================================================================
894
+
895
+ \subsection{Project Structure}
896
+
897
+ LexiMind follows a modular architecture:
898
+
899
+ \begin{verbatim}
900
+ src/
901
+ +-- models/
902
+ | +-- attention.py # MHA, RelPosBias
903
+ | +-- encoder.py # Encoder blocks
904
+ | +-- decoder.py # Decoder blocks
905
+ | +-- heads.py # Task heads
906
+ | +-- multitask.py # MTL routing
907
+ | +-- factory.py # Construction
908
+ +-- data/
909
+ | +-- tokenization.py # Tokenizer
910
+ | +-- dataset.py # Dataset classes
911
+ | +-- dataloader.py # Collators
912
+ +-- training/
913
+ +-- trainer.py # Training loop
914
+ +-- metrics.py # Evaluation
915
+ scripts/
916
+ +-- train.py # Main training
917
+ +-- download_data.py # Dataset download
918
+ +-- inference.py # CLI inference
919
+ +-- demo_gradio.py # Web demo
920
+ \end{verbatim}
921
+
922
+ \subsection{FlashAttention and CUDA Optimizations}
923
+
924
+ The trainer enables comprehensive hardware-specific optimizations:
925
+
926
+ \begin{lstlisting}[caption={CUDA optimizations from train.py}]
927
+ if device.type == "cuda":
928
+ torch.backends.cudnn.benchmark = True
929
+ torch.backends.cuda.matmul.allow_tf32 = True
930
+ torch.backends.cudnn.allow_tf32 = True
931
+ torch.backends.cuda.enable_flash_sdp(True)
932
+ torch.backends.cuda.enable_mem_efficient_sdp(
933
+ True)
934
+ \end{lstlisting}
935
+
936
+ Note that T5-style relative position bias is incompatible with FlashAttention, as FlashAttention requires adding bias tensors to attention scores which breaks the fused kernel. The development configuration disables relative position bias to enable FlashAttention for faster iteration, while production configurations retain relative position bias for better quality.
937
+
938
+ \subsection{Numerical Stability}
939
+
940
+ To prevent overflow during mixed-precision training, hidden states are clamped after each sublayer:
941
+
942
+ \begin{quote}
943
+ \emph{``Clamp inf values for fp16/bf16 training stability (like HuggingFace T5)''} — \texttt{encoder.py}, lines 103--105
944
+ \end{quote}
945
+
946
+ %=============================================================================
947
+ \section{Experimental Setup}
948
+ %=============================================================================
949
+
950
+ \subsection{Training Configuration}
951
+
952
+ The final training configuration was optimized for quality and efficiency on an NVIDIA RTX 4070 with 12GB VRAM:
953
+
954
+ \begin{itemize}
955
+ \item \textbf{Optimizer}: Fused AdamW with weight decay 0.01, $\beta_1=0.9$, $\beta_2=0.98$
956
+ \item \textbf{Learning Rate}: $3 \times 10^{-5}$ with cosine decay
957
+ \item \textbf{Warmup}: 300 steps ($\sim$0.5 epochs)
958
+ \item \textbf{Batch Size}: 10 with 4$\times$ gradient accumulation (effective batch size 40)
959
+ \item \textbf{Precision}: BFloat16 on Ampere+ GPUs with TF32 enabled
960
+ \item \textbf{Gradient Clipping}: Max norm 1.0
961
+ \item \textbf{Gradient Checkpointing}: Enabled for memory efficiency
962
+ \item \textbf{torch.compile}: Dynamic compilation for encoder and decoder
963
+ \item \textbf{Task Weights}: Summarization 1.0, Emotion 1.0, Topic 0.3 (reduced due to small dataset)
964
+ \item \textbf{Early Stopping}: Patience of 3 epochs on validation loss
965
+ \item \textbf{Encoder Freezing}: Bottom 4 layers frozen for stable transfer learning
966
+ \end{itemize}
967
+
968
+ Training completed in 7 epochs ($\sim$6 hours) with early stopping triggered due to validation loss plateau.
969
+
970
+ %=============================================================================
971
+ \section{Experimental Results}
972
+ \label{sec:results}
973
+ %=============================================================================
974
+
975
+ We evaluate LexiMind on held-out validation sets for each task. Table \ref{tab:summarization_results} presents the summarization metrics, Table \ref{tab:classification_results} shows classification performance.
976
+
977
+ \subsection{Summarization Performance}
978
+
979
+ \begin{table}[htbp]
980
+ \centering
981
+ \caption{Summarization Evaluation Results}
982
+ \label{tab:summarization_results}
983
+ \begin{tabular}{lc}
984
+ \toprule
985
+ \textbf{Metric} & \textbf{Score} \\
986
+ \midrule
987
+ ROUGE-1 & 0.3064 \\
988
+ ROUGE-2 & 0.0896 \\
989
+ ROUGE-L & 0.1832 \\
990
+ BLEU-4 & 0.0237 \\
991
+ \midrule
992
+ BERTScore Precision & 0.8430 \\
993
+ BERTScore Recall & 0.8179 \\
994
+ \textbf{BERTScore F1} & \textbf{0.8300} \\
995
+ \bottomrule
996
+ \end{tabular}
997
+ \end{table}
998
+
999
+ The BERTScore F1 of \textbf{0.83} demonstrates strong semantic similarity between generated descriptions and references, indicating the model captures meaning effectively even when exact wording differs. ROUGE scores are typical for abstractive summarization where the model paraphrases rather than extracts verbatim text.
1000
+
1001
+ \subsection{Classification Performance}
1002
+
1003
+ \begin{table}[htbp]
1004
+ \centering
1005
+ \caption{Classification Evaluation Results}
1006
+ \label{tab:classification_results}
1007
+ \begin{tabular}{llc}
1008
+ \toprule
1009
+ \textbf{Task} & \textbf{Metric} & \textbf{Score} \\
1010
+ \midrule
1011
+ \multirow{2}{*}{Topic (7 classes)} & Accuracy & \textbf{85.19\%} \\
1012
+ & Macro F1 & 0.8474 \\
1013
+ \midrule
1014
+ Emotion (28 classes) & Multi-label F1 & 0.1987 \\
1015
+ \bottomrule
1016
+ \end{tabular}
1017
+ \end{table}
1018
+
1019
+ Topic classification achieves \textbf{85.2\%} accuracy with balanced per-class performance. The emotion detection task proves more challenging due to the 28-class multi-label setting with inherent label ambiguity in the GoEmotions dataset.
1020
+
1021
+ \subsection{Training Dynamics}
1022
+
1023
+ Figure \ref{fig:training_curves} shows the training dynamics over 7 epochs. The model converges smoothly with cosine learning rate decay, achieving best validation performance at epoch 4-5 before early stopping.
1024
+
1025
+ \begin{figure}[htbp]
1026
+ \centering
1027
+ \includegraphics[width=\columnwidth]{figures/training_loss_curve.png}
1028
+ \caption{Training loss curves showing convergence over 7 epochs. Early stopping triggered after epoch 7 due to validation loss plateau.}
1029
+ \label{fig:training_curves}
1030
+ \end{figure}
1031
+
1032
+ Figure \ref{fig:task_metrics} presents per-task metrics throughout training, showing the distinct learning trajectories for summarization, emotion detection, and topic classification.
1033
+
1034
+ \begin{figure}[htbp]
1035
+ \centering
1036
+ \includegraphics[width=\columnwidth]{figures/task_metrics.png}
1037
+ \caption{Task-specific metrics during training: ROUGE-1 for summarization, F1 for emotion detection, and accuracy for topic classification.}
1038
+ \label{fig:task_metrics}
1039
+ \end{figure}
1040
+
1041
+ \subsection{Per-Class Topic Analysis}
1042
+
1043
+ Table \ref{tab:topic_breakdown} shows the per-class performance for topic classification:
1044
+
1045
+ \begin{table}[htbp]
1046
+ \centering
1047
+ \caption{Per-Class Topic Classification Performance}
1048
+ \label{tab:topic_breakdown}
1049
+ \begin{tabular}{lccc}
1050
+ \toprule
1051
+ \textbf{Topic} & \textbf{Precision} & \textbf{Recall} & \textbf{F1} \\
1052
+ \midrule
1053
+ Arts & 0.93 & 0.76 & 0.84 \\
1054
+ Business & 0.97 & 0.97 & 0.97 \\
1055
+ Fiction & 0.95 & 1.00 & 0.97 \\
1056
+ History & 0.83 & 0.78 & 0.81 \\
1057
+ Philosophy & 0.80 & 0.86 & 0.83 \\
1058
+ Science & 0.58 & 0.73 & 0.65 \\
1059
+ Technology & 0.86 & 0.89 & 0.87 \\
1060
+ \bottomrule
1061
+ \end{tabular}
1062
+ \end{table}
1063
+
1064
+ The model performs best on Fiction and Business categories, while Science shows the most confusion, likely due to overlap with Technology topics.
1065
+
1066
+ %=============================================================================
1067
+ \section{Discussion}
1068
+ %=============================================================================
1069
+
1070
+ \subsection{Key Findings}
1071
+
1072
+ \textbf{BERTScore vs. ROUGE}: The high BERTScore (0.83) combined with moderate ROUGE scores (0.31 ROUGE-1) illustrates a key characteristic of abstractive summarization. The model generates semantically accurate paraphrases rather than extractive copies, which ROUGE under-penalizes. BERTScore's contextual embeddings better capture this semantic fidelity.
1073
+
1074
+ \textbf{Multi-Task Trade-offs}: The reduced topic weight (0.3) was necessary to prevent overfitting on the small 3.4K sample dataset. Despite cycling through the topic data 14 times per epoch, the model achieves strong generalization with 85\% test accuracy.
1075
+
1076
+ \textbf{Transfer Learning Benefits}: Initializing from FLAN-T5-base provides strong linguistic priors, enabling competitive performance with only 7 epochs of fine-tuning. Freezing the bottom 4 encoder layers stabilizes training while allowing upper layers to adapt to our specific tasks.
1077
+
1078
+ \subsection{Limitations}
1079
+
1080
+ \begin{itemize}
1081
+ \item \textbf{Emotion Detection}: The 28-class multi-label setting remains challenging. GoEmotions' Reddit-sourced data may not generalize well to literary content.
1082
+ \item \textbf{Topic Dataset Size}: Only 3.4K topic samples limits the model's exposure to diverse examples.
1083
+ \item \textbf{Computational Resources}: Training requires $\sim$10GB VRAM, limiting accessibility on lower-end hardware.
1084
+ \end{itemize}
1085
+
1086
+ \subsection{Experiment Tracking}
1087
+
1088
+ All experiments are tracked with MLflow:
1089
+
1090
+ \begin{quote}
1091
+ \emph{``Metrics in src/training/metrics.py include accuracy, multi-label F1, and ROUGE-like overlap''} — architecture documentation
1092
+ \end{quote}
1093
+
1094
+ %=============================================================================
1095
+ \section{Conclusion}
1096
+ %=============================================================================
1097
+
1098
+ LexiMind demonstrates that building Transformer architectures from scratch while leveraging pre-trained weights provides a powerful combination of transparency, flexibility, and performance. The hybrid approach---custom implementation with FLAN-T5 weight initialization---enables:
1099
+
1100
+ \begin{enumerate}
1101
+ \item Full understanding and control over architectural decisions
1102
+ \item Seamless adaptation to multi-task learning scenarios
1103
+ \item Transfer of linguistic knowledge from large-scale pre-training
1104
+ \item Integration of modern optimizations (FlashAttention, RMSNorm)
1105
+ \end{enumerate}
1106
+
1107
+ Our experimental results validate this approach:
1108
+ \begin{itemize}
1109
+ \item \textbf{Summarization}: BERTScore F1 of 0.83 demonstrates strong semantic fidelity
1110
+ \item \textbf{Topic Classification}: 85.2\% accuracy across 7 categories
1111
+ \item \textbf{Emotion Detection}: Competitive multi-label performance on 28 classes
1112
+ \end{itemize}
1113
+
1114
+ The modular design of LexiMind's codebase facilitates extension to new tasks, experimentation with architectural variants, and serves as an educational resource for understanding Transformer internals. The complete system trains efficiently on consumer GPUs ($\sim$6 hours on RTX 4070 12GB).
1115
+
1116
+ Future work may explore integration of Parameter-Efficient Fine-Tuning (PEFT) methods such as Low-Rank Adaptation (LoRA) \cite{hu2022lora}, expansion of the topic classification dataset, and scaling to larger architectures such as FLAN-T5-large or FLAN-T5-xl.
1117
+
1118
+ %=============================================================================
1119
+ % References
1120
+ %=============================================================================
1121
+
1122
+ \begin{thebibliography}{00}
1123
+
1124
+ \bibitem{vaswani2017attention}
1125
+ A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, ``Attention is all you need,'' in \textit{Advances in Neural Information Processing Systems (NeurIPS)}, vol. 30, 2017, pp. 5998--6008. [Online]. Available: \url{https://arxiv.org/abs/1706.03762}
1126
+
1127
+ \bibitem{raffel2020exploring}
1128
+ C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu, ``Exploring the limits of transfer learning with a unified text-to-text transformer,'' \textit{Journal of Machine Learning Research}, vol. 21, no. 140, pp. 1--67, 2020. [Online]. Available: \url{https://arxiv.org/abs/1910.10683}
1129
+
1130
+ \bibitem{chung2022scaling}
1131
+ H. W. Chung, L. Hou, S. Longpre, B. Zoph, Y. Tay, W. Fedus, Y. Li, X. Wang, M. Dehghani, S. Brahma, A. Webson, S. S. Gu, Z. Dai, M. Suzgun, X. Chen, A. Chowdhery, A. Castro-Ros, M. Pellat, K. Robinson, D. Valter, S. Narang, G. Mishra, A. Yu, V. Zhao, Y. Huang, A. Dai, H. Yu, S. Petrov, E. H. Chi, J. Dean, J. Devlin, A. Roberts, D. Zhou, Q. V. Le, and J. Wei, ``Scaling instruction-finetuned language models,'' \textit{arXiv preprint arXiv:2210.11416}, 2022. [Online]. Available: \url{https://arxiv.org/abs/2210.11416}
1132
+
1133
+ \bibitem{xiong2020layer}
1134
+ R. Xiong, Y. Yang, J. He, K. Zheng, S. Zheng, C. Xing, H. Zhang, Y. Lan, L. Wang, and T. Liu, ``On layer normalization in the transformer architecture,'' in \textit{International Conference on Machine Learning (ICML)}, 2020, pp. 10524--10533. [Online]. Available: \url{https://arxiv.org/abs/2002.04745}
1135
+
1136
+ \bibitem{zhang2019root}
1137
+ B. Zhang and R. Sennrich, ``Root mean square layer normalization,'' in \textit{Advances in Neural Information Processing Systems (NeurIPS)}, vol. 32, 2019, pp. 12360--12371. [Online]. Available: \url{https://arxiv.org/abs/1910.07467}
1138
+
1139
+ \bibitem{ba2016layer}
1140
+ J. L. Ba, J. R. Kiros, and G. E. Hinton, ``Layer normalization,'' \textit{arXiv preprint arXiv:1607.06450}, 2016. [Online]. Available: \url{https://arxiv.org/abs/1607.06450}
1141
+
1142
+ \bibitem{caruana1997multitask}
1143
+ R. Caruana, ``Multitask learning,'' \textit{Machine Learning}, vol. 28, no. 1, pp. 41--75, 1997.
1144
+
1145
+ \bibitem{kudo2018sentencepiece}
1146
+ T. Kudo and J. Richardson, ``SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing,'' in \textit{Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations}, 2018, pp. 66--71. [Online]. Available: \url{https://arxiv.org/abs/1808.06226}
1147
+
1148
+ \bibitem{hu2022lora}
1149
+ E. J. Hu, Y. Shen, P. Wallis, Z. Allen-Zhu, Y. Li, S. Wang, L. Wang, and W. Chen, ``LoRA: Low-rank adaptation of large language models,'' in \textit{International Conference on Learning Representations (ICLR)}, 2022. [Online]. Available: \url{https://arxiv.org/abs/2106.09685}
1150
+
1151
+ \bibitem{dao2022flashattention}
1152
+ T. Dao, D. Fu, S. Ermon, A. Rudra, and C. Ré, ``FlashAttention: Fast and memory-efficient exact attention with IO-awareness,'' in \textit{Advances in Neural Information Processing Systems (NeurIPS)}, vol. 35, 2022, pp. 16344--16359. [Online]. Available: \url{https://arxiv.org/abs/2205.14135}
1153
+
1154
+ \bibitem{zhang2019bertscore}
1155
+ T. Zhang, V. Kishore, F. Wu, K. Q. Weinberger, and Y. Artzi, ``BERTScore: Evaluating text generation with BERT,'' in \textit{International Conference on Learning Representations (ICLR)}, 2020. [Online]. Available: \url{https://arxiv.org/abs/1904.09675}
1156
+
1157
+ \bibitem{demszky2020goemotions}
1158
+ D. Demszky, D. Movshovitz-Attias, J. Ko, A. Cowen, G. Nemade, and S. Ravi, ``GoEmotions: A dataset of fine-grained emotions,'' in \textit{Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics}, 2020, pp. 4040--4054. [Online]. Available: \url{https://arxiv.org/abs/2005.00547}
1159
+
1160
+ \end{thebibliography}
1161
+
1162
+ \end{document}
outputs/evaluation_report.json CHANGED
@@ -1,26 +1,23 @@
1
  {
2
  "summarization": {
3
- "rouge_like": 0.21057591687577476,
4
- "rouge1": 0.21057591687577476,
5
- "rouge2": 0,
6
- "rougeL": 0,
7
- "bleu4": 0,
8
- "bertscore_f1": 0,
9
- "loss": 3.89890168094635
10
- },
11
- "topic": {
12
- "accuracy": 0.883822222222223,
13
- "f1": 0.883822222222223,
14
- "precision": 0,
15
- "recall": 0,
16
- "loss": 0.39885642248392106
17
  },
18
  "emotion": {
19
- "f1": 0.2789894984960556,
20
- "precision": 0,
21
- "recall": 0,
22
- "loss": 0.1520903602540493
23
  },
24
- "best_epoch": "val_epoch_5",
25
- "total_loss": 4.3264654325693845
 
 
 
26
  }
 
1
  {
2
  "summarization": {
3
+ "rouge1": 0.30642430379446967,
4
+ "rouge2": 0.08959565281855562,
5
+ "rougeL": 0.18324654816276506,
6
+ "bleu4": 0.02372948091924369,
7
+ "num_samples": 2727,
8
+ "bertscore_precision": 0.8429681658744812,
9
+ "bertscore_recall": 0.817944347858429,
10
+ "bertscore_f1": 0.8300431966781616
 
 
 
 
 
 
11
  },
12
  "emotion": {
13
+ "multilabel_f1": 0.19874678552150726,
14
+ "sample_avg_f1": 0.19874677478805736,
15
+ "num_samples": 5426,
16
+ "num_classes": 28
17
  },
18
+ "topic": {
19
+ "accuracy": 0.8518518518518519,
20
+ "macro_f1": 0.8473591074094903,
21
+ "num_samples": 189
22
+ }
23
  }
outputs/training_history.json CHANGED
@@ -1,92 +1,184 @@
1
  {
2
  "train_epoch_1": {
3
- "summarization_loss": 4.337589238357544,
4
- "summarization_rouge_like": 0.1879255349350253,
5
- "emotion_loss": 0.46719961771667,
6
- "emotion_f1": 0.10263426738642156,
7
- "topic_loss": 1.8465057809352874,
8
- "topic_accuracy": 0.3392400000000004,
9
- "total_loss": 5.961641555400193
 
 
 
 
10
  },
11
  "val_epoch_1": {
12
- "summarization_loss": 4.049011781692505,
13
- "summarization_rouge_like": 0.2034169061442255,
14
- "emotion_loss": 0.15742238980531692,
15
- "emotion_f1": 0.10493333557248116,
16
- "topic_loss": 1.4062236018180847,
17
- "topic_accuracy": 0.5732444444444442,
18
- "total_loss": 4.988257167309523
 
 
 
 
19
  },
20
  "train_epoch_2": {
21
- "summarization_loss": 4.083578112983703,
22
- "summarization_rouge_like": 0.1976643239752401,
23
- "emotion_loss": 0.15362880874574183,
24
- "emotion_f1": 0.2406129123225808,
25
- "topic_loss": 0.9754000782608986,
26
- "topic_accuracy": 0.7247600000000052,
27
- "total_loss": 4.8017213652327655
 
 
 
 
28
  },
29
  "val_epoch_2": {
30
- "summarization_loss": 3.9556797580718994,
31
- "summarization_rouge_like": 0.20800790372227917,
32
- "emotion_loss": 0.1543270833492279,
33
- "emotion_f1": 0.3030942615568638,
34
- "topic_loss": 0.6294559867382049,
35
- "topic_accuracy": 0.8470222222222232,
36
- "total_loss": 4.501898376464844
 
 
 
 
37
  },
38
  "train_epoch_3": {
39
- "summarization_loss": 4.013701296997071,
40
- "summarization_rouge_like": 0.20124694460046363,
41
- "emotion_loss": 0.1518744811743498,
42
- "emotion_f1": 0.2534809801559895,
43
- "topic_loss": 0.4991990038275719,
44
- "topic_accuracy": 0.8825600000000137,
45
- "total_loss": 4.491112520672381
 
 
 
 
46
  },
47
  "val_epoch_3": {
48
- "summarization_loss": 3.916879098892212,
49
- "summarization_rouge_like": 0.20999463362408954,
50
- "emotion_loss": 0.1531626036465168,
51
- "emotion_f1": 0.29498851400613785,
52
- "topic_loss": 0.4279452709853649,
53
- "topic_accuracy": 0.8898222222222228,
54
- "total_loss": 4.36059563985467
 
 
 
 
55
  },
56
  "train_epoch_4": {
57
- "summarization_loss": 3.983973069667816,
58
- "summarization_rouge_like": 0.2029009331004662,
59
- "emotion_loss": 0.15108716808855532,
60
- "emotion_f1": 0.2626448046594858,
61
- "topic_loss": 0.3580647605985403,
62
- "topic_accuracy": 0.913920000000016,
63
- "total_loss": 4.389636202099919
 
 
 
 
64
  },
65
  "val_epoch_4": {
66
- "summarization_loss": 3.9035442686080932,
67
- "summarization_rouge_like": 0.21033612715351324,
68
- "emotion_loss": 0.15223792135715486,
69
- "emotion_f1": 0.282991696447134,
70
- "topic_loss": 0.4027849786281586,
71
- "topic_accuracy": 0.8786222222222232,
72
- "total_loss": 4.333293639957905
 
 
 
 
73
  },
74
  "train_epoch_5": {
75
- "summarization_loss": 3.973270378112793,
76
- "summarization_rouge_like": 0.20338914910116973,
77
- "emotion_loss": 0.15057664619088174,
78
- "emotion_f1": 0.2615954305127263,
79
- "topic_loss": 0.3245519582152367,
80
- "topic_accuracy": 0.9218400000000136,
81
- "total_loss": 4.361411326506734
 
 
 
 
82
  },
83
  "val_epoch_5": {
84
- "summarization_loss": 3.89890168094635,
85
- "summarization_rouge_like": 0.21057591687577476,
86
- "emotion_loss": 0.1520903602540493,
87
- "emotion_f1": 0.2789894984960556,
88
- "topic_loss": 0.39885642248392106,
89
- "topic_accuracy": 0.883822222222223,
90
- "total_loss": 4.3264654325693845
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
91
  }
92
  }
 
1
  {
2
  "train_epoch_1": {
3
+ "summarization_loss": 4.079733937207733,
4
+ "summarization_rouge_like": 0.2028193981940672,
5
+ "summarization_rouge1": 0.28560021391594853,
6
+ "summarization_rouge2": 0.08435468511113785,
7
+ "summarization_rougeL": 0.2154275814958213,
8
+ "summarization_bleu4": 0.046117214886897795,
9
+ "emotion_loss": 0.26244200211201213,
10
+ "emotion_f1": 0.19766912004785386,
11
+ "topic_loss": 1.1932831987432027,
12
+ "topic_accuracy": 0.6169484620085743,
13
+ "total_loss": 4.700160898942682
14
  },
15
  "val_epoch_1": {
16
+ "summarization_loss": 3.833653777440389,
17
+ "summarization_rouge_like": 0.21745746839833108,
18
+ "summarization_rouge1": 0.25830647486971986,
19
+ "summarization_rouge2": 0.08371089018017476,
20
+ "summarization_rougeL": 0.19875902754771785,
21
+ "summarization_bleu4": 0.04686325808061064,
22
+ "emotion_loss": 0.15138163698216278,
23
+ "emotion_f1": 0.2988015959163507,
24
+ "topic_loss": 0.49577911029259364,
25
+ "topic_accuracy": 0.8414444444444463,
26
+ "total_loss": 4.133769147510325
27
  },
28
  "train_epoch_2": {
29
+ "summarization_loss": 3.8735384538960957,
30
+ "summarization_rouge_like": 0.21216005207286087,
31
+ "summarization_rouge1": 0.2725401912753465,
32
+ "summarization_rouge2": 0.08422447720111174,
33
+ "summarization_rougeL": 0.20784756594931902,
34
+ "summarization_bleu4": 0.04735888096035337,
35
+ "emotion_loss": 0.14739622891762758,
36
+ "emotion_f1": 0.24802368223123794,
37
+ "topic_loss": 0.20543605549897287,
38
+ "topic_accuracy": 0.9533102464860562,
39
+ "total_loss": 4.082565499463412
40
  },
41
  "val_epoch_2": {
42
+ "summarization_loss": 3.757540551821391,
43
+ "summarization_rouge_like": 0.22135824128665282,
44
+ "summarization_rouge1": 0.26246463868681713,
45
+ "summarization_rouge2": 0.08642599777825609,
46
+ "summarization_rougeL": 0.20291475192384523,
47
+ "summarization_bleu4": 0.04878701253341023,
48
+ "emotion_loss": 0.14281915669639905,
49
+ "emotion_f1": 0.22750000593562922,
50
+ "topic_loss": 0.5170371426145236,
51
+ "topic_accuracy": 0.857444444444447,
52
+ "total_loss": 4.0554708513021485
53
  },
54
  "train_epoch_3": {
55
+ "summarization_loss": 3.810021250766172,
56
+ "summarization_rouge_like": 0.21604067556721981,
57
+ "summarization_rouge1": 0.2821805091020667,
58
+ "summarization_rouge2": 0.08854532726771042,
59
+ "summarization_rougeL": 0.21597970695633295,
60
+ "summarization_bleu4": 0.050746126728702,
61
+ "emotion_loss": 0.13904708911649416,
62
+ "emotion_f1": 0.26316031495731096,
63
+ "topic_loss": 0.056642449901637124,
64
+ "topic_accuracy": 0.990527602363008,
65
+ "total_loss": 3.966061074853179
66
  },
67
  "val_epoch_3": {
68
+ "summarization_loss": 3.719314083258311,
69
+ "summarization_rouge_like": 0.22481595386076839,
70
+ "summarization_rouge1": 0.26640729969212057,
71
+ "summarization_rouge2": 0.08834688670295619,
72
+ "summarization_rougeL": 0.20596586881603718,
73
+ "summarization_bleu4": 0.05016497159711613,
74
+ "emotion_loss": 0.13301495840152106,
75
+ "emotion_f1": 0.3033000104998549,
76
+ "topic_loss": 0.5857507295409838,
77
+ "topic_accuracy": 0.8734444444444462,
78
+ "total_loss": 4.028054260522129
79
  },
80
  "train_epoch_4": {
81
+ "summarization_loss": 3.7730432008866455,
82
+ "summarization_rouge_like": 0.21847830974094434,
83
+ "summarization_rouge1": 0.2878904539624512,
84
+ "summarization_rouge2": 0.09133085392035245,
85
+ "summarization_rougeL": 0.22096176610871401,
86
+ "summarization_bleu4": 0.05290110383690951,
87
+ "emotion_loss": 0.1303394384724675,
88
+ "emotion_f1": 0.3112745399373045,
89
+ "topic_loss": 0.027687466271748295,
90
+ "topic_accuracy": 0.9956406600122227,
91
+ "total_loss": 3.9116888792406423
92
  },
93
  "val_epoch_4": {
94
+ "summarization_loss": 3.6977765361467996,
95
+ "summarization_rouge_like": 0.22674092066059914,
96
+ "summarization_rouge1": 0.2693903973096626,
97
+ "summarization_rouge2": 0.08996117022445106,
98
+ "summarization_rougeL": 0.2082540646606119,
99
+ "summarization_bleu4": 0.05143713761355326,
100
+ "emotion_loss": 0.12381103243678808,
101
+ "emotion_f1": 0.33682223431766034,
102
+ "topic_loss": 0.6719013427694639,
103
+ "topic_accuracy": 0.8474444444444447,
104
+ "total_loss": 4.023157971414427
105
  },
106
  "train_epoch_5": {
107
+ "summarization_loss": 3.7497664094335508,
108
+ "summarization_rouge_like": 0.2202997992210682,
109
+ "summarization_rouge1": 0.292400360350181,
110
+ "summarization_rouge2": 0.09348545351673342,
111
+ "summarization_rougeL": 0.22484400309273084,
112
+ "summarization_bleu4": 0.05458407407341266,
113
+ "emotion_loss": 0.12339086255013494,
114
+ "emotion_f1": 0.3434362175187066,
115
+ "topic_loss": 0.015485833265247037,
116
+ "topic_accuracy": 0.9976166225300467,
117
+ "total_loss": 3.8778030219632678
118
  },
119
  "val_epoch_5": {
120
+ "summarization_loss": 3.6840700109799704,
121
+ "summarization_rouge_like": 0.22678335776033579,
122
+ "summarization_rouge1": 0.2700621733687571,
123
+ "summarization_rouge2": 0.09032974742400583,
124
+ "summarization_rougeL": 0.20938608622405835,
125
+ "summarization_bleu4": 0.05172823521981011,
126
+ "emotion_loss": 0.11917384720096985,
127
+ "emotion_f1": 0.38138890409221254,
128
+ "topic_loss": 0.7415381839871407,
129
+ "topic_accuracy": 0.8471111111111125,
130
+ "total_loss": 4.025705313377086
131
+ },
132
+ "train_epoch_6": {
133
+ "summarization_loss": 3.7370202331539084,
134
+ "summarization_rouge_like": 0.22116581036129404,
135
+ "summarization_rouge1": 0.2954615401250818,
136
+ "summarization_rouge2": 0.09482386542629304,
137
+ "summarization_rougeL": 0.22734415806495128,
138
+ "summarization_bleu4": 0.055565924246178955,
139
+ "emotion_loss": 0.12017362367040782,
140
+ "emotion_f1": 0.36295068560270216,
141
+ "topic_loss": 0.01094868929504993,
142
+ "topic_accuracy": 0.9983092279486658,
143
+ "total_loss": 3.8604784636128344
144
+ },
145
+ "val_epoch_6": {
146
+ "summarization_loss": 3.677226278781891,
147
+ "summarization_rouge_like": 0.22764216356749514,
148
+ "summarization_rouge1": 0.2723270512089283,
149
+ "summarization_rouge2": 0.09118120171523038,
150
+ "summarization_rougeL": 0.21111939318535006,
151
+ "summarization_bleu4": 0.05241066035570178,
152
+ "emotion_loss": 0.1169602353622516,
153
+ "emotion_f1": 0.4030444619183739,
154
+ "topic_loss": 0.7767537918190162,
155
+ "topic_accuracy": 0.8471111111111119,
156
+ "total_loss": 4.027212651689846
157
+ },
158
+ "train_epoch_7": {
159
+ "summarization_loss": 3.729386242860494,
160
+ "summarization_rouge_like": 0.22176530350965676,
161
+ "summarization_rouge1": 0.29741668530840704,
162
+ "summarization_rouge2": 0.09559545146778338,
163
+ "summarization_rougeL": 0.2291003267921414,
164
+ "summarization_bleu4": 0.05625421526528304,
165
+ "emotion_loss": 0.11843270199693227,
166
+ "emotion_f1": 0.3771792480979706,
167
+ "topic_loss": 0.008245498801485375,
168
+ "topic_accuracy": 0.9988388673864329,
169
+ "total_loss": 3.850292594497872
170
+ },
171
+ "val_epoch_7": {
172
+ "summarization_loss": 3.6736356274286908,
173
+ "summarization_rouge_like": 0.22775356464676147,
174
+ "summarization_rouge1": 0.27210620969462285,
175
+ "summarization_rouge2": 0.09135458358182197,
176
+ "summarization_rougeL": 0.21112833398209932,
177
+ "summarization_bleu4": 0.05247354488143169,
178
+ "emotion_loss": 0.11575033595164617,
179
+ "emotion_f1": 0.40462224079916875,
180
+ "topic_loss": 0.7991501004000505,
181
+ "topic_accuracy": 0.8524444444444451,
182
+ "total_loss": 4.02913099350035
183
  }
184
  }
scripts/build_discovery_dataset.py CHANGED
@@ -193,8 +193,8 @@ def main():
193
  parser = argparse.ArgumentParser(description="Build discovery dataset for HuggingFace Space")
194
  parser.add_argument("--data-dir", type=Path, default=Path("data/processed"))
195
  parser.add_argument("--checkpoint", type=Path, default=Path("checkpoints/best.pt"))
196
- parser.add_argument("--num-papers", type=int, default=300, help="Number of academic papers")
197
- parser.add_argument("--num-literary", type=int, default=300, help="Number of literary works")
198
  parser.add_argument("--output", type=Path, default=Path("data/discovery_dataset.jsonl"))
199
  parser.add_argument("--push-to-hub", action="store_true", help="Push to HuggingFace Hub")
200
  parser.add_argument("--hub-repo", type=str, default="OliverPerrin/LexiMind-Discovery")
 
193
  parser = argparse.ArgumentParser(description="Build discovery dataset for HuggingFace Space")
194
  parser.add_argument("--data-dir", type=Path, default=Path("data/processed"))
195
  parser.add_argument("--checkpoint", type=Path, default=Path("checkpoints/best.pt"))
196
+ parser.add_argument("--num-papers", type=int, default=500, help="Number of academic papers")
197
+ parser.add_argument("--num-literary", type=int, default=500, help="Number of literary works")
198
  parser.add_argument("--output", type=Path, default=Path("data/discovery_dataset.jsonl"))
199
  parser.add_argument("--push-to-hub", action="store_true", help="Push to HuggingFace Hub")
200
  parser.add_argument("--hub-repo", type=str, default="OliverPerrin/LexiMind-Discovery")
scripts/evaluate.py ADDED
@@ -0,0 +1,389 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ #!/usr/bin/env python3
2
+ """
3
+ Comprehensive evaluation script for LexiMind.
4
+
5
+ Evaluates all three tasks with full metrics:
6
+ - Summarization: ROUGE-1/2/L, BLEU-4, BERTScore
7
+ - Emotion: Multi-label F1, Precision, Recall
8
+ - Topic: Accuracy, Macro F1, Per-class metrics
9
+
10
+ Usage:
11
+ python scripts/evaluate.py
12
+ python scripts/evaluate.py --checkpoint checkpoints/best.pt
13
+ python scripts/evaluate.py --skip-bertscore # Faster, skip BERTScore
14
+
15
+ Author: Oliver Perrin
16
+ Date: January 2026
17
+ """
18
+
19
+ from __future__ import annotations
20
+
21
+ import argparse
22
+ import json
23
+ import sys
24
+ import time
25
+ from pathlib import Path
26
+
27
+ # Setup path
28
+ PROJECT_ROOT = Path(__file__).resolve().parents[1]
29
+ if str(PROJECT_ROOT) not in sys.path:
30
+ sys.path.insert(0, str(PROJECT_ROOT))
31
+
32
+ import torch
33
+ from sklearn.metrics import accuracy_score, classification_report, f1_score
34
+ from tqdm import tqdm
35
+
36
+ from src.data.dataloader import (
37
+ build_emotion_dataloader,
38
+ build_summarization_dataloader,
39
+ build_topic_dataloader,
40
+ )
41
+ from src.data.dataset import (
42
+ EmotionDataset,
43
+ SummarizationDataset,
44
+ TopicDataset,
45
+ load_emotion_jsonl,
46
+ load_summarization_jsonl,
47
+ load_topic_jsonl,
48
+ )
49
+ from src.data.tokenization import Tokenizer, TokenizerConfig
50
+ from src.inference.factory import create_inference_pipeline
51
+ from src.training.metrics import (
52
+ calculate_all_summarization_metrics,
53
+ calculate_bertscore,
54
+ calculate_bleu,
55
+ calculate_rouge,
56
+ multilabel_f1,
57
+ )
58
+
59
+
60
+ def evaluate_summarization(
61
+ pipeline,
62
+ data_path: Path,
63
+ max_samples: int | None = None,
64
+ include_bertscore: bool = True,
65
+ batch_size: int = 8,
66
+ ) -> dict:
67
+ """Evaluate summarization with comprehensive metrics."""
68
+ print("\n" + "=" * 60)
69
+ print("SUMMARIZATION EVALUATION")
70
+ print("=" * 60)
71
+
72
+ # Load data (returns SummarizationExample dataclass objects)
73
+ data = load_summarization_jsonl(str(data_path))
74
+ if max_samples:
75
+ data = data[:max_samples]
76
+ print(f"Evaluating on {len(data)} samples...")
77
+
78
+ # Generate summaries
79
+ predictions = []
80
+ references = []
81
+
82
+ for i in tqdm(range(0, len(data), batch_size), desc="Generating summaries"):
83
+ batch = data[i:i + batch_size]
84
+ sources = [ex.source for ex in batch]
85
+ refs = [ex.summary for ex in batch]
86
+
87
+ preds = pipeline.summarize(sources)
88
+ predictions.extend(preds)
89
+ references.extend(refs)
90
+
91
+ # Calculate metrics
92
+ print("\nCalculating ROUGE scores...")
93
+ rouge_scores = calculate_rouge(predictions, references)
94
+
95
+ print("Calculating BLEU score...")
96
+ bleu = calculate_bleu(predictions, references)
97
+
98
+ metrics = {
99
+ "rouge1": rouge_scores["rouge1"],
100
+ "rouge2": rouge_scores["rouge2"],
101
+ "rougeL": rouge_scores["rougeL"],
102
+ "bleu4": bleu,
103
+ "num_samples": len(predictions),
104
+ }
105
+
106
+ if include_bertscore:
107
+ print("Calculating BERTScore (this may take a few minutes)...")
108
+ bert_scores = calculate_bertscore(predictions, references)
109
+ metrics["bertscore_precision"] = bert_scores["precision"]
110
+ metrics["bertscore_recall"] = bert_scores["recall"]
111
+ metrics["bertscore_f1"] = bert_scores["f1"]
112
+
113
+ # Print results
114
+ print("\n" + "-" * 40)
115
+ print("SUMMARIZATION RESULTS:")
116
+ print("-" * 40)
117
+ print(f" ROUGE-1: {metrics['rouge1']:.4f}")
118
+ print(f" ROUGE-2: {metrics['rouge2']:.4f}")
119
+ print(f" ROUGE-L: {metrics['rougeL']:.4f}")
120
+ print(f" BLEU-4: {metrics['bleu4']:.4f}")
121
+ if include_bertscore:
122
+ print(f" BERTScore P: {metrics['bertscore_precision']:.4f}")
123
+ print(f" BERTScore R: {metrics['bertscore_recall']:.4f}")
124
+ print(f" BERTScore F: {metrics['bertscore_f1']:.4f}")
125
+
126
+ # Show examples
127
+ print("\n" + "-" * 40)
128
+ print("SAMPLE OUTPUTS:")
129
+ print("-" * 40)
130
+ for i in range(min(3, len(predictions))):
131
+ print(f"\nExample {i+1}:")
132
+ print(f" Source: {data[i].source[:100]}...")
133
+ print(f" Generated: {predictions[i][:150]}...")
134
+ print(f" Reference: {references[i][:150]}...")
135
+
136
+ return metrics
137
+
138
+
139
+ def evaluate_emotion(
140
+ pipeline,
141
+ data_path: Path,
142
+ max_samples: int | None = None,
143
+ batch_size: int = 32,
144
+ ) -> dict:
145
+ """Evaluate emotion detection with multi-label metrics."""
146
+ print("\n" + "=" * 60)
147
+ print("EMOTION DETECTION EVALUATION")
148
+ print("=" * 60)
149
+
150
+ # Load data (returns EmotionExample dataclass objects)
151
+ data = load_emotion_jsonl(str(data_path))
152
+ if max_samples:
153
+ data = data[:max_samples]
154
+ print(f"Evaluating on {len(data)} samples...")
155
+
156
+ # Get predictions
157
+ all_preds = []
158
+ all_refs = []
159
+
160
+ for i in tqdm(range(0, len(data), batch_size), desc="Predicting emotions"):
161
+ batch = data[i:i + batch_size]
162
+ texts = [ex.text for ex in batch]
163
+ refs = [set(ex.emotions) for ex in batch]
164
+
165
+ preds = pipeline.predict_emotions(texts)
166
+ pred_sets = [set(p.labels) for p in preds]
167
+
168
+ all_preds.extend(pred_sets)
169
+ all_refs.extend(refs)
170
+
171
+ # Calculate metrics
172
+ # Convert to binary arrays for sklearn
173
+ all_emotions = sorted(pipeline.emotion_labels)
174
+
175
+ def to_binary(emotion_sets, labels):
176
+ return [[1 if e in es else 0 for e in labels] for es in emotion_sets]
177
+
178
+ pred_binary = torch.tensor(to_binary(all_preds, all_emotions))
179
+ ref_binary = torch.tensor(to_binary(all_refs, all_emotions))
180
+
181
+ # Multi-label F1
182
+ f1 = multilabel_f1(pred_binary, ref_binary)
183
+
184
+ # Per-sample metrics
185
+ sample_f1s = []
186
+ for pred, ref in zip(all_preds, all_refs):
187
+ if len(pred) == 0 and len(ref) == 0:
188
+ sample_f1s.append(1.0)
189
+ elif len(pred) == 0 or len(ref) == 0:
190
+ sample_f1s.append(0.0)
191
+ else:
192
+ intersection = len(pred & ref)
193
+ precision = intersection / len(pred) if pred else 0
194
+ recall = intersection / len(ref) if ref else 0
195
+ if precision + recall > 0:
196
+ sample_f1s.append(2 * precision * recall / (precision + recall))
197
+ else:
198
+ sample_f1s.append(0.0)
199
+
200
+ avg_f1 = sum(sample_f1s) / len(sample_f1s)
201
+
202
+ metrics = {
203
+ "multilabel_f1": f1,
204
+ "sample_avg_f1": avg_f1,
205
+ "num_samples": len(all_preds),
206
+ "num_classes": len(all_emotions),
207
+ }
208
+
209
+ # Print results
210
+ print("\n" + "-" * 40)
211
+ print("EMOTION DETECTION RESULTS:")
212
+ print("-" * 40)
213
+ print(f" Multi-label F1: {metrics['multilabel_f1']:.4f}")
214
+ print(f" Sample Avg F1: {metrics['sample_avg_f1']:.4f}")
215
+ print(f" Num Classes: {metrics['num_classes']}")
216
+
217
+ return metrics
218
+
219
+
220
+ def evaluate_topic(
221
+ pipeline,
222
+ data_path: Path,
223
+ max_samples: int | None = None,
224
+ batch_size: int = 32,
225
+ ) -> dict:
226
+ """Evaluate topic classification."""
227
+ print("\n" + "=" * 60)
228
+ print("TOPIC CLASSIFICATION EVALUATION")
229
+ print("=" * 60)
230
+
231
+ # Load data (returns TopicExample dataclass objects)
232
+ data = load_topic_jsonl(str(data_path))
233
+ if max_samples:
234
+ data = data[:max_samples]
235
+ print(f"Evaluating on {len(data)} samples...")
236
+
237
+ # Get predictions
238
+ all_preds = []
239
+ all_refs = []
240
+
241
+ for i in tqdm(range(0, len(data), batch_size), desc="Predicting topics"):
242
+ batch = data[i:i + batch_size]
243
+ texts = [ex.text for ex in batch]
244
+ refs = [ex.topic for ex in batch]
245
+
246
+ preds = pipeline.predict_topics(texts)
247
+ pred_labels = [p.label for p in preds]
248
+
249
+ all_preds.extend(pred_labels)
250
+ all_refs.extend(refs)
251
+
252
+ # Calculate metrics
253
+ accuracy = accuracy_score(all_refs, all_preds)
254
+ macro_f1 = f1_score(all_refs, all_preds, average="macro", zero_division=0)
255
+
256
+ metrics = {
257
+ "accuracy": accuracy,
258
+ "macro_f1": macro_f1,
259
+ "num_samples": len(all_preds),
260
+ }
261
+
262
+ # Print results
263
+ print("\n" + "-" * 40)
264
+ print("TOPIC CLASSIFICATION RESULTS:")
265
+ print("-" * 40)
266
+ print(f" Accuracy: {metrics['accuracy']:.4f} ({metrics['accuracy']*100:.1f}%)")
267
+ print(f" Macro F1: {metrics['macro_f1']:.4f}")
268
+
269
+ # Classification report
270
+ print("\n" + "-" * 40)
271
+ print("PER-CLASS METRICS:")
272
+ print("-" * 40)
273
+ print(classification_report(all_refs, all_preds, zero_division=0))
274
+
275
+ return metrics
276
+
277
+
278
+ def main():
279
+ parser = argparse.ArgumentParser(description="Evaluate LexiMind model")
280
+ parser.add_argument("--checkpoint", type=Path, default=Path("checkpoints/best.pt"))
281
+ parser.add_argument("--labels", type=Path, default=Path("artifacts/labels.json"))
282
+ parser.add_argument("--data-dir", type=Path, default=Path("data/processed"))
283
+ parser.add_argument("--output", type=Path, default=Path("outputs/evaluation_report.json"))
284
+ parser.add_argument("--max-samples", type=int, default=None, help="Limit samples per task")
285
+ parser.add_argument("--skip-bertscore", action="store_true", help="Skip BERTScore (faster)")
286
+ parser.add_argument("--summarization-only", action="store_true")
287
+ parser.add_argument("--emotion-only", action="store_true")
288
+ parser.add_argument("--topic-only", action="store_true")
289
+ args = parser.parse_args()
290
+
291
+ print("=" * 60)
292
+ print("LexiMind Evaluation")
293
+ print("=" * 60)
294
+
295
+ start_time = time.perf_counter()
296
+
297
+ # Load model
298
+ print(f"\nLoading model from {args.checkpoint}...")
299
+ device = "cuda" if torch.cuda.is_available() else "cpu"
300
+ pipeline, labels = create_inference_pipeline(
301
+ args.checkpoint,
302
+ args.labels,
303
+ device=device,
304
+ )
305
+ print(f" Device: {device}")
306
+ print(f" Topics: {labels.topic}")
307
+ print(f" Emotions: {len(labels.emotion)} classes")
308
+
309
+ results = {}
310
+
311
+ # Determine which tasks to evaluate
312
+ eval_all = not (args.summarization_only or args.emotion_only or args.topic_only)
313
+
314
+ # Evaluate summarization
315
+ if eval_all or args.summarization_only:
316
+ val_path = args.data_dir / "summarization" / "validation.jsonl"
317
+ if not val_path.exists():
318
+ val_path = args.data_dir / "summarization" / "val.jsonl"
319
+ if val_path.exists():
320
+ results["summarization"] = evaluate_summarization(
321
+ pipeline, val_path,
322
+ max_samples=args.max_samples,
323
+ include_bertscore=not args.skip_bertscore,
324
+ )
325
+ else:
326
+ print(f"Warning: summarization validation data not found, skipping")
327
+
328
+ # Evaluate emotion
329
+ if eval_all or args.emotion_only:
330
+ val_path = args.data_dir / "emotion" / "validation.jsonl"
331
+ if not val_path.exists():
332
+ val_path = args.data_dir / "emotion" / "val.jsonl"
333
+ if val_path.exists():
334
+ results["emotion"] = evaluate_emotion(
335
+ pipeline, val_path,
336
+ max_samples=args.max_samples,
337
+ )
338
+ else:
339
+ print(f"Warning: emotion validation data not found, skipping")
340
+
341
+ # Evaluate topic
342
+ if eval_all or args.topic_only:
343
+ val_path = args.data_dir / "topic" / "validation.jsonl"
344
+ if not val_path.exists():
345
+ val_path = args.data_dir / "topic" / "val.jsonl"
346
+ if val_path.exists():
347
+ results["topic"] = evaluate_topic(
348
+ pipeline, val_path,
349
+ max_samples=args.max_samples,
350
+ )
351
+ else:
352
+ print(f"Warning: topic validation data not found, skipping")
353
+
354
+ # Save results
355
+ print("\n" + "=" * 60)
356
+ print("SAVING RESULTS")
357
+ print("=" * 60)
358
+
359
+ args.output.parent.mkdir(parents=True, exist_ok=True)
360
+ with open(args.output, "w") as f:
361
+ json.dump(results, f, indent=2)
362
+ print(f" Saved to: {args.output}")
363
+
364
+ # Final summary
365
+ elapsed = time.perf_counter() - start_time
366
+ print("\n" + "=" * 60)
367
+ print("EVALUATION COMPLETE")
368
+ print("=" * 60)
369
+ print(f" Time: {elapsed/60:.1f} minutes")
370
+
371
+ if "summarization" in results:
372
+ s = results["summarization"]
373
+ print(f"\n Summarization:")
374
+ print(f" ROUGE-1: {s['rouge1']:.4f}")
375
+ print(f" ROUGE-L: {s['rougeL']:.4f}")
376
+ if "bertscore_f1" in s:
377
+ print(f" BERTScore F1: {s['bertscore_f1']:.4f}")
378
+
379
+ if "emotion" in results:
380
+ print(f"\n Emotion:")
381
+ print(f" Multi-label F1: {results['emotion']['multilabel_f1']:.4f}")
382
+
383
+ if "topic" in results:
384
+ print(f"\n Topic:")
385
+ print(f" Accuracy: {results['topic']['accuracy']:.2%}")
386
+
387
+
388
+ if __name__ == "__main__":
389
+ main()
scripts/train.py CHANGED
@@ -156,6 +156,10 @@ def main(cfg: DictConfig) -> None:
156
  batch_size = int(dl_cfg.get("batch_size", 8))
157
  num_workers = int(dl_cfg.get("num_workers", 4))
158
 
 
 
 
 
159
  train_loaders = {
160
  "summarization": build_summarization_dataloader(
161
  summ_train, tokenizer, shuffle=True,
@@ -163,11 +167,11 @@ def main(cfg: DictConfig) -> None:
163
  batch_size=batch_size, num_workers=num_workers, pin_memory=True,
164
  ),
165
  "emotion": build_emotion_dataloader(
166
- emot_train, tokenizer, shuffle=True, max_length=max_len,
167
  batch_size=batch_size, num_workers=num_workers, pin_memory=True,
168
  ),
169
  "topic": build_topic_dataloader(
170
- topic_train, tokenizer, shuffle=True, max_length=max_len,
171
  batch_size=batch_size, num_workers=num_workers, pin_memory=True,
172
  ),
173
  }
@@ -181,12 +185,12 @@ def main(cfg: DictConfig) -> None:
181
  )
182
  if emot_val:
183
  val_loaders["emotion"] = build_emotion_dataloader(
184
- emot_val, tokenizer, shuffle=False, max_length=max_len,
185
  batch_size=batch_size, num_workers=num_workers, pin_memory=True,
186
  )
187
  if topic_val:
188
  val_loaders["topic"] = build_topic_dataloader(
189
- topic_val, tokenizer, shuffle=False, max_length=max_len,
190
  batch_size=batch_size, num_workers=num_workers, pin_memory=True,
191
  )
192
 
 
156
  batch_size = int(dl_cfg.get("batch_size", 8))
157
  num_workers = int(dl_cfg.get("num_workers", 4))
158
 
159
+ # Classification tasks don't need full 512 tokens - 256 is sufficient
160
+ # This speeds up emotion/topic forward passes significantly
161
+ classification_max_len = min(256, max_len)
162
+
163
  train_loaders = {
164
  "summarization": build_summarization_dataloader(
165
  summ_train, tokenizer, shuffle=True,
 
167
  batch_size=batch_size, num_workers=num_workers, pin_memory=True,
168
  ),
169
  "emotion": build_emotion_dataloader(
170
+ emot_train, tokenizer, shuffle=True, max_length=classification_max_len,
171
  batch_size=batch_size, num_workers=num_workers, pin_memory=True,
172
  ),
173
  "topic": build_topic_dataloader(
174
+ topic_train, tokenizer, shuffle=True, max_length=classification_max_len,
175
  batch_size=batch_size, num_workers=num_workers, pin_memory=True,
176
  ),
177
  }
 
185
  )
186
  if emot_val:
187
  val_loaders["emotion"] = build_emotion_dataloader(
188
+ emot_val, tokenizer, shuffle=False, max_length=classification_max_len,
189
  batch_size=batch_size, num_workers=num_workers, pin_memory=True,
190
  )
191
  if topic_val:
192
  val_loaders["topic"] = build_topic_dataloader(
193
+ topic_val, tokenizer, shuffle=False, max_length=classification_max_len,
194
  batch_size=batch_size, num_workers=num_workers, pin_memory=True,
195
  )
196
 
src/training/metrics.py CHANGED
@@ -66,8 +66,8 @@ def calculate_bleu(predictions: Sequence[str], references: Sequence[str]) -> flo
66
  def calculate_bertscore(
67
  predictions: Sequence[str],
68
  references: Sequence[str],
69
- model_type: str = "microsoft/deberta-xlarge-mnli",
70
- batch_size: int = 32,
71
  device: str | None = None,
72
  ) -> Dict[str, float]:
73
  """
 
66
  def calculate_bertscore(
67
  predictions: Sequence[str],
68
  references: Sequence[str],
69
+ model_type: str = "roberta-large", # Uses ~1.4GB VRAM vs ~6GB for deberta-xlarge
70
+ batch_size: int = 16,
71
  device: str | None = None,
72
  ) -> Dict[str, float]:
73
  """