Chelsea707 committed on
Commit aabcd33 · verified · 1 Parent(s): bbe62ff

Add Batch bb4c0ab1-2c34-4231-b139-c90d3b8fcd4e

This view is limited to 50 files because it contains too many changes. See raw diff.
Files changed (50)
  1. ACL/2025/Sparse-to-Dense_ A Free Lunch for Lossless Acceleration of Video Understanding in LLMs/30c60a16-249d-4033-b669-29a32b67c73b_content_list.json +3 -0
  2. ACL/2025/Sparse-to-Dense_ A Free Lunch for Lossless Acceleration of Video Understanding in LLMs/30c60a16-249d-4033-b669-29a32b67c73b_model.json +3 -0
  3. ACL/2025/Sparse-to-Dense_ A Free Lunch for Lossless Acceleration of Video Understanding in LLMs/30c60a16-249d-4033-b669-29a32b67c73b_origin.pdf +3 -0
  4. ACL/2025/Sparse-to-Dense_ A Free Lunch for Lossless Acceleration of Video Understanding in LLMs/full.md +169 -0
  5. ACL/2025/Sparse-to-Dense_ A Free Lunch for Lossless Acceleration of Video Understanding in LLMs/images.zip +3 -0
  6. ACL/2025/Sparse-to-Dense_ A Free Lunch for Lossless Acceleration of Video Understanding in LLMs/layout.json +3 -0
  7. ACL/2025/Spurious Correlations and Beyond_ Understanding and Mitigating Shortcut Learning in SDOH Extraction with Large Language Models/304e62aa-ca2e-41ce-9943-374bc2eb771a_content_list.json +3 -0
  8. ACL/2025/Spurious Correlations and Beyond_ Understanding and Mitigating Shortcut Learning in SDOH Extraction with Large Language Models/304e62aa-ca2e-41ce-9943-374bc2eb771a_model.json +3 -0
  9. ACL/2025/Spurious Correlations and Beyond_ Understanding and Mitigating Shortcut Learning in SDOH Extraction with Large Language Models/304e62aa-ca2e-41ce-9943-374bc2eb771a_origin.pdf +3 -0
  10. ACL/2025/Spurious Correlations and Beyond_ Understanding and Mitigating Shortcut Learning in SDOH Extraction with Large Language Models/full.md +313 -0
  11. ACL/2025/Spurious Correlations and Beyond_ Understanding and Mitigating Shortcut Learning in SDOH Extraction with Large Language Models/images.zip +3 -0
  12. ACL/2025/Spurious Correlations and Beyond_ Understanding and Mitigating Shortcut Learning in SDOH Extraction with Large Language Models/layout.json +3 -0
  13. ACL/2025/State-offset Tuning_ State-based Parameter-Efficient Fine-Tuning for State Space Models/39bab619-549d-4d70-8be3-54342ed7ca89_content_list.json +3 -0
  14. ACL/2025/State-offset Tuning_ State-based Parameter-Efficient Fine-Tuning for State Space Models/39bab619-549d-4d70-8be3-54342ed7ca89_model.json +3 -0
  15. ACL/2025/State-offset Tuning_ State-based Parameter-Efficient Fine-Tuning for State Space Models/39bab619-549d-4d70-8be3-54342ed7ca89_origin.pdf +3 -0
  16. ACL/2025/State-offset Tuning_ State-based Parameter-Efficient Fine-Tuning for State Space Models/full.md +377 -0
  17. ACL/2025/State-offset Tuning_ State-based Parameter-Efficient Fine-Tuning for State Space Models/images.zip +3 -0
  18. ACL/2025/State-offset Tuning_ State-based Parameter-Efficient Fine-Tuning for State Space Models/layout.json +3 -0
  19. ACL/2025/Subword models struggle with word learning, but surprisal hides it/611d1f3b-0447-49d8-b7fb-0a3acb1611a0_content_list.json +3 -0
  20. ACL/2025/Subword models struggle with word learning, but surprisal hides it/611d1f3b-0447-49d8-b7fb-0a3acb1611a0_model.json +3 -0
  21. ACL/2025/Subword models struggle with word learning, but surprisal hides it/611d1f3b-0447-49d8-b7fb-0a3acb1611a0_origin.pdf +3 -0
  22. ACL/2025/Subword models struggle with word learning, but surprisal hides it/full.md +272 -0
  23. ACL/2025/Subword models struggle with word learning, but surprisal hides it/images.zip +3 -0
  24. ACL/2025/Subword models struggle with word learning, but surprisal hides it/layout.json +3 -0
  25. ACL/2025/SynWorld_ Virtual Scenario Synthesis for Agentic Action Knowledge Refinement/eb796f3f-9361-43c6-9a4d-d17ea0c8a87f_content_list.json +3 -0
  26. ACL/2025/SynWorld_ Virtual Scenario Synthesis for Agentic Action Knowledge Refinement/eb796f3f-9361-43c6-9a4d-d17ea0c8a87f_model.json +3 -0
  27. ACL/2025/SynWorld_ Virtual Scenario Synthesis for Agentic Action Knowledge Refinement/eb796f3f-9361-43c6-9a4d-d17ea0c8a87f_origin.pdf +3 -0
  28. ACL/2025/SynWorld_ Virtual Scenario Synthesis for Agentic Action Knowledge Refinement/full.md +351 -0
  29. ACL/2025/SynWorld_ Virtual Scenario Synthesis for Agentic Action Knowledge Refinement/images.zip +3 -0
  30. ACL/2025/SynWorld_ Virtual Scenario Synthesis for Agentic Action Knowledge Refinement/layout.json +3 -0
  31. ACL/2025/That doesn’t sound right_ Evaluating speech transcription quality in field linguistics corpora/91e9f449-d572-4370-add4-d86c8c7b9d45_content_list.json +3 -0
  32. ACL/2025/That doesn’t sound right_ Evaluating speech transcription quality in field linguistics corpora/91e9f449-d572-4370-add4-d86c8c7b9d45_model.json +3 -0
  33. ACL/2025/That doesn’t sound right_ Evaluating speech transcription quality in field linguistics corpora/91e9f449-d572-4370-add4-d86c8c7b9d45_origin.pdf +3 -0
  34. ACL/2025/That doesn’t sound right_ Evaluating speech transcription quality in field linguistics corpora/full.md +223 -0
  35. ACL/2025/That doesn’t sound right_ Evaluating speech transcription quality in field linguistics corpora/images.zip +3 -0
  36. ACL/2025/That doesn’t sound right_ Evaluating speech transcription quality in field linguistics corpora/layout.json +3 -0
  37. ACL/2025/The Role of Abstract Representations and Observed Preferences in the Ordering of Binomials in Large Language Models/fd8ebbd1-41f1-4e17-ad72-97ad56e52db3_content_list.json +3 -0
  38. ACL/2025/The Role of Abstract Representations and Observed Preferences in the Ordering of Binomials in Large Language Models/fd8ebbd1-41f1-4e17-ad72-97ad56e52db3_model.json +3 -0
  39. ACL/2025/The Role of Abstract Representations and Observed Preferences in the Ordering of Binomials in Large Language Models/fd8ebbd1-41f1-4e17-ad72-97ad56e52db3_origin.pdf +3 -0
  40. ACL/2025/The Role of Abstract Representations and Observed Preferences in the Ordering of Binomials in Large Language Models/full.md +166 -0
  41. ACL/2025/The Role of Abstract Representations and Observed Preferences in the Ordering of Binomials in Large Language Models/images.zip +3 -0
  42. ACL/2025/The Role of Abstract Representations and Observed Preferences in the Ordering of Binomials in Large Language Models/layout.json +3 -0
  43. ACL/2025/TigerLLM - A Family of Bangla Large Language Models/17573e20-04bf-4561-9f01-9d5833ffb0bb_content_list.json +3 -0
  44. ACL/2025/TigerLLM - A Family of Bangla Large Language Models/17573e20-04bf-4561-9f01-9d5833ffb0bb_model.json +3 -0
  45. ACL/2025/TigerLLM - A Family of Bangla Large Language Models/17573e20-04bf-4561-9f01-9d5833ffb0bb_origin.pdf +3 -0
  46. ACL/2025/TigerLLM - A Family of Bangla Large Language Models/full.md +287 -0
  47. ACL/2025/TigerLLM - A Family of Bangla Large Language Models/images.zip +3 -0
  48. ACL/2025/TigerLLM - A Family of Bangla Large Language Models/layout.json +3 -0
  49. ACL/2025/Towards Geo-Culturally Grounded LLM Generations/d3edfe9c-9bbe-4d2a-ae91-434e7c94ee08_content_list.json +3 -0
  50. ACL/2025/Towards Geo-Culturally Grounded LLM Generations/d3edfe9c-9bbe-4d2a-ae91-434e7c94ee08_model.json +3 -0
ACL/2025/Sparse-to-Dense_ A Free Lunch for Lossless Acceleration of Video Understanding in LLMs/30c60a16-249d-4033-b669-29a32b67c73b_content_list.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:220822e87ca76caa5741f72a5ae588cf53202a04c05a6b3dcd497ef0a73f5d5f
3
+ size 51349
ACL/2025/Sparse-to-Dense_ A Free Lunch for Lossless Acceleration of Video Understanding in LLMs/30c60a16-249d-4033-b669-29a32b67c73b_model.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8d9d4096419ea4ce2ff8d8c0b239b1ca3cc382c1215a0997db57b24f620e2324
3
+ size 64154
ACL/2025/Sparse-to-Dense_ A Free Lunch for Lossless Acceleration of Video Understanding in LLMs/30c60a16-249d-4033-b669-29a32b67c73b_origin.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5b32b438f1dfd25f7db69b9fb8d177826c01d05c7db4a595f0294ec507bd8111
3
+ size 293011
ACL/2025/Sparse-to-Dense_ A Free Lunch for Lossless Acceleration of Video Understanding in LLMs/full.md ADDED
@@ -0,0 +1,169 @@
1
+ # Sparse-to-Dense: A Free Lunch for Lossless Acceleration of Video Understanding in LLMs
2
+
3
+ Xuan Zhang $^{1}$ , Cunxiao Du $^{2*}$ , Sicheng Yu $^{1}$ , Jiawei Wu $^{3}$ , Fengzhuo Zhang $^{3}$ , Wei Gao $^{1}$ , Qian Liu $^{2}$
4
+
5
+ $^{1}$ Singapore Management University, $^{2}$ Sea AI Lab, $^{3}$ National University of Singapore
6
+
7
+ # Abstract
8
+
9
+ Video Large Language Models (Video-LLMs) suffer from high inference latency in long video processing due to their auto-regressive decoding mechanism, posing challenges for the efficient processing of video sequences that are usually very long. We observe that attention scores in Video-LLMs during decoding exhibit pronounced sparsity, with computational focus concentrated on a small subset of critical tokens. Motivated by this insight, we introduce Sparse-to-Dense (STD), a novel decoding strategy that integrates two distinct modules: a sparse module that rapidly generates speculative tokens using efficient top- $K$ attention, and a dense module that verifies these tokens in parallel via full self-attention. This collaborative approach accelerates Video-LLMs losslessly, effectively offering a free lunch for video understanding. STD is a plug-and-play solution requiring no fine-tuning or architectural changes and achieves up to a $1.94 \times$ wall time speedup while preserving model performance. It enables a seamless conversion of standard Video-LLMs into sparse counterparts, unlocking efficient long-video processing without sacrificing accuracy.
10
+
11
+ # 1 Introduction
12
+
13
+ Recent advances in Video Large Language Models (Video-LLMs), which combine large language models with video understanding, have achieved exceptional performance on tasks like video question answering and captioning (Lin et al., 2024a; Cao et al., 2024; Zhang et al., 2025a). A common practice in Video-LLMs is representing a video as a sequence of image frames, which results in extremely long token sequences that can strain computational resources. For instance, a 1-hour video sampled at 5-second intervals produces 720 frames, which translates to 141,120 visual tokens in VILA (Lin et al., 2024a). These extremely long
14
+
15
+ token sequences cause Video-LLMs to suffer from high inference latency when processing lengthy videos, making real-time applications challenging.
16
+
17
+ This latency is primarily introduced by the auto-regressive nature of current Video-LLMs, where each new token must attend to all preceding tokens, creating substantial memory and computational challenges. While mechanisms like key-value (KV) caching are employed to store pre-computed key and value tensors and reduce redundant recomputation, frequent access to the cache imposes heavy demands on memory bandwidth due to the growing amount of KV cache with the increasing sequence length. This significantly reduces the throughput of Video-LLMs. A common approach to addressing this problem is KV cache compression (Du et al., 2024b; Chen et al., 2024b; Lin et al., 2024b; Zhang et al., 2025b) or quantization (Su et al., 2025; Hooper et al., 2024; Liu et al., 2024) at test time. However, these methods introduce discrepancies between training and inference, degrading the performance of LLMs.
18
+
19
+ In this paper, we aim to build a lossless acceleration method designed specifically for Video-LLMs that preserves the exact output distribution of the original model. Although speculative decoding (Leviathan et al., 2023; Chen et al., 2023; Hou et al., 2025) meets this requirement, it usually requires an extra draft model, which is expensive for Video-LLMs. In contrast, we observe that Video-LLMs exhibit a unique structural property, attention sparsity, which can serve as a training-free and plug-and-play draft model. Specifically, retaining only the top- $K$ KV caches in the attention layers preserves the original predictions for approximately $95\%$ of tokens (empirically verified), suggesting that most attention heads contribute minimally to the final output. Motivated by this observation, we introduce a novel decoding method called Sparse-to-Dense (STD), which leverages the sparse structure of Video-LLMs as its
20
+
21
+ draft model. This design eliminates the need for an extra trained draft model, making STD a plug-and-play solution. We refer to the original Video-LLM as the dense model because it decodes using the full KV cache, whereas the model with top- $K$ attention is termed the sparse model. Both models share identical architectures, differing only in how they compute attention. Therefore, we do not need additional GPU memory to store the sparse model, nor does it require any extra training. The top- $K$ attention in the sparse model boosts decoding speed while sacrificing some token quality, whereas the dense model is slower but guarantees accuracy. We use the sparse model to auto-regressively draft the next $\gamma$ tokens, while the dense model verifies them in parallel. This approach avoids redundant full KV cache memory and ensures the outputs exactly match those of the original Video-LLM.
22
+
23
+ We conduct experiments on representative Video-LLMs including LLaVA-OneVision (Li et al., 2024a) and Qwen2-VL (Wang et al., 2024), evaluating them on video understanding benchmarks like MLVU (Zhou et al., 2024) and VideoMME (Fu et al., 2024). Experiment results show that our STD, serving as a tuning-free, plug-and-play solution, achieves up to a $1.94 \times$ acceleration of video input processing without any performance degradation. It is immediately deployable, requiring only 20 lines of code to transform an original Video-LLM into a sparse Video-LLM, and it does not require any extra training to deploy the draft model.
24
+
25
+ # 2 Observation
26
+
27
+ In this section, we investigate the disparity in decoded tokens between two configurations of Video-LLMs: 1) sparse top- $K$ KV cache: utilizing only the top- $K$ KV caches based on the highest attention weights; and 2) dense full KV cache: employing the complete set of KV caches. We conduct experiments using the Qwen2-VL-7B (Wang et al., 2024) model on randomly selected samples from the MLVU (Zhou et al., 2024) and Video-MME (Fu et al., 2024) datasets. We evaluate the next-token prediction accuracy of the model when employing sparse attention with top- $K$ KV caches. Our findings indicate that the model with sparse attention maintains an average token prediction accuracy exceeding $95\%$ . This high accuracy suggests that for the majority of decoded tokens, only the top- $K$ KV caches are necessary. However, it is important to note that the $95\%$ accuracy is measured per individual token and does not accumulate across multiple tokens. For instance, the accuracy of correctly predicting five consecutive tokens drops to approximately $(95\%)^5 \approx 77\%$ .
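
A quick toy calculation can illustrate why such sparsity makes top- $K$ attention a viable drafter. The numpy sketch below is not from the paper; the dimensions and the artificial score concentration are assumptions chosen so that attention mass falls on a few keys, as the observation describes.

```python
# Toy illustration (not the authors' code): when attention mass concentrates on
# a few keys, restricting attention to the top-K KV entries barely changes the
# attention output.
import numpy as np

rng = np.random.default_rng(0)
d, n_ctx, K = 64, 4096, 256            # head dim, cached tokens, retained entries

q = rng.normal(size=d)
keys = rng.normal(size=(n_ctx, d))
values = rng.normal(size=(n_ctx, d))
keys[:32] += 3.0 * q                   # synthetic "critical tokens" aligned with q

def attend(query, ks, vs):
    """Single-head scaled dot-product attention output."""
    scores = ks @ query / np.sqrt(d)
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ vs

full = attend(q, keys, values)
top_idx = np.argsort(keys @ q / np.sqrt(d))[-K:]   # keep only the top-K entries
sparse = attend(q, keys[top_idx], values[top_idx])

cos = full @ sparse / (np.linalg.norm(full) * np.linalg.norm(sparse))
print(f"cosine similarity, full vs. top-{K} attention output: {cos:.4f}")
```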
30
+
31
+ # 3 Method
32
+
33
+ In this section, we present Sparse-to-Dense (STD), a method designed to achieve lossless acceleration for Video-LLMs. We refer to the original model $\mathcal{M}$ as the dense model, as it requires the full KV cache during decoding, while the sparse model $\mathcal{M}_s$ uses sparse attention. Although $\mathcal{M}_s$ is faster, it is somewhat less accurate. Unlike traditional speculative decoding, which relies on an additional draft model, our approach leverages $\mathcal{M}_s$ with the same parameters as $\mathcal{M}$ . The only difference is that $\mathcal{M}_s$ loads a reduced KV cache to perform sparse attention, eliminating the need for extra GPU memory to store another model's parameters. In the following subsections, we will detail the decoding procedure and the design of the sparse model.
34
+
35
+ # 3.1 Decoding Procedures
36
+
37
+ In our STD, the sparse model $\mathcal{M}_s$ functions as a draft model to propose the next $\gamma$ candidate tokens, while the dense model $\mathcal{M}$ verifies them to derive the final output sequence. Given an input sequence $\{x_0,\dots ,x_{m - 1}\}$ , consisting of visual and textual tokens, the sparse model $\mathcal{M}_s$ auto-regressively generates $\gamma$ subsequent token candidates $\{x_m,\dots ,x_{m + \gamma -1}\}$ . Because the tokens proposed by the sparse model $\mathcal{M}_s$ might not align with those predicted by the dense model $\mathcal{M}$ , they require verification by $\mathcal{M}$ . The dense model $\mathcal{M}$ verifies all $\gamma$ proposed tokens in parallel, requiring only a single I/O operation for the full KV cache. Thus, this verification procedure accelerates the process compared with the auto-regressive decoding of $\mathcal{M}$ itself, where each token requires a separate I/O operation. During verification, $\mathcal{M}$ identifies the first $n$ tokens that align with its predictions, where $0\leq n\leq \gamma$ , and additionally provides a bonus token $\hat{x}_{n + m}$ for free. The verified sequence $\{x_{m},\dots ,x_{m + n - 1},\hat{x}_{n + m}\}$ is then appended to the input sequence $\{x_0,\dots ,x_{m - 1}\}$ to form the context for the next round of proposal and verification.
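
A minimal sketch of one proposal-and-verification round is given below, assuming greedy decoding for both models. The `sparse_next_token` and `dense_verify` callables are hypothetical stand-ins for a forward pass with top- $K$ attention and with the full KV cache; they are not part of any released STD implementation.

```python
from typing import Callable, List

def std_decode_step(
    prefix: List[int],
    gamma: int,
    sparse_next_token: Callable[[List[int]], int],
    dense_verify: Callable[[List[int], List[int]], List[int]],
) -> List[int]:
    """One round of drafting (sparse model) and parallel verification (dense model)."""
    # 1) The sparse model drafts gamma tokens auto-regressively.
    draft: List[int] = []
    ctx = list(prefix)
    for _ in range(gamma):
        tok = sparse_next_token(ctx)
        draft.append(tok)
        ctx.append(tok)

    # 2) The dense model scores all drafted positions in a single parallel pass:
    #    dense_verify(prefix, draft)[i] is the dense model's greedy token given
    #    prefix + draft[:i], so it has gamma + 1 entries; the last entry is the
    #    free "bonus" token used when every draft token matches.
    dense_out = dense_verify(prefix, draft)

    # 3) Accept the longest matching prefix of the draft, then append the dense
    #    model's token at the first mismatch (or the bonus token). The output is
    #    identical to decoding with the dense model alone.
    n = 0
    while n < gamma and draft[n] == dense_out[n]:
        n += 1
    return prefix + draft[:n] + [dense_out[n]]
```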
38
+
39
+ # 3.2 Model with Sparse Attention
40
+
41
+ Next, we introduce the design of our sparse model $\mathcal{M}_s$ . Empirical observations in Section 2 indicate
42
+
43
+ that during most decoding steps, attention scores are predominantly concentrated on a small subset of KV caches, a pattern we term sparse attention (also known as top- $K$ attention (Lou et al., 2024)). Only a small fraction of tokens require more evenly distributed dense attention. This insight motivates a strategy to selectively apply sparse attention for the majority of tokens and resort to dense attention only when necessary, reducing the I/O overhead of accessing the full KV cache, and thereby improving decoding speed.
44
+
45
+ Since the number of visual tokens is typically much larger than the number of textual tokens $(m_v \gg m_t)$ , with $m_v$ often exceeding 10,000 while $m_t$ is usually around 100, our primary focus is on reducing the size of the visual KV cache. To achieve this, we leverage the attention patterns of the textual tokens $X_t$ to identify and select the most relevant KV caches from the visual tokens. Specifically, we analyze the allocation of attention scores when processing the textual tokens $X_t = \{x_{m_v}, \dots, x_{m-1}\}$ (i.e., the last $m_t$ tokens in the input sequence) to identify which KV pairs of the visual tokens $X_v$ contribute more during the prefilling stage. For each layer $l$ , we calculate the average attention scores directed toward the visual tokens $X_v$ for textual tokens $X_t$ . We then retain only the top- $K$ KV pairs of visual tokens with the highest attention scores. To balance performance and efficiency, we determine the retained $K$ KV caches only during the prefilling stage and avoid computationally demanding dynamic selection in the decoding stage. The selected visual tokens can vary across different layers and attention heads, reflecting the distinct focus of each layer and head in processing the input. The selection of the KV cache of layer $l$ can be formalized as
46
+
47
+ $$
48
+ \mathrm{Cache}_s[l] = \operatorname*{argTopK}_{x \in X_v} \left( \frac{1}{m_t} \sum_{\hat{x} \in X_t} A_l(\hat{x}, x) \right),
49
+ $$
50
+
51
+ where $\operatorname{argTopK}(\cdot)$ is an operation that selects the indices of the $K$ elements with the highest values from a given set, $K$ is a predefined hyper-parameter, and $A_{l}(\hat{x},x)$ represents the attention score from token $\hat{x}$ to token $x$ in layer $l$ . For models utilizing Grouped Query Attention (GQA) (Ainslie et al., 2023), where the number of query heads equals the number of groups multiplied by the number of KV heads, we directly sum the attention scores within each group to select the top- $K$ KV caches for each KV head. The KV cache selection operates at the granularity of individual KV heads, allowing
52
+
53
+ each layer or head to retain a distinct subset of caches based on its specific requirements.
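
The selection rule can be sketched as follows. The tensor layout and helper name are assumptions rather than the authors' implementation, but the computation mirrors the equation above: average the prefill attention of the textual queries over the visual keys, sum within each GQA group, and keep the top- $K$ indices per KV head.

```python
import torch

def select_visual_kv(attn: torch.Tensor, m_v: int, m_t: int, K: int) -> torch.Tensor:
    """
    attn: prefill attention scores for one layer, shape
          [num_kv_heads, groups, m_t, m_v + m_t] (queries = last m_t textual tokens).
    Returns indices of the K visual KV entries to retain per KV head: [num_kv_heads, K].
    """
    # Scores from textual queries onto visual keys only (visual tokens come first).
    text_to_visual = attn[..., :m_v]                      # [H_kv, G, m_t, m_v]
    # Average over textual queries (the 1/m_t sum term) and, for GQA, sum the
    # scores of all query heads in the same group.
    head_scores = text_to_visual.mean(dim=2).sum(dim=1)   # [H_kv, m_v]
    # argTopK: keep the K visual tokens with the highest averaged scores.
    return head_scores.topk(K, dim=-1).indices            # [H_kv, K]

# Example usage with dummy shapes.
H_kv, G, m_v, m_t, K = 4, 7, 1000, 24, 256
attn = torch.rand(H_kv, G, m_t, m_v + m_t).softmax(dim=-1)
cache_s = select_visual_kv(attn, m_v, m_t, K)
print(cache_s.shape)  # torch.Size([4, 256])
```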
54
+
55
+ # 3.3 I/O Complexity Analysis
56
+
57
+ In the decoding phase, the I/O complexity of our Sparse-to-Dense decoding method can be analyzed as follows. For the sparse model $\mathcal{M}_s$ , which speculatively proposes $\gamma$ subsequent tokens, the I/O cost involves accessing the selected $K$ visual KV caches and all $m_t$ textual KV caches. Thus, the total I/O for the sparse model is given by: $\mathrm{I} / \mathrm{O}_{\mathrm{sparse}} = \gamma \times (K + m_t)$ . For the dense model $\mathcal{M}$ , which verifies the proposed tokens in parallel, the I/O cost includes accessing the full KV caches of all visual and textual tokens, resulting in: $\mathrm{I} / \mathrm{O}_{\mathrm{dense}} = m_v + m_t$ . The total I/O for Sparse-to-Dense decoding is therefore: $\mathrm{I} / \mathrm{O}_{\mathrm{total}} = \gamma \times (K + m_t) + (m_v + m_t)$ , and the average I/O per token is
58
+
59
+ $$
60
+ \mathrm{I/O}_{\mathrm{average}} = \frac{\mathrm{I/O}_{\mathrm{total}}}{\alpha \times \gamma} = \frac{\gamma \times (K + m_t) + m_v + m_t}{\alpha \times \gamma},
61
+ $$
62
+
63
+ where $\alpha$ is the ratio of accepted tokens to all proposed tokens. In contrast, the average I/O complexity of vanilla decoding, where each token is generated using full attention, is given by: $\mathrm{I} / \mathrm{O}_{\text{average}}^{\text{vanilla}} = m_v + m_t$ . When $\alpha$ is sufficiently large, i.e., $\alpha > (K + m_t) / (m_v + m_t) + \gamma^{-1}$ , the average I/O per token in our method becomes considerably lower, resulting in improved decoding efficiency. Intuitively, we hope that the ratio of accepted tokens to all proposed tokens is larger than the ratio of retained KV pairs to the full KV cache size. This can be achieved due to the concentration behavior of attention scores in Section 2. The empirical superiority of our method in the next section verifies this inequality in realistic settings.
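
As a quick sanity check of this expression, the snippet below plugs in illustrative values (the numbers are assumptions, chosen so that $K + m_t = 1024$ as in the experimental setup) and compares the resulting average I/O with vanilla decoding.

```python
# Back-of-the-envelope check of the average-I/O expression with example values.
m_v, m_t = 20_000, 100        # visual / textual KV entries
K, gamma, alpha = 924, 9, 0.6 # retained visual KV entries, draft length, acceptance ratio

io_sparse = gamma * (K + m_t)                   # drafting with top-K attention
io_dense = m_v + m_t                            # one parallel verification pass
io_avg = (io_sparse + io_dense) / (alpha * gamma)
io_vanilla = m_v + m_t                          # full KV cache per decoded token

threshold = (K + m_t) / (m_v + m_t) + 1.0 / gamma
print(f"average I/O per token: STD {io_avg:,.0f} vs vanilla {io_vanilla:,.0f}")
print(f"speedup condition: alpha={alpha} > {threshold:.3f} -> {alpha > threshold}")
```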
64
+
65
+ # 4 Experiment
66
+
67
+ Baselines. To evaluate the effectiveness of our proposed Sparse-to-Dense decoding, we compare it against the following baselines: 1) LayerSkip (Elhoushi et al., 2024): This method utilizes a model with a layer-level early exit mechanism to propose draft tokens. This baseline is inspired by the work of Elhoushi et al. on text-only LLMs and originally requires additional training. For a fair comparison with our method, we adapt it to Video-LLMs in a tuning-free manner. 2) Streaming (Chen et al., 2024a): This method employs a model with streaming attention (Xiao et al., 2023) to propose
68
+
69
+ <table><tr><td rowspan="2">Methods</td><td colspan="2">MLVU</td><td colspan="2">VideoMME-s</td><td colspan="2">VideoMME-m</td><td colspan="2">VideoMME-l</td></tr><tr><td>Acc. (%)</td><td>Speedup</td><td>Acc. (%)</td><td>Speedup</td><td>Acc. (%)</td><td>Speedup</td><td>Acc. (%)</td><td>Speedup</td></tr><tr><td colspan="9">LLaVA-OneVision-7B</td></tr><tr><td>LayerSkip</td><td>10.0</td><td>0.47×</td><td>5.6</td><td>0.33×</td><td>8.1</td><td>0.46×</td><td>4.8</td><td>0.44×</td></tr><tr><td>Streaming</td><td>34.7</td><td>1.34×</td><td>36.4</td><td>1.38×</td><td>41.0</td><td>1.51×</td><td>36.2</td><td>1.45×</td></tr><tr><td>STD (ours)</td><td>47.8</td><td>1.72×</td><td>51.8</td><td>1.82×</td><td>52.1</td><td>1.83×</td><td>52.9</td><td>1.59×</td></tr><tr><td colspan="9">Qwen2-VL-7B-Instruct</td></tr><tr><td>LayerSkip</td><td>5.2</td><td>0.63×</td><td>3.7</td><td>0.59×</td><td>4.9</td><td>0.55×</td><td>5.7</td><td>0.55×</td></tr><tr><td>Streaming</td><td>53.9</td><td>1.61×</td><td>52.9</td><td>1.32×</td><td>59.2</td><td>1.36×</td><td>59.6</td><td>1.36×</td></tr><tr><td>STD (ours)</td><td>66.1</td><td>1.94×</td><td>71.8</td><td>1.71×</td><td>73.4</td><td>1.62×</td><td>81.8</td><td>1.70×</td></tr></table>
70
+
71
+ Table 1: Comparisons of the acceptance rate (Acc.) and wall time speedup of STD and previous draft models. Bold denotes the best method. Since all the methods are lossless, we do not report the evaluation of the generated contents.
72
+
73
+ draft tokens. Similar to LayerSkip, this baseline is derived from the work of Chen et al. on text-only LLMs. To ensure comparability with our approach, we extend its implementation to Video-LLMs.
74
+
75
+ Datasets and evaluation metrics. We evaluate Sparse-to-Dense on two widely adopted benchmarks: MLVU (Zhou et al., 2024) and VideoMME (Fu et al., 2024). MLVU is specifically designed for long-duration videos, while VideoMME encompasses short, medium, and long-duration videos, providing a comprehensive assessment across various video lengths. For our evaluation, we adhere to the protocols established in previous works on speculative decoding. We report two primary metrics: acceptance rate of the draft tokens and wall time speedup.
76
+
77
+ Implementation Details. Our experiments are conducted using widely adopted state-of-the-art Video-LLMs, specifically LLaVA-OneVision (7B) (Li et al., 2024a) and Qwen2-VL (7B) (Wang et al., 2024). We prompt the Video-LLMs to generate chain-of-thought (Wei et al., 2022) responses to enhance their performance. We set the sum of the textual token count $m_t$ and the selected visual KV cache count $K$ to 1024, with a batch size of 8. The number of tokens verified by the dense model $\mathcal{M}$ is fixed at $\gamma = 9$ . The ablation of hyperparameters can be found in Appendix Section C. Our framework is implemented based on Hugging Face's Transformers library. All experiments are conducted on NVIDIA A100 GPUs with 80 GB of memory, are repeated three times with different random seeds, and the average results are reported.
78
+
79
+ Main Results Table 1 summarizes the performance across various reasoning tasks. We have the following findings: 1) The draft model based on LayerSkip performs worse than that utilizing sparse attention (e.g., Streaming and STD). The primary
80
+
81
+ reason for this discrepancy is that LayerSkip causes a substantial distributional shift between the draft model and the target model, leading to a low acceptance rate. Although the draft model with layer skipping runs considerably faster than the sparse attention counterparts, this advantage is insufficient to compensate for the overall wall-time speedup loss introduced by layer skipping. 2) Draft models based on sparse attention generally provide a larger wall-time speedup. Whether in STD or Streaming, we observe a consistently high acceptance rate. This indicates that, most of the time, the target model does not require the full KV cache but only a sparsely selected subset of the cache. However, it is important to note that since LLMs perform autoregressive decoding, an incorrect token can propagate errors to subsequent tokens. Thus, verification with the full KV cache is essential. 3) Our model outperforms the streaming-based draft model, achieving an average acceptance rate of $62.2\%$ and an average wall-time speedup of $1.74 \times$ . This advantage stems from our method's ability to leverage the unique characteristics of Video-LLMs to select important KV caches. As observed in Section 2, text-guided video cache selection effectively identifies and retains the most critical cache elements.
82
+
83
+ # 5 Conclusion
84
+
85
+ We introduce STD, a training-free, plug-and-play decoding method that employs sparse top- $K$ attention as the draft model in speculative decoding while leveraging full attention for verification in parallel, ensuring lossless acceleration. Extensive experiments demonstrate that STD significantly outperforms strong baselines that use LayerSkip and Streaming as the draft models. Overall, STD achieves up to a $1.94 \times$ wall-time speedup while maintaining identical output quality. In the future, we hope to extend our work to accelerate long
86
+
87
+ CoT Video-LLMs such as QvQ (QwenLM Team, 2024).
88
+
89
+ # Limitation
90
+
91
+ A notable limitation of our current approach is that all KV caches are still stored in GPU memory (i.e., HBM). While HBM provides the high bandwidth necessary for fast computations, its capacity is inherently limited, which poses a significant bottleneck during inference—especially as model sizes and sequence lengths increase. The limited HBM capacity may lead to restricted batch size.
92
+
93
+ In the future, a promising solution to this challenge is to offload portions of the KV caches to CPU memory. Although CPU memory typically has lower bandwidth compared to HBM, it offers substantially larger capacity. By developing efficient data transfer and caching strategies, it may be possible to mitigate the HBM bottleneck without sacrificing inference accuracy, thereby enabling more scalable and efficient processing for large Video-LLMs.
94
+
95
+ # References
96
+
97
+ Joshua Ainslie, James Lee-Thorp, Michiel de Jong, Yury Zemlyanskiy, Federico Lebrón, and Sumit Sanghai. 2023. Gqa: Training generalized multi-query transformer models from multi-head checkpoints. arXiv preprint arXiv:2305.13245.
98
+ Tianle Cai, Yuhong Li, Zhengyang Geng, Hongwu Peng, Jason D Lee, Deming Chen, and Tri Dao. 2024. Medusa: Simple llm inference acceleration framework with multiple decoding heads. arXiv preprint arXiv:2401.10774.
99
+ Jianjian Cao, Peng Ye, Shengze Li, Chong Yu, Yansong Tang, Jiwen Lu, and Tao Chen. 2024. Madtp: Multi-modal alignment-guided dynamic token pruning for accelerating vision-language transformer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15710-15719.
100
+ Charlie Chen, Sebastian Borgeaud, Geoffrey Irving, Jean-Baptiste Lespiau, Laurent Sifre, and John Jumper. 2023. Accelerating large language model decoding with speculative sampling. arXiv preprint arXiv:2302.01318.
101
+ Jian Chen, Vashisth Tiwari, Ranajoy Sadhukhan, Zhuoming Chen, Jinyuan Shi, Ian En-Hsu Yen, and Beidi Chen. 2024a. Magicdec: Breaking the latency-throughput tradeoff for long context generation with speculative decoding. arXiv preprint arXiv:2408.11049.
102
+ Liang Chen, Haozhe Zhao, Tianyu Liu, Shuai Bai, Junyang Lin, Chang Zhou, and Baobao Chang. 2024b.
103
+
104
+ An image is worth 1/2 tokens after layer 2: Plug-and-play inference acceleration for large vision-language models. arXiv preprint arXiv:2403.06764.
105
+ Cunxiao Du, Jing Jiang, Xu Yuanchen, Jiawei Wu, Sicheng Yu, Yongqi Li, Shenggui Li, Kai Xu, Liqiang Nie, Zhaopeng Tu, et al. 2024a. Glide with a cape: A low-hassle method to accelerate speculative decoding. arXiv preprint arXiv:2402.02082.
106
+ Cunxiao Du, Hao Zhou, Zhaopeng Tu, and Jing Jiang. 2024b. Revisiting the markov property for machine translation. arXiv preprint arXiv:2402.02084.
107
+ Mostafa Elhoushi, Akshit Shrivastava, Diana Liskovich, Basil Hosmer, Bram Wasti, Liangzhen Lai, Anas Mahmoud, Bilge Acun, Saurabh Agarwal, Ahmed Roman, et al. 2024. Layer skip: Enabling early exit inference and self-speculative decoding. arXiv preprint arXiv:2404.16710.
108
+ Chaoyou Fu, Yuhan Dai, Yongdong Luo, Lei Li, Shuhuai Ren, Renrui Zhang, Zihan Wang, Chenyu Zhou, Yunhang Shen, Mengdan Zhang, et al. 2024. Video-mme: The first-ever comprehensive evaluation benchmark of multi-modal llms in video analysis. arXiv preprint arXiv:2405.21075.
109
+ Mukul Gagrani, Raghavv Goel, Wonseok Jeon, Junyoung Park, Mingu Lee, and Christopher Lott. 2024. On speculative decoding for multimodal large language models. arXiv preprint arXiv:2404.08856.
110
+ Coleman Hooper, Sehoon Kim, Hiva Mohammadzadeh, Michael W Mahoney, Yakun Sophia Shao, Kurt Keutzer, and Amir Gholami. 2024. Kvquant: Towards 10 million context length llm inference with kv cache quantization. arXiv preprint arXiv:2401.18079.
111
+ Yunlong Hou, Fengzhuo Zhang, Cunxiao Du, Xuan Zhang, Jiachun Pan, Tianyu Pang, Chao Du, Vincent YF Tan, and Zhuoran Yang. 2025. Banditspec: Adaptive speculative decoding via bandit algorithms. arXiv preprint arXiv:2505.15141.
112
+ Zhengmian Hu and Heng Huang. Accelerated speculative sampling based on tree monte carlo. In *Forty-first International Conference on Machine Learning*.
113
+ Doohyuk Jang, Sihwan Park, June Yong Yang, Yeonsung Jung, Jihun Yun, Souvik Kundu, Sung-Yub Kim, and Eunho Yang. 2024. Lantern: Accelerating visual autoregressive models with relaxed speculative decoding. arXiv preprint arXiv:2410.03355.
114
+ Xiaohan Lan, Yitian Yuan, Zequn Jie, and Lin Ma. 2024. Vidcompress: Memory-enhanced temporal compression for video understanding in large language models. arXiv preprint arXiv:2410.11417.
115
+ Yaniv Leviathan, Matan Kalman, and Yossi Matias. 2023. Fast inference from transformers via speculative decoding. In International Conference on Machine Learning, pages 19274-19286. PMLR.
116
+
117
+ Bo Li, Yuanhan Zhang, Dong Guo, Renrui Zhang, Feng Li, Hao Zhang, Kaichen Zhang, Peiyuan Zhang, Yanwei Li, Ziwei Liu, et al. 2024a. Llava-onevision: Easy visual task transfer. arXiv preprint arXiv:2408.03326.
118
+ Yuhui Li, Fangyun Wei, Chao Zhang, and Hongyang Zhang. 2024b. Eagle: Speculative sampling requires rethinking feature uncertainty. arXiv preprint arXiv:2401.15077.
119
+ Ji Lin, Hongxu Yin, Wei Ping, Pavlo Molchanov, Mohammad Shoeybi, and Song Han. 2024a. Vila: On pre-training for visual language models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 26689-26699.
120
+ Zhihang Lin, Mingbao Lin, Luxi Lin, and Rongrong Ji. 2024b. Boosting multimodal large language models with visual tokens withdrawal for rapid inference. arXiv preprint arXiv:2405.05803.
121
+ Xiaoxuan Liu, Lanxiang Hu, Peter Bailis, Alvin Cheung, Zhijie Deng, Ion Stoica, and Hao Zhang. 2023. Online speculative decoding. arXiv preprint arXiv:2310.07177.
122
+ Zirui Liu, Jiayi Yuan, Hongye Jin, Shaochen Zhong, Zhaozhuo Xu, Vladimir Braverman, Beidi Chen, and Xia Hu. 2024. Kivi: A tuning-free asymmetric 2bit quantization for kv cache. arXiv preprint arXiv:2402.02750.
123
+ Chao Lou, Zixia Jia, Zilong Zheng, and Kewei Tu. 2024. Sparser is faster and less is more: Efficient sparse attention for long-range transformers. arXiv preprint arXiv:2406.16747.
124
+ Xupeng Miao, Gabriele Oliaro, Zhihao Zhang, Xinhao Cheng, Zeyu Wang, Zhengxin Zhang, Rae Ying Yee Wong, Alan Zhu, Lijie Yang, Xiaoxiang Shi, et al. 2024. Specinfer: Accelerating large language model serving with tree-based speculative inference and verification. In Proceedings of the 29th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 3, pages 932-949.
125
+ QwenLM Team. 2024. Qvq-72b preview. https://qwenlm.github.io/blog/qvq-72b-preview/.
126
+ Xiaoqian Shen, Yunyang Xiong, Changsheng Zhao, Lemeng Wu, Jun Chen, Chenchen Zhu, Zechun Liu, Fanyi Xiao, Balakrishnan Varadarajan, Florian Bordes, et al. 2024. Longvu: Spatiotemporal adaptive compression for long video-language understanding. arXiv preprint arXiv:2410.17434.
127
+ Dingjie Song, Wenjun Wang, Shunian Chen, Xidong Wang, Michael Guan, and Benyou Wang. 2024. Less is more: A simple yet effective token reduction method for efficient multi-modal llms. arXiv preprint arXiv:2409.10994.
128
+
129
+ Zunhai Su, Wang Shen, Linge Li, Zhe Chen, Hanyu Wei, Huangqi Yu, and Kehong Yuan. 2025. Akvq-vl: Attention-aware kv cache adaptive 2-bit quantization for vision-language models. arXiv preprint arXiv:2501.15021.
130
+ Ziteng Sun, Jae Hun Ro, Ahmad Beirami, and Ananda Theertha Suresh. 2024a. Optimal block-level draft verification for accelerating speculative decoding. arXiv preprint arXiv:2403.10444.
131
+ Ziteng Sun, Ananda Theertha Suresh, Jae Hun Ro, Ahmad Beirami, Himanshu Jain, and Felix Yu. 2024b. Spectr: Fast speculative decoding via optimal transport. Advances in Neural Information Processing Systems, 36.
132
+ Ruslan Svirschevski, Avner May, Zhuoming Chen, Beidi Chen, Zhihao Jia, and Max Ryabinin. 2024. Specexec: Massively parallel speculative decoding for interactive llm inference on consumer devices. arXiv preprint arXiv:2406.02532.
133
+ Yao Teng, Han Shi, Xian Liu, Xuefei Ning, Guohao Dai, Yu Wang, Zhenguo Li, and Xihui Liu. 2024. Accelerating auto-regressive text-to-image generation with training-free speculative jacobi decoding. arXiv preprint arXiv:2410.01699.
134
+ Peng Wang, Shuai Bai, Sinan Tan, Shijie Wang, Zhihao Fan, Jinze Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, et al. 2024. Qwen2-vl: Enhancing vision-language model's perception of the world at any resolution. arXiv preprint arXiv:2409.12191.
135
+ Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems, 35:24824-24837.
136
+ Yuxin Wen, Qingqing Cao, Qichen Fu, Sachin Mehta, and Mahyar Najibi. 2024. Efficient vision-language models by summarizing visual tokens into compact registers. arXiv preprint arXiv:2410.14072.
137
+ Guangxuan Xiao, Yuandong Tian, Beidi Chen, Song Han, and Mike Lewis. 2023. Efficient streaming language models with attention sinks. arXiv preprint arXiv:2309.17453.
138
+ Linli Yao, Lei Li, Shuhuai Ren, Lean Wang, Yuanxin Liu, Xu Sun, and Lu Hou. 2024. Deco: Decoupling token compression from semantic abstraction in multimodal large language models. arXiv preprint arXiv:2405.20985.
139
+ Boqiang Zhang, Kehan Li, Zesen Cheng, Zhiqiang Hu, Yuqian Yuan, Guanzheng Chen, Sicong Leng, Yuming Jiang, Hang Zhang, Xin Li, et al. 2025a. Videollama 3: Frontier multimodal foundation models for image and video understanding. arXiv preprint arXiv:2501.13106.
140
+
141
+ Xuan Zhang, Fengzhuo Zhang, Cunxiao Du, Chao Du, Tianyu Pang, Wei Gao, and Min Lin. 2025b. Lighttransfer: Your long-context llm is secretly a hybrid model with effortless adaptation. In Workshop on Reasoning and Planning for Large Language Models.
142
+ Junjie Zhou, Yan Shu, Bo Zhao, Boya Wu, Shitao Xiao, Xi Yang, Yongping Xiong, Bo Zhang, Tiejun Huang, and Zheng Liu. 2024. Mlvu: A comprehensive benchmark for multi-task long video understanding. arXiv preprint arXiv:2406.04264.
143
+
144
+ # A Preliminary
145
+
146
+ Speculative Decoding We first formalize our notation and provide a brief overview of the speculative decoding in autoregressive LLMs, which is the key background knowledge for our method. We represent the input sequence for a Video-LLM as a combination of visual tokens and textual tokens. Specifically, the visual tokens are denoted as $X_{v} = \{x_{0},\dots ,x_{m_{v} - 1}\}$ , and the textual prompt is denoted as $X_{t} = \{x_{m_{v}},\dots ,x_{m - 1}\}$ . Here, $m_{v}$ is the number of visual tokens, $m_{t}$ is the number of textual tokens, and the total input sequence length is $m = m_{v} + m_{t}$ . The key and value cache for token $x_{i}$ are represented by $K_{x_i}$ and $V_{x_i}$ , respectively.
147
+
148
+ Inference of Auto-regressive Models. The inference of auto-regressive models, e.g., Video-LLMs, can be divided into two stages: 1) prefilling: The Video-LLM processes the input sequence, which includes both visual tokens $X_{v}$ and textual tokens $X_{t}$ , in parallel. For each token $x_{i}$ in the combined input $\{X_{v}, X_{t}\}$ , the model computes and stores the corresponding KV cache entries. This stage effectively encodes the input sequence and prepares the model for generating a response. The output of this stage is the first token $x_{m}$ of the model's response. 2) decoding: After prefilling, the model enters the decoding phase, generating output tokens sequentially. At each decoding step $j = m + 1, m + 2, \dots$ , the Video-LLM generates a new token $x_{j}$ based on the KV cache from all prior tokens. The KV cache is then updated with the newly generated token. This process continues iteratively until a stopping criterion is met, such as reaching an end-of-sequence token or hitting a maximum token limit.
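
The two stages can be summarized with the following sketch, which assumes a hypothetical `model(tokens, kv_cache)` callable returning per-position logits and an updated cache; it is an abstraction of the procedure above, not a specific library API.

```python
def generate(model, prompt_tokens, eos_id, max_new_tokens=128):
    # 1) Prefilling: process all visual + textual input tokens in parallel,
    #    building the KV cache and producing the first response token.
    logits, kv_cache = model(prompt_tokens, kv_cache=None)
    next_tok = int(logits[-1].argmax())
    output = [next_tok]

    # 2) Decoding: generate tokens one at a time; each step reads the full KV
    #    cache and appends the new token's key/value entries to it.
    while next_tok != eos_id and len(output) < max_new_tokens:
        logits, kv_cache = model([next_tok], kv_cache=kv_cache)
        next_tok = int(logits[-1].argmax())
        output.append(next_tok)
    return output
```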
149
+
150
+ # B Related Works
151
+
152
+ Sparse Attention in MLLMs Normally, an image or a video frame is represented as a large number of tokens in MLLMs, e.g., 196 visual tokens per image in VILA (Lin et al., 2024a), which significantly increases computational and storage costs during model training and inference. Visual token compression aims to reduce the number of visual tokens to address this issue directly. The majority of visual token compression methods either train from scratch or perform additional training based on existing models. For example, some image-based
153
+
154
+ MLLMs rely on vision-language alignment (Cao et al., 2024; Yao et al., 2024; Song et al., 2024) or aggressively removing all visual tokens after a certain layer (Wen et al., 2024), while methods designed for video-based MLLMs consider the unique characteristics of video, such as employing memory mechanisms (Lan et al., 2024) or compressing tokens along spatial and temporal dimensions sequentially (Shen et al., 2024). A smaller portion of works study the test-time (training free) visual token compression for accelerating the inference procedure. FastV (Chen et al., 2024b) performs pruning by analyzing the attention pattern from shallow layers and deep layers, while another approach directly applies full visual token removal during the inference stage (Lin et al., 2024b). In our method, STD, the design of the drafter model is related to training-free visual token compression techniques. However, these previous methods inevitably impact the original model's performance. In contrast, we propose to utilize visual token compression as a drafter model to achieve lossless inference acceleration.
155
+
156
+ Speculative Decoding Speculative decoding was proposed by Leviathan et al. (2023) and Chen et al. (2023) to accelerate the inference of LLMs, improving their throughput $2 \sim 3$ times without sacrificing performance. The algorithm consists of two stages: drafting and verification. The drafting stage adopts a small model (drafter) to generate a long sequence of possible future tokens swiftly, while the verification stage accepts a part of the tokens predicted in the drafting stage in a token-by-token manner. Follow-up works improve speculative decoding along these two dimensions. Specinfer (Miao et al., 2024), Eagle (Li et al., 2024b) and Medusa (Cai et al., 2024) propose to train a drafter to generate tokens with a tree structure, and the verification is conducted on the tree in a branch-by-branch manner. Hu and Huang (Hu and Huang) also organize the draft tokens as a tree, but they verify the tokens in a branch as a whole. Glide (Du et al., 2024a) generates draft tokens as an unbalanced tree, which alleviates the burden of the drafter while achieving significant acceleration. SpecTr (Sun et al., 2024b) views speculative decoding from an optimal transport perspective and proposes to verify a batch of draft tokens jointly. They show that the proposed algorithm is optimal up to a multiplicative factor. Sun et al. (2024a) boost the acceleration by a joint verification
157
+
158
+ ![](images/5a8164f2807337019a97d23e5a1f71f263109e3c892c0f85418bd82c388ecc08.jpg)
159
+ (a)
160
+
161
+ ![](images/d49bec2289750cf71bf541dbdf09d3b91730be1d35a480f9e34c227f2dfc1476.jpg)
162
+ (b)
163
+ Figure 1: Effect of $K$ and $\gamma$ on MLVU using LLaVA-OneVision-7B.
164
+
165
+ of a single draft trajectory. Instead of verifying token by token, they accept the draft sentences as a whole. Liu et al. (2023) propose to update the parameters of drafters in an online manner, which is shown to be effective in various applications. MagicDec (Chen et al., 2024a) analyzes speculative decoding in the long-context setting with an emphasis on FLOPs and memory. SpecExec (Svirschevski et al., 2024) focuses on a special setting where LLM parameters are offloaded. Several works (Gagrani et al., 2024; Jang et al., 2024; Teng et al., 2024) study speculative decoding for MLLMs. However, they focus either on the image understanding problem or the image generation problem. In contrast, our work is the first to study video understanding acceleration via speculative decoding.
166
+
167
+ # C Ablation Studies
168
+
169
+ We conduct additional experiments to analyze the impact of the hyperparameters ($\gamma$ and $K$) on model performance. As shown in Figure 1a, as $\gamma$ increases, the speedup gradually improves. This improvement is because the sparse model makes accurate predictions, which allows the computational overhead to be spread out over more tokens. However, when $\gamma$ reaches 13, the speedup starts to decline because the model's accuracy in correctly predicting 13 consecutive tokens is insufficient. At the same time, as shown in Figure 1b, when $K$ is small, the acceptance rate is low, resulting in a lower speedup. In contrast, when $K$ is large, the sparse model is not as fast, which also leads to a reduced speedup.
ACL/2025/Sparse-to-Dense_ A Free Lunch for Lossless Acceleration of Video Understanding in LLMs/images.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1296e0e8d1457619b76dc366e54c1174525de688a475b186b30d8698ef866c1c
3
+ size 88121
ACL/2025/Sparse-to-Dense_ A Free Lunch for Lossless Acceleration of Video Understanding in LLMs/layout.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1e81a352f07559bcfd164a39a0669ca32ea387bf9f921f9fe82f7a8a1d3c6c37
3
+ size 283736
ACL/2025/Spurious Correlations and Beyond_ Understanding and Mitigating Shortcut Learning in SDOH Extraction with Large Language Models/304e62aa-ca2e-41ce-9943-374bc2eb771a_content_list.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2d7ece755e298ad1e487bbb42735e08bf041d1b1f5cc0ea5def678bc3b49a103
3
+ size 70409
ACL/2025/Spurious Correlations and Beyond_ Understanding and Mitigating Shortcut Learning in SDOH Extraction with Large Language Models/304e62aa-ca2e-41ce-9943-374bc2eb771a_model.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e18bdfa0b805619f138122d967f67c811804ae73624d4b52dc31c4d17f33804d
3
+ size 88920
ACL/2025/Spurious Correlations and Beyond_ Understanding and Mitigating Shortcut Learning in SDOH Extraction with Large Language Models/304e62aa-ca2e-41ce-9943-374bc2eb771a_origin.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:42c17222b248560461132847be01c46390a8e8d4a6619fec0b0b4c19da0a841e
3
+ size 194533
ACL/2025/Spurious Correlations and Beyond_ Understanding and Mitigating Shortcut Learning in SDOH Extraction with Large Language Models/full.md ADDED
@@ -0,0 +1,313 @@
1
+ # Spurious Correlations and Beyond: Understanding and Mitigating Shortcut Learning in SDOH Extraction with Large Language Models
2
+
3
+ Fardin Ahsan Sakib<sup>1</sup>, Ziwei Zhu<sup>1</sup>, Karen Trister Grace<sup>2</sup>, Meliha Yetisgen<sup>3</sup>, Özlem Uzuner<sup>4</sup>
4
+
5
+ $^{1}$ Department of Computer Science, $^{2}$ School of Nursing, $^{4}$ Department of Information Sciences and Technology
6
+
7
+ George Mason University
8
+
9
+ $^{3}$ Department of Biomedical Informatics & Medical Education, University of Washington
10
+
11
+ {fsakib,zzhu20,kgrace,ouzuner}@gmu.edu, melihay@uw.edu
12
+
13
+ # Abstract
14
+
15
+ Social determinants of health (SDOH) extraction from clinical text is critical for downstream healthcare analytics. Although large language models (LLMs) have shown promise, they may rely on superficial cues leading to spurious predictions. Using the MIMIC portion of the SHAC (Social History Annotation Corpus) dataset and focusing on drug status extraction as a case study, we demonstrate that mentions of alcohol or smoking can falsely induce models to predict current/past drug use where none is present, while also uncovering concerning gender disparities in model performance. We further evaluate mitigation strategies—such as prompt engineering and chain-of-thought reasoning—to reduce these false positives, providing insights into enhancing LLM reliability in health domains.
16
+
17
+ # 1 Introduction
18
+
19
+ SDOH—including substance use, employment, and living conditions—strongly influence patient outcomes and clinical decision-making (Daniel et al., 2018; Himmelstein and Woolhandler, 2018; Armour et al., 2005). Extracting SDOH information from unstructured clinical text is increasingly important for enabling downstream healthcare applications and analysis (Jensen et al., 2012; Demner-Fushman et al., 2009). Although LLMs have shown promise in clinical natural language processing (NLP) tasks (Hu et al., 2024; Liu et al., 2023; Singhal et al., 2023), they often rely on superficial cues (Tang et al., 2023; Zhao et al., 2017), potentially leading to incorrect predictions undermining trust and utility in clinical settings.
20
+
21
+ Recent work has highlighted how LLMs can exhibit "shortcut learning" behaviors (Tu et al., 2020; Ribeiro et al., 2020; Zhao et al., 2018), where they exploit spurious patterns in training data rather than learning causal, generalizable features. This phenomenon spans various NLP tasks, from natural language inference (McCoy et al., 2019) to question-answering (Jia and Liang, 2017), and in clinical domains can lead to incorrect assumptions about patient conditions (Brown et al., 2023; Jabbour et al., 2020), threatening the utility of automated systems.
24
+
25
+ We investigate how LLMs produce spurious correlations in SDOH extraction, using drug status time classification (current, past, or none/unknown) as a case study. Using the MIMIC (Johnson et al., 2016) portion of the SHAC (Lybarger et al., 2021) dataset, we examine zero-shot and in-context learning scenarios across multiple LLMs (Llama (AI, 2024), Qwen (Yang et al., 2024), Llama3-Med42-70B (Christophe et al., 2024)). We explore multiple mitigation strategies to address these spurious correlations: examining the causal role of triggers through controlled removal experiments, implementing targeted prompt engineering approaches like chain-of-thought (CoT) reasoning (Wei et al., 2022), incorporating warning-based prompts, and augmenting with additional examples. While these interventions show promise, significant false positive rates persist, highlighting the deep-rooted nature of these biases and the need for more sophisticated solutions.
26
+
27
+ # Contributions:
28
+
29
+ 1. We present the first comprehensive analysis of spurious correlations in SDOH extraction across multiple LLM architectures, including domain-specialized models. Through extensive experiments in zero-shot and ICL settings, we demonstrate how models rely on superficial cues and verify their causal influence through controlled ablation studies.
30
+
31
+ 2. We uncover systematic gender disparities in model performance, demonstrating another form of spurious correlation where models inappropriately leverage patient gender for drug status time classification predictions.
34
+
35
+ 3. We evaluate multiple prompt-based mitigation strategies (CoT, warnings, more examples) and analyze their limitations, demonstrating that while they reduce incorrect drug status time predictions, more robust solutions are needed for reliable clinical NLP deployments.
36
+
37
+ # 2 Related Work
38
+
39
+ Previous work on extracting SDOH from clinical text spans a progression from rule-based methods to fine-tuned neural models, leveraging annotated corpora for tasks like substance use and employment status extraction (Hatef et al., 2019; Patra et al., 2021; Yu et al., 2022; Han et al., 2022; Uzuner et al., 2008; Stemerman et al., 2021; Lybarger et al., 2023). More recent efforts have explored prompt-based approaches with LLMs, including GPT-4, to reduce reliance on extensive annotations (Ramachandran et al., 2023). While these approaches achieve competitive performance, studies across NLP tasks have shown that both fine-tuned and prompting-based methods often exploit spurious correlations or superficial cues (Ribeiro et al., 2020; Geirhos et al., 2020; Tu et al., 2020). Prior investigations have focused largely on spurious correlations in standard NLP tasks and supervised scenarios (McCoy et al., 2019; Zhao et al., 2018). In contrast, our work examines how these issues manifest in zero-shot and in-context SDOH extraction settings, and we propose prompt-level strategies to mitigate these correlations.
40
+
41
+ # 3 Methodology
42
+
43
+ # 3.1 Dataset and Task
44
+
45
+ We use the MIMIC-III portion of the SHAC dataset (Lybarger et al., 2021), which comprises 4405 deidentified social history note sections derived from MIMIC-III (Johnson et al., 2016) and the University of Washington clinical notes. SHAC is annotated using the BRAT tool (Stenetorp et al., 2012), capturing a variety of SDOH event types (e.g., Alcohol, Drug, Tobacco) as triggers along with associated arguments, including temporal status. To enable demographic analysis, we augmented the SHAC data by linking it with patient demographic information available in the original MIMIC-III dataset.
46
+
47
+ In this work, we examine spurious correlations in SDOH extraction through temporal drug status classification (current, past, or none/unknown). We adopt a two-step pipeline (Ma et al., 2022, 2023):
50
+
51
+ (1) Trigger Identification: Given a social history note, the model identifies spans corresponding to the target event type (e.g., drug use).
52
+ (2) Argument Resolution: For each trigger, models apply a multiple-choice QA prompt to determine the temporal status (current/past/none).
53
+
54
+ The dataset contains diverse patterns of substance documentation; see Appendix B for detailed examples of the task and annotation schema.
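
A rough sketch of this two-step pipeline is shown below. The prompt wording and the `query_llm` helper are illustrative assumptions; the exact templates used in this work are given in Appendix A.

```python
from typing import Callable, List, Tuple

def extract_drug_status(note: str, query_llm: Callable[[str], str]) -> List[Tuple[str, str]]:
    """Two-step SDOH extraction: trigger identification, then argument resolution."""
    # Step 1: trigger identification -- find spans describing drug use.
    trigger_prompt = (
        "List every text span in the social history note below that refers to "
        "drug use. Return one span per line, or 'NONE'.\n\nNote:\n" + note
    )
    spans = [s.strip() for s in query_llm(trigger_prompt).splitlines()
             if s.strip() and s.strip() != "NONE"]

    # Step 2: argument resolution -- multiple-choice temporal status per trigger.
    results = []
    for span in spans:
        status_prompt = (
            "Social history note:\n" + note +
            f"\n\nFor the drug-use mention '{span}', what is its temporal "
            "status? Answer with exactly one of: current, past, none."
        )
        results.append((span, query_llm(status_prompt).strip().lower()))
    return results
```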
55
+
56
+ # 3.2 Experimental Setup
57
+
58
+ Model Configurations We evaluate multiple model configurations:
59
+
60
+ - Zero-Shot: Models receive only task instructions and input text, with no examples
61
+ - In-Context Learning (ICL): Models are provided with three example demonstrations before making predictions on a new instance. Examples are selected to maintain balanced representation across substance use patterns (none/single/multiple) and drug use outcomes (positive/negative).
62
+ - Fine-Tuning (SFT): We also fine-tune a Llama-3.1-8B model on the MIMIC portion of the SHAC dataset to assess whether domain adaptation reduces spurious correlations.
63
+
64
+ We consider Llama-3.1-70B (zero-shot, ICL), Llama-3.1-8B (fine-tuned on MIMIC), Qwen-72B (ICL), and Llama3-Med42-70B (ICL). These models span various parameter sizes and domain specializations. The fine-tuned Llama-8B model provides insights into whether in-domain adaptation mitigates the observed shortcut learning.
65
+
66
+ Prompting Strategies We use two additional prompting strategies (see Appendix A for complete templates):
67
+
68
+ Chain-of-Thought (CoT): This prompt explicitly guides reasoning through five steps: (1) read the social history note carefully, (2) identify relevant information, (3) consider examples provided, (4) explain your reasoning process, (5) provide the answer. This encourages explicit reasoning to reduce shortcuts.
69
+
70
+ Warning-Based: This incorporates explicit guidelines to counter spurious correlations: (1) evaluate each factor independently - never assume one behavior implies another, (2) extract only explicitly stated information - avoid making assumptions based on demographic or other factors, (3) use [none] when information isn't mentioned.
73
+
74
+ Evaluation Framework Our primary evaluation metric is the false positive rate (FPR), defined as: $FPR = FP / (FP + TN)$ where FP represents false positives (predicted current/past use when ground truth was none/unknown) and TN represents true negatives (correctly predicted none/unknown). We prioritize FPR given the clinical risks of incorrect positive drug use predictions—including patient stigmatization, biased provider perceptions, and diminished trust in automated systems (Van Boekel et al., 2013; Dahl et al., 2022). A higher FPR indicates more frequent erroneous predictions that could directly impact patient care. We specifically examine FPR disparities between substance-positive and substance-negative contexts to reveal whether models rely on superficial cues rather than actual evidence. See Appendix C for extended discussion.
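+
+ As a concrete reference, a minimal sketch of this metric is shown below; the mapping of current/past to the positive class and none/unknown to the negative class follows the definition above, while the function itself is ours.
+
+ ```python
+ # Minimal sketch of the FPR metric defined above: current/past predictions on a
+ # ground-truth none/unknown label count as false positives.
+ def false_positive_rate(y_true, y_pred):
+     positive = {"current", "past"}
+     fp = sum(t not in positive and p in positive for t, p in zip(y_true, y_pred))
+     tn = sum(t not in positive and p not in positive for t, p in zip(y_true, y_pred))
+     return fp / (fp + tn) if (fp + tn) else 0.0
+ ```
+
+ Computing this quantity separately for substance-positive and substance-negative notes yields the group-wise comparisons reported in Section 4.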
75
+
76
+ To analyze potential spurious correlations, we categorize notes based on their ground truth substance use status:
77
+
78
+ - Substance-positive: Notes documenting current/past use of the respective substance (alcohol or smoking)
79
+ - Substance-negative: Notes where the ground truth indicates no use or unknown status
80
+
81
+ # Experimental Settings
82
+
83
+ - Original: Evaluate models on the original notes.
84
+ - Without Alcohol/Smoking Triggers: Remove mentions of alcohol/smoking to test their causal role in inducing false positives.
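+
+ The trigger-removal setting can be implemented by deleting the annotated alcohol/smoking spans before re-querying the model. The sketch below assumes gold character offsets in the BRAT style shown in Appendix B; how whitespace around removed spans is handled is an assumption of this sketch.
+
+ ```python
+ # Hypothetical sketch of the "Without Alcohol/Smoking Triggers" perturbation:
+ # drop the annotated alcohol/tobacco spans from the note text before inference.
+ def remove_trigger_spans(note, spans, target_types=("Alcohol", "Tobacco")):
+     # spans: (start, end, event_type) triples taken from the BRAT .ann file
+     to_remove = sorted((s, e) for s, e, t in spans if t in target_types)
+     pieces, cursor = [], 0
+     for start, end in to_remove:
+         pieces.append(note[cursor:start])
+         cursor = end
+     pieces.append(note[cursor:])
+     return "".join(pieces)
+ ```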
85
+
86
+ # 4 Results
87
+
88
+ # 4.1 RQ1: Do Large Language Models Exhibit Spurious Correlations in SDOH Extraction?
89
+
90
+ As shown in Table 1, our analysis in a zero-shot setting with Llama-70B reveals high false positive rates for drug status time classification in alcohol-positive $(66.21\%)$ and smoking-positive $(61.11\%)$ notes. In contrast, alcohol-negative and smoking-negative notes show substantially lower false positive rates ($28.83\%$ and $29.76\%$, respectively).
91
+
92
+ This stark contrast suggests that the mere presence of alcohol or smoking triggers biases the model towards inferring nonexistent drug use. These biases likely stem from the pre-training phase, potentially reinforcing societal assumptions about correlations between different types of substance use.
93
+
94
+ # 4.2 RQ2: Do In-Context Learning and Fine-Tuning Reduce These Spurious Correlations?
95
+
96
+ Providing three in-context examples reduces false positives significantly. For Llama-70B, ICL lowers alcohol-positive mismatches from $66.21\%$ to $48.28\%$ , though a gap remains relative to alcohol-negative notes $(11.71\%)$ . Similarly, smoking-positive mismatches decrease from $61.11\%$ to $36.42\%$ versus $18.05\%$ for smoking-negative. The effectiveness of ICL suggests that explicit examples help the model focus on relevant features, though the persistence of some bias indicates deep-rooted associations from pre-training. Fine-tuning Llama-8B on the MIMIC subset (SFT) yields further improvements: alcohol-positive mismatches drop to $32.41\%$ and smoking-positive to $36.42\%$ , with corresponding negatives at $12\%$ and $7\%$ respectively, indicating that domain adaptation helps override some pre-trained biases.
97
+
98
+ # 4.3 RQ3: Are These Superficial Mentions Causally Driving the Model's Predictions?
99
+
100
+ To confirm the causal role of alcohol and smoking mentions, we remove these triggers from the notes. Across models, this consistently lowers false positives. For instance, Llama-70B zero-shot sees alcohol-positive mismatches fall from $66.21\%$ to $55.17\%$ after removing alcohol triggers. Similarly, Llama-8B-SFT reduces alcohol-positive errors from $32.41\%$ to $26.9\%$ . Similar trends are observed across other architectures including domain-specific models (see appendix E), confirming that alcohol and smoking cues spuriously bias the models' drug-use predictions.
101
+
102
+ # 4.4 RQ4: Are there systematic demographic variations in these spurious correlations?
103
+
104
+ Beyond substance-related triggers, our analysis (Table 2) uncovers another concerning form of spurious correlation: systematic performance differences based on patient gender. Just as models incorrectly rely on mere mentions of alcohol or smoking to infer substance use, they appear to leverage patient gender as an inappropriate predictive signal.
105
+
106
+ Table 1: False Positive Rates (%) Across Different Models and Approaches. *Smoking+Alcohol* refers to cases where both *Smoking-positive* and *Alcohol-positive* are true.
107
+
108
+ <table><tr><td rowspan="2">Cases</td><td colspan="4">Llama-70B</td><td colspan="2">Llama-8B</td><td>Llama3-Med42-70B</td><td>Qwen-72B</td></tr><tr><td>Zero-shot</td><td>ICL</td><td>CoT</td><td>Warning</td><td>Increased-Examples</td><td>Vanilla</td><td>Fine-tuned</td><td>ICL</td></tr><tr><td>Alcohol-positive</td><td>66.21</td><td>48.28</td><td>33.79</td><td>40.69</td><td>45.52</td><td>73.10</td><td>32.41</td><td>66.90</td></tr><tr><td>Smoking-positive</td><td>61.11</td><td>36.42</td><td>25.93</td><td>29.63</td><td>30.25</td><td>74.07</td><td>36.42</td><td>57.41</td></tr><tr><td>Alcohol-negative</td><td>28.83</td><td>11.71</td><td>6.76</td><td>5.41</td><td>10.81</td><td>37.39</td><td>12.16</td><td>16.22</td></tr><tr><td>Smoking-negative</td><td>29.76</td><td>18.05</td><td>10.73</td><td>11.22</td><td>20.00</td><td>33.66</td><td>7.32</td><td>19.51</td></tr><tr><td>Smoking+Alcohol</td><td>73.26</td><td>51.16</td><td>34.88</td><td>45.35</td><td>39.53</td><td>81.40</td><td>40.70</td><td>76.74</td></tr></table>
109
+
110
+ For the base Llama-70B model in zero-shot settings, false positive rates show stark gender disparities: male patients consistently face higher misclassification rates than female patients (71.15% vs 53.66% for alcohol-positive cases, and 66.67% vs 50.88% for smoking-positive cases). This pattern persists with in-context learning, with the gender gap remaining substantial (alcohol-positive: 52.88% male vs 36.59% female). Fine-tuned models show similar disparities, with Llama-8B-SFT maintaining a performance gap of approximately 15 percentage points between genders for alcohol-positive cases.
111
+
112
+ Notably, these gender-based differences exhibit complex interactions with substance-related triggers. Cases involving positive substances mentions show the most pronounced disparities, with male patients seeing up to 20 percentage point higher false positive rates. This suggests that the model's shortcut learning compounds across different dimensions - gender biases amplify substance-related biases and vice versa. The persistence of these interacting biases across model architectures, sizes, and prompting strategies suggests they arise from deeply embedded patterns in both pre-training data and medical documentation practices.
113
+
114
+ # 5 Mitigation Strategies and Results
115
+
116
+ We explore several mitigation techniques to address the spurious correlations identified in our analysis:
117
+
118
+ Chain-of-Thought (CoT) As shown in Table 1, instructing the model to reason step-by-step before producing an answer leads to substantial reductions across all architectures. For Llama-70B, CoT reduces alcohol-positive mismatches from $66.21\%$ (zero-shot) to $33.79\%$, with smoking-positive cases decreasing from $61.11\%$ to $25.93\%$. Similar improvements are observed in other models (see Appendix F), with Qwen-72B showing a particularly strong response to CoT.
119
+
120
+ This suggests CoT helps models avoid superficial cues and focus on explicit information.
121
+
122
+ Warning-Based Instructions We prepend explicit instructions cautioning the model not to assume drug use without evidence and to treat each factor independently. With Llama-70B, these warnings lower alcohol-positive mismatches from $66.21\%$ to approximately $40.69\%$ , and also benefit smoking-positive scenarios. While not as strong as CoT, these warnings yield meaningful improvements across different architectures.
123
+
124
+ Increased Number of Examples Providing more than three examples—up to eight—further stabilizes predictions. For Llama-70B, increasing the number of examples reduces false positive rates considerably, with alcohol-positive mismatches falling to $45.52\%$ (compared to $66.21\%$ zero-shot). Similar trends are observed in other models, though the magnitude of improvement varies (see appendix F). While not as dramatic as CoT, additional examples help guide models away from faulty heuristics.
125
+
126
+ # 6 Discussion
127
+
128
+ Our findings highlight a key challenge in applying large language models to clinical information extraction: even when models achieve strong performance on average, they rely on superficial cues rather than genuine understanding of the underlying concepts. The presence of alcohol- or smoking-related mentions biases models to infer drug use incorrectly, and these shortcuts persist across Llama variants, Qwen, and Llama3-Med42-70B. The effectiveness of mitigation strategies like chain-of-thought reasoning, warning-based instructions, and additional examples underscores the importance of careful prompt design. While these interventions help guide models to focus on explicit evidence, their partial success suggests the need for more robust approaches, such as integrating domain-specific knowledge, implementing adversarial training, or curating more balanced datasets.
129
+
130
+ Table 2: Gender-Based Analysis of False Positive Rates (%) Across Models
131
+
132
+ <table><tr><td rowspan="2">Cases</td><td colspan="2">Llama-70B Zero-shot</td><td colspan="2">Llama-70B ICL</td><td colspan="2">Llama-8B SFT</td><td colspan="2">Qwen-72B</td></tr><tr><td>Female</td><td>Male</td><td>Female</td><td>Male</td><td>Female</td><td>Male</td><td>Female</td><td>Male</td></tr><tr><td>Alcohol-positive</td><td>53.66</td><td>71.15</td><td>36.59</td><td>52.88</td><td>21.95</td><td>36.54</td><td>68.29</td><td>60.58</td></tr><tr><td>Smoking-positive</td><td>50.88</td><td>66.67</td><td>28.07</td><td>40.95</td><td>24.56</td><td>42.86</td><td>49.12</td><td>55.24</td></tr><tr><td>Alcohol-negative</td><td>29.13</td><td>28.42</td><td>9.45</td><td>14.74</td><td>9.45</td><td>15.79</td><td>47.24</td><td>46.32</td></tr><tr><td>Smoking-negative</td><td>27.03</td><td>32.98</td><td>9.91</td><td>27.66</td><td>6.31</td><td>8.51</td><td>54.05</td><td>52.13</td></tr><tr><td>Smoking+Alcohol</td><td>81.82</td><td>84.62</td><td>54.55</td><td>58.97</td><td>27.27</td><td>53.85</td><td>27.27</td><td>30.77</td></tr></table>
133
+
134
+ Our demographic analysis reveals that these spurious correlations are not uniformly distributed across patient groups, raising fairness concerns for clinical deployment. Addressing such disparities requires both algorithmic improvements and careful consideration of deployment strategies. Clinicians and stakeholders must be aware of these limitations before deploying LLMs in clinical decision-support systems. Understanding these systematic biases in automated analysis can inform improvements not only in model development but also in clinical documentation practices and standards.
135
+
136
+ # 7 Implications Beyond NLP: Clinical Documentation and Practice
137
+
138
+ The implications of this study extend beyond NLP methodologies. Our analysis reveals that these models not only learn but potentially amplify existing biases in clinical practice. The identified error patterns—particularly the tendency to infer substance use from smoking/alcohol mentions and gender-based performance disparities—mirror documented provider biases in clinical settings (Saloner et al., 2023; Meyers et al., 2021). Notably, these biases appear to originate partly from medical documentation practices themselves (Ivy et al., 2024; Kim et al., 2021; Markowitz, 2022). Our finding that explicit evidence-based reasoning (through CoT) reduces these biases aligns with established strategies for mitigating provider bias (Mateo and Williams). This parallel between computational and human biases suggests that systematic analysis of LLM behavior could inform broader efforts to identify and address biases in medical documentation and practice, potentially contributing to improved provider education and documentation standards.
139
+
140
+ # 8 Conclusion
141
+
142
+ This work presents the first systematic exploration of spurious correlations in SDOH extraction, revealing how contextual cues can lead to incorrect and potentially harmful predictions in clinical settings. Beyond demonstrating the problem, we have evaluated several mitigation approaches that, while promising, indicate the need for more sophisticated solutions. Future work should focus on developing robust debiasing techniques, leveraging domain expertise, and establishing comprehensive evaluation frameworks to ensure reliable deployment across diverse populations.
143
+
144
+ # 9 Limitations
145
+
146
+ Dataset limitations Our analysis relied exclusively on the MIMIC portion of the SHAC dataset, which constrains the generalizability of our findings. While we observe consistent gender-based performance disparities, a more diverse dataset could help establish the breadth of these biases.
147
+
148
+ Model coverage We focused solely on open-source large language models (e.g., Llama, Qwen). Extending the evaluation to additional data sources, closed-source models, and other domain-specific architectures would help verify the robustness of our conclusions.
149
+
150
+ Causal understanding While we established the causality of triggers through removal experiments, understanding why specific triggers affect certain models or scenarios would require deeper analysis using model interpretability techniques.
151
+
152
+ Methodology scope Our study focused exclusively on generative methods; results may not generalize to traditional pipeline-based approaches that combine sequence labeling and relation classification.
153
+
154
+ Mitigation effectiveness While we identified various spurious correlations, our mitigation strategies could not completely address the problem.
155
+
156
+ This leaves room for future work on addressing these issues.
157
+
158
+ # 10 Ethics Statement
159
+
160
+ All experiments used de-identified social history data from the SHAC corpus, with LLMs deployed on a secure university server. We followed all data use agreements and institutional IRB protocols. Although the dataset is fully de-identified, biases within the models could raise ethical concerns in real-world applications. Further validation and safeguards are recommended before clinical deployment.
161
+
162
+ # 11 Acknowledgments
163
+
164
+ We thank our collaborators for their valuable feedback and support. Generative AI assistants were used for grammar checking and LaTeX formatting; the authors retain full responsibility for the final content and analysis.
165
+
166
+ # References
167
+
168
+ Meta AI. 2024. Llama 3.1 model card. https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md. Accessed: 2024-12-13.
169
+ BS Armour, T Woollery, A Malarcher, TF Pechacek, and C Husten. 2005. Annual smoking-attributable mortality, years of potential life lost, and productivity losses—United States, 1997-2001. JAMA: Journal of the American Medical Association, 294(7).
170
+ Alexander Brown, Nenad Tomasev, Jan Freyberg, Yuan Liu, Alan Karthikesalingam, and Jessica Schrouff. 2023. Detecting shortcut learning for fair medical ai using shortcut testing. Nature communications, 14(1):4314.
171
+ Clément Christophe, Praveen K Kanithi, Tathagata Raha, Shadab Khan, and Marco AF Pimentel. 2024. Med42-v2: A suite of clinical llms. Preprint, arXiv:2408.06142.
172
+ Rachel A Dahl, J Priyanka Vakkalanka, Karisa K Harland, and Joshua Radke. 2022. Investigating healthcare provider bias toward patients who use drugs using a survey-based implicit association test: Pilot study. Journal of addiction medicine, 16(5):557-562.
173
+ Hilary Daniel, Sue S Bornstein, Gregory C Kane, Health, and Public Policy Committee of the American College of Physicians*. 2018. Addressing social determinants to improve patient care and promote health equity: an american college of physicians position paper. Annals of internal medicine, 168(8):577-578.
174
+
175
+ Dina Demner-Fushman, Wendy W Chapman, and Clement J McDonald. 2009. What can natural language processing do for clinical decision support? Journal of biomedical informatics, 42(5):760-772.
176
+ Robert Geirhos, Jorn-Henrik Jacobsen, Claudio Michaelis, Richard Zemel, Wieland Brendel, Matthias Bethge, and Felix A Wichmann. 2020. Shortcut learning in deep neural networks. Nature Machine Intelligence, 2(11):665-673.
177
+ Sifei Han, Robert F Zhang, Lingyun Shi, Russell Richie, Haixia Liu, Andrew Tseng, Wei Quan, Neal Ryan, David Brent, and Fuchiang R Tsui. 2022. Classifying social determinants of health from unstructured electronic health records using deep learning-based natural language processing. Journal of biomedical informatics, 127:103984.
178
+ Elham Hatef, Masoud Rouhizadeh, Iddrisu Tia, Elyse Lasser, Felicia Hill-Briggs, Jill Marsteller, Hadi Kharrazi, et al. 2019. Assessing the availability of data on social and behavioral determinants in structured and unstructured electronic health records: a retrospective analysis of a multilevel health care system. *JMIR medical informatics*, 7(3):e13802.
179
+ David U Himmelstein and Steffie Woolhandler. 2018. Determined action needed on social determinants. Annals of internal medicine, 168(8):596-597.
180
+ Yan Hu, Qingyu Chen, Jingcheng Du, Xueqing Peng, Vipina Kuttichi Keloth, Xu Zuo, Yujia Zhou, Zehan Li, Xiaoqian Jiang, Zhiyong Lu, et al. 2024. Improving large language models for clinical named entity recognition via prompt engineering. Journal of the American Medical Informatics Association, page ocad259.
181
+ Zalaya K Ivy, Sharon Hwee, Brittany C Kimball, Michael D Evans, Nicholas Marka, Catherine Bendel, and Alexander A Boucher. 2024. Disparities in documentation: evidence of race-based biases in the electronic medical record. Journal of Racial and Ethnic Health Disparities, pages 1-7.
182
+ Sarah Jabbour, David Fouhey, Ella Kazerooni, Michael W Sjoding, and Jenna Wiens. 2020. Deep learning applied to chest x-rays: exploiting and preventing shortcuts. In Machine Learning for Healthcare Conference, pages 750-782. PMLR.
183
+ Peter B Jensen, Lars J Jensen, and Søren Brunak. 2012. Mining electronic health records: towards better research applications and clinical care. Nature Reviews Genetics, 13(6):395-405.
184
+ Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. arXiv preprint arXiv:1707.07328.
185
+ Alistair EW Johnson, Tom J Pollard, Lu Shen, Li-wei H Lehman, Mengling Feng, Mohammad Ghassemi, Benjamin Moody, Peter Szolovits, Leo Anthony Celi, and Roger G Mark. 2016. Mimic-iii, a freely accessible critical care database. Scientific data, 3(1):1-9.
186
+
187
+ Min Kyung Kim, Joy Noel Baumgartner, Jennifer Headley, Julius Kirya, James Kaggwa, and Joseph R Egger. 2021. Medical record bias in documentation of obstetric and neonatal clinical quality of care indicators in uganda. Journal of Clinical Epidemiology, 136:10-19.
188
+ Zhengliang Liu, Yue Huang, Xiaowei Yu, Lu Zhang, Zihao Wu, Chao Cao, Haixing Dai, Lin Zhao, Yiwei Li, Peng Shu, et al. 2023. Deid-gpt: Zero-shot medical text de-identification by gpt-4. arXiv preprint arXiv:2303.11032.
189
+ Kevin Lybarger, Nicholas J Dobbins, Ritche Long, Angad Singh, Patrick Wedgeworth, Ozlem Uzuner, and Meliha Yetisgen. 2023. Leveraging natural language processing to augment structured social determinants of health data in the electronic health record. Journal of the American Medical Informatics Association, 30(8):1389-1397.
190
+ Kevin Lybarger, Mari Ostendorf, and Meliha Yetisgen. 2021. Annotating social determinants of health using active learning, and characterizing determinants using neural event extraction. Journal of Biomedical Informatics, 113:103631.
191
+ Mingyu Derek Ma, Alexander K Taylor, Wei Wang, and Nanyun Peng. 2022. Dice: data-efficient clinical event extraction with generative models. arXiv preprint arXiv:2208.07989.
192
+ Yubo Ma, Yixin Cao, YongChing Hong, and Aixin Sun. 2023. Large language model is not a good few-shot information extractor, but a good reranker for hard samples! arXiv preprint arXiv:2303.08559.
193
+ David M Markowitz. 2022. Gender and ethnicity bias in medicine: A text analysis of 1.8 million critical care records. *PNAS nexus*, 1(4):pgac157.
194
+ CM Mateo and DR Williams. Addressing bias and reducing discrimination. The professional responsibility of health care providers, 2020:95.
195
+ R. Thomas McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. arXiv preprint arXiv:1902.01007.
196
+ SA Meyers, VA Earnshaw, Brittany D'Ambrosio, Natasia Courchesne, Dan Verb, and LR Smith. 2021. The intersection of gender and drug use-related stigma: A mixed methods systematic review and synthesis of the literature. *Drug and alcohol dependence*, 223:108706.
197
+ Braja G Patra, Mohit M Sharma, Veer Vekaria, Prakash Adekkanattu, Olga V Patterson, Benjamin Glicksberg, Lauren A Lepow, Euijung Ryu, Joanna M Biernacka, Al'ona Furmanchuk, et al. 2021. Extracting social determinants of health from electronic health records using natural language processing: a systematic review. Journal of the American Medical Informatics Association, 28(12):2716-2727.
198
+
199
+ Giridhar Kaushik Ramachandran, Yujuan Fu, Bin Han, Kevin Lybarger, Nicholas J Dobbins, Ozlem Uzuner, and Meliha Yetisgen. 2023. Prompt-based extraction of social determinants of health using few-shot learning. Preprint, arXiv:2306.07170.
200
+ Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, and Sameer Singh. 2020. Beyond accuracy: Behavioral testing of nlp models with checklist. arXiv preprint arXiv:2005.04118.
201
+ Brendan Saloner, Wenshu Li, Michael Flores, Ana M Progovac, and Benjamin Lé Cook. 2023. A widening divide: Cigarette smoking trends among people with substance use disorder and criminal legal involvement: Study examines cigarette smoking trends among people with substance use disorders and people with criminal legal involvement. *Health Affairs*, 42(2):187-196.
202
+ Karan Singhal, Shekoofeh Azizi, Tao Tu, S Sara Mahdavi, Jason Wei, Hyung Won Chung, Nathan Scales, Ajay Tanwani, Heather Cole-Lewis, Stephen Pfohl, et al. 2023. Large language models encode clinical knowledge. Nature, 620(7972):172-180.
203
+ Rachel Stemerman, Jaime Arguello, Jane Brice, Ashok Krishnamurthy, Mary Houston, and Rebecca Kitzmiller. 2021. Identification of social determinants of health using multi-label classification of electronic health record clinical notes. *JAMIA open*, 4(3):00aa069.
204
+ Pontus Stenetorp, Sampo Pyysalo, Goran Topić, Tomoko Ohta, Sophia Ananiadou, and Jun'ichi Tsujii. 2012. Brat: a web-based tool for nlp-assisted text annotation. In Proceedings of the Demonstrations at the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 102-107.
205
+ Ruixiang Tang, Dehan Kong, Longtao Huang, and Hui Xue. 2023. Large language models can be lazy learners: Analyze shortcuts in in-context learning. arXiv preprint arXiv:2305.17256.
206
+ Lifu Tu, Garima Lalwani, Spandana Gella, and He He. 2020. An empirical study on robustness to spurious correlations using pre-trained language models. Transactions of the Association for Computational Linguistics, 8:621-633.
207
+ Özlem Uzuner, Ira Goldstein, Yuan Luo, and Isaac Kohane. 2008. Identifying patient smoking status from medical discharge records. Journal of the American Medical Informatics Association, 15(1):14-24.
208
+ Leonieke C Van Boekel, Evelien PM Brouwers, Jaap Van Weeghel, and Henk FL Garretsen. 2013. Stigma among health professionals towards patients with substance use disorders and its consequences for healthcare delivery: systematic review. Drug and alcohol dependence, 131(1-2):23-35.
209
+ Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems, 35:24824-24837.
212
+
213
+ An Yang, Baosong Yang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Zhou, Chengpeng Li, Chengyuan Li, Dayiheng Liu, Fei Huang, Guanting Dong, Haoran Wei, Huan Lin, Jialong Tang, Jialin Wang, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Ma, Jianxin Yang, Jin Xu, Jingren Zhou, Jinze Bai, Jinzheng He, Junyang Lin, Kai Dang, Keming Lu, Keqin Chen, Kexin Yang, Mei Li, Mingfeng Xue, Na Ni, Pei Zhang, Peng Wang, Ru Peng, Rui Men, Ruize Gao, Runji Lin, Shijie Wang, Shuai Bai, Sinan Tan, Tianhang Zhu, Tianhao Li, Tianyu Liu, Wenbin Ge, Xiaodong Deng, Xiaohuan Zhou, Xingzhang Ren, Xinyu Zhang, Xipin Wei, Xuancheng Ren, Xuejing Liu, Yang Fan, Yang Yao, Yichang Zhang, Yu Wan, Yunfei Chu, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, Zhifang Guo, and Zhihao Fan. 2024. Qwen2 technical report. Preprint, arXiv:2407.10671.
214
+
215
+ Zehao Yu, Xi Yang, Chong Dang, Songzi Wu, Prakash Adekkanattu, Jyotishman Pathak, Thomas J George, William R Hogan, Yi Guo, Jiang Bian, et al. 2022. A study of social and behavioral determinants of health in lung cancer patients using transformers-based natural language processing models. In AMIA Annual Symposium Proceedings, volume 2021, page 1225.
216
+
217
+ Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2017. Men also like shopping: Reducing gender bias amplification using corpus-level constraints. arXiv preprint arXiv:1707.09457.
218
+
219
+ Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018. Gender bias in coreference resolution: Evaluation and debiasing methods. arXiv preprint arXiv:1804.06876.
220
+
221
+ # A Prompting Strategies
222
+
223
+ All prompting approaches share a base system message identifying the model's role as "an AI assistant specialized in extracting and analyzing social history information from medical notes." Each strategy then builds upon this foundation with specific modifications:
224
+
225
+ # Zero-Shot
226
+
227
+ The baseline approach uses a minimal prompt structure:
+
+ System: AI assistant specialized in social history extraction
+
+ User: For the following social history note: [Clinical note text] [Task instruction] [Options if applicable]
+
+ This setup evaluates the model's ability to perform extraction tasks using only its pre-trained knowledge, without additional guidance or examples.
228
+
229
+ # In-Context Learning (ICL)
230
+
231
+ This approach augments the base prompt with three carefully selected demonstration examples. Each example follows a structured JSON format: {"id": "example-id", "instruction": "Extract all Drug text spans...", "input": "Social History: Patient denies drug use...", "options": "[Multiple choice options if applicable]", "output": "Expected extraction or classification"}
232
+
233
+ # Chain-of-Thought (CoT)
234
+
235
+ Building upon ICL, this method explicitly guides the model through a structured reasoning process:
+
+ Please approach this task step-by-step:
+ 1. Carefully read the social history note
+ 2. Identify all relevant information related to the question
+ 3. Consider the examples provided
+ 4. Explain your reasoning process
+ 5. Provide your final answer
+
+ This approach aims to reduce spurious correlations and shortcut learning by encouraging explicit articulation of the reasoning process before arriving at the final extraction or classification.
236
+
237
+ # Warning-Based
238
+
239
+ This specialized approach incorporates explicit rules and warnings in the system message:
+
+ Important Guidelines:
+ 1. Evaluate each factor independently - never assume one behavior implies another
+ 2. Extract only explicitly stated information - don't make assumptions based on demographics or other factors
+ 3. If information isn't mentioned, use [none] or select the "not mentioned" option
+
+ These guidelines specifically address the challenge of false positives in substance use detection by discouraging inference-based conclusions without explicit textual evidence. The warnings are designed to counteract the model's tendency to make assumptions based on superficial cues or demographic factors.
240
+
241
+ # B Dataset Details
242
+
243
+ # B.1 Data Format and Annotation Process
244
+
245
+ The SHAC dataset originally consists of paired text files (.txt) containing social history notes and annotation files (.ann) capturing SDOH information. We convert these into a question-answering format to evaluate LLMs. Below we demonstrate this process with a synthetic example:
246
+
247
+ # Raw Note (.txt)
248
+
249
+ SOCIAL HISTORY:
250
+
251
+ Patient occasionally uses alcohol. Denies any illicit drug use.
252
+
253
+ # BRAT Annotations (.ann)
254
+
255
+ T1 Alcohol 24 31 alcohol
256
+ T2 Drug 47 50 drug
257
+ T3 StatusTime 8 19 occasionally
258
+ T4 StatusTime 32 37 denies
259
+
260
+ E1 Alcohol:T1 Status:T3
261
+ E2 Drug:T2 Status:T4
262
+
263
+ A1 StatusTimeVal T3 current
264
+ A2 StatusTimeVal T4 none
265
+
266
+ Here, T1 and T2 are triggers - spans of text that indicate the presence of SDOH events (e.g., "alcohol" for substance use). The annotations also capture arguments - additional information about these events, such as their temporal status represented by T3 and T4. For example, T3 ("occasionally") indicates a temporal status of current for alcohol use.
267
+
268
+ We transform these structured annotations into two types of questions:
269
+
270
+ Trigger Identification Questions about identifying relevant event spans:
271
+
272
+ ```jsonl
273
+ {"id": "0001-Alcohol",
274
+ "instruction": "Extract all Alcohol text spans as it is from the note. If multiple spans present, separate them by [SEP]. If none, output [none].",
275
+ "input": "SOCIAL HISTORY: Patient occasionally uses alcohol. Denies any illicit drug use.",
276
+ "output": "alcohol"}
277
+ ```
278
+
279
+ Argument-Resolution Questions about determining event properties:
280
+
281
+ ```jsonl
282
+ {"id": "0001-Alcohol_StatusTime", "instruction": "Choose the best StatusTime value for the <alcohol> (Alcohol) from the note:", "input": "SOCIAL HISTORY: Patient occasionally uses alcohol. Denies any illicit drug use.", "options": "Options: (a) none. (b) current. (c) past. (d) Not Applicable.", "output": "(b) current."}
283
+ ```
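+
+ A hedged sketch of the conversion that produces records like the two above is given below. It only handles the trigger (T) lines needed for the first question type; the exact field layout of the real .ann files (tab-separated, with event and attribute lines) and the official conversion script may differ.
+
+ ```python
+ # Hedged sketch of turning a note and its BRAT annotations into a
+ # trigger-identification question, following the synthetic example above.
+ def brat_to_trigger_question(note_id, note, ann_text, event_type):
+     spans = []
+     for line in ann_text.splitlines():
+         if not line.startswith("T"):
+             continue  # skip event (E) and attribute (A) lines
+         _, label, _start, _end, text = line.split(maxsplit=4)
+         if label == event_type:
+             spans.append(text)
+     return {
+         "id": f"{note_id}-{event_type}",
+         "instruction": (f"Extract all {event_type} text spans as it is from the note. "
+                         "If multiple spans present, separate them by [SEP]. "
+                         "If none, output [none]."),
+         "input": note,
+         "output": " [SEP] ".join(spans) if spans else "[none]",
+     }
+     # json.dumps() of the returned dict yields a jsonl line like the one above.
+ ```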
284
+
285
+ # C Metric Selection and Justification
286
+
287
+ Our focus on False Positive Rate (FPR) is motivated by the unique risks associated with incorrect substance use predictions in clinical settings (Van Boekel et al., 2013; Dahl et al., 2022). While traditional metrics like accuracy or F1-score treat all errors equally, FPR specifically captures the rate of unwarranted "positive" classifications—a critical concern when dealing with sensitive patient information. High FPR values indicate that models frequently make unjustified drug use predictions, which could lead to:
288
+
289
+ - Patient stigmatization and potential discrimination
290
+ - Reduced quality of care due to biased provider perceptions
291
+ - Diminished trust in automated clinical decision support systems
292
+
293
+ Conversely, lower FPR values suggest better model reliability in avoiding these harmful misclassifications. While comprehensive evaluation would benefit from additional metrics, FPR serves as a particularly relevant indicator for assessing model safety and reliability in clinical applications.
294
+
295
+ # D Model Fine-tuning and Computational Resources
296
+
297
+ We fine-tuned Llama-8B using LoRA with rank 64 and dropout 0.1. Key training parameters include a learning rate of 2e-4, batch size of 4, and 5 training epochs. Training was conducted on 2 NVIDIA A100 GPUs for approximately 3 hours using mixed precision (FP16). For our main experiments, we used several large language models: Llama-70B (70B parameters), Qwen-72B (72B parameters), Llama3-Med42-70B (70B parameters), and our fine-tuned Llama-8B (8B parameters). The inference experiments across all models required approximately 100 GPU hours on 2 NVIDIA A100 GPUs. This computational budget covered all experimental settings including zero-shot, in-context learning, and the evaluation of various mitigation strategies.
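+
+ For reference, a sketch of this configuration using the Hugging Face peft/transformers APIs is shown below; the base checkpoint name and the choice of target modules are assumptions, since only the hyperparameters above are specified.
+
+ ```python
+ from peft import LoraConfig, get_peft_model
+ from transformers import AutoModelForCausalLM, TrainingArguments
+
+ # Sketch of the fine-tuning setup described above (LoRA rank 64, dropout 0.1,
+ # lr 2e-4, batch size 4, 5 epochs, FP16). Checkpoint name and target modules
+ # are assumptions, not taken from the paper.
+ model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B")
+ model = get_peft_model(model, LoraConfig(r=64, lora_dropout=0.1, task_type="CAUSAL_LM"))
+
+ args = TrainingArguments(
+     output_dir="llama31-8b-shac-sft",
+     learning_rate=2e-4,
+     per_device_train_batch_size=4,
+     num_train_epochs=5,
+     fp16=True,  # mixed precision, as noted above
+ )
+ # A Trainer over the SHAC question-answering pairs would then use `model` and `args`.
+ ```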
298
+
299
+ # E Trigger Removal Experiments
300
+
301
+ Table 3: Impact of Trigger Removal on Llama 3.1 Models False Positive Rates (%)
302
+
303
+ <table><tr><td rowspan="2">Cases</td><td colspan="3">Llama 3.1 70b Zero-shot</td><td colspan="3">Llama 3.1 8b SFT</td></tr><tr><td>Full</td><td>Without Alcohol</td><td>Without Smoking</td><td>Full</td><td>Without Alcohol</td><td>Without Smoking</td></tr><tr><td>Alcohol-positive</td><td>66.21</td><td>55.17</td><td>64.14</td><td>32.41</td><td>26.90</td><td>33.10</td></tr><tr><td>Smoking-positive</td><td>61.11</td><td>54.94</td><td>56.79</td><td>36.42</td><td>32.10</td><td>31.48</td></tr><tr><td>Alcohol-negative</td><td>28.83</td><td>25.23</td><td>23.87</td><td>12.16</td><td>12.16</td><td>8.11</td></tr><tr><td>Smoking-negative</td><td>29.76</td><td>22.93</td><td>26.34</td><td>7.32</td><td>6.83</td><td>7.32</td></tr><tr><td>Smoking+Alcohol</td><td>73.26</td><td>65.12</td><td>72.09</td><td>40.70</td><td>32.56</td><td>41.86</td></tr></table>
304
+
305
+ Table 4: Impact of Trigger Removal on Additional Models' False Positive Rates (%)
306
+
307
+ <table><tr><td rowspan="2">Cases</td><td colspan="3">Llama 3.1 70B ICL</td><td colspan="3">Llama3-Med42-70B</td><td colspan="3">Qwen-72B</td></tr><tr><td>Full</td><td>Without Alcohol</td><td>Without Smoking</td><td>Full</td><td>Without Alcohol</td><td>Without Smoking</td><td>Full</td><td>Without Alcohol</td><td>Without Smoking</td></tr><tr><td>Alcohol-positive</td><td>48.28</td><td>38.62</td><td>47.59</td><td>66.90</td><td>53.10</td><td>64.83</td><td>62.76</td><td>51.72</td><td>54.48</td></tr><tr><td>Smoking-positive</td><td>36.42</td><td>32.72</td><td>32.09</td><td>57.41</td><td>51.85</td><td>52.47</td><td>53.09</td><td>45.68</td><td>51.23</td></tr><tr><td>Alcohol-negative</td><td>11.71</td><td>16.22</td><td>10.81</td><td>16.22</td><td>16.22</td><td>13.96</td><td>46.85</td><td>45.05</td><td>47.75</td></tr><tr><td>Smoking-negative</td><td>18.05</td><td>14.15</td><td>15.12</td><td>19.51</td><td>14.15</td><td>19.51</td><td>53.17</td><td>49.27</td><td>49.76</td></tr><tr><td>Smoking+Alcohol</td><td>51.16</td><td>44.19</td><td>46.51</td><td>76.74</td><td>66.28</td><td>73.26</td><td>56.98</td><td>43.02</td><td>50.00</td></tr></table>
308
+
309
+ # F Mitigation Experiments
310
+
311
+ Table 5: Impact of Mitigation Strategies on Additional Models' False Positive Rates (%)
312
+
313
+ <table><tr><td rowspan="2">Cases</td><td colspan="4">Llama3-Med42-70B</td><td colspan="4">Qwen-72B</td></tr><tr><td>ICL</td><td>CoT</td><td>Warning</td><td>Increased Examples</td><td>ICL</td><td>CoT</td><td>Warning</td><td>Increased Examples</td></tr><tr><td>Alcohol-positive</td><td>66.90</td><td>48.28</td><td>62.76</td><td>63.45</td><td>62.76</td><td>28.97</td><td>34.38</td><td>36.55</td></tr><tr><td>Smoking-positive</td><td>57.41</td><td>35.19</td><td>53.09</td><td>50.62</td><td>53.09</td><td>23.46</td><td>32.09</td><td>33.33</td></tr><tr><td>Alcohol-negative</td><td>16.22</td><td>6.76</td><td>16.67</td><td>15.76</td><td>46.85</td><td>19.82</td><td>22.07</td><td>26.12</td></tr><tr><td>Smoking-negative</td><td>19.51</td><td>13.66</td><td>18.54</td><td>18.05</td><td>53.17</td><td>17.07</td><td>25.85</td><td>29.27</td></tr><tr><td>Smoking+Alcohol</td><td>76.74</td><td>53.49</td><td>72.09</td><td>68.60</td><td>56.98</td><td>32.56</td><td>37.21</td><td>41.86</td></tr></table>
ACL/2025/Spurious Correlations and Beyond_ Understanding and Mitigating Shortcut Learning in SDOH Extraction with Large Language Models/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e209184402658712bacafc4c3b4a89f493b7bb0614338913dc261ac0a5007df3
3
+ size 250694
ACL/2025/Spurious Correlations and Beyond_ Understanding and Mitigating Shortcut Learning in SDOH Extraction with Large Language Models/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:34e29bcb6dc7cb665f13994600097cb2fa110b6b04b23c06ce51663f6ac103e9
3
+ size 310610
ACL/2025/State-offset Tuning_ State-based Parameter-Efficient Fine-Tuning for State Space Models/39bab619-549d-4d70-8be3-54342ed7ca89_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8d429a8b5d4c6cf88c4bac3c8c71fc5c839928f2d005a6da4fed1ebc7d88df4b
3
+ size 102683
ACL/2025/State-offset Tuning_ State-based Parameter-Efficient Fine-Tuning for State Space Models/39bab619-549d-4d70-8be3-54342ed7ca89_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e70e06b9dcfe02d71ce336a14fc019b6970cf6a803681db4634ffaee8ede2064
3
+ size 123664
ACL/2025/State-offset Tuning_ State-based Parameter-Efficient Fine-Tuning for State Space Models/39bab619-549d-4d70-8be3-54342ed7ca89_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1502884d6013c6c03e6410ff0f7eb3921aca86669bcdd057c114db75418bcc3a
3
+ size 777727
ACL/2025/State-offset Tuning_ State-based Parameter-Efficient Fine-Tuning for State Space Models/full.md ADDED
@@ -0,0 +1,377 @@
 
 
 
 
1
+ # State-offset Tuning: State-based Parameter-Efficient Fine-Tuning for State Space Models
2
+
3
+ Wonjun Kang $^{1,2*}$
4
+
5
+ Kevin Galim $^{2*}$ Hyung Il Koo $^{2,4†}$
6
+
7
+ Yuchen Zeng $^{3*}$
8
+ Nam Ik Cho $^{1}$
9
+
10
+ Minjae Lee
11
+
12
+ $^{1}$ Seoul National University $^{2}$ FuriosaAI
13
+
14
+ $^{3}$ UW-Madison $^{4}$ Ajou University
15
+
16
+ {kangwj1995, kevin.galim, minjae.lee, hikoo}@furiosa.ai, yzeng58@wisc.edu, nicho@snu.ac.kr
17
+
18
+ # Abstract
19
+
20
+ State Space Models (SSMs) have emerged as efficient alternatives to Transformers, mitigating their quadratic computational cost. However, the application of Parameter-Efficient Fine-Tuning (PEFT) methods to SSMs remains largely unexplored. In particular, prompt-based methods like Prompt Tuning and Prefix-Tuning, which are widely used in Transformers, do not perform well on SSMs. To address this, we propose state-based methods as a superior alternative to prompt-based methods. This new family of methods naturally stems from the architectural characteristics of SSMs. State-based methods adjust state-related features directly instead of depending on external prompts. Furthermore, we introduce a novel state-based PEFT method: State-offset Tuning. At every timestep, our method directly affects the state at the current step, leading to more effective adaptation. Through extensive experiments across diverse datasets, we demonstrate the effectiveness of our method. Code is available at https://github.com/furiosa-ai/ssm-state-tuning.
21
+
22
+ # 1 Introduction
23
+
24
+ Large Language Models (LLMs) have gained significant attention for their strong performance in NLP tasks (Achiam et al., 2023; Brown et al., 2020), but suffer from the quadratic complexity of Transformer architectures (Vaswani et al., 2017). To mitigate this, subquadratic alternatives have gained interest (Katharopoulos et al., 2020; Peng et al., 2023; Sun et al., 2023), with State Space Models (SSMs) emerging as a promising solution (Gu and Dao, 2024; Dao and Gu, 2024).
25
+
26
+ Meanwhile, as LLMs scale up, full fine-tuning for downstream tasks becomes prohibitively expensive.
27
+
28
+ ![](images/b12da167d19c4f3b0b6ac17146c9ed377f4a5d5b7d0a30f7b1f7f6a9156fb3cf.jpg)
29
+ Figure 1: Illustration of our proposed State-offset Tuning on a Mamba block (Gu and Dao, 2024). State-offset Tuning injects a trainable state-offset $h'$ at each timestep in the SSM module while keeping other parameters frozen, enabling parameter-efficient fine-tuning and improved downstream performance.
30
+
31
+ Consequently, Parameter-Efficient Fine-Tuning (PEFT) (Houlsby et al., 2019; Hu et al., 2021; He et al., 2021; Zaken et al., 2022; Liu et al., 2021, 2022; Zeng and Lee, 2024) has emerged, which aims to reduce the number of trainable parameters while achieving adaptation performance comparable to full fine-tuning.
32
+
33
+ However, research on PEFT methods for SSMs remains limited despite their growing popularity. For instance, prompt-based PEFT methods, such as Prompt Tuning (Lester et al., 2021) and Prefix-Tuning (Li and Liang, 2021), have been widely applied to Transformers but fail to adapt effectively to SSMs (Galim et al., 2024). Therefore, new PEFT strategies tailored to SSMs are needed to fully leverage their architectural properties.
34
+
35
+ To bridge this gap, we introduce state-based PEFT methods that leverage the intrinsic properties of SSMs, offering a superior alternative to prompt-based methods. Building on this concept, we propose State-offset Tuning.
36
+
37
+ This method directly adjusts state-related features rather than relying on external prompts, enabling more effective adaptation.
38
+
39
+ In summary, our main contributions are:
40
+
41
+ - We introduce state-based methods, a new family of PEFT techniques for SSMs, offering a superior alternative to prompt-based approaches.
42
+ - We propose State-offset Tuning as a new state-based PEFT method.
43
+ - We demonstrate the effectiveness of our method through experiments on a variety of datasets, consistently outperforming existing fine-tuning techniques.
44
+
45
+ # 2 Related Works
46
+
47
+ # 2.1 State Space Models
48
+
49
+ Linear State-Space Layers (LSSL) are one of the earliest applications of SSMs in sequence modeling (Gu et al., 2021), leveraging HiPPO (Gu et al., 2020) to initialize the state matrix. However, its high computational overhead limits practicality. Gu et al. (2022) introduced Structured State Space Models (S4), which mitigate this by structuring the state matrix. Recently, Mamba (Gu and Dao, 2024; Dao and Gu, 2024) enhanced modeling capabilities by introducing an input-dependent S6 block.
50
+
51
+ # 2.2 Parameter-Efficient Fine-Tuning
52
+
53
+ In this section, we review existing PEFT methods. For more details, see Sec. D.
54
+
55
+ Parameter-based Methods One approach to parameter-based PEFT methods is to selectively fine-tune specific layers within the model while keeping the remaining layers frozen. BitFit (Zaken et al., 2022) is a lightweight and effective strategy that focuses solely on fine-tuning a model's bias terms. Furthermore, LoRA (Hu et al., 2021) represents a notable parameter-based PEFT method by introducing low-rank matrices for weight updates, facilitating efficient adaptation.
56
+
57
+ Prompt-based Methods Instead of fine-tuning model parameters, Prompt Tuning (Lester et al., 2021) enhances models by prepending trainable soft embeddings to the prompt. Prefix-Tuning (Li and Liang, 2021) builds on this approach by injecting trainable embeddings into each Transformer layer, achieving strong adaptation results for Transformer-based LLMs.
58
+
59
+ PEFT for SSMs Concurrently, Galim et al. (2024) showed that LoRA outperforms prompt-based methods on SSMs. Furthermore, they proposed Selective Dimension Tuning (SDT) for fine-tuning the SSM module while applying LoRA on the linear projection matrices when fine-tuning Mamba models. Yoshimura et al. (2025) suggested a new PEFT method called Additional-scan, which increases the hidden state dimension of SSMs, fine-tuning only its additional parameters.
60
+
61
+ # 3 PEFT Methods on SSMs
62
+
63
+ SSM Preliminaries Assuming a single channel dimension, SSMs such as S4 (Gu et al., 2022) transform a signal $x_{t} \in \mathbb{R}$ into $y_{t} \in \mathbb{R}$ through an $H$ -dimensional latent state $\pmb{h}_{t} \in \mathbb{R}^{H}$ as below:
64
+
65
+ $$
66
+ \boldsymbol{h}_{t} = \overline{\boldsymbol{A}} \boldsymbol{h}_{t-1} + \overline{\boldsymbol{B}} x_{t}, \quad y_{t} = \boldsymbol{C} \boldsymbol{h}_{t},
67
+ $$
68
+
69
+ where $\overline{B} \in \mathbb{R}^{H \times 1}$ controls input influence, $\overline{A} \in \mathbb{R}^{H \times H}$ governs state dynamics, and $C \in \mathbb{R}^{1 \times H}$ maps the state to the output. $\overline{A}$ and $\overline{B}$ represent discretized versions of $A$ and $B$ , parameterized by a learnable step size $\Delta \in \mathbb{R}$ .
70
+
71
+ In S6 (the SSM module of Mamba), input dependency is integrated by using input-dependent $\overline{A}_t$ , $\overline{B}_t$ , and $C_t$ at every timestep. Specifically, given $D$ channels with $\boldsymbol{x}_t \in \mathbb{R}^D$ , learnable parameters $\boldsymbol{W}_B$ , $\boldsymbol{W}_C \in \mathbb{R}^{H \times D}$ , and $\boldsymbol{W}_{\Delta} \in \mathbb{R}^{D \times D}$ compute $\boldsymbol{B}_t = \boldsymbol{W}_B \boldsymbol{x}_t$ , $C_t = W_C \boldsymbol{x}_t$ , and $\Delta = W_{\Delta} \boldsymbol{x}_t$ . In this section, we consider S4 for simplicity.
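+
+ For intuition, a minimal sketch of this recurrence for a single channel is shown below; a diagonal $\overline{A}$ (stored as a vector) is assumed purely for simplicity, and the discretization and input-dependent parameters of S6 are omitted.
+
+ ```python
+ import numpy as np
+
+ # Minimal sketch of the recurrence h_t = A_bar h_{t-1} + B_bar x_t, y_t = C h_t
+ # above, assuming a diagonal A_bar stored as a length-H vector.
+ def ssm_scan(x, A_bar, B_bar, C):
+     h = np.zeros_like(A_bar)      # hidden state, shape (H,)
+     ys = []
+     for x_t in x:                 # x: length-T input signal
+         h = A_bar * h + B_bar * x_t
+         ys.append(float(C @ h))
+     return np.array(ys)
+ ```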
72
+
73
+ # 3.1 Prompt-based PEFT Methods on SSMs
74
+
75
+ Prefix-Tuning Can Update Only the Initial State of an SSM Generally, SSMs assume that the initial hidden state is $h_0 = 0$ . We can express $h_t$ with $h_0$ as $h_t = \sum_{i=1}^{t} \overline{A}^{t-i} \overline{B}_i x_i + \overline{A}^t h_0$ .
76
+
77
+ Assume we have virtual tokens $x_{(-V + 1)},\ldots ,x_0$ If we prepend virtual tokens as prefix to the input sequence, we can write the updated $\widehat{\pmb{h}}_t$ as below:
78
+
79
+ $$
80
+ \widehat {\boldsymbol {h}} _ {t} = \boldsymbol {h} _ {t} + \overline {{\boldsymbol {A}}} ^ {t} \sum_ {i = 0} ^ {V - 1} \overline {{\boldsymbol {A}}} ^ {i} \overline {{\boldsymbol {B}}} x _ {- i} = \boldsymbol {h} _ {t} + \overline {{\boldsymbol {A}}} ^ {t} \boldsymbol {h} _ {\text {p r e f i x}}.
81
+ $$
82
+
83
+ By introducing a non-zero $\widehat{h}_0$, we can substitute $\widehat{h}_0$ for $h_{\mathrm{prefix}}$, making Prefix-Tuning, or optimizing virtual tokens, equivalent to updating the initial state. As optimized virtual tokens only affect the initial state $\widehat{h}_0$, Prefix-Tuning's expressivity is upper-bounded by updating the initial state directly (Galim et al., 2024). Since Prefix-Tuning is an extended version of Prompt Tuning, this upper bound is applicable to Prompt Tuning as well.
84
+
85
+ Galim et al. (2024) showed Initial State Tuning, an advanced version of Prefix-Tuning, which directly optimizes the channel-specific initial state $\pmb{h}^{\prime}\in \mathbb{R}^{H}$ , resulting in $DH$ trainable parameters in total across all $D$ channels. The updated output $\widehat{y}_t$ for Initial State Tuning can be written as in Table 1.
86
+
87
+ # 3.2 State-based Methods: A New Family of PEFT Methods for SSMs
88
+
89
+ We define state-based methods as a new family of PEFT methods specifically designed for SSMs. These methods directly modify the intrinsic state-related features within the SSM module.
90
+
91
+ In contrast, prompt-based methods, such as Prefix-Tuning, influence the hidden state of the SSM module indirectly by introducing external virtual tokens. While both approaches adjust the hidden state of the SSM module, state-based methods operate within the SSM module itself, offering a more direct and expressive adaptation strategy.
92
+
93
+ Based on our definition, we classify Initial State Tuning as a state-based method. While Initial State Tuning surpasses Prefix-Tuning (Galim et al., 2024), it still falls short compared to other finetuning methods on SSMs. To bridge this gap, we propose a novel state-based method for enhanced performance.
94
+
95
+ <table><tr><td>Initial State Tuning</td><td>$\hat{y}_t = y_t + C_t \left( \prod_{i=1}^{t} \overline{A}_i \right) h'$</td></tr><tr><td>State-offset Tuning (h)</td><td>$\hat{y}_t = y_t + C_t h'$</td></tr><tr><td>State-offset Tuning (y)</td><td>$\hat{y}_t = y_t + y'$</td></tr></table>
96
+
97
+ Table 1: State-based methods for S6. Our methods eliminate the time-dependent coefficient $\prod_{i=1}^{t} \overline{A}_i$ , ensuring a uniform effect across timesteps.
98
+
99
+ <table><tr><td>Prompt-based</td><td>Timestep T</td><td>Timestep T + 1</td></tr><tr><td>Prefix</td><td colspan="2">[prefix, x1, ..., xT] → [prefix, x1, ..., xT, xT+1]</td></tr><tr><td>Suffix</td><td colspan="2">[x1, ..., xT, suffix] → [x1, ..., xT, suffix, xT+1]</td></tr><tr><td>Iterative Suffix</td><td colspan="2">[x1, ..., xT, suffix] → [x1, ..., xT, xT+1, suffix]</td></tr></table>
100
+
101
+ Table 2: Comparison of Prefix-Tuning, Suffix-Tuning, and Iterative Suffix-Tuning.
102
+
103
+ # 4 Proposed State-based PEFT Method
104
+
105
+ In this section, we propose State-offset Tuning as a new state-based PEFT method. A visual comparison with Initial State Tuning and Prefix-Tuning is provided in Sec. A.
106
+
107
+ # 4.1 State-offset Tuning
108
+
109
+ Initial State Tuning introduces an additional term $h'$ with a coefficient $\overline{A}^t$ for S4 and $\prod_{i=1}^{t} \overline{A}_i$ for S6. However, this coefficient, which varies for each timestep, tends to decrease over time, leading to inconsistent effects. This is related to the issue that SSMs struggle to recall early tokens (Fu et al., 2022). To address this and ensure a consistent effect for each timestep, we introduce State-offset Tuning, which eliminates this coefficient.
110
+
111
+ State-offset Tuning adds a constant, learnable state-offset $h'$ to the hidden state $h$ before obtaining the updated output $\widehat{y}_t$ (Fig. 1). Therefore, unlike Initial State Tuning, State-offset Tuning does not alter the hidden state dynamics directly. Instead, State-offset Tuning adds a constant $h'$ repetitively for each timestep, ensuring a uniform impact.
112
+
113
+ We formulate State-offset Tuning $(h)$ for S6 in Table 1, where we optimize $h' \in \mathbb{R}^H$ . In S4, $C_t$ does not depend on the input, simplifying to a constant $C$ . This allows us to optimize a bias $y'$ instead of $h'$ (with $y' := C h'$ for each dimension). We name this method State-offset Tuning $(y)$ . For S4, State-offset Tuning $(y)$ and State-offset Tuning $(h)$ are equivalent. In S6, opting for the simpler State-offset Tuning $(y)$ enhances parameter efficiency by decreasing the tunable parameters from $D H$ to $D$ .
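+
+ The sketch below illustrates the Table 1 formulas on top of a frozen scan; it mirrors the equations rather than the released Mamba kernels, and uses a constant $C$ (the S4 case) for brevity.
+
+ ```python
+ import numpy as np
+
+ # Illustrative sketch of State-offset Tuning: the recurrence stays frozen and
+ # only the constant offset h_offset is trainable.
+ def ssm_scan_with_state_offset(x, A_bar, B_bar, C, h_offset):
+     h = np.zeros_like(A_bar)
+     ys = []
+     for x_t in x:
+         h = A_bar * h + B_bar * x_t               # frozen state dynamics
+         ys.append(float(C @ (h + h_offset)))      # State-offset Tuning (h): y_t + C h'
+     return np.array(ys)
+ # State-offset Tuning (y) reduces this to adding a single trainable bias y' to each output.
+ ```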
114
+
115
+ # 4.2 Connection to Prompt-based Methods
116
+
117
+ To further validate the methodology of State-offset Tuning, we examine its connection to prompt-based methods and demonstrate its correspondence to Iterative Suffix-Tuning.
118
+
119
+ Iterative Suffix-Tuning Li and Liang (2021) showed that in Transformers, inserting virtual tokens at the beginning (Prefix-Tuning) or the end (Suffix-Tuning, referred to as Infix-Tuning in their work) yields similar performance.
120
+
121
+ However, for SSMs, the position of the inserted virtual tokens is crucial, as these models tend to forget early tokens. The effect of Prefix-Tuning and Suffix-Tuning diminishes as the model processes subsequent timesteps. This leads to the question: how can we maintain consistent influence of virtual tokens across all timesteps in SSMs?
122
+
123
+ To achieve this, we propose Iterative Suffix-Tuning. As shown in Table 2, both Prefix-Tuning and Suffix-Tuning hold virtual tokens in fixed positions throughout all timesteps. Conversely, Iterative Suffix-Tuning shifts virtual tokens to the sequence's last position at each timestep, ensuring uniform influence in SSMs.
124
+
125
+ <table><tr><td colspan="2">Model Size</td><td colspan="9">Mamba 1.4B</td><td colspan="4">Mamba 130M</td></tr><tr><td colspan="2">Dataset</td><td rowspan="2">Params(%)</td><td colspan="5">Spider</td><td colspan="3">SAMSum</td><td rowspan="2">Params(%)</td><td colspan="2">DART</td><td>GLUE</td></tr><tr><td>Type</td><td>Method</td><td>All</td><td>Easy</td><td>Medium</td><td>Hard</td><td>Extra</td><td>R1</td><td>R2</td><td>RL</td><td>MET.</td><td>BLEU</td><td>Avg.</td></tr><tr><td rowspan="3">-</td><td>Pretrained</td><td>0.00</td><td>0.0</td><td>0.0</td><td>0.0</td><td>0.0</td><td>0.0</td><td>10.9</td><td>1.5</td><td>10.2</td><td>0.00</td><td>18.1</td><td>1.2</td><td>41.0</td></tr><tr><td>Full Fine-tuning (All)</td><td>100.00</td><td>66.2</td><td>84.3</td><td>69.5</td><td>53.4</td><td>43.4</td><td>51.2</td><td>27.3</td><td>42.9</td><td>100.00</td><td>71.0</td><td>51.8</td><td>80.5</td></tr><tr><td>Full Fine-tuning (S6)</td><td>4.46</td><td>56.7</td><td>76.6</td><td>57.8</td><td>46.0</td><td>34.9</td><td>51.1</td><td>26.9</td><td>42.2</td><td>4.31</td><td>70.3</td><td>48.7</td><td>79.3</td></tr><tr><td rowspan="3">Parameter based</td><td>LoRA</td><td>0.46</td><td>56.3</td><td>75.0</td><td>56.5</td><td>50.6</td><td>33.7</td><td>50.5</td><td>26.4</td><td>42.2</td><td>0.92</td><td>69.9</td><td>50.8</td><td>78.3</td></tr><tr><td>BitFit</td><td>0.03</td><td>51.3</td><td>74.2</td><td>50.9</td><td>43.1</td><td>26.5</td><td>50.3</td><td>25.7</td><td>41.9</td><td>0.06</td><td>67.0</td><td>43.7</td><td>77.9</td></tr><tr><td>Additional-scan</td><td>0.34</td><td>26.9</td><td>44.4</td><td>25.6</td><td>21.3</td><td>10.2</td><td>37.6</td><td>17.5</td><td>30.9</td><td>0.68</td><td>60.6</td><td>15.8</td><td>62.4</td></tr><tr><td rowspan="2">Prompt based</td><td>Prompt Tuning</td><td>0.01</td><td>43.6</td><td>65.3</td><td>42.4</td><td>33.3</td><td>25.3</td><td>50.1</td><td>25.6</td><td>41.6</td><td>0.04</td><td>66.2</td><td>39.8</td><td>63.8</td></tr><tr><td>Prefix-Tuning</td><td>12.81</td><td>39.7</td><td>65.7</td><td>38.6</td><td>31.0</td><td>15.1</td><td>50.6</td><td>26.5</td><td>42.1</td><td>22.69</td><td>66.6</td><td>42.5</td><td>68.6</td></tr><tr><td rowspan="3">State based</td><td>Initial State Tuning</td><td>0.23</td><td>51.8</td><td>77.8</td><td>51.1</td><td>35.1</td><td>32.5</td><td>50.0</td><td>26.0</td><td>41.3</td><td>0.45</td><td>69.1</td><td>46.2</td><td>77.4</td></tr><tr><td>State-offset Tuning (h)</td><td>0.23</td><td>57.4</td><td>77.4</td><td>59.9</td><td>44.8</td><td>33.7</td><td>50.9</td><td>26.5</td><td>42.4</td><td>0.45</td><td>70.0</td><td>47.0</td><td>78.5</td></tr><tr><td>State-offset Tuning (y)</td><td>0.01</td><td>53.0</td><td>77.4</td><td>55.4</td><td>40.8</td><td>22.9</td><td>50.6</td><td>26.1</td><td>42.0</td><td>0.03</td><td>66.8</td><td>45.2</td><td>77.7</td></tr></table>
126
+
127
+ Table 3: Experimental results for fine-tuning the SSM module (S6) of Mamba (Gu and Dao, 2024) models. We assess Spider and its subsets using execution accuracy, SAMSum with ROUGE-1/2/L scores, DART using METEOR and BLEU scores, and GLUE by calculating the average score. To demonstrate the effectiveness of our methods, we configure the hyperparameters of each method to ensure their parameter budget is comparable to or exceeds that of our methods. Bold and underline indicate the best and the second-best results, respectively, among all methods (excluding full fine-tuning). Our State-offset Tuning $(h)$ outperforms all other methods on most datasets, and our State-offset Tuning $(y)$ shows comparable or better performance than other methods despite its significantly fewer trainable parameters.
128
+
129
+ This method is akin to how State-offset Tuning eliminates the time-varying coefficient in Initial State Tuning, enforcing a consistent effect at every timestep. We show that Iterative Suffix-Tuning in SSMs is equivalent to State-offset Tuning (as detailed in Sec. B).
130
+
131
+ # 5 Experiments
132
+
133
+ # 5.1 Experiment Setup
134
+
135
+ We conduct experiments for fine-tuning the SSM module (S6) using pretrained Mamba (Gu and Dao, 2024) and Mamba-2 (Dao and Gu, 2024) models on four datasets: Spider (Yu et al., 2018), SAM-Sum (Gliwa et al., 2019), DART (Nan et al., 2021), and GLUE (Wang et al., 2019). For further information on datasets, evaluation metrics, and experimental details, refer to Secs. E and F. We use LoRA (Hu et al., 2021), BitFit (Zaken et al., 2022), and Additional-scan (Yoshimura et al., 2025) as parameter-based methods. For prompt-based methods, we employ Prompt Tuning (Lester et al., 2021) and Prefix-Tuning<sup>1</sup> (Li and Liang, 2021). For state-based methods, we utilize Initial State Tuning (Galim et al., 2024), along with our proposed methods, State-offset Tuning $(h)$ and State-offset Tuning $(y)$ .
136
+
137
+ # 5.2 Experimental Results
138
+
139
+ Table 3 shows the results on Mamba models. Additional results, including Mamba-2 results, are provided in Sec. G. In the appendix, we further compare the training speed, training memory usage, and computational overhead during inference between LoRA and State-offset Tuning $(h)$. Our findings show that State-offset Tuning $(h)$ is faster and more memory-efficient than LoRA, and introduces lower FLOP overhead during inference. Additionally, we evaluate the performance of State-offset Tuning $(h)$ within SSMs against Prefix-Tuning in Transformers, further highlighting the effectiveness of our approach.
140
+
141
+ State-based Methods Outperform Prompt-based Methods Table 3 shows that all state-based methods outperform prompt-based methods, supporting the claim that state-based methods are superior to prompt-based methods on SSMs.
142
+
143
+ In particular, our State-offset Tuning $(h)$ achieves the best results among all tested PEFT methods on most datasets. Our State-offset Tuning $(y)$ outperforms Initial State Tuning on most datasets, using just $0.01\%$ of the parameters compared to $0.23\%$ by Initial State Tuning.
144
+
145
+ State-offset Tuning Outperforms Parameter-Based Methods State-offset Tuning $(h)$ outperforms BitFit across all datasets and surpasses LoRA on most datasets. Notably, it also outperforms Additional-scan, a method specifically designed for fine-tuning SSM modules, across all datasets.
146
+
147
+ Furthermore, State-offset Tuning $(h)$ achieves performance comparable to full fine-tuning (S6), highlighting the effectiveness of state-based PEFT for SSM modules, despite using significantly fewer parameters. The results from Mamba-2 (Table 11) further validate the effectiveness of our method. We also include a comparison to Selective Dimension Tuning (SDT) (Galim et al., 2024) in Sec. G.4, showing that our method outperforms SDT while using fewer parameters.
148
+
149
+ # 6 Conclusion
150
+
151
+ In this paper, we introduce state-based methods as a new family of PEFT methods for State Space Models, serving as a superior alternative to prompt-based methods. We propose State-offset Tuning as a new state-based PEFT method and demonstrate its effectiveness through extensive experiments.
152
+
153
+ # 7 Limitations
154
+
155
+ While we demonstrate that State-offset Tuning is effective for fine-tuning SSMs in the text domain, its applicability to other domains, such as vision or speech, remains unexplored. Existing PEFT methods, such as LoRA and Prompt Tuning, have been successfully applied across various domains (Jia et al., 2022; Gal et al., 2023; Ran et al., 2024). Extending State-offset Tuning to models in other domains, such as Vision Mamba (Zhu et al., 2025), is an interesting direction for future work.
156
+
157
+ Potential Risks Our approach enables parameter-efficient fine-tuning (PEFT) of pretrained SSMs, significantly reducing the computational cost of adaptation. While this is beneficial for resource-constrained scenarios, it also presents potential risks. Specifically, adversaries could leverage our method to efficiently fine-tune pretrained SSMs on harmful or biased data, enabling the rapid adaptation of models for malicious purposes with minimal computational resources. This could lead to the proliferation of harmful or deceptive models that reinforce misinformation, bias, or toxicity. To mitigate these risks, future work should explore more robust safety measures, such as integrating ethical fine-tuning constraints and monitoring mechanisms.
158
+
159
+ # References
160
+
161
+ Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman,
162
+
163
+ Shyamal Anadkat, et al. 2023. GPT-4 technical report. arXiv preprint arXiv:2303.08774.
164
+ Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65-72, Ann Arbor, Michigan. Association for Computational Linguistics.
165
+ Roy Bar-Haim, Ido Dagan, Bill Dolan, Lisa Ferro, Danilo Giampiccolo, Bernardo Magnini, and Idan Szpektor. 2006. The second pascal recognising textual entailment challenge. In Proceedings of the second PASCAL challenges workshop on recognising textual entailment, volume 1. CiteSeer.
166
+ Luisa Bentivogli, Peter Clark, Ido Dagan, and Danilo Giampiccolo. 2009. The fifth pascal recognizing textual entailment challenge. TAC, 7(8):1.
167
+ Stella Biderman, Hailey Schoelkopf, Quentin Gregory Anthony, Herbie Bradley, Kyle O'Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, et al. 2023. Pythia: A suite for analyzing large language models across training and scaling. In International Conference on Machine Learning, pages 2397-2430. PMLR.
168
+ Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877-1901.
169
+ Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The pascal recognising textual entailment challenge. In Machine learning challenges workshop, pages 177-190. Springer.
170
+ Tri Dao and Albert Gu. 2024. Transformers are SSMs: Generalized models and efficient algorithms through structured state space duality. In International Conference on Machine Learning.
171
+ Bill Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In Third international workshop on paraphrasing (IWP2005).
172
+ Daniel Y Fu, Tri Dao, Khaled Kamal Saab, Armin W Thomas, Atri Rudra, and Christopher Re. 2022. Hungry hungry hippos: Towards language modeling with state space models. In International Conference on Learning Representations.
173
+ Rinon Gal, Yuval Alaluf, Yuval Atzmon, Or Patashnik, Amit Haim Bermano, Gal Chechik, and Daniel Cohen-Or. 2023. An image is worth one word: Personalizing text-to-image generation using textual inversion. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net.
174
+
175
+ Kevin Galim, Wonjun Kang, Yuchen Zeng, Hyung Il Koo, and Kangwook Lee. 2024. Parameter-efficient fine-tuning of state space models. arXiv preprint arXiv:2410.09016.
176
+ Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, and Connor Leahy. 2020. The Pile: An 800GB dataset of diverse text for language modeling. arXiv preprint arXiv:2101.00027.
177
+ Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and William B Dolan. 2007. The third pascal recognizing textual entailment challenge. In Proceedings of the ACL-PASCAL workshop on textual entailment and paraphrasing, pages 1-9.
178
+ Bogdan Gliwa, Iwona Mochol, Maciej Biesek, and Aleksander Wawer. 2019. SAMSum corpus: A human-annotated dialogue dataset for abstractive summarization. EMNLP-IJCNLP 2019, page 70.
179
+ Albert Gu and Tri Dao. 2024. Mamba: Linear-time sequence modeling with selective state spaces. In First Conference on Language Modeling.
180
+ Albert Gu, Tri Dao, Stefano Ermon, Atri Rudra, and Christopher Ré. 2020. Hippo: Recurrent memory with optimal polynomial projections. In Advances in Neural Information Processing Systems, volume 33, pages 1474-1487.
181
+ Albert Gu, Karan Goel, and Christopher Re. 2022. Efficiently modeling long sequences with structured state spaces. In International Conference on Learning Representations.
182
+ Albert Gu, Isys Johnson, Karan Goel, Khaled Saab, Tri Dao, Atri Rudra, and Christopher Re. 2021. Combining recurrent, convolutional, and continuous-time models with linear state space layers. In Advances in Neural Information Processing Systems, volume 34, pages 572-585.
183
+ Junxian He, Chunting Zhou, Xuezhe Ma, Taylor Berg-Kirkpatrick, and Graham Neubig. 2021. Towards a unified view of parameter-efficient transfer learning. In International Conference on Learning Representations.
184
+ Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for NLP. In International Conference on Machine Learning, pages 2790-2799.
185
+ Edward J Hu, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. 2021. LoRA: Low-rank adaptation of large language models. In International Conference on Learning Representations.
186
+
187
+ Menglin Jia, Luming Tang, Bor-Chun Chen, Claire Cardie, Serge Belongie, Bharath Hariharan, and Ser-Nam Lim. 2022. Visual prompt tuning. In European Conference on Computer Vision, pages 709-727. Springer.
188
+ Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pappas, and François Fleuret. 2020. Transformers are RNNs: Fast autoregressive transformers with linear attention. In International Conference on Machine Learning, pages 5156-5165.
189
+ Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3045-3059.
190
+ Xiang Lisa Li and Percy Liang. 2021. Prefix-Tuning: Optimizing Continuous Prompts for Generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4582-4597.
191
+ Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.
192
+ Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Tam, Zhengxiao Du, Zhilin Yang, and Jie Tang. 2022. P-Tuning: Prompt Tuning Can Be Comparable to Finetuning Across Scales and Tasks. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 61-68.
193
+ Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, and Jie Tang. 2021. GPT Understands, Too. arXiv:2103.10385.
194
+ Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. Open-Review.net.
195
+ Sourab Mangrulkar, Sylvain Gugger, Lysandre Debut, Younes Belkada, Sayak Paul, and Benjamin Bossan. 2022. Peft: State-of-the-art parameter-efficient fine-tuning methods. https://github.com/huggingface/peft.
196
+ Linyong Nan, Dragomir Radev, Rui Zhang, Amrit Rau, Abhinand Sivaprasad, Chiachun Hsieh, Xiangru Tang, Aadit Vyas, Neha Verma, Pranav Krishna, et al. 2021. DART: Open-Domain Structured Data Record to Text Generation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 432-447.
197
+ Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318.
198
+
199
+ Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Yang, Zach DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learning library. Preprint, arXiv:1912.01703.
200
+ Bo Peng, Eric Alcaide, Quentin Gregory Anthony, Alon Albalak, Samuel Arcadinho, Stella Biderman, Huanqi Cao, Xin Cheng, Michael Nguyen Chung, Leon Derczynski, et al. 2023. RWKV: Reinventing RNNs for the transformer era. In The 2023 Conference on Empirical Methods in Natural Language Processing.
201
+ Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable questions for SQuAD. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784-789, Melbourne, Australia. Association for Computational Linguistics.
202
+ Lingmin Ran, Xiaodong Cun, Jia-Wei Liu, Rui Zhao, Song Zijie, Xintao Wang, Jussi Keppo, and Mike Zheng Shou. 2024. X-adapter: Adding universal compatibility of plugins for upgraded diffusion model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8775-8784.
203
+ Ying Sheng, Shiyi Cao, Dacheng Li, Coleman Hooper, Nicholas Lee, Shuo Yang, Christopher Chou, Banghua Zhu, Lianmin Zheng, Kurt Keutzer, et al. 2023. S-lora: Serving thousands of concurrent lora adapters. CoRR.
204
+ Daniel G. A. Smith and Johnnie Gray. 2018. opt_einsum - a python package for optimizing contraction order for einsum-like expressions. Journal of Open Source Software, 3(26):753.
205
+ Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 conference on empirical methods in natural language processing, pages 1631-1642.
206
+ Vladislav Sovrasov. 2018-2024. ptflops: a flops counting tool for neural networks in pytorch framework.
207
+ Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong Wang, and Furu Wei. 2023. Retentive network: A successor to transformer for large language models. arXiv preprint arXiv:2307.08621.
208
+ Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30.
209
+
210
+ Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In International Conference on Learning Representations.
211
+ Alex Warstadt, Amanpreet Singh, and Samuel R Bowman. 2019. Neural network acceptability judgments. Transactions of the Association for Computational Linguistics, 7:625-641.
212
+ Adina Williams, Nikita Nangia, and Samuel R Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL HLT 2018, pages 1112-1122. Association for Computational Linguistics (ACL).
213
+ Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
214
+ Masakazu Yoshimura, Teruaki Hayashi, and Yota Maeda. 2025. MambaPEFT: Exploring parameter-efficient fine-tuning for mamba. In The Thirteenth International Conference on Learning Representations.
215
+ Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, et al. 2018. Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-sql task. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3911-3921.
216
+ Elad Ben Zaken, Yoav Goldberg, and Shauli Ravfogel. 2022. BitFit: Simple parameter-efficient fine-tuning for transformer-based masked language-models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 1-9.
217
+ Yuchen Zeng and Kangwook Lee. 2024. The expressive power of low-rank adaptation. In International Conference on Learning Representations.
218
+ Lianghui Zhu, Bencheng Liao, Qian Zhang, Xinlong Wang, Wenyu Liu, and Xinggang Wang. 2025. Vision mamba: Efficient visual representation learning with bidirectional state space model. In Forty-first International Conference on Machine Learning.
219
+
220
+ # A Visual Comparison of Prompt-based Methods and State-based Methods
221
+
222
+ ![](images/a6c4522debd040bb917cdc09cfccbeb894e35cfbe29c6b3a1c7570f585d596fa.jpg)
223
+ Fig. 2 compares prompt-based methods and state-based methods, including our proposed State-offset Tuning, within the S6 block.
224
+ Figure 2: Visual comparison of prompt-based methods and state-based methods in the S6 block.
225
+
226
+
227
+ State-based Methods Operate within the SSM Module Fig. 2 shows that prompt-based methods, such as Prefix-Tuning, rely on virtual tokens external to the S6 block. In contrast, state-based methods, such as Initial State Tuning, State-offset Tuning $(h)$ , and State-offset Tuning $(y)$ , directly adjust state-related features within the S6 block.
228
+
229
+ State-offset Tuning Affects the Current Timestep Figure 2 illustrates how Prefix-Tuning and Initial State Tuning modify features at early timesteps, indirectly affecting the current state. However, this impact diminishes over time. In contrast, State-offset Tuning $(h)$ and State-offset Tuning $(y)$ directly influence the state at each timestep, resulting in more effective adaptation.
230
+
231
+ # B Iterative Suffix-Tuning and State-offset Tuning
232
+
233
+ In this section, we show that Iterative Suffix-Tuning for SSMs is equivalent to State-offset Tuning.
234
+
235
+ # State-offset Tuning is Iterative Suffix-Tuning
236
+
237
+ Fig. 3 provides two different implementations of Iterative Suffix-Tuning on SSMs (S6) with a virtual token (suffix) $x_{t + 1}$.
238
+
239
+ ![](images/2a8a0d7c1404f0e4d463182fb25280fff03a07cfc75f96cdf81df787d1be0214.jpg)
240
+ (a) Iterative Suffix-Tuning (with $t + 1$ as current timestep)
241
+
242
+ ![](images/d4cdbc774d59f50a77a47344acca665c017fda6a14d02bdfe4da55e18157851f.jpg)
243
+ (b) Iterative Suffix-Tuning (with $t$ as current timestep)
+
+ Figure 3: Two different implementations of Iterative Suffix-Tuning in S6. We show that Fig. 3b is equivalent to State-offset Tuning.
244
+
245
+ Fig. 3a views $t + 1$ as the current timestep. In this case, the input-dependent $C_{t + 1} = W_C x_{t + 1}$ is determined solely by the suffix $x_{t + 1} \in \mathbb{R}^D$, which is constant at inference time; thus the input dependency of $C$ is lost, reducing the expressive power of S6.
246
+
247
+ To address this, we instead view $t$ as the current timestep and interpret $x_{t + 1}$ as a future token (Fig. 3b). Consequently, we time-shift $x_{t + 1}$ by multiplying it with the inverse of $\overline{A}_{t + 1}$.
248
+
249
+ Fig. 3a: $y_{t + 1} = C_{t + 1}(\overline{A}_{t + 1}\pmb{h}_t + \overline{B}_{t + 1}x_{t + 1})$ ,
250
+
251
+ Fig. 3b: $y_{t} = C_{t}(h_{t} + \overline{A}_{t + 1}^{-1}\overline{B}_{t + 1}x_{t + 1})$ .
252
+
253
+ Therefore, according to the equation corresponding to Fig. 3b, Iterative Suffix-Tuning can be implemented by updating only $\overline{A}_{t+1}^{-1}\overline{B}_{t+1}x_{t+1}$. Since this term depends solely on the constant suffix $x_{t+1}$, we can directly replace it with a learnable parameter $h^{\prime}$ ($h^{\prime} := \overline{A}_{t+1}^{-1}\overline{B}_{t+1}x_{t+1}$), which is equivalent to State-offset Tuning $(h)$ (Table 1).
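+
+ To make the equivalence concrete, the following is a minimal PyTorch-style sketch of a single S6 recurrence step with the two offset variants applied as in the equations above. The function name and tensor shapes are illustrative assumptions, not the implementation used in the experiments.
+
+ ```python
+ import torch
+
+ def s6_step_with_offsets(h_prev, x_t, A_bar_t, B_bar_t, C_t, h_off=None, y_off=None):
+     # Assumed per-channel S6 shapes: h_prev, A_bar_t, B_bar_t: (D, N); x_t: (D,); C_t: (N,)
+     h_t = A_bar_t * h_prev + B_bar_t * x_t.unsqueeze(-1)   # frozen selective-SSM recurrence
+     state = h_t if h_off is None else h_t + h_off          # State-offset Tuning (h): y_t = C_t (h_t + h')
+     y_t = state @ C_t                                      # per-channel output, shape (D,)
+     if y_off is not None:                                  # State-offset Tuning (y): offset after the C_t projection
+         y_t = y_t + y_off
+     return h_t, y_t
+ ```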
254
+
255
+ # C Low-Rank State-offset Tuning
256
+
257
+ State-offset Tuning $(h)$ shows superior parameter efficiency on Mamba compared to other PEFT methods. To further reduce trainable parameters, we can represent the learnable state-offset as a product of two low-rank matrices, inspired by LoRA (Hu et al., 2021). This is particularly useful for Mamba-2, where the state dimension is larger than in Mamba, leading to an increased number of trainable parameters.
258
+
259
+ In such cases, low-rank techniques can effectively mitigate the parameter overhead. Experimental results of State-offset Tuning $(h)$ with a lower rank on Mamba-2 are provided in Sec. G.2.
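+
+ A minimal sketch of this factorization is given below, assuming the offset $h'$ has shape (channels, state dimension); the class name, rank, and initialization are illustrative assumptions.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class LowRankStateOffset(nn.Module):
+     """Learnable state offset h' represented as a product of two low-rank matrices."""
+     def __init__(self, d_channels: int, d_state: int, rank: int = 32):
+         super().__init__()
+         self.U = nn.Parameter(torch.randn(d_channels, rank) * 0.01)
+         self.V = nn.Parameter(torch.zeros(rank, d_state))  # zero init so the offset starts at zero
+
+     def forward(self) -> torch.Tensor:
+         # (d_channels, d_state); added to the hidden state before the C_t projection
+         return self.U @ self.V
+ ```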
260
+
261
+ # D PEFT Baselines
262
+
263
+ In this section, we provide a more detailed description of the baseline methods.
264
+
265
+ LoRA (Hu et al., 2021) LoRA fine-tunes large models by keeping the bulk of the pretrained parameters untouched while introducing trainable low-rank matrices within each Transformer layer. The method builds on the observation that a large weight update can be effectively approximated by the product of two low-rank matrices, which greatly reduces the number of trainable parameters. LoRA also includes a scaling parameter that adjusts the influence of the original and LoRA weights during training. We use the Hugging Face version (Apache License 2.0, Mangrulkar et al. (2022)) of LoRA for our experiments.
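+
+ For illustration, a minimal LoRA layer might look as follows. This is a sketch of the general idea, not the Hugging Face implementation used in the experiments; layer names and initialization are illustrative assumptions.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class LoRALinear(nn.Module):
+     """Frozen base projection plus a trainable low-rank update, scaled by alpha / rank."""
+     def __init__(self, base: nn.Linear, rank: int = 8, alpha: int = 8, dropout: float = 0.1):
+         super().__init__()
+         self.base = base
+         for p in self.base.parameters():
+             p.requires_grad = False                      # pretrained weights stay untouched
+         self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
+         self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
+         self.scaling = alpha / rank
+         self.dropout = nn.Dropout(dropout)
+
+     def forward(self, x: torch.Tensor) -> torch.Tensor:
+         update = self.dropout(x) @ self.lora_A.T @ self.lora_B.T
+         return self.base(x) + self.scaling * update
+ ```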
266
+
267
+ Prompt Tuning (Lester et al., 2021) This method involves freezing the entire model and adding a trainable soft prompt to the input. The prompt consists of continuous virtual tokens that provide additional context.
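+
+ A minimal sketch of the soft-prompt idea, assuming the prompt is prepended at the embedding layer; the class name and initialization scale are illustrative assumptions.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class SoftPrompt(nn.Module):
+     """Trainable virtual-token embeddings prepended to the (frozen) input embeddings."""
+     def __init__(self, num_virtual_tokens: int, d_model: int):
+         super().__init__()
+         self.prompt = nn.Parameter(torch.randn(num_virtual_tokens, d_model) * 0.02)
+
+     def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
+         # input_embeds: (batch, seq_len, d_model)
+         prompt = self.prompt.unsqueeze(0).expand(input_embeds.size(0), -1, -1)
+         return torch.cat([prompt, input_embeds], dim=1)
+ ```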
268
+
269
+ Prefix-Tuning (Li and Liang, 2021) Similar to Prompt Tuning, Prefix-Tuning adds trainable virtual tokens, but extends them to every Transformer layer by prepending trainable embeddings to the attention keys and values. To counter instability when training these prefixes directly, they are reparameterized through an over-parameterized MLP, which can be discarded after training.
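+
+ A sketch of this reparameterization is shown below: an over-parameterized MLP maps a small set of prefix embeddings to per-layer key/value prefixes and can be dropped after training. Dimensions and names are illustrative assumptions.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class PrefixEncoder(nn.Module):
+     """MLP that produces per-layer key/value prefix vectors from a small prefix embedding."""
+     def __init__(self, num_layers: int, num_prefix: int, d_model: int, d_hidden: int = 512):
+         super().__init__()
+         self.prefix_embed = nn.Parameter(torch.randn(num_prefix, d_model) * 0.02)
+         self.mlp = nn.Sequential(
+             nn.Linear(d_model, d_hidden),
+             nn.Tanh(),
+             nn.Linear(d_hidden, num_layers * 2 * d_model),  # keys and values for every layer
+         )
+
+     def forward(self) -> torch.Tensor:
+         # (num_prefix, num_layers * 2 * d_model); reshaped and prepended to each layer's attention
+         return self.mlp(self.prefix_embed)
+ ```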
270
+
271
+ BitFit (Zaken et al., 2022) This PEFT method simplifies fine-tuning by training only the bias terms while freezing the other model weights, drastically reducing trainable parameters.
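+
+ The method reduces to a few lines; a sketch assuming standard PyTorch parameter naming:
+
+ ```python
+ import torch.nn as nn
+
+ def apply_bitfit(model: nn.Module) -> None:
+     """Freeze every weight and keep only bias terms trainable."""
+     for name, param in model.named_parameters():
+         param.requires_grad = name.endswith(".bias") or name == "bias"
+ ```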
272
+
273
+ SDT (Galim et al., 2024) SDT (Selective Dimension Tuning) employs a sparse updating approach for the matrices $A$, $B$, and $C$ ($W_B$ and $W_C$ for S6), while additionally applying LoRA to the linear projection layers. All remaining layers are kept frozen. The process for determining which parameters to update involves a warmup stage, during which parameters are flagged as updatable if they exhibit a significant gradient magnitude. In our SDT experiments, we excluded LoRA from the linear projection layers and focused solely on its S6 component.
274
+
275
+
276
+
277
+ Additional-scan (Yoshimura et al., 2025) This approach enhances the model's expressivity by expanding the state dimensions for $A$ , $W_{B}$ , and $W_{C}$ . During training, only the added dimensions are marked as trainable.
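+
+ A rough sketch of this idea is shown below: extra state dimensions are concatenated to the frozen S6 parameters, and only the added parts are trained. The shapes and class name are illustrative assumptions, not the MambaPEFT implementation.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class AdditionalScan(nn.Module):
+     """Append n_add trainable state dimensions to frozen A (D, N), W_B (N, D), W_C (N, D)."""
+     def __init__(self, A: torch.Tensor, W_B: torch.Tensor, W_C: torch.Tensor, n_add: int = 8):
+         super().__init__()
+         self.register_buffer("A0", A.detach().clone())      # frozen pretrained parameters
+         self.register_buffer("W_B0", W_B.detach().clone())
+         self.register_buffer("W_C0", W_C.detach().clone())
+         D = A.shape[0]
+         self.A_add = nn.Parameter(torch.zeros(D, n_add))
+         self.W_B_add = nn.Parameter(torch.zeros(n_add, D))
+         self.W_C_add = nn.Parameter(torch.zeros(n_add, D))
+
+     def expanded_params(self):
+         A = torch.cat([self.A0, self.A_add], dim=1)
+         W_B = torch.cat([self.W_B0, self.W_B_add], dim=0)
+         W_C = torch.cat([self.W_C0, self.W_C_add], dim=0)
+         return A, W_B, W_C
+ ```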
278
+
279
+ # E Datasets
280
+
281
+ <table><tr><td>Dataset</td><td>#Train</td><td>#Valid</td><td>#Epochs</td><td>Model size</td><td>Metrics</td></tr><tr><td>RTE</td><td>2490</td><td>277</td><td>10</td><td>130m</td><td>Acc.</td></tr><tr><td>MRPC</td><td>3668</td><td>408</td><td>10</td><td>130m</td><td>Acc.</td></tr><tr><td>CoLA</td><td>8551</td><td>1043</td><td>10</td><td>130m</td><td>Acc.</td></tr><tr><td>SST-2</td><td>67349</td><td>872</td><td>10</td><td>130m</td><td>Acc.</td></tr><tr><td>QNLI</td><td>104743</td><td>5463</td><td>10</td><td>130m</td><td>Acc.</td></tr><tr><td>QQP</td><td>363846</td><td>40430</td><td>3</td><td>130m</td><td>Acc.</td></tr><tr><td>MNLI</td><td>392702</td><td>19647</td><td>3</td><td>130m</td><td>Acc.</td></tr><tr><td>Spider</td><td>6918</td><td>1034</td><td>10</td><td>1.4B, 2.8B</td><td>Acc.</td></tr><tr><td>SAMSum</td><td>14732</td><td>819</td><td>10</td><td>1.4B</td><td>ROUGE</td></tr><tr><td>DART</td><td>62659</td><td>2768</td><td>10</td><td>130m</td><td>METEOR, BLEU</td></tr></table>
282
+
283
+ Table 4: Dataset details. We report the number of training and validation samples, number of training epochs, employed model size and evaluation metrics.
284
+
285
+ This paper examines four datasets across two domains: Natural Language Understanding (NLU) and Natural Language Generation (NLG). Table 4 presents detailed information for each dataset.
286
+
287
+ GLUE (Wang et al., 2019) A benchmark comprising nine tasks in English for assessing language understanding models, including sentiment analysis, linguistic acceptability, and question answering. We use the following datasets: RTE (Dagan et al., 2005; Bar-Haim et al., 2006; Giampiccolo et al., 2007; Bentivogli et al., 2009), MRPC (Dolan and Brockett, 2005), CoLA (Warstadt et al., 2019), SST-2 (Socher et al., 2013), QNLI (Rajpurkar et al., 2018), $\mathsf{QQP}^2$ , and MNLI (Williams et al., 2018). Evaluation is mainly through accuracy, except for CoLA where Matthews correlation is used. The final metric is calculated as the average accuracy (Matthews correlation for CoLA) across all datasets. The individual datasets are available under different permissive licenses. We use the version hosted at https://huggingface.co/datasets/nyu-mll/glue.
288
+
289
+ SAMSum (Gliwa et al., 2019) A dataset for dialogue summarization featuring about 16,000 synthetic conversations in English with summaries, created to simulate digital communications with varied tones and styles.
290
+
291
+ Its structure helps in developing systems that process conversational text. The dataset is evaluated via ROUGE score. This dataset is available under the CC BY-NC-ND 4.0 license. We use the version hosted at https://huggingface.co/datasets/Samsung/samsum.
292
+
293
+ Spider (Yu et al., 2018) A text-to-SQL dataset with 10,000 annotated SQL queries across $200+$ databases, classifying queries from easy to extra hard based on SQL operation complexity. It involves translating English questions to SQL, evaluated via execution accuracy. Execution accuracy considers the output correct if the model's predicted SQL query and the ground truth SQL query yield the same results when executed on the database. This dataset is available under the CC BY-SA 4.0 license. We use the version hosted at https://huggingface.co/datasets/xlangai/spider.
294
+
295
+ DART (Nan et al., 2021) Comprising over 80,000 instances, DART focuses on English RDF-to-text generation, organized by structured data triples and corresponding text summaries. It is assessed using METEOR and BLEU metrics. This dataset is available under the MIT license. We use the version hosted at https://huggingface.co/datasets/Yale-LILY/dart.
296
+
297
+ # F Experimental Details
298
+
299
+ For every dataset, we select the model size based on the dataset's difficulty and conduct a brief grid search for one epoch using a subset of the data (1k-2k instances) with learning rates of $\{4\times 10^{-1},2\times 10^{-1},1\times 10^{-1},\dots,1\times 10^{-5}\}$. The learning rate with the lowest training loss is then selected. In our experimental results, we report the metric from the best epoch observed on the validation set during training, employing early stopping. Each experiment is conducted once. We apply fine-tuning methods to the SSM module (S6) of Mamba (130M, 1.4B, $2.8\mathrm{B})^3$ and the SSM module (SSD) of Mamba-2 (130M, 1.3B)$^4$, pretrained on the Pile (MIT License, Gao et al. (2020)), using AdamW (Loshchilov and Hutter, 2019) with a linear decay schedule for the learning rate. In general, we choose hyperparameters for each individual method to ensure that all methods operate within a similar parameter budget.
300
+
301
+ Tables 5 and 6 show the selected learning rates and chosen hyperparameters for each method. For assessing NLG tasks, we utilize beam search with five beams and a maximum beam length of 1024. BLEU (Papineni et al., 2002), ROUGE (Lin, 2004), and METEOR (Banerjee and Lavie, 2005) metrics are computed using Hugging Face's evaluate library<sup>5</sup>.
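+
+ The learning-rate selection can be sketched as follows; `train_one_epoch` and `candidate_lrs` are hypothetical placeholders for a short training run on the 1k-2k-instance subset and for the grid listed above.
+
+ ```python
+ def select_learning_rate(train_one_epoch, subset, candidate_lrs):
+     """Run one epoch per candidate learning rate and keep the one with the lowest training loss."""
+     best_lr, best_loss = None, float("inf")
+     for lr in candidate_lrs:
+         loss = train_one_epoch(subset, lr=lr)
+         if loss < best_loss:
+             best_lr, best_loss = lr, loss
+     return best_lr
+ ```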
302
+
303
+ We use an NVIDIA RTX 3090 24GB for training models with less than 1 billion parameters, and an NVIDIA H100 80GB for larger models. We implemented our project in PyTorch (Modified BSD license, Paszke et al. (2019)), utilizing the Hugging Face trainer (Apache License 2.0, Wolf et al. (2020)). We train with batch size 4 for 10 epochs on all datasets except QQP and MNLI for which we use 3 epochs, allowing each training run to finish in under 16 hours. This project spanned three months, utilizing four NVIDIA RTX 3090 24GB GPUs and four NVIDIA H100 80GB GPUs, totaling approximately 17,000 GPU hours.
304
+
305
+ # G Additional Experimental Results
306
+
307
+ # G.1 Mamba Results
308
+
309
+ Training Speed and Memory Usage We conduct a small experiment to compare the memory usage and training speed of State-offset Tuning $(h)$ and LoRA, as they performed most similarly in terms of dataset metrics in our experiments. Using a single H100 GPU, we train for 100 batch iterations with a batch size of 4 and a 1K context, continuously measuring memory usage and batch latency.
310
+
311
+ Table 7 shows the training speed and maximum memory usage for different Mamba sizes for State-offset Tuning $(h)$ and LoRA. State-offset Tuning $(h)$ uses less memory and trains faster, even though it has more trainable parameters. In this experiment, we selected hyperparameters to ensure LoRA has fewer trainable parameters than State-offset Tuning $(h)$. We believe State-offset Tuning $(h)$'s efficiency stems from our optimized einsum implementation, which uses the opt_einsum (MIT License, Smith and Gray (2018)) Python package to reduce memory usage and improve latency.
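+
+ As an illustration, the offset's contribution to the output can be computed with a single optimized contraction; the shapes and the einsum equation below are illustrative assumptions rather than the exact implementation.
+
+ ```python
+ import torch
+ import opt_einsum as oe
+
+ batch, seq_len, d_channels, d_state = 4, 1024, 1536, 16
+ C = torch.randn(batch, seq_len, d_state)      # input-dependent C_t for every position
+ h_offset = torch.randn(d_channels, d_state)   # learnable state offset h'
+
+ # Extra output term C_t h' added to the frozen SSM output; opt_einsum picks an
+ # efficient contraction order for the batched product.
+ extra = oe.contract("bln,dn->bld", C, h_offset)
+ ```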
312
+
313
+ <table><tr><td>Model</td><td colspan="10">Mamba</td><td colspan="3">Mamba-2</td></tr><tr><td>Method / Dataset</td><td>RTE</td><td>MRPC</td><td>CoLA</td><td>SST-2</td><td>QNLI</td><td>QQP</td><td>MNLI</td><td>DART</td><td>SAMSum</td><td>Spider</td><td>DART</td><td>SAMSum</td><td>Spider</td></tr><tr><td>LoRA</td><td>2e-03</td><td>2e-03</td><td>4e-05</td><td>2e-03</td><td>1e-03</td><td>1e-03</td><td>2e-03</td><td>4e-03</td><td>2e-03</td><td>4e-03</td><td>4e-03</td><td>2e-03</td><td>4e-03</td></tr><tr><td>Additional-scan</td><td>4e-03</td><td>2e-03</td><td>2e-03</td><td>1e-01</td><td>2e-03</td><td>4e-02</td><td>4e-03</td><td>4e-03</td><td>4e-03</td><td>4e-03</td><td>2e-02</td><td>4e-03</td><td>1e-02</td></tr><tr><td>SDT</td><td>1e-03</td><td>4e-02</td><td>1e-01</td><td>4e-02</td><td>2e-02</td><td>2e-02</td><td>1e-01</td><td>4e-02</td><td>2e-02</td><td>4e-02</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Initial State Tuning</td><td>4e-04</td><td>1e-03</td><td>2e-03</td><td>2e-03</td><td>2e-03</td><td>2e-03</td><td>2e-03</td><td>2e-03</td><td>2e-04</td><td>1e-03</td><td>4e-03</td><td>2e-04</td><td>4e-04</td></tr><tr><td>State-offset Tuning (h)</td><td>1e-03</td><td>2e-04</td><td>2e-04</td><td>1e-04</td><td>1e-04</td><td>4e-05</td><td>4e-04</td><td>4e-04</td><td>1e-04</td><td>2e-04</td><td>1e-03</td><td>2e-05</td><td>2e-05</td></tr><tr><td>State-offset Tuning (h) (low rank)</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>4e-03</td><td>2e-04</td><td>2e-04</td></tr><tr><td>State-offset Tuning (y)</td><td>1e-03</td><td>2e-03</td><td>1e-03</td><td>1e-03</td><td>2e-03</td><td>1e-03</td><td>1e-03</td><td>4e-03</td><td>1e-03</td><td>2e-03</td><td>1e-02</td><td>2e-04</td><td>1e-03</td></tr></table>
314
+
315
+ Table 5: Learning rates for each method and dataset. For Mamba and Mamba-2, learning rates for each method and dataset are determined via a small grid search on a dataset subset. The learning rate yielding the best training loss is chosen as the final rate.
316
+
317
+ <table><tr><td>Method /Model</td><td>Mamba 130M</td><td>Mamba 1.4B</td><td>Mamba-2 130M</td><td>Mamba-2 1.3B</td></tr><tr><td rowspan="4">LoRA</td><td>Rank = 8</td><td>Rank = 8</td><td>Rank = 16</td><td>Rank = 16</td></tr><tr><td>α = 8</td><td>α = 8</td><td>α = 16</td><td>α = 16</td></tr><tr><td>Dropout = 0.1</td><td>Dropout = 0.1</td><td>Dropout = 0.1</td><td>Dropout = 0.1</td></tr><tr><td>Modules = all weight matrices in S6</td><td>Modules = all weight matrices in S6</td><td>Modules = all weight matrices in SSD</td><td>Modules = all weight matrices in SSD</td></tr><tr><td>Additional-scan</td><td>#States = 8</td><td>#States = 8</td><td>#States = 32</td><td>#States = 32</td></tr><tr><td rowspan="2">SDT</td><td>Freeze #Channels = 50.0%</td><td>Freeze #Channels = 50.0%</td><td rowspan="2">-</td><td rowspan="2">-</td></tr><tr><td>Freeze #States = 75.0%</td><td>Freeze #States = 75.0%</td></tr><tr><td>Initial State Tuning</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>State-offset Tuning (h)</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>State-offset Tuning (h) (low rank)</td><td>-</td><td>-</td><td>Rank = 32</td><td>Rank = 64</td></tr><tr><td>State-offset Tuning (y)</td><td>-</td><td>-</td><td>-</td><td>-</td></tr></table>
318
+
319
+ Table 6: Hyperparameter settings for each model and PEFT method. In general, we adjust hyperparameters to maintain a similar number of trainable parameters.
320
+
321
+ <table><tr><td>Model</td><td>Method</td><td>Params (%)</td><td>Mem. (GB)</td><td>Latency (s)</td></tr><tr><td rowspan="2">130M</td><td>State-offset Tuning (h)</td><td>0.45</td><td>4.2</td><td>0.13</td></tr><tr><td>LoRA</td><td>0.35</td><td>5.44</td><td>0.18</td></tr><tr><td rowspan="2">370M</td><td>State-offset Tuning (h)</td><td>0.42</td><td>9.36</td><td>0.33</td></tr><tr><td>LoRA</td><td>0.32</td><td>11.56</td><td>0.45</td></tr><tr><td rowspan="2">790M</td><td>State-offset Tuning (h)</td><td>0.3</td><td>13.91</td><td>0.49</td></tr><tr><td>LoRA</td><td>0.23</td><td>17.17</td><td>0.61</td></tr><tr><td rowspan="2">1.4B</td><td>State-offset Tuning (h)</td><td>0.23</td><td>18.77</td><td>0.67</td></tr><tr><td>LoRA</td><td>0.17</td><td>22.99</td><td>0.8</td></tr><tr><td rowspan="2">2.8B</td><td>State-offset Tuning (h)</td><td>0.19</td><td>31.49</td><td>1.13</td></tr><tr><td>LoRA</td><td>0.14</td><td>37.84</td><td>1.33</td></tr></table>
322
+
323
+ Table 7: Training speed and memory usage. For each Mamba size, we compare the maximum memory usage and mean latency for processing a single batch during training. Our State-offset Tuning $(h)$ is compared against LoRA, as it demonstrated the most similar performance in the experiment section. We configure LoRA to use fewer trainable parameters than State-offset Tuning $(h)$ . Despite this, State-offset Tuning $(h)$ still consumes less memory and is faster in training.
324
+
325
+ FLOP Overhead While it is possible to avoid extra FLOP with LoRA in constrained single-task settings by merging weights into the pretrained model, real-world serving scenarios often require a single pretrained model to support multiple downstream tasks simultaneously via multiple LoRA adapters. In such cases, avoiding extra FLOP would require storing separately merged models for each task in memory—an inefficient solution. Alternatively, merging weights dynamically at inference time introduces significant computational bottlenecks. As a result, many recent works focus on serving many LoRA adapters efficiently without weight merging (Sheng et al., 2023).
326
+
327
+
328
+
329
+ <table><tr><td colspan="2">Sequence Length</td><td>L=128</td><td>L=256</td><td>L=512</td><td>L=1024</td><td rowspan="2">Relative (%)</td></tr><tr><td>Model</td><td>Method</td><td colspan="4">GFLOP</td></tr><tr><td rowspan="3">130M</td><td>Pretrained</td><td>16.45</td><td>32.90</td><td>65.81</td><td>131.61</td><td>100.000</td></tr><tr><td>State-offset Tuning (h)</td><td>16.46</td><td>32.91</td><td>65.83</td><td>131.65</td><td>+ 0.029</td></tr><tr><td>LoRA</td><td>16.61</td><td>33.21</td><td>66.42</td><td>132.84</td><td>+ 0.937</td></tr><tr><td rowspan="3">370M</td><td>Pretrained</td><td>47.35</td><td>94.69</td><td>189.39</td><td>378.77</td><td>100.000</td></tr><tr><td>State-offset Tuning (h)</td><td>47.36</td><td>94.72</td><td>189.44</td><td>378.87</td><td>+ 0.027</td></tr><tr><td>LoRA</td><td>47.76</td><td>95.52</td><td>191.03</td><td>382.06</td><td>+ 0.867</td></tr><tr><td rowspan="3">790M</td><td>Pretrained</td><td>101.22</td><td>202.44</td><td>404.88</td><td>809.75</td><td>100.000</td></tr><tr><td>State-offset Tuning (h)</td><td>101.24</td><td>202.48</td><td>404.95</td><td>809.90</td><td>+ 0.019</td></tr><tr><td>LoRA</td><td>101.84</td><td>203.67</td><td>407.34</td><td>814.67</td><td>+ 0.608</td></tr><tr><td rowspan="3">1.4B</td><td>Pretrained</td><td>175.23</td><td>350.45</td><td>700.90</td><td>1401.79</td><td>100.000</td></tr><tr><td>State-offset Tuning (h)</td><td>175.25</td><td>350.50</td><td>701.00</td><td>1401.99</td><td>+ 0.014</td></tr><tr><td>LoRA</td><td>176.05</td><td>352.09</td><td>704.17</td><td>1408.35</td><td>+ 0.468</td></tr><tr><td rowspan="3">2.8B</td><td>Pretrained</td><td>353.66</td><td>707.32</td><td>1414.63</td><td>2829.25</td><td>100.000</td></tr><tr><td>State-offset Tuning (h)</td><td>353.70</td><td>707.40</td><td>1414.80</td><td>2829.59</td><td>+ 0.012</td></tr><tr><td>LoRA</td><td>355.03</td><td>710.05</td><td>1420.09</td><td>2840.17</td><td>+ 0.386</td></tr></table>
330
+
331
+ Table 8: FLOP overhead across various model sizes and sequence lengths. State-offset Tuning adds less than $0.03\%$ overhead, whereas LoRA incurs over $30\times$ more extra FLOP compared to ours.
332
+
333
+ Given these practical considerations, we evaluate LoRA without weight merging and conduct experiments comparing the additional FLOP of LoRA and our State-offset Tuning method during inference. We use ptflops (Sovrasov, 2018-2024) to measure computational overhead. As shown in Table 8, our method adds less than $0.03\%$ overhead, while LoRA results in more than 30 times the additional FLOP compared to ours.
334
+
335
+ These results highlight the superior FLOP efficiency of our method compared to LoRA.
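+
+ A measurement in the spirit of Table 8 can be obtained with ptflops as sketched below; the `input_constructor` hook and input shapes are assumptions about how one would adapt the counter to a language model, not the exact measurement script.
+
+ ```python
+ import torch
+ from ptflops import get_model_complexity_info
+
+ def count_macs(model, seq_len=1024):
+     """Report multiply-accumulate operations and parameter count for one forward pass."""
+     def make_inputs(input_res):
+         return {"input_ids": torch.ones(1, input_res[0], dtype=torch.long)}
+
+     macs, params = get_model_complexity_info(
+         model, (seq_len,), input_constructor=make_inputs,
+         as_strings=True, print_per_layer_stat=False,
+     )
+     return macs, params
+ ```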
336
+
337
+ Mamba 2.8B Results Table 9 shows the experimental results using Mamba 2.8B. Our State-offset Tuning $(h)$ outperforms all methods except full fine-tuning.
338
+
339
+ Mamba Results on GLUE Dataset Table 10 shows the full results on the GLUE dataset using Mamba 130M. Our State-offset Tuning $(h)$ achieves the highest average score among all PEFT methods.
340
+
341
+ # G.2 Mamba-2 Results
342
+
343
+ Table 11 shows experimental results with Mamba-2 (Dao and Gu, 2024) models. State-offset Tuning $(h)$ with low-rank adaptation (Sec. C) significantly reduces the number of trainable parameters. It outperforms existing methods on the Spider benchmark by a large margin and achieves performance comparable to other approaches on the SAMSum and DART datasets.
344
+
345
+ # G.3 State-offset Tuning in SSMs vs. Prefix-Tuning in Transformers
346
+
347
+ To highlight the effectiveness of State-offset Tuning, we compare its performance with Prefix-Tuning on the Transformer model Pythia (Biderman et al., 2023). We conduct full fine-tuning and Prefix-Tuning experiments on Pythia 160M on GLUE tasks. The results are shown in Table 12.
348
+
349
+ Full fine-tuning on Mamba 130M generally surpasses Pythia 160M, consistent with Gu and Dao (2024). Prefix-Tuning on both Mamba and Pythia reaches about $85 - 90\%$ of their full fine-tuning performance.
350
+
351
+ Our State-offset Tuning achieves approximately $98\%$ of full fine-tuning performance, effectively closing the gap. This highlights how well the method's design is tailored to SSM-based models.
352
+
353
+ # G.4 Comparison to Selective Dimension Tuning (SDT)
354
+
355
+ We additionally compare our method with Selective Dimension Tuning (SDT) (Galim et al., 2024), a technique derived from theoretical analysis of SSMs. Note that the hyperparameter selection differs from that used in Galim et al. (2024) to ensure the parameter count is more comparable to ours. As shown in Table 13, our method outperforms SDT in most cases while using fewer parameters.
356
+
357
+
358
+
359
+ <table><tr><td colspan="2">Model Size</td><td colspan="6">Mamba 2.8B</td></tr><tr><td colspan="2">Dataset</td><td rowspan="2">Params(%)</td><td colspan="5">Spider</td></tr><tr><td>Type</td><td>Method</td><td>All</td><td>Easy</td><td>Medium</td><td>Hard</td><td>Extra</td></tr><tr><td rowspan="2">-</td><td>Full Fine-tuning (All)</td><td>100.00</td><td>71.8</td><td>87.5</td><td>73.5</td><td>63.8</td><td>51.8</td></tr><tr><td>Full Fine-tuning (S6)</td><td>4.44</td><td>65.7</td><td>81.9</td><td>68.8</td><td>58.0</td><td>41.0</td></tr><tr><td rowspan="3">Parameter based</td><td>LoRA</td><td>0.38</td><td>63.9</td><td>86.3</td><td>68.2</td><td>49.4</td><td>34.3</td></tr><tr><td>BitFit</td><td>0.02</td><td>59.9</td><td>82.3</td><td>60.8</td><td>52.9</td><td>31.3</td></tr><tr><td>Additional-scan</td><td>0.28</td><td>35.0</td><td>62.0</td><td>31.9</td><td>27.4</td><td>12.1</td></tr><tr><td rowspan="2">Prompt based</td><td>Prompt Tuning</td><td>0.01</td><td>50.7</td><td>75.4</td><td>53.8</td><td>37.4</td><td>19.3</td></tr><tr><td>Prefix-Tuning</td><td>10.82</td><td>45.1</td><td>75.0</td><td>45.1</td><td>32.2</td><td>13.9</td></tr><tr><td rowspan="3">State based</td><td>Initial State Tuning</td><td>0.19</td><td>59.7</td><td>82.3</td><td>62.3</td><td>43.7</td><td>35.5</td></tr><tr><td>State-offset Tuning (h)</td><td>0.19</td><td>65.0</td><td>89.1</td><td>65.9</td><td>51.7</td><td>40.4</td></tr><tr><td>State-offset Tuning (y)</td><td>0.01</td><td>63.1</td><td>85.9</td><td>64.1</td><td>52.3</td><td>37.3</td></tr></table>
360
+
361
+ Table 9: Experimental results of fine-tuning the SSM module using pretrained Mamba 2.8B. State-offset Tuning $(h)$ stands out as the most effective method among all PEFT approaches.
362
+
363
+ <table><tr><td colspan="2">Model Size</td><td colspan="9">Mamba 130M</td></tr><tr><td colspan="2">Dataset</td><td rowspan="2">Params(%)</td><td colspan="8">GLUE</td></tr><tr><td>Type</td><td>Method</td><td>RTE</td><td>MRPC</td><td>CoLA</td><td>SST-2</td><td>QNLI</td><td>QQP</td><td>MNLI</td><td>Avg.</td></tr><tr><td rowspan="2">-</td><td>Full Fine-tuning (All)</td><td>100.00</td><td>71.1</td><td>80.6</td><td>63.2</td><td>92.2</td><td>87.4</td><td>87.9</td><td>80.8</td><td>80.5</td></tr><tr><td>Full Fine-tuning (S6)</td><td>4.31</td><td>69.7</td><td>78.9</td><td>59.1</td><td>91.5</td><td>88.1</td><td>87.5</td><td>80.5</td><td>79.3</td></tr><tr><td rowspan="3">Parameter based</td><td>LoRA</td><td>0.92</td><td>66.1</td><td>78.7</td><td>57.8</td><td>90.8</td><td>87.8</td><td>86.9</td><td>79.8</td><td>78.3</td></tr><tr><td>BitFit</td><td>0.06</td><td>69.5</td><td>80.4</td><td>54.7</td><td>92.0</td><td>86.2</td><td>85.3</td><td>77.2</td><td>77.9</td></tr><tr><td>Additional-scan</td><td>0.68</td><td>57.9</td><td>74.0</td><td>38.6</td><td>79.0</td><td>79.9</td><td>70.5</td><td>36.9</td><td>62.4</td></tr><tr><td rowspan="2">Prompt based</td><td>Prompt Tuning</td><td>0.04</td><td>56.0</td><td>71.6</td><td>12.0</td><td>89.4</td><td>76.8</td><td>79.6</td><td>61.5</td><td>63.8</td></tr><tr><td>Prefix-Tuning</td><td>22.69</td><td>67.5</td><td>75.7</td><td>43.4</td><td>91.5</td><td>83.4</td><td>83.1</td><td>35.6</td><td>68.6</td></tr><tr><td rowspan="3">State based</td><td>Initial State Tuning</td><td>0.45</td><td>66.8</td><td>78.4</td><td>53.0</td><td>92.4</td><td>86.4</td><td>86.1</td><td>78.5</td><td>77.4</td></tr><tr><td>State-offset Tuning (h)</td><td>0.45</td><td>67.4</td><td>80.8</td><td>56.2</td><td>91.9</td><td>87.7</td><td>85.6</td><td>79.7</td><td>78.5</td></tr><tr><td>State-offset Tuning (y)</td><td>0.03</td><td>70.0</td><td>79.6</td><td>52.5</td><td>91.7</td><td>86.3</td><td>85.6</td><td>78.2</td><td>77.7</td></tr></table>
364
+
365
+ Table 10: Full results of fine-tuning the SSM module on the GLUE dataset using pretrained Mamba 130M. Our State-offset Tuning $(h)$ achieves the highest average score among all PEFT methods.
366
+
367
+ <table><tr><td colspan="2">Model Size</td><td colspan="9">Mamba-2 1.3B</td><td colspan="3">Mamba-2 130M</td></tr><tr><td colspan="2">Dataset</td><td rowspan="2">Params (%)</td><td colspan="5">Spider</td><td colspan="3">SAMSum</td><td rowspan="2">Params (%)</td><td colspan="2">DART</td></tr><tr><td>Type</td><td>Method</td><td>All</td><td>Easy</td><td>Medium</td><td>Hard</td><td>Extra</td><td>R1</td><td>R2</td><td>RL</td><td>MET.</td><td>BLEU</td></tr><tr><td rowspan="2">-</td><td>Full Fine-tuning (All)</td><td>100.00</td><td>64.8</td><td>85.9</td><td>65.7</td><td>54.0</td><td>42.2</td><td>51.0</td><td>26.9</td><td>42.5</td><td>100.00</td><td>66.6</td><td>34.9</td></tr><tr><td>Full Fine-tuning (SSD)</td><td>2.42</td><td>55.1</td><td>76.2</td><td>56.1</td><td>42.5</td><td>34.3</td><td>50.5</td><td>26.3</td><td>42.4</td><td>4.17</td><td>65.7</td><td>39.7</td></tr><tr><td rowspan="3">Parameter based</td><td>LoRA</td><td>0.37</td><td>45.4</td><td>69.0</td><td>44.4</td><td>37.4</td><td>21.1</td><td>49.7</td><td>25.9</td><td>41.7</td><td>0.76</td><td>70.3</td><td>49.6</td></tr><tr><td>BitFit</td><td>0.02</td><td>50.9</td><td>71.4</td><td>51.6</td><td>45.4</td><td>24.1</td><td>50.9</td><td>26.5</td><td>42.6</td><td>0.03</td><td>66.2</td><td>39.0</td></tr><tr><td>Additional-scan</td><td>0.47</td><td>31.9</td><td>57.3</td><td>30.5</td><td>23.0</td><td>7.2</td><td>43.0</td><td>20.1</td><td>34.8</td><td>0.91</td><td>58.5</td><td>16.0</td></tr><tr><td rowspan="2">Prompt based</td><td>Prompt Tuning</td><td>0.01</td><td>45.2</td><td>62.5</td><td>46.9</td><td>34.5</td><td>25.9</td><td>49.6</td><td>26.1</td><td>41.6</td><td>0.04</td><td>65.5</td><td>36.9</td></tr><tr><td>Prefix-Tuning</td><td>6.99</td><td>47.4</td><td>71.0</td><td>48.2</td><td>32.2</td><td>25.9</td><td>50.8</td><td>26.5</td><td>42.6</td><td>12.81</td><td>69.2</td><td>46.5</td></tr><tr><td rowspan="4">State based</td><td>Initial State Tuning</td><td>1.84</td><td>54.3</td><td>73.4</td><td>57.2</td><td>45.4</td><td>27.1</td><td>50.4</td><td>26.4</td><td>42.3</td><td>3.53</td><td>65.3</td><td>37.2</td></tr><tr><td>State-offset Tuning (h)</td><td>1.84</td><td>58.5</td><td>79.3</td><td>61.6</td><td>44.6</td><td>33.7</td><td>48.8</td><td>24.7</td><td>40.5</td><td>3.53</td><td>70.0</td><td>46.3</td></tr><tr><td>State-offset Tuning (h) (low rank)</td><td>0.35</td><td>60.5</td><td>79.0</td><td>65.7</td><td>52.3</td><td>27.7</td><td>50.4</td><td>26.8</td><td>42.5</td><td>0.72</td><td>69.8</td><td>47.9</td></tr><tr><td>State-offset Tuning (y)</td><td>0.01</td><td>43.6</td><td>66.5</td><td>42.1</td><td>36.9</td><td>21.1</td><td>50.3</td><td>26.2</td><td>42.2</td><td>0.03</td><td>65.9</td><td>38.7</td></tr></table>
368
+
369
+ Table 11: Experimental results of fine-tuning the SSM module using pretrained Mamba-2 (Dao and Gu, 2024) models. We evaluate Spider and its subsets with execution accuracy, SAMSum using ROUGE-1/2/L scores, and DART through METEOR and BLEU scores. State-offset Tuning $(h)$ with low-rank adaptation (Sec. C) significantly reduces trainable parameters. It outperforms existing methods on Spider by a wide margin and matches the performance of other approaches on SAMSum and DART.
370
+
371
+ <table><tr><td colspan="2">Dataset</td><td rowspan="2">Params (%)</td><td colspan="9">GLUE</td></tr><tr><td>Model</td><td>Method</td><td>RTE</td><td>MRPC</td><td>CoLA</td><td>SST-2</td><td>QNLI</td><td>QQP</td><td>MNLI</td><td>Avg.</td><td>Relative (%)</td></tr><tr><td rowspan="2">Pythia 160M</td><td>Full Fine-tuning</td><td>100.00</td><td>64.3</td><td>77.0</td><td>20.5</td><td>88.7</td><td>85.0</td><td>88.8</td><td>79.2</td><td>71.9</td><td>100</td></tr><tr><td>Prefix-Tuning</td><td>8.36</td><td>57.4</td><td>75.0</td><td>4.6</td><td>88.2</td><td>81.5</td><td>80.6</td><td>62.2</td><td>64.2</td><td>89</td></tr><tr><td rowspan="3">Mamba 130M</td><td>Full Fine-tuning</td><td>100.00</td><td>71.1</td><td>80.6</td><td>63.2</td><td>92.2</td><td>87.4</td><td>87.9</td><td>80.8</td><td>80.5</td><td>100</td></tr><tr><td>Prefix-Tuning</td><td>22.69</td><td>67.5</td><td>75.7</td><td>43.4</td><td>91.5</td><td>83.4</td><td>83.1</td><td>35.6</td><td>68.6</td><td>85</td></tr><tr><td>State-offset Tuning (h)</td><td>0.45</td><td>67.4</td><td>80.8</td><td>56.2</td><td>91.9</td><td>87.7</td><td>85.6</td><td>79.7</td><td>78.5</td><td>98</td></tr></table>
372
+
373
+ Table 12: Prefix-Tuning experiments on Pythia 160M and Mamba 130M on GLUE tasks. State-offset Tuning for Mamba achieves approximately $98\%$ of full fine-tuning performance, while Prefix-Tuning reaches about $85 - 90\%$ in both SSM and Transformer architectures.
374
+
375
+ <table><tr><td>Model Size</td><td colspan="9">Mamba 1.4B</td><td colspan="4">Mamba 130M</td></tr><tr><td rowspan="2">Dataset</td><td rowspan="2">Params (%)</td><td colspan="5">Spider</td><td colspan="3">SAMSum</td><td rowspan="2">Params (%)</td><td colspan="2">DART</td><td>GLUE</td></tr><tr><td>All</td><td>Easy</td><td>Medium</td><td>Hard</td><td>Extra</td><td>R1</td><td>R2</td><td>RL</td><td>MET.</td><td>BLEU</td><td>Avg.</td></tr><tr><td>Method</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>SDT</td><td>0.26</td><td>19.8</td><td>38.3</td><td>16.6</td><td>16.1</td><td>4.8</td><td>46.3</td><td>21.5</td><td>37.7</td><td>0.51</td><td>67.5</td><td>48.2</td><td>63.7</td></tr><tr><td>State-offset Tuning (h)</td><td>0.23</td><td>57.4</td><td>77.4</td><td>59.9</td><td>44.8</td><td>33.7</td><td>50.9</td><td>26.5</td><td>42.4</td><td>0.45</td><td>70.0</td><td>47.0</td><td>78.5</td></tr><tr><td>State-offset Tuning (y)</td><td>0.01</td><td>53.0</td><td>77.4</td><td>55.4</td><td>40.8</td><td>22.9</td><td>50.6</td><td>26.1</td><td>42.0</td><td>0.03</td><td>66.8</td><td>45.2</td><td>77.7</td></tr></table>
376
+
377
+ Table 13: Comparison with Selective Dimension Tuning (SDT) (Galim et al., 2024) on Spider, SAMSum, DART, and GLUE. Our method outperforms SDT in most cases while using fewer parameters. Note that the hyperparameter configuration of SDT differs from that in Galim et al. (2024) to ensure a more comparable parameter count.
ACL/2025/State-offset Tuning_ State-based Parameter-Efficient Fine-Tuning for State Space Models/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4aa0be24d2e881410d8cf32cb35cda66c70281edab0307ab603934b9a06f209e
3
+ size 728928
ACL/2025/State-offset Tuning_ State-based Parameter-Efficient Fine-Tuning for State Space Models/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:edfdb15e0be5c14bac3beceedd6982b277daa17c964afe897a055d6e43f7a4fb
3
+ size 490439
ACL/2025/Subword models struggle with word learning, but surprisal hides it/611d1f3b-0447-49d8-b7fb-0a3acb1611a0_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7b552f0abf6398589b6d1255c0d8c3bd0a11fd5ac682ea56e75123dd208bcf3f
3
+ size 81352
ACL/2025/Subword models struggle with word learning, but surprisal hides it/611d1f3b-0447-49d8-b7fb-0a3acb1611a0_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:839e7aa82cd39de58067e4befc7a2945d945c07937331c5656a4e0aaeff3b558
3
+ size 102685
ACL/2025/Subword models struggle with word learning, but surprisal hides it/611d1f3b-0447-49d8-b7fb-0a3acb1611a0_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:726fb651946de0cc7a551327b689168522231eb17cb52ef7c2dbc758421c23ea
3
+ size 1713334
ACL/2025/Subword models struggle with word learning, but surprisal hides it/full.md ADDED
@@ -0,0 +1,272 @@
1
+ # Subword models struggle with word learning, but surprisal hides it
2
+
3
+ Bastian Bunzeck and Sina Zarrieß
4
+
5
+ Computational Linguistics, Department of Linguistics
6
+
7
+ Bielefeld University, Germany
8
+
9
+ {bastian.bunzeck, sina.zarriess}@uni-bielefeld.de
10
+
11
+ # Abstract
12
+
13
+ We study word learning in subword and character language models with the psycholinguistic lexical decision task. While subword LMs struggle to discern words and non-words with high accuracy, character LMs solve this task easily and consistently. Only when supplied with further contexts do subword LMs perform similarly to character models. Additionally, when looking at word-level and syntactic learning trajectories, we find that both processes are separable in character LMs. Word learning happens before syntactic learning, whereas both occur simultaneously in subword LMs. This raises questions about the adequacy of subword LMs for modeling language acquisition and positions character LMs as a viable alternative to study processes below the syntactic level.
14
+
15
+ # 1 Introduction
16
+
17
+ When humans acquire their first language(s), they first learn to recognize single words, mostly from short, fragmentary utterances (Cameron-Faulkner et al., 2003; Bunzeck and Diessel, 2024), before fully understanding the grammatical processes governing them (Tomasello, 1992; Behrens, 2021). This simple fact about language acquisition has received surprisingly little attention in the body of work that treats LMs as models of language learners (Warstadt and Bowman, 2022; Portelance and Jasbi, 2024). While word learning in children is comparatively well studied (Plunkett, 1997; Yu and Ballard, 2007; Waxman and Gelman, 2009; Bergelson and Swingley, 2012; Clark and Casillas, 2015; Frank et al., 2021), the implicit word learning processes in LMs are not. Current studies focus on syntax (Mueller et al., 2022; Choshen et al., 2022), or investigate word learning in close connection with syntax through surprisal (Chang and Bergen, 2022; Portelance et al., 2023; Shafiabadi and Wisniewski, 2025; Ficarra et al., 2025). Architecture-wise, a key limitation to the precise study of word learning
18
+
19
+ ![](images/7ab142c3729b2e79fb187eee05fbf359ea012c191020a7628c42bd29423b46bb.jpg)
20
+ Figure 1: Illustration of word learning in human learners and transformer LLMs (top), and of our lexical decision test that probes discrimination of words from non-words (bottom). While human learners build up a mental lexicon from experience with language, artificial learners assign probabilities to strings based on their frequency.
21
+
22
+ is subword tokenization (e.g., BPE, Gage, 1994), which splits words into linguistically (Arnett and Bergen, 2025) and cognitively implausible units (Beinborn and Pinter, 2023).
23
+
24
+ To gauge word learning in a syntax-independent manner, we use the psycholinguistic lexical decision task (Meyer and Schvaneveldt, 1971; Le Godais et al., 2017), i.e., deciding which word in a given word/non-word pair is real. We find that models with character-level tokenization learn this task quickly and reliably. In contrast, subword LMs of all sizes perform significantly worse in a syntax-independent setting and only achieve comparable accuracy when stimuli are measured through surprisal, or "unexpectedness," in linguistic context. By comparing word and syntactic learning (measured via BLiMP, Warstadt et al., 2020), we further find that character models quickly acquire word knowledge and only later develop syntactic knowledge. In subword models, word and syntax learning happen concurrently. This demonstrates how elementary modeling decisions, such as tokenization methods, significantly impact learning trajectories in LMs, a fact that warrants more scrutiny when using LMs as models of language acquisition.
25
+
26
+
27
+
28
+ # 2 Related work
29
+
30
+ Word learning in humans is a multifaceted phenomenon that involves different kinds of (extra)linguistic knowledge (Waxman and Gelman, 2009): while phoneticians are concerned with word recognition and sequence segmentation (Jusczyk, 1999), developmental psychologists frequently equate word learning with correct reference to real world objects (Stager and Werker, 1997; Ackermann et al., 2020). Psycholinguists usually focus on the mental lexicon and learning which words belong to it (Goldinger, 1996), whereas usage-based scholars take into account children's productions and their ability to be competent language users, even with few words (Tomasello, 1992).
31
+
32
+ Although aspects like word recognition and sequence segmentation have been studied in LMs (e.g. Goriely and Buttery, 2025a), the most common approach to word learning in LMs is measuring the predictability of words via surprisal (negative log-probability, Hale, 2001). Chang and Bergen (2022) train LMs on book texts and wiki data. They define a surprisal threshold below which words are said to be learned and find that frequent function words are learned earliest. Here, the alignment between models and real learners is questionable: children first utter nouns and verbs (Tomasello, 2000), but also rely on function words for challenges like speech segmentation (Dye et al., 2019). Portelance et al. (2023) show that in LSTMs trained on child-directed speech, surprisal correlates with word-level age of acquisition. Chang et al. (2024) observe that learning curves for surprisal values are stable for frequent tokens, while infrequent tokens are "forgotten" again over pre-training. Shafiabadi and Wisniewski (2025) introduce anti-surprisal (in incorrect contexts) to track false usage, which also fluctuates over pre-training. These studies cast word learning as the ability to anticipate words' expectedness in a given syntactic (and semantic) context. We note a certain conceptual leap to the original works on surprisal, where it is primarily viewed as an incremental measure of processing difficulty in syntactic comprehension (Levy, 2008; Demberg and Keller, 2009). A simple word like dog might be surprising and therefore hard to parse
33
+
34
+ in some contexts, but very expected in others, independently of whether it has already been learned at the pure word level. A further methodological drawback of surprisal as a measure of word learning is that it corresponds almost directly to the next-token prediction objective LMs are trained on. This contrasts with typical probing paradigms used in the domain of syntax, which implement the idea of "challenging" models in minimal-pair set-ups that are not observed directly as string sequences in training, thereby testing abstracted, implicit linguistic knowledge rather than observed patterns in the data. In a similar vein, we want to probe the word knowledge of an LM at a fundamental level and beyond surface-level word sequences that LMs are known to excel in predicting. We want to know if the artificial learner knows that the word doggie exists in the English language, but moggie does not.
35
+
36
+ Lexical decision is widely used in human studies but remains an underexplored LM benchmark. Le Godais et al. (2017) show that character-based LSTMs achieve about $95\%$ accuracy on such tasks. Lavechin et al. (2023) find that, on a phonetic dataset, speech-based LMs need significantly more input than phoneme-level LSTMs and still perform worse $(56.8\%$ vs. $75.4\%)$ . For the same data, Goriely et al. (2024) find that GPT-2-based subword BabyLMs achieve $70\%$ accuracy, while comparable character models reach nearly $90\%$ . Finally, for another lexical decision dataset, Bunzeck et al. (2025) report near-perfect accuracy for character-based grapheme Llama models, while phoneme models perform at $60 - 70\%$ .
37
+
38
+ # 3 Experiments
39
+
40
+ **Models** We train triplets of increasingly large Llama models (Touvron et al., 2023) with character/subword tokenization on the BabyLM 10M corpus (Choshen et al., 2024). Training details are found in Appendix A. As ablations, we test subword Pythias (Biderman et al., 2023) and character/subword GPT-2 models (Goriely et al., 2024).
41
+
42
+ **Test data** We follow the idea of forced-choice lexical decision (Baddeley et al., 1993), where participants must decide which is real: an existing word or a synthesized non-word. We use wuggy (Keuleers and Brysbaert, 2010) to generate minimal pairs of words/non-words that differ in one or two syllables, akin to syntactic minimal-pair tests such as BLiMP. We derive 1,000 non-words (e.g. monding) each from 1,000 high-frequency/low-
43
+
44
+ <table><tr><td rowspan="2">Tokenization</td><td rowspan="2">Model</td><td rowspan="2">Parameters</td><td rowspan="2">Data size</td><td colspan="2">Lexical decision</td><td colspan="2">Surprisal</td><td colspan="2">Anti-surprisal</td></tr><tr><td>highFrq</td><td>lowFrq</td><td>highFrq</td><td>lowFrq</td><td>highFrq</td><td>lowFrq</td></tr><tr><td rowspan="10">Subword (BPE)</td><td rowspan="6">Pythia</td><td>14M</td><td rowspan="6">825GB</td><td>66.6</td><td>62.5</td><td>90.5</td><td>85.5</td><td>71.4</td><td>77.7</td></tr><tr><td>70M</td><td>72.5</td><td>68.8</td><td>94.5</td><td>94.0</td><td>77.0</td><td>83.6</td></tr><tr><td>160M</td><td>77.8</td><td>73.0</td><td>96.4</td><td>95.8</td><td>78.0</td><td>85.7</td></tr><tr><td>410M</td><td>81.9</td><td>78.1</td><td>97.7</td><td>97.9</td><td>77.1</td><td>84.1</td></tr><tr><td>1B</td><td>87.5</td><td>83.2</td><td>97.7</td><td>97.9</td><td>76.6</td><td>83.8</td></tr><tr><td>1.4B</td><td>87.8</td><td>81.6</td><td>97.9</td><td>97.9</td><td>76.5</td><td>84.7</td></tr><tr><td>GPT-2</td><td>97.5M</td><td>100M words</td><td>35.6</td><td>79.1</td><td>99.0</td><td>99.2</td><td>84.7</td><td>86.9</td></tr><tr><td rowspan="3">Llama</td><td>2.51M</td><td rowspan="3">10M words</td><td>70.9</td><td>58.4</td><td>86.7</td><td>70.9</td><td>78.6</td><td>67.7</td></tr><tr><td>7.77M</td><td>79.5</td><td>63.2</td><td>91.3</td><td>78.1</td><td>81.1</td><td>72.9</td></tr><tr><td>30.03M</td><td>83.6</td><td>68.6</td><td>92.7</td><td>81.1</td><td>83.7</td><td>76.1</td></tr><tr><td rowspan="4">Character</td><td>GPT-2</td><td>85.3M</td><td>100M words</td><td>98.7</td><td>97.3</td><td>99.8</td><td>99.4</td><td>98.0</td><td>96.3</td></tr><tr><td rowspan="3">Llama</td><td>0.49M</td><td rowspan="3">10M words</td><td>97.6</td><td>83.0</td><td>98.2</td><td>84.3</td><td>98.0</td><td>83.1</td></tr><tr><td>3.73M</td><td>98.9</td><td>90.2</td><td>99.4</td><td>90.3</td><td>98.5</td><td>88.8</td></tr><tr><td>21.94M</td><td>99.0</td><td>93.3</td><td>99.8</td><td>94.7</td><td>99.0</td><td>92.5</td></tr></table>
45
+
46
+ Table 1: Accuracy scores (in %) for (i) lexical decision, (ii) surprisal and (iii) anti-surprisal experiments
47
+
48
+ frequency words (e.g. sending), which preserve syllable-bigram frequencies and match their origin words in length (cf. Appendix B).
49
+
50
+ **Lexical decision** For a word/non-word pair $(w, *w)$ , we measure $-\log(P(w|_{-}))$ and $-\log(P(*w|_{-}))$ , i.e. how "surprised" an LM is by the word given only a whitespace (and BOS token) as context. If $-\log(P(w|_{-})) < -\log(P(*w|_{-}))$ , the LM's lexical decision is correct. As autoregressive LMs are sequence prediction models, we need a preceding context for which we can calculate surprisal. A single whitespace is the most neutral starting token available (and for subword models also signals that the first subword is word-initial). For all experiments, we calculate the average surprisal over all tokens of a word (in some cases, words and non-words differ in their number of tokens, cf. Appendix B) with minicons (Misra, 2022).
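+ 
+ For illustration, the following is a rough sketch of this forced-choice computation implemented directly with the transformers library (the paper itself relies on minicons for the same calculation). The checkpoint name and the word/non-word pair are placeholders, and folding the leading whitespace into the word's tokenization is one plausible reading of the set-up described above.
+ 
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ 
+ model_name = "gpt2"  # placeholder; any autoregressive HuggingFace checkpoint
+ tok = AutoTokenizer.from_pretrained(model_name)
+ lm = AutoModelForCausalLM.from_pretrained(model_name)
+ lm.eval()
+ 
+ def avg_word_surprisal(word: str) -> float:
+     """Average surprisal (in nats) over the word's tokens, conditioned on BOS,
+     with a leading whitespace folded into the word's tokenization."""
+     word_ids = tok(" " + word, add_special_tokens=False).input_ids
+     input_ids = torch.tensor([[tok.bos_token_id] + word_ids])
+     with torch.no_grad():
+         logits = lm(input_ids).logits
+     log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
+     targets = input_ids[0, 1:]
+     surprisals = -log_probs[torch.arange(len(targets)), targets]
+     return surprisals.mean().item()
+ 
+ word, nonword = "doggie", "moggie"  # hypothetical stimulus pair
+ decision_correct = avg_word_surprisal(word) < avg_word_surprisal(nonword)
+ print(decision_correct)
+ ```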
51
+
52
+ **Surprisal** To measure LMs' knowledge of words presented in regular syntactic contexts, we calculate the surprisal of words and non-words $(w, *w)$ as $-\log(P(w_i | w_{n<i}))$ , i.e. the degree to which the LM is "surprised" by the word in the context of plausible preceding tokens, including a BOS token. We create stimuli by sampling sentences that contain our target words from OpenSubtitles (Lison and Tiedemann, 2016) and substituting the target words with matching non-words to create the false stimuli. If $-\log(P(w_i | w_{n<i})) < -\log(P(*w_i | w_{n<i}))$ , the LM's decision is correct.
53
+
54
+ **Anti-surprisal** Inspired by Shafiabadi and Wisniewski (2025), we include anti-surprisal, a measure
55
+
56
+ of word surprisal on negative instances. We create negative samples by selecting sentences that our original words do not occur in, and then randomly $^{1}$ placing words/non-words into these sentences at the same index, where index $\geq 3$ . By doing so, we compromise between lexical decision and surprisal measurement. There are two reasons to include this measure: i) surprisal in negative samples provides the model with word material as context, but without semantic or syntactic signals that could prime the model towards recognizing it; this allows us to assess whether the mere presence of other words in context makes it easier for the model to distinguish words from non-words, compared to our lexical decision set-up where only whitespace is given. In addition, ii) we want to see if the presence of an ill-fitting context actively deteriorates performance, in the sense that a model suddenly prefers non-words over existing words. Again, if $-\log(P(w_i|w_{n<i})) < -\log(P(*w_i|w_{n<i}))$ , the LM's decision is correct.
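+ 
+ A minimal sketch of how such negative samples could be constructed is given below. The helper function, the example sentences, and the word pair are all hypothetical and only mirror the constraints stated above (the sentence must not contain the target word, the insertion index is $\geq 3$ , and word and non-word share the same index).
+ 
+ ```python
+ import random
+ 
+ def make_negative_pair(word, nonword, sentences, rng=random.Random(0)):
+     """Insert word and matched non-word at the same random index (>= 3) of a
+     sentence that does not already contain the word."""
+     candidates = [s for s in sentences if word not in s.split()]
+     tokens = rng.choice(candidates).split()
+     idx = rng.randint(3, len(tokens))  # insertion index >= 3
+     pos_sample = " ".join(tokens[:idx] + [word] + tokens[idx:])
+     neg_sample = " ".join(tokens[:idx] + [nonword] + tokens[idx:])
+     return pos_sample, neg_sample
+ 
+ sentences = ["the weather was nice yesterday so we went outside",
+              "she bought a new car last week"]
+ print(make_negative_pair("doggie", "moggie", sentences))
+ ```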
57
+
58
+ **Learning trajectories** To assess when word learning happens in relation to syntax learning, we further evaluate intermediate checkpoints of our models on our word learning tests and on BLiMP as a syntactic benchmark. In line with previous studies (Chang and Bergen, 2022; Viering and Loog, 2023), we space our checkpoints logarithmically: 10 for the first $10\%$ of training, 9 more for the remaining $90\%$ . For the Pythia models, we extract similarly spaced checkpoints (the GPT-2 models do not provide checkpoints, so we exclude them).
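+ 
+ One plausible way to compute such a checkpoint schedule is sketched below; the total step count is a placeholder, and the exact recipe used for the models in this paper may differ.
+ 
+ ```python
+ import numpy as np
+ 
+ total_steps = 100_000  # placeholder for the real number of training steps
+ # 10 log-spaced checkpoints within the first 10% of training ...
+ first_phase = np.unique(np.geomspace(1, total_steps // 10, num=10).astype(int))
+ # ... and 9 more, evenly spaced, over the remaining 90%
+ second_phase = np.linspace(total_steps // 10, total_steps, num=10).astype(int)[1:]
+ checkpoints = np.concatenate([first_phase, second_phase])
+ print(checkpoints)  # 19 checkpoint steps in total
+ ```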
59
+
60
+ # 4 Results
61
+
62
+ **Lexical decision** The lexical decision results (Table 1) show a strong contrast between character and subword models. Character models achieve near-perfect accuracy $(97 - 99\%)$ on high-frequency words, regardless of model size. The performance on low-frequency words steadily increases with model size and reaches a near-perfect level for our largest character Llama and the character GPT-2. On the other hand, all BPE models get surprisingly low scores on high-frequency words: The smallest Pythia model discriminates between words and non-words with an accuracy of only $67\%$ , and the BPE GPT-2 performs below the chance baseline. Even the largest and best BPE model reaches only $87.8\%$ on high-frequency words – almost 10 percentage points less than the smallest character model.
63
+
64
+ Scaling laws generally hold, with larger models outperforming smaller ones. Interestingly, for the BPE models, there is a consistent gap between high- and low-frequency words that cannot be closed by larger models. Smaller character models also show a performance gap between high- and low-frequency words, but it narrows considerably with larger models. These results point to substantial differences in how subword and character models learn words. Such a surprising lack of ability in distinguishing words from non-words (without context) is a blatant, hitherto overlooked gap in subword models.
65
+
66
+ **Surprisal and anti-surprisal** Results for the second experiment (Table 1) differ from those for lexical decision. In the surprisal setting, the difference between BPE and character models is less pronounced. On high-frequency data, nearly all models (except the smallest BPE Llama) achieve over $90\%$ accuracy. Still, larger character models yield the best results. In the low-frequency data condition, the pattern is similar, though scores are generally lower. Very large BPE models outperform our Llamas there, but the character GPT-2 remains superior. This may be attributed to the limited lexical exposure of our Llamas, trained on only 10M tokens. In the anti-surprisal setting, character models again drastically outperform BPE models and achieve nearly perfect scores on high-frequency data, while BPE models only reach $70 - 80\%$ accuracy. This setting is the only one where Pythia models get better scores on low-frequency data, with a gap of $6 - 8\%$ , which increases with model size (not the case for BPE Llamas or character models). This contrast between surprisal and anti-surprisal might
67
+
68
+ ![](images/ad1f5f21e0221eb64553f10c91775fd3690b1fc15498f32abe7e259240c1d7c5.jpg)
69
+ Figure 2: Selected lexical and syntactic learning curves
70
+
71
+ be an indicator of the entanglement of word learning and syntactic learning in subword models. It is plausible that for high-frequency words, the BPE models have strong expectations about which word should come next in a certain context, and because this expectation is not matched by the real (but ill-fitting) word, a made-up non-word is preferred. We argue that this should still not be the case for an ideal language model – if a model is indeed well-tuned, it should assign a higher probability to an ill-fitting but existing word, which is still in distribution, than to a completely ill-fitting string, which is out of distribution. In any case, BPE models catch up to character models if (and only if) provided with additional syntactic/semantic context information. While random context somewhat aids BPE models, a substantial gap remains between the largest BPE models and the character models, where performance remains excellent, even in the presence of implausible contexts.
72
+
73
+ **Learning trajectories** Figure 2 displays learning curves for syntactic agreement phenomena, lexical decision, and both surprisal conditions across the 19 saved checkpoints (complete curves reproduced in Appendix C). The first 10 checkpoints correspond to the first $10\%$ of pretraining; the remaining 9 checkpoints are spaced at $10\%$ intervals over the remaining $90\%$ of pretraining. For character models, the high-frequency, low-frequency, and syntactic curves are clearly separated. On high-frequency data, word learning is rapid and follows power-law curves; the low-frequency scores improve
74
+
75
+ a little later and at a lower rate, but with the same trajectory. Syntactic phenomena improve later, mostly in s-shaped curves (e.g., det.-noun agreement), or are not learned at all in small models (subj.-verb agreement). In contrast, the syntactic and lexical curves for BPE models form sheaves of s-shaped trajectories. There is no fundamental difference in learning dynamics between the syntactic and the word level; improvements occur simultaneously. This further confirms the results of our previous experiments: in BPE models, word learning is dependent on syntax learning, and words cannot be recognized reliably outside of plausible contexts. Additionally, these different levels of learning cannot be disentangled in BPE models, whereas in character models, syntactic learning follows word learning. $^3$
76
+
77
+ # 5 Discussion
78
+
79
+ In contrast to previous work, this study set out to explore when (and if) LMs learn what valid words are, and not how LMs learn when words are validly used. How should these results now be interpreted in the light of language acquisition and the use of BabyLMs to model corresponding processes? In reality, words are not learned in isolation, but from usage. Yet, words have been widely shown to be represented as solid "standalone" units in the mental lexicon, and to be units that human learners acquire early on (cf. Waxman and Gelman, 2009, also Montag et al., 2018). Usage-based approaches have tested aspects of word learning independently from syntax (mostly in object naming tasks, cf. Tomasello and Todd, 1983) and relate it to pragmatic aspects of communication. For example, children only react to words they know and ignore similar-sounding words. They even struggle with learning words that are phonetically extremely similar (like our stimuli), and only later gain this capacity (Stager and Werker, 1997). Similarly, in production, the earliest words come in isolation, slowly emerge into pivot schemas and holophrases (Tomasello, 1992, 2003), and only then finally turn into complex sentences. Of course, syntax also aids in discovering aspects of "wordiness", like SV(X)-sentences offering cues for agent-patient relationships. It would be very interesting to further disentangle these levels of word knowledge
80
+
81
+ in a follow-up study, but our current study is not concerned with the "you shall know a word by the company it keeps"-level of word knowledge, but rather with the "what do valid words of the language look like"-level, which can be assessed via lexical decision. As such, we believe that the separated learning curves of our character-level models represent more human-like learning than the highly correlated curves found in the subword models, but in reality, an overlap between them is definitely expected.
82
+
83
+ Reasons for the tremendous performance differences between subword and character models remain open to further inquiry. One plausible explanation is that character models have much more context available to calculate meaningful sequence probabilities, as words are split into many more tokens. While this is true, it is also exactly the point that we are stressing here: it is hard to imagine that arbitrary subword units lead to human-like, plausible word-level representations (like in exemplar models of lexical storage, cf. Bybee, 2010), whereas character models might offer a more justified level of granularity (and, e.g., better fit reading times, cf. Oh et al., 2021). Our findings also align with results on LMs' sensitivity to character-level perturbations (Moradi and Samwald, 2021; Zhu et al., 2024) and their inability to solve character-level tasks, like counting occurrences of the letter $r$ in the word strawberry (Zhang and He, 2024; Shin and Kaneko, 2024; Cosma et al., 2025).
84
+
85
+ # 6 Conclusion
86
+
87
+ We have shown that the lexical decision approach to the study of word learning in LMs complements surprisal-based approaches and reveals difficulties that surprisal hides: subword LMs struggle with lexical decision, whereas character models master this task with ease. Additionally, in subword LMs, lexical and syntactic learning are inseparable, whereas word learning in character models precedes syntactic learning; the processes are related, yet separable. It is plausible that the a priori token splitting in subword models preempts a word discovery process in them, whereas character models first have to pass through this developmental stage, possibly in a somewhat more human-like manner. In any case, as we have shown, decisions about the representational levels of LMs tremendously influence their learning pathways on the different levels of linguistic analysis.
88
+
89
+ # Limitations
90
+
91
+ The generalizability of our findings is constrained by a few factors. The present study has only focused on the English language, but it is plausible that other languages with different writing systems or graphematic and phonotactic rule systems exhibit different patterns under different tokenization schemes. Here, phonetic transcriptions might provide a viable alternative, but real narrow transcriptions that accurately capture the whole breadth of human input are scarce and extremely costly and laborious to manually produce (although novel datasets like Goriely and Buttery, 2025b provide an alternative through automatic transcription). Besides, for the character LMs, we focus only on small models, as very large models with such tokenization, especially ones providing intermediate checkpoints, are nonexistent at this moment. It would still be interesting to see how they compare to subword models in a setting where parameter size and training data are greatly increased.
92
+
93
+ As already mentioned in Section 2, from a developmental perspective, word learning in humans also includes other processes than statistical pattern recognition from the input: semantic aspects and real-world reference are equally important, as are multimodal input and communicative intent. The ongoing form-vs.-function debate on LMs (Mahowald et al., 2024) has begun to consider these aspects, and further studies should aim at incorporating them; for example, the object naming paradigm used in many developmental psychology studies would lend itself naturally to the study of multimodal models.
94
+
95
+ Finally, we also want to mention that there are attempts to add more linguistic theory to tokenizers. Looking into different tokenizers that, for example, try to be more morphology-aware (Hofmann et al., 2022; Bauwens and Delobelle, 2024; Yehezkel and Pinter, 2023; Uzan et al., 2024) or implement other optimization tricks (Schmidt et al., 2024) could yield even more fine-grained points of comparison, but for the present study and its limited scope we focused on the most popular tokenization scheme (BPE) and the most linguistically minimalist alternative (characters).
96
+
97
+ # Ethical considerations
98
+
99
+ Due to the nature of this work, no concrete ethical aspects or repercussions need to be discussed. However, we would like to stress that, of course,
100
+
101
+ BabyLMs are not supposed to simulate real babies, but only abstractions of a very specific part of their learning capacity (frequency-driven, domain-general learning mechanisms such as entrenchment or resonance), and therefore all claims about their implications for language development in the real world should be interpreted in this light.
102
+
103
+ # Acknowledgments
104
+
105
+ We would like to thank the three anonymous ARR reviewers for their insightful comments and the highly valuable discussions throughout the rebuttal period.
106
+
107
+ This research has been funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - CRC-1646, project number 512393437, project A02.
108
+
109
+ # References
110
+
111
+ Lena Ackermann, Robert Hepach, and Nivedita Mani. 2020. Children learn words easier when they are interested in the category to which the word belongs. Developmental Science, 23(3):e12915.
112
+ Philip A. Allen, Albert F. Smith, Mei-Ching Lien, Jeremy Grabbe, and Martin D. Murphy. 2005. Evidence for an Activation Locus of the Word-Frequency Effect in Lexical Decision. Journal of Experimental Psychology: Human Perception and Performance, 31(4):713-721.
113
+ Catherine Arnett and Benjamin Bergen. 2025. Why do language models perform worse for morphologically complex languages? In Proceedings of the 31st International Conference on Computational Linguistics, pages 6607-6623, Abu Dhabi, UAE. Association for Computational Linguistics.
114
+ Harald Baayen, Richard Piepenbrock, and Leon Gulikers. 1995. CELEX2.
115
+ Alan Baddeley, Hazel Emslie, and Ian Nimmo-Smith. 1993. The Spot-the-Word test: A robust estimate of verbal intelligence based on lexical decision. *British Journal of Clinical Psychology*, 32(1):55–65.
116
+ Thomas Bauwens and Pieter Delobelle. 2024. BPEknockout: Pruning Pre-existing BPE Tokenisers with Backwards-compatible Morphological Semisupervision. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 5810-5832, Mexico City, Mexico. Association for Computational Linguistics.
117
+ Heike Behrens. 2021. Constructivist Approaches to First Language Acquisition. Journal of Child Language, 48(5):959-983.
118
+
119
+ Lisa Beinborn and Yuval Pinter. 2023. Analyzing Cognitive Plausibility of Subword Tokenization. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 4478-4486, Singapore. Association for Computational Linguistics.
120
+ Elika Bergelson and Daniel Swingley. 2012. At 6-9 months, human infants know the meanings of many common nouns. Proceedings of the National Academy of Sciences, 109(9):3253-3258.
121
+ Stella Biderman, Hailey Schoelkopf, Quentin Anthony, Herbie Bradley, Kyle O'Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, Aviya Skowron, Lintang Sutawika, and Oskar van der Wal. 2023. Pythia: A Suite for Analyzing Large Language Models Across Training and Scaling. Preprint, arXiv:2304.01373.
122
+ Bastian Bunzeck and Holger Diessel. 2024. The richness of the stimulus: Constructional variation and development in child-directed speech. First Language, 45(2):152-176.
123
+ Bastian Bunzeck, Daniel Duran, Leonie Schade, and Sina Zarrieß. 2025. Small language models also work with small vocabularies: Probing the linguistic abilities of grapheme- and phoneme-based baby llamas. In Proceedings of the 31st International Conference on Computational Linguistics, pages 6039-6048, Abu Dhabi, UAE. Association for Computational Linguistics.
124
+ Joan Bybee. 2010. Language, Usage and Cognition, 1 edition. Cambridge University Press.
125
+ Thea Cameron-Faulkner, Elena Lieven, and Michael Tomasello. 2003. A construction based analysis of child directed speech. Cognitive Science, 27(6):843-873.
126
+ Tyler A. Chang and Benjamin K. Bergen. 2022. Word Acquisition in Neural Language Models. Transactions of the Association for Computational Linguistics, 10:1-16.
127
+ Tyler A. Chang, Zhuowen Tu, and Benjamin K. Bergen. 2024. Characterizing Learning Curves During Language Model Pre-Training: Learning, Forgetting, and Stability. Transactions of the Association for Computational Linguistics, 12:1346-1362.
128
+ Leshem Choshen, Ryan Cotterell, Michael Y. Hu, Tal Linzen, Aaron Mueller, Candace Ross, Alex Warstadt, Ethan Wilcox, Adina Williams, and Chengxu Zhuang. 2024. [Call for Papers] The 2nd BabyLM Challenge: Sample-efficient pretraining on a developmentally plausible corpus. Preprint, arXiv:2404.06214.
129
+ Leshem Choshen, Guy Hacohen, Daphna Weinshall, and Omri Abend. 2022. The Grammar-Learning Trajectories of Neural Language Models. In Proceedings of the 60th Annual Meeting of the Association
130
+
131
+ for Computational Linguistics (Volume 1: Long Papers), pages 8281-8297, Dublin, Ireland. Association for Computational Linguistics.
132
+ Eve V Clark and Marisa Casillas. 2015. First language acquisition. In *The Routledge Handbook of Linguistics*, pages 311-328. Routledge.
133
+ Adrian Cosma, Stefan Ruseti, Emilian Radoi, and Mihai Dascalu. 2025. The Strawberry Problem: Emergence of Character-level Understanding in Tokenized Language Models. arXiv preprint.
134
+ Vera Demberg and Frank Keller. 2009. A Computational Model of Prediction in Human Parsing: Unifying Locality and Surprisal Effects. In Proceedings of the Annual Meeting of the Cognitive Science Society, volume 31, pages 1888-1893.
135
+ Cristina Dye, Yarden Kedar, and Barbara Lust. 2019. From lexical to functional categories: New foundations for the study of language development. First Language, 39(1):9-32.
136
+ Filippo Ficarra, Ryan Cotterell, and Alex Warstadt. 2025. A Distributional Perspective on Word Learning in Neural Language Models. arXiv preprint.
137
+ Michael C. Frank, Mika Braginsky, Daniel Yurovsky, and Virginia A. Marchman. 2021. Variability and Consistency in Early Language Learning: The Word-bank Project. The MIT Press.
138
+ Philip Gage. 1994. A new algorithm for data compression. The C Users Journal, 12(2):23-38.
139
+ Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, and Connor Leahy. 2020. The Pile: An 800GB Dataset of Diverse Text for Language Modeling. Preprint, arXiv:2101.00027.
140
+ Stephen D. Goldinger. 1996. Auditory Lexical Decision. Language and Cognitive Processes, 11(6):559-568.
141
+ Zébulon Goriely and Paula Buttery. 2025a. BabyLM's First Words: Word Segmentation as a Phonological Probing Task. arXiv preprint.
142
+ Zébulon Goriely and Paula Buttery. 2025b. IPA-CHILDES & G2P+: Feature-Rich Resources for Cross-Lingual Phonology and Phonemic Language Modeling. arXiv preprint.
143
+ Zébulon Goriely, Richard Diehl Martinez, Andrew Caines, Paula Buttery, and Lisa Beinborn. 2024. From babble to words: Pre-training language models on continuous streams of phonemes. In *The 2nd BabyLM Challenge at the 28th Conference on Computational Natural Language Learning*, pages 37-53, Miami, FL, USA. Association for Computational Linguistics.
144
+ John Hale. 2001. A probabilistic Earley parser as a psycholinguistic model. In Second Meeting of the North American Chapter of the Association for Computational Linguistics.
145
+
146
+ Valentin Hofmann, Hinrich Schuetze, and Janet Pierrehumbert. 2022. An Embarrassingly Simple Method to Mitigate Undesirable Properties of Pretrained Language Model Tokenizers. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 385-393, Dublin, Ireland. Association for Computational Linguistics.
147
+ Peter W. Jusczyk. 1999. How infants begin to extract words from speech. Trends in Cognitive Sciences, 3(9):323-328.
148
+ Emmanuel Keuleers and Marc Brysbaert. 2010. Wuggy: A multilingual pseudoword generator. Behavior Research Methods, 42(3):627-633.
149
+ Marvin Lavechin, Yaya Sy, Hadrien Titeux, María Andrea Cruz Blandón, Okko Räsänen, Hervé Bredin, Emmanuel Dupoux, and Alejandrina Cristia. 2023. BabySLM: Language-acquisition-friendly benchmark of self-supervised spoken language models. In INTERSPEECH 2023, pages 4588-4592. ISCA.
150
+ Gael Le Godais, Tal Linzen, and Emmanuel Dupoux. 2017. Comparing character-level neural language models using a lexical decision task. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 125-130, Valencia, Spain. Association for Computational Linguistics.
151
+ Roger Levy. 2008. Expectation-based syntactic comprehension. Cognition, 106(3):1126-1177.
152
+ Pierre Lison and Jörg Tiedemann. 2016. OpenSubtitles2016: Extracting large parallel corpora from movie and TV subtitles. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 923-929, Portoroz, Slovenia. European Language Resources Association (ELRA).
153
+ Kyle Mahowald, Anna A. Ivanova, Idan A. Blank, Nancy Kanwisher, Joshua B. Tenenbaum, and Evelina Fedorenko. 2024. Dissociating language and thought in large language models. Trends in Cognitive Sciences, pages 517-540.
154
+ James L. McClelland and David E. Rumelhart. 1981. An interactive activation model of context effects in letter perception: I. An account of basic findings. Psychological Review, 88(5):375-407.
155
+ David E. Meyer and Roger W. Schvaneveldt. 1971. Facilitation in recognizing pairs of words: Evidence of a dependence between retrieval operations. Journal of Experimental Psychology, 90(2):227-234.
156
+ Kanishka Misra. 2022. Minicons: Enabling Flexible Behavioral and Representational Analyses of Transformer Language Models. arXiv preprint.
157
+ Jessica L. Montag, Michael N. Jones, and Linda B. Smith. 2018. Quantity and Diversity: Simulating Early Word Learning Environments. Cognitive Science, 42(S2):375-412.
158
+
159
+ Milad Moradi and Matthias Samwald. 2021. Evaluating the Robustness of Neural Language Models to Input Perturbations. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1558-1570, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
160
+ Aaron Mueller, Robert Frank, Tal Linzen, Luheng Wang, and Sebastian Schuster. 2022. Coloring the Blank Slate: Pre-training Imparts a Hierarchical Inductive Bias to Sequence-to-sequence Models. In Findings of the Association for Computational Linguistics: ACL 2022, pages 1352-1368, Dublin, Ireland. Association for Computational Linguistics.
161
+ Byung-Doh Oh, Christian Clark, and William Schuler. 2021. Surprisal Estimators for Human Reading Times Need Character Models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3746-3757, Online. Association for Computational Linguistics.
162
+ Kim Plunkett. 1997. Theories of early language acquisition. Trends in Cognitive Sciences, 1(4):146-153.
163
+ Eva Portelance, Yuguang Duan, Michael C. Frank, and Gary Lupyan. 2023. Predicting Age of Acquisition for Children's Early Vocabulary in Five Languages Using Language Model Surprisal. Cognitive Science, 47(9):e13334.
164
+ Eva Portelance and Masoud Jasbi. 2024. The Roles of Neural Networks in Language Acquisition. Language and Linguistics Compass, 18(6):e70001.
165
+ Craig W Schmidt, Varshini Reddy, Haoran Zhang, Alec Alameddine, Omri Uzan, Yuval Pinter, and Chris Tanner. 2024. Tokenization Is More Than Compression. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 678-702, Miami, Florida, USA. Association for Computational Linguistics.
166
+ Nazanin Shafiabadi and Guillaume Wisniewski. 2025. Beyond surprisal: A dual metric framework for lexical skill acquisition in LLMs. In Proceedings of the 31st International Conference on Computational Linguistics, pages 6636-6641, Abu Dhabi, UAE. Association for Computational Linguistics.
167
+ Andrew Shin and Kunitake Kaneko. 2024. Large language models lack understanding of character composition of words. In ICML 2024 Workshop on Llms and Cognition.
168
+ Christine L. Stager and Janet F. Werker. 1997. Infants listen for more phonetic detail in speech perception than in word-learning tasks. Nature, 388(6640):381-382.
169
+ Michael Tomasello. 1992. First Verbs: A Case Study of Early Grammatical Development, 1 edition. Cambridge University Press.
170
+
171
+ Michael Tomasello. 2000. The item-based nature of children's early syntactic development. Trends in Cognitive Sciences, 4(4).
172
+ Michael Tomasello. 2003. Constructing a Language: A Usage-Based Theory of Language Acquisition. Harvard University Press.
173
+ Michael Tomasello and Jody Todd. 1983. Joint attention and lexical acquisition style. First Language, 4(12):197-211.
174
+ Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Roziere, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023. LLaMA: Open and Efficient Foundation Language Models. Preprint, arXiv:2302.13971.
175
+ Omri Uzan, Craig W. Schmidt, Chris Tanner, and Yuval Pinter. 2024. Greed is All You Need: An Evaluation of Tokenizer Inference Methods. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 813-822, Bangkok, Thailand. Association for Computational Linguistics.
176
+ Tom Viering and Marco Loog. 2023. The Shape of Learning Curves: A Review. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(6):7799-7819.
177
+ Alex Warstadt and Samuel R. Bowman. 2022. What Artificial Neural Networks Can Tell Us about Human Language Acquisition, 1 edition, pages 17-60. CRC Press, Boca Raton.
178
+ Alex Warstadt, Aaron Mueller, Leshem Choshen, Ethan Gotlieb Wilcox, Chengxu Zhuang, Juan Ciro, Rafael Mosquera, Bhargavi Paranjape, Adina Williams, Tal Linzen, and Ryan Cotterell. 2023. Findings of the BabyLM Challenge: Sample-Efficient Pretraining on Developmentally Plausible Corpora. In Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning, pages 1-6, Singapore. Association for Computational Linguistics.
179
+ Alex Warstadt, Alicia Parrish, Haokun Liu, Anhad Mohananey, Wei Peng, Sheng-Fu Wang, and Samuel R. Bowman. 2020. BLiMP: The Benchmark of Linguistic Minimal Pairs for English. Transactions of the Association for Computational Linguistics, 8:377-392.
180
+ Sandra R. Waxman and Susan A. Gelman. 2009. Early word-learning entails reference, not merely associations. Trends in Cognitive Sciences, 13(6):258-263.
181
+ Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin
182
+
183
+ Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-Art Natural Language Processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
184
+ Shaked Yehezkel and Yuval Pinter. 2023. Incorporating context into subword vocabularies. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 623-635, Dubrovnik, Croatia. Association for Computational Linguistics.
185
+ Chen Yu and Dana H. Ballard. 2007. A unified model of early word learning: Integrating statistical and social cues. Neurocomputing, 70(13-15):2149-2165.
186
+ Yidan Zhang and Zhenan He. 2024. Large Language Models Can Not Perform Well in Understanding and Manipulating Natural Language at Both Character and Word Levels? In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 11826-11842, Miami, Florida, USA. Association for Computational Linguistics.
187
+ Kaijie Zhu, Qinlin Zhao, Hao Chen, Jindong Wang, and Xing Xie. 2024. PromptBench: A unified library for evaluation of large language models. Journal of Machine Learning Research, 25(254):1-22.
188
+
189
+ # A Model hyperparameters and training details
190
+
191
+ We use the transformers library (Wolf et al., 2020) to train our models. The corresponding hyperparameters are listed in Table 2. We opted for very small LMs and little training data because the BabyLM paradigm has established itself as a standard approach to developmentally plausible language modeling (Warstadt et al., 2023; Choshen et al., 2024). Also, we noticed that word learning occurs quite rapidly in our models, so we argue that small LMs offer more fine-grained opportunities for investigating these processes – in larger LMs, even a single training step can already influence performance tremendously on such a brittle task as lexical decision.
192
+
193
+ The subword models feature a considerably higher number of parameters, as the embedding layer of transformer language models accounts for a large share of overall model parameters. In light of our vastly different vocabulary sizes (102 for character models, 8,002 for subword models), these differences are not surprising. The subword tokenizer is a regular BPE tokenizer self-trained with the tokenizers library and contains 8,000 subword tokens, a beginning-of-sequence token, and a combined end-of-sequence/padding token. The character tokenizer contains these two special tokens and all printable ASCII characters, which are sufficient to represent all graphemes of the English language.
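+ 
+ As a concrete illustration, the smallest character-level configuration from Table 2 could be instantiated with the transformers library roughly as follows. The intermediate (feed-forward) size is not reported in Table 2, so the value below is an assumption, and the resulting parameter count will therefore not exactly match the reported figure.
+ 
+ ```python
+ from transformers import LlamaConfig, LlamaForCausalLM
+ 
+ config = LlamaConfig(
+     vocab_size=102,               # printable ASCII + BOS + EOS/PAD (Table 2)
+     hidden_size=128,              # embedding/hidden size, small-char setting
+     num_hidden_layers=4,
+     num_attention_heads=4,
+     max_position_embeddings=128,  # context size
+     intermediate_size=512,        # assumed; not reported in Table 2
+ )
+ model = LlamaForCausalLM(config)
+ print(sum(p.numel() for p in model.parameters()))
+ ```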
194
+
195
+ Our models were trained on an Apple M2 Pro processor with the MPS backend. For the small models, training lasted approx. 20 min, for the medium-sized models approx. 1 h, and for the largest models approx. 10 h (with no notable differences between character and subword models). We share our models with their checkpoints on the HuggingFace hub. $^4$
196
+
197
+ Loss curves for all models can be found in Figure 3. For the test loss, we calculated the perplexity over a held-out portion of our training corpus that is comparable in composition to the training data. We find no fundamental differences in loss development, although the character models converge faster. Larger models also tend to converge faster and generally reach smaller absolute loss values. As the similar train and test loss curves indicate, all Llama models succeed in optimizing for their next-
198
+
199
+ token prediction objective. However, it remains open to further inquiry how much these scores are constrained by the comparatively small capacity of our BabyLMs, and whether larger models would enhance performance further.
200
+
201
+ # B Data creation and tokenization analysis
202
+
203
+ **Data creation** Word frequency influences lexical decision performance greatly (McClelland and Rumelhart, 1981; Allen et al., 2005). To incorporate this effect into our study, we create two distinct data sets from words included in wuggy: (i) high-frequency stimuli with a frequency score over 7.0 (occurrences per 1M words in BNC, COCA and other English corpora, as reported in CELEX, Baayen et al., 1995) and (ii) low-frequency stimuli with a frequency score over 0.0 but below 0.7 (so at least one order of magnitude lower). We opted to rely on the CELEX frequency scores because we compare models trained on different corpora – our models are trained on the 10M BabyLM corpus, the models by Goriely et al. (2024) are trained on the 100M BabyLM corpus, and the Pythia models are trained on The Pile (Gao et al., 2020). As such, frequency scores from any one of these corpora would taint analyses of the other models and hinder comparability. For the contextualised stimuli, we sample sentences from the OpenSubtitles (Lison and Tiedemann, 2016) portion of the BabyLM 2024 corpus (Choshen et al., 2024). Both Wuggy and BabyLM data are licensed under the MIT license<sup>5</sup>; therefore, we release our own stimuli artifacts on HuggingFace<sup>6</sup> under the same license. The data contain no offensive content and no information that names or uniquely identifies individual people, and are commonly used in computational linguistics.
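+ 
+ A minimal sketch of this frequency split, assuming a mapping from candidate words to CELEX frequency scores (the example words and scores below are invented):
+ 
+ ```python
+ # Invented example scores (occurrences per 1M words); the real split is drawn
+ # from CELEX as described above.
+ celex_freq = {"sending": 42.3, "wharf": 3.1, "table": 155.0, "mirth": 0.4}
+ 
+ high_freq = [w for w, f in celex_freq.items() if f > 7.0]        # high-frequency stimuli
+ low_freq = [w for w, f in celex_freq.items() if 0.0 < f < 0.7]   # low-frequency stimuli
+ print(high_freq, low_freq)
+ ```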
204
+
205
+ **Analysis of tokenization** To further assess the influence that these frequency scores have on the resulting tokenization for our own models, we offer a brief analysis: Figure 4 shows a pairplot of three numerical variables – (i) the number of tokens that our original words are split into, (ii) the number of tokens that the derived non-words are split into, and (iii) the corresponding frequency score from CELEX. While the three plots on the diagonal axis show a layered kernel density estimate (KDE) for
206
+
207
+ <table><tr><td></td><td>small-char</td><td>medium-char</td><td>large-char</td><td>small-bpe</td><td>medium-bpe</td><td>large-bpe</td></tr><tr><td>Embedding size</td><td>128</td><td>256</td><td>512</td><td>128</td><td>256</td><td>512</td></tr><tr><td>Hidden size</td><td>128</td><td>256</td><td>512</td><td>128</td><td>256</td><td>512</td></tr><tr><td>Layers</td><td>4</td><td>8</td><td>12</td><td>4</td><td>8</td><td>12</td></tr><tr><td>Attention heads</td><td>4</td><td>8</td><td>12</td><td>4</td><td>8</td><td>12</td></tr><tr><td>Context size</td><td>128</td><td>128</td><td>128</td><td>128</td><td>128</td><td>128</td></tr><tr><td>Vocab. size</td><td>102</td><td>102</td><td>102</td><td>8,002</td><td>8,002</td><td>8,002</td></tr><tr><td>Parameters</td><td>486,016</td><td>3,726,592</td><td>21,940,736</td><td>2,508,416</td><td>7,771,392</td><td>30,030,336</td></tr></table>
208
+
209
+ Table 2: Model hyperparameters for our self-trained Llama models
210
+
211
+ ![](images/984bcc36cfffe7af9fec9925ba6ffaa2584ef68c178fdda4d216d1e335e241b9.jpg)
212
+ Figure 3: Loss curves for our self-trained Llama models
213
+
214
+ each individual variable, the other plots are scatterplots which visualize the relationship between the variables. The data points are colored for their tokenization scheme.
215
+
216
+ In the upper left and lower right plots, we can see that both real words and non-words are split similarly under the two distinct tokenization schemes. Words split by the BPE tokenizer tend to have fewer tokens, mostly between one and six. For the character-based tokenizer, a normal distribution is visible, with its peak at six tokens.
217
+
218
+ The upper right and lower left scatter plots show the relationship between tokenization for real and non-words. The character-level tokenization exhibits perfect alignment between both kinds of stimuli: they are always split into exactly the same number of tokens. Subword tokenization is slightly skewed towards the non-words. This means that non-words are more often split into more tokens than real words, although the reverse case is not completely infrequent.
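+ 
+ A pairplot of this kind could, for instance, be produced with seaborn along the following lines; the DataFrame, its column names, and the values are assumptions for illustration rather than the actual analysis code behind Figure 4.
+ 
+ ```python
+ import pandas as pd
+ import seaborn as sns
+ 
+ # One row per word/non-word pair; values are invented placeholders.
+ df = pd.DataFrame({
+     "word_tokens":     [2, 1, 6, 7],
+     "nonword_tokens":  [3, 2, 6, 7],
+     "celex_frequency": [42.3, 155.0, 8.8, 12.5],
+     "tokenization":    ["bpe", "bpe", "char", "char"],
+ })
+ grid = sns.pairplot(df, hue="tokenization", diag_kind="kde")
+ grid.savefig("pairplot.png")
+ ```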
219
+
220
+ # C Full learning curves for BLiMP and word learning
221
+
222
+ Figure 5 shows the full learning curves of all phenomena included in BLiMP (individual syntactic paradigms belonging to one phenomenon are displayed in the same sub-figure) as well as our own lexical benchmarks (all displayed in individual subfigures), for our six self-trained models and the six Pythia models that we compare them to. We fit a fifth-order polynomial curve to the individual data points and display it on a logarithmic scale.
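+ 
+ A sketch of this fitting procedure under the stated description (fifth-order polynomial over checkpoint indices, logarithmic x-axis) might look as follows; the accuracy values are placeholders and the exact plotting code used for Figure 5 may differ.
+ 
+ ```python
+ import numpy as np
+ import matplotlib.pyplot as plt
+ 
+ checkpoints = np.arange(1, 20)  # 19 checkpoints
+ # Placeholder accuracy scores; in practice these are the benchmark results.
+ accuracy = np.random.default_rng(0).uniform(0.5, 1.0, size=19)
+ 
+ coeffs = np.polyfit(checkpoints, accuracy, deg=5)   # fifth-order polynomial fit
+ smooth_x = np.linspace(1, 19, 200)
+ plt.scatter(checkpoints, accuracy)
+ plt.plot(smooth_x, np.polyval(coeffs, smooth_x))
+ plt.xscale("log")                                   # logarithmic checkpoint axis
+ plt.savefig("learning_curve.png")
+ ```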
223
+
224
+ It should be noted that we plot the checkpoint index on the x-axis. However, the amount of actual textual data seen between these checkpoints differs vastly between our self-trained models (10M lexical tokens) and the Pythia models (825GB of textual data; as the dataset has since been taken down, no lexical token counts are possible anymore).
225
+
226
+ ![](images/84d2bf2da27b5ba46d4a731977219173b36c0e104ff566a24ad6dee301b25396.jpg)
227
+ Figure 4: Pairplot displaying (i) number of tokens of words, (ii) frequency scores from CELEX (Baayen et al., 1995) and (iii) number of tokens of non-words (for BPE and character tokenization)
228
+
229
+ ![](images/ce6382d527b0585212b3ad39bfb2b59a59d6e62e77aeac5df5ea6854ca684671.jpg)
230
+
231
+ ![](images/02c946c1f440bc9d458e627273edc8f1c1dd63814c09b9f4f8fb1746726d0f42.jpg)
232
+
233
+ # D Correlations between word learning and syntactic learning
234
+
235
+ As an additional measure of commonalities between word learning and syntactic learning, we calculate Spearman-rank correlation scores between ordered accuracy scores for our lexical tasks and BLiMP paradigms. Table 3 shows the underlying numerical values for the correlation heatmap provided in Figure 6 (please note that the heatmap is rotated in comparison to the table). All scores are statistically significant $(p < 0.05)$ . Due to the similar learning curves found in Figure 2, we average accuracy scores over all lexical phenomena (lexical decision and both surprisal settings), and then calculate correlations between them (both high and low frequency) and the coarse-grained BLiMP phenomena. For BPE models, lexical performance is highly correlated with more than half of the BLiMP phenomena. The character models show much weaker correlation with syntactic learning. This further confirms our findings about the strong entanglement of lexical and syntactic learning in subword models and their weaker ties
236
+
237
+ in character models.
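+ 
+ For concreteness, the correlation between one lexical task and one BLiMP phenomenon could be computed as sketched below; the accuracy vectors are placeholders standing in for the per-checkpoint scores described above.
+ 
+ ```python
+ from scipy.stats import spearmanr
+ 
+ # Placeholder per-checkpoint accuracies for one lexical task and one BLiMP phenomenon.
+ lexical_acc = [0.52, 0.61, 0.70, 0.78, 0.83, 0.90, 0.93]
+ blimp_acc   = [0.50, 0.55, 0.57, 0.66, 0.74, 0.80, 0.85]
+ 
+ rho, p_value = spearmanr(lexical_acc, blimp_acc)
+ print(rho, p_value)
+ ```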
238
+
239
+ # E Final BLiMP scores for all models
240
+
241
+ We reproduce the final syntactic evaluation scores for all models that we incorporated in our lexical analyses in Table 4. Generally, scores improve with larger models and with more training data. Most strikingly, subword models are consistently superior to comparable character models trained on the same amount of data. These differences, however, are most pronounced for the small models trained on very little data, like our Llama models trained on 10M tokens (7% for smallest models, 2% for largest models). For the comparable GPT-2 models trained on 100M tokens, the gap becomes much smaller (0.4%).
242
+
243
+ # F Development of word/non-word differences
244
+
245
+ In Figure 7, we plot the average difference between word and non-word negative log-probability values across training, for both high-frequency and low-frequency data. Positive scores indicate preference
246
+
247
+ ![](images/30224b3d7f0c1da2e06e8a9b44a3c14efd056804e5b6524aea3b0811be26e107.jpg)
248
+ Figure 5: Learning curves for all paradigms in BLiMP and high/low frequency lexical decision data, separated for models (rows) and phenomenon sets (columns)
249
+
250
+ ![](images/5c3729b650c0e888b04bc7e9bc65a4815695c68ffaf1099197036c823bc39219.jpg)
251
+ Figure 6: Correlation heatmap
252
+
253
+ for real words. For the character models, the differences are generally less pronounced and get most extreme at the end of pre-training (where accuracy scores do not change anymore), especially for the lexical decision data, which is already consistent at very early training stages. For the BPE models,
254
+
255
+ we see that at the beginning they actually prefer non-words in the lexical decision task. Only after the first $10\%$ of training do they begin to discern words and non-words. While overall tendencies remain the same for both frequency conditions, the absolute differences are generally lower and the
256
+
257
+ <table><tr><td rowspan="2">BLiMP phenomenon</td><td colspan="2">small-char</td><td colspan="2">medium-char</td><td colspan="2">large-char</td><td colspan="2">small-bpe</td><td colspan="2">medium-bpe</td><td colspan="2">large-bpe</td></tr><tr><td>highFrq</td><td>lowFrq</td><td>highFrq</td><td>lowFrq</td><td>highFrq</td><td>lowFrq</td><td>highFrq</td><td>lowFrq</td><td>highFrq</td><td>lowFrq</td><td>highFrq</td><td>lowFrq</td></tr><tr><td>Anaphor agr.</td><td>-0.393</td><td>-0.435</td><td>0.580</td><td>0.277</td><td>0.332</td><td>0.555</td><td>0.569</td><td>0.559</td><td>0.930</td><td>0.910</td><td>0.772</td><td>0.759</td></tr><tr><td>Argument structure</td><td>0.339</td><td>0.589</td><td>0.467</td><td>0.825</td><td>0.545</td><td>0.895</td><td>0.949</td><td>0.959</td><td>0.970</td><td>0.987</td><td>0.979</td><td>0.986</td></tr><tr><td>Binding</td><td>-0.718</td><td>-0.629</td><td>0.660</td><td>0.918</td><td>0.653</td><td>0.922</td><td>0.901</td><td>0.889</td><td>0.992</td><td>0.993</td><td>0.979</td><td>0.977</td></tr><tr><td>Control raising</td><td>0.904</td><td>0.837</td><td>0.909</td><td>0.776</td><td>0.791</td><td>0.892</td><td>0.777</td><td>0.780</td><td>0.974</td><td>0.974</td><td>0.930</td><td>0.936</td></tr><tr><td>Det.-noun agr.</td><td>0.686</td><td>0.870</td><td>0.524</td><td>0.869</td><td>0.521</td><td>0.890</td><td>0.989</td><td>0.989</td><td>0.990</td><td>0.993</td><td>0.994</td><td>0.989</td></tr><tr><td>Ellipsis</td><td>-0.912</td><td>-0.766</td><td>-0.285</td><td>0.209</td><td>0.180</td><td>0.662</td><td>0.857</td><td>0.822</td><td>0.897</td><td>0.868</td><td>0.865</td><td>0.856</td></tr><tr><td>Filler gap</td><td>-0.765</td><td>-0.586</td><td>-0.554</td><td>-0.146</td><td>-0.200</td><td>0.209</td><td>-0.715</td><td>-0.722</td><td>-0.589</td><td>-0.602</td><td>-0.063</td><td>-0.031</td></tr><tr><td>Irregular forms</td><td>0.612</td><td>0.724</td><td>0.507</td><td>0.856</td><td>0.397</td><td>0.787</td><td>-0.116</td><td>-0.098</td><td>0.636</td><td>0.662</td><td>0.816</td><td>0.832</td></tr><tr><td>Island effects</td><td>-0.937</td><td>-0.840</td><td>-0.862</td><td>-0.657</td><td>-0.556</td><td>-0.214</td><td>-0.321</td><td>-0.334</td><td>0.473</td><td>0.418</td><td>-0.088</td><td>-0.094</td></tr><tr><td>NPI licensing</td><td>0.535</td><td>0.641</td><td>0.748</td><td>0.782</td><td>0.616</td><td>0.568</td><td>-0.425</td><td>-0.407</td><td>0.520</td><td>0.526</td><td>-0.069</td><td>-0.049</td></tr><tr><td>Quantifiers</td><td>-0.476</td><td>-0.202</td><td>0.059</td><td>0.299</td><td>0.547</td><td>0.848</td><td>0.789</td><td>0.767</td><td>0.779</td><td>0.770</td><td>0.716</td><td>0.695</td></tr><tr><td>Subj.-verb agr.</td><td>-0.386</td><td>-0.400</td><td>0.329</td><td>0.529</td><td>0.351</td><td>0.730</td><td>0.963</td><td>0.961</td><td>0.982</td><td>0.981</td><td>0.967</td><td>0.974</td></tr></table>
258
+
259
+ Table 3: Spearman-rank correlation scores between ordered accuracy scores for our lexical tasks and BLiMP paradigms
260
+
261
+ <table><tr><td>Tok.</td><td>Model</td><td>Params</td><td>BLiMP score</td></tr><tr><td rowspan="10">Subword (BPE)</td><td rowspan="6">Pythia</td><td>14M</td><td>65.86%</td></tr><tr><td>70M</td><td>73.30%</td></tr><tr><td>160M</td><td>77.50%</td></tr><tr><td>410M</td><td>81.63%</td></tr><tr><td>1B</td><td>82.21%</td></tr><tr><td>1.4B</td><td>81.92%</td></tr><tr><td>GPT-2</td><td>85M</td><td>77.80%</td></tr><tr><td rowspan="3">Llama</td><td>2.51M</td><td>59.80%</td></tr><tr><td>7.77M</td><td>64.55%</td></tr><tr><td>30.03M</td><td>64.56%</td></tr><tr><td rowspan="4">Character</td><td>GPT-2</td><td>85M</td><td>77.40%</td></tr><tr><td rowspan="3">Llama</td><td>0.49M</td><td>52.69%</td></tr><tr><td>3.73M</td><td>51.07%</td></tr><tr><td>21.94M</td><td>62.14%</td></tr></table>
262
+
263
+ Table 4: BLiMP scores for all models
264
+
265
+ differences between the curves are less pronounced in the low-frequency setting.
266
+
267
+ ![](images/1f4b6e7998db6c68f49eff9eb533695b2bba6b09a06fcbfc0aa4f2390189640f.jpg)
268
+ (a) High-frequency data
269
+ Figure 7: Average differences between surprisal values across pretraining
270
+
271
+ ![](images/fd84237e78265bc7dbe33561fec41bb2b63e13bded5300e3a3c40181362dde95.jpg)
272
+ (b) Low-frequency data
ACL/2025/Subword models struggle with word learning, but surprisal hides it/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:213aaf6b8c3fa45952b045f8f4d19a58c579a6622deb7fcca235d8880b7edd19
3
+ size 762960
ACL/2025/Subword models struggle with word learning, but surprisal hides it/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:aedbf8bdc4747bfb187cab12b3055dc2c77096df70f0eed7830dc854060bd471
3
+ size 341437
ACL/2025/SynWorld_ Virtual Scenario Synthesis for Agentic Action Knowledge Refinement/eb796f3f-9361-43c6-9a4d-d17ea0c8a87f_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:04e23fa92a47ded3d220c0575320f2d0966a27b8b8c32e0fb40c505950fe3da5
3
+ size 67629
ACL/2025/SynWorld_ Virtual Scenario Synthesis for Agentic Action Knowledge Refinement/eb796f3f-9361-43c6-9a4d-d17ea0c8a87f_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5baee0b924521df72dc2376a15273cf353f3f68bb8696f27508430581ba6fd29
3
+ size 86570
ACL/2025/SynWorld_ Virtual Scenario Synthesis for Agentic Action Knowledge Refinement/eb796f3f-9361-43c6-9a4d-d17ea0c8a87f_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0a231e85a412ab7ba36546d1c40b38a95f7e7a7b50b313553357eb26dcee4081
3
+ size 2146941
ACL/2025/SynWorld_ Virtual Scenario Synthesis for Agentic Action Knowledge Refinement/full.md ADDED
@@ -0,0 +1,351 @@
 
 
 
 
1
+ # SynWorld: Virtual Scenario Synthesis for Agentic Action Knowledge Refinement
2
+
3
+ Runnan Fang\*, Xiaobin Wang $^{\text{心}}$ , Yuan Liang\*, Shuofei Qiao\*, Jialong Wu $^{\text{心}}$ , Zekun Xi\*, Ningyu Zhang $^{\text{心*}}$ , Yong Jiang $^{\text{心*}}$ , Pengjun Xie $^{\text{心}}$ , Fei Huang $^{\text{心}}$ , Huajun Chen $^{\text{心*}}$
4
+
5
+ $\spadesuit$ Zhejiang University $\text{♥}$ Alibaba Group
6
+
7
+ *Zhejiang Key Laboratory of Big Data Intelligent Computing
8
+
9
+ {rolnan,zhangningyu}@zju.edu.cn
10
+
11
+ # Abstract
12
+
13
+ In the interaction between agents and their environments, agents expand their capabilities by planning and executing actions. However, LLM-based agents face substantial challenges when deployed in novel environments or required to navigate unconventional action spaces. To empower agents to autonomously explore environments, optimize workflows, and enhance their understanding of actions, we propose SynWorld, a framework that allows agents to synthesize possible scenarios with multi-step action invocation within the action space and perform Monte Carlo Tree Search (MCTS) exploration to effectively refine their action knowledge in the current environment. Our experiments demonstrate that SynWorld is an effective and general approach to learning action knowledge in new environments<sup>1</sup>.
14
+
15
+ # 1 Introduction
16
+
17
+ By leveraging decision-making capabilities to execute task-oriented actions within dynamic environments, Large Language Model (LLM)-based agents demonstrate enhanced environmental interactivity and operational versatility (Song et al., 2023a; Liu et al., 2024b; Wu et al., 2025; Xi et al., 2025; Shi et al., 2025; Qu et al., 2025). In the real world, agents perform actions by leveraging tools like web search engines (Fan et al., 2024; Zhao et al., 2024a; Ning et al., 2025) or API calls (Liu et al., 2024a; Tao et al., 2024) to access feedback from the real world, which addresses the static knowledge limitations of LLMs, facilitating a deeper comprehension of the real world. It is crucial for agents to learn how to plan and execute actions in the environment. Nonetheless, as the complexity of tasks increases and novel environments emerge, manually annotated environment descriptions and predefined action documents (Qu
18
+
19
+ ![](images/5d0c881a5ae254499ecca9a8601a09af9ca979945f310a602a2cc26de5396330.jpg)
20
+ Figure 1: Our method with exploration to refine action knowledge in Synthesized Scenario.
21
+
22
+ et al., 2024; Sri et al., 2024; Zhang et al., 2025) for agents are often not consistent with the actual environmental conditions and action usage (Liu et al., 2024d; Huang et al., 2024a). Refining well-defined and aligned descriptions of the environment and actions is time-consuming and labor-intensive.
23
+
24
+ Therefore, to master unfamiliar actions and complicated task requirements in new, complex environments, refinement of agentic action knowledge is essential. Previous studies have explored the acquisition of action knowledge through feedback in scenarios synthesized by LLMs. Similar to the way humans acquire skills through trial and error, agents can also optimize the descriptions of actions by leveraging feedback from simulated scenarios (Yuan et al., 2024; Du et al., 2024; Bouzenia et al., 2024). However, these methods exhibit two critical limitations: (1) The synthetic scenarios they utilize are often restricted to single actions, which hinders agents from learning workflows suitable for these tasks, and (2) The linear iterative optimization process lacks a clear direction for improvement, making it susceptible to stagnation and
25
+
26
+ quickly reaching its performance ceiling.
27
+
28
+ To address these limitations, we propose a new framework, SynWorld, designed to assist agents in learning unfamiliar actions in new environments, as shown in Figure 1. SynWorld first synthesizes virtual scenarios involving multiple coordinated actions. Then, through iterative MCTS optimization while exploring these virtual scenarios, the framework enables more thorough, bidirectional refinement between action descriptions and workflow patterns, ensuring better alignment with environmental constraints. Experiments demonstrate that action knowledge can be learned in virtual environments and effectively generalized to the real world, with optimization through MCTS exploration.
29
+
30
+ # 2 Background
31
+
32
+ # 2.1 Agent Planning
33
+
34
+ An agent interacts with its environment by perceiving its state, selecting actions to achieve a goal, and learning from feedback in the form of rewards. Its framework consists of a state space $\mathcal{S}$ that represents the environment's properties, an action space $\mathcal{A}$ that defines allowable interactions, and an observation space $\Omega$ for perceptual inputs. Progress toward task $T$ is measured through a reward function $\mathcal{R}$ . Central to decision-making is a planning mechanism $\mathcal{P}_{\theta}$ , where $\pi_{\theta}$ denotes the policy with fixed model weights $\theta$ . The agent's architecture is defined by the tuple:
35
+
36
+ $$
37
+ \mathcal{P}_{\theta} = \pi_{\theta}(\mathcal{S}, \mathcal{A}, \Omega, \mathcal{R}) \tag{1}
38
+ $$
39
+
40
+ This formula delineates the manner in which an agent assesses its current state and interprets environmental feedback to generate plans.
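To make the notation concrete, the minimal sketch below casts the tuple in Eq. (1) as a small Python structure; the names (`AgentConfig`, `plan`, `reward_fn`) are illustrative choices and not interfaces defined by the paper.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class AgentConfig:
    """Illustrative container for the tuple (S, A, Omega, R) in Eq. (1)."""
    state_space: List[str]             # S: properties of the environment
    action_space: List[str]            # A: allowable interactions (e.g., tools, API calls)
    observation_space: List[str]       # Omega: perceptual inputs
    reward_fn: Callable[[str], float]  # R: progress toward the task T


def plan(policy: Callable[..., List[str]], cfg: AgentConfig, task: str) -> List[str]:
    """P_theta: the fixed-weight policy pi_theta maps the tuple (plus a task) to a plan."""
    return policy(cfg.state_space, cfg.action_space, cfg.observation_space, cfg.reward_fn, task)
```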
41
+
42
+ # 2.2 Action Knowledge
43
+
44
+ Action knowledge $\mathcal{A}\mathcal{K}$ serves as the strategic foundation governing an agent's adaptive behavior in dynamic and unfamiliar environments. It comprises action descriptions, which capture awareness of the executable actions, together with cognitive workflows covering task decomposition and action sequencing.
45
+
46
+ # 3 Method
47
+
48
+ In this section, we begin by detailing how to utilize the action space to synthesize scenarios and specific objectives. Subsequently, we dive into the application of MCTS to explore and discover action knowledge within these synthesized scenarios. The SynWorld framework is shown in Figure 2.
49
+
50
+ # 3.1 Scenario Synthesis
51
+
52
+ To address generalization challenges in multistep tool operationalization, we propose a framework that synthesizes scenarios through tool-conditioned task generation. Our methodology formalizes scenario synthesis as:
53
+
54
+ $$
55
+ \mathcal{S}(t) = \left\{ (\mathcal{B}, \mathcal{G}) \mid \forall t \subseteq T \right\}, \tag{2}
56
+ $$
57
+
58
+ where $t$ is a subset of tools selected by the LLM from the complete tool set $T$ to design a scenario. Each scenario comprises two parts: Background $\mathcal{B}$ : the contextual setting specifying initial conditions and constraints; Goal $\mathcal{G}$ : the terminal objective requiring tool-mediated resolution. We provide examples using a few-shot approach to enable the LLM to synthesize queries.
59
+
60
+ The mapping enforces that distinct tool combinations yield non-trivial scenario variations through systematic $\mathcal{B} - \mathcal{G}$ pairings. Each group of selected tools generates 2-3 scenarios. To ensure data diversity, a newly generated scenario is excluded if its distance to any already synthesized scenario falls below a threshold $\epsilon$ , i.e., if it is too similar (Eq. 3). Through this process, we can obtain a large number of synthetic scenarios, where the selected tools serve as the "gold tools" for completing the corresponding virtual scenario and are later used for evaluation purposes.
61
+
62
+ $$
63
+ d\left( (\mathcal{B}_{i}, \mathcal{G}_{i}), (\mathcal{B}_{j}, \mathcal{G}_{j}) \right) < \epsilon . \tag{3}
64
+ $$
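A minimal sketch of this synthesis-and-deduplication loop, under stated assumptions: `llm_generate_scenarios` is a hypothetical call that returns (background, goal) pairs for a sampled tool subset, and `distance` implements the scenario distance of Eq. (3) (e.g., one minus an embedding cosine similarity); neither helper is specified by the paper.

```python
import random


def synthesize_scenarios(tools, llm_generate_scenarios, distance,
                         epsilon=0.6, subset_size=3, per_subset=3, max_subsets=500):
    """Sample tool subsets, ask the LLM for (background, goal) pairs, and
    discard any scenario that is too close to an existing one (Eq. 3)."""
    scenarios = []  # each entry: {"background", "goal", "gold_tools"}
    for _ in range(max_subsets):
        subset = random.sample(tools, k=min(subset_size, len(tools)))
        for background, goal in llm_generate_scenarios(subset, n=per_subset):
            if any(distance((background, goal), (s["background"], s["goal"])) < epsilon
                   for s in scenarios):
                continue  # too similar to an already synthesized scenario
            scenarios.append({"background": background,
                              "goal": goal,
                              "gold_tools": subset})  # gold tools used later for evaluation
    return scenarios
```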
65
+
66
+ # 3.2 Action Knowledge Exploration
67
+
68
+ Initialization The root node is initialized with predefined Action Knowledge, which serves as the foundation for task-solving logic. During the MCTS process, the UCB algorithm is used to select nodes, effectively balancing exploration and exploitation by choosing the node with the highest upper confidence bound.
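The paper does not spell out which UCB variant it uses; the sketch below assumes the standard UCB1 rule, which scores each child by its mean reward plus an exploration bonus and always prefers unvisited children. The node attributes (`children`, `value_sum`, `visits`) are assumptions for illustration.

```python
import math


def ucb1(value_sum, visits, parent_visits, c=1.4):
    """Standard UCB1 score: exploitation term plus exploration bonus."""
    if visits == 0:
        return float("inf")  # always try unvisited children first
    return value_sum / visits + c * math.sqrt(math.log(parent_visits) / visits)


def select_child(node, c=1.4):
    """Choose the child with the highest upper confidence bound."""
    return max(node.children, key=lambda ch: ucb1(ch.value_sum, ch.visits, node.visits, c))
```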
69
+
70
+ Expansion Upon selecting node $N_{i}$ as the candidate, an optimization process is initiated that retraces $N_{i}$ to obtain insights from previous optimization experience $\mathcal{E}$ . Each of these past optimization experiences $\mathcal{E}$ is composed of three elements: the pre-optimization score $S_{before}$ , the post-optimization score $S_{after}$ , and the modification $\mathcal{M}$ of the optimization actions taken.
71
+
72
+ $$
73
+ \mathcal{E} = \left\{ \left(S_{\text{before}}^{i}, S_{\text{after}}^{i}, \mathcal{M}^{i}\right) \mid N_{i} \in \operatorname{Path}(N, N_{0}) \right\} \tag{4}
74
+ $$
75
+
76
+ ![](images/7a3795b8757cde584cf741d3eea3f107f5797326c9191c85e9ad41dd2db7ea4f.jpg)
77
+ Figure 2: The overall framework of SynWorld: we first extract composable tools from the toolkit to generate new scenes and tasks. Then, we allow agents to explore the synthesized virtual scenes using MCTS to optimize action knowledge, thereby learning how to execute actions and plan tasks.
78
+
79
+ ![](images/56838ec86cf884d122cb64d7e01364cffb550f82696d9aad3fc1a559b8c34d25.jpg)
80
+
81
+ Based on the optimization experiences and exploration trajectories $Tra$ from the past, the LLM-based agent $\pi$ will analyze the discrepancies between the existing Action Knowledge and the environment. It will then optimize these to produce an updated version of the Action Knowledge.
82
+
83
+ $$
84
+ \mathcal{AK}_{new} = \pi_{\theta}(\mathcal{AK}_{old}, \mathcal{E}, Tra) \tag{5}
85
+ $$
86
+
87
+ Feedback Collection Once equipped with an optimized $\mathcal{A}\mathcal{K}$ , the agent $\pi$ can explore the environment to perform tasks. For each individual task $T$ , the agent interacts with the environment to receive feedback with the trajectory $Tra_{i}$ and the final reward scores $S_{i}$ . The score is related to the evaluation method of the task.
88
+
89
+ $$
90
+ Tra_{i}, S_{i} = \mathrm{Env}(\mathcal{AK}, \pi) \tag{6}
91
+ $$
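Putting Eqs. (4)-(6) together, one expansion-and-evaluation step can be sketched as follows; `llm_refine` (the $\pi_{\theta}$ call that rewrites the action knowledge) and `run_env` (task execution returning a trajectory and a score) are assumed interfaces for illustration, not APIs defined by the paper.

```python
def collect_experience(node):
    """Eq. (4): walk from the selected node back to the root and gather
    (score_before, score_after, modification) triples from past optimizations."""
    experience = []
    while node.parent is not None:
        experience.append((node.parent.score, node.score, node.modification))
        node = node.parent
    return experience


def expand_and_evaluate(node, tasks, llm_refine, run_env):
    """Eq. (5): refine the action knowledge; Eq. (6): score it on the synthesized tasks."""
    experience = collect_experience(node)
    new_ak = llm_refine(node.action_knowledge, experience, node.trajectories)
    trajectories, scores = [], []
    for task in tasks:
        tra, score = run_env(new_ak, task)  # environment feedback for one task
        trajectories.append(tra)
        scores.append(score)
    return new_ak, trajectories, sum(scores) / len(scores)
```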
92
+
93
+ # 4 Experiment
94
+
95
+ # 4.1 Experiment Setup
96
+
97
+ Datasets and Baselines To demonstrate the efficiency of our approach in optimizing action knowledge, we selected two datasets: ToolBench (Qin et al., 2024) and HotpotQA (Yang et al., 2018), each offering unique challenges for a comprehensive evaluation. Following Qu et al. (2024), several strong methods are selected as our baselines, including ReAct (Yao et al., 2023), Self-Refine (Madaan et al., 2023), Easy-Tool (Yuan et al., 2024), and DRAFT (Qu et al., 2024). See detailed setting and evaluation in Appendix B.
98
+
99
+ <table><tr><td rowspan="2">Model</td><td rowspan="2">Method</td><td colspan="2">ToolBench</td><td rowspan="2">HotpotQA</td></tr><tr><td>PASS</td><td>WIN</td></tr><tr><td rowspan="5">GPT-4-turbo</td><td>ReAct</td><td>50.67</td><td>67.00</td><td>54.61</td></tr><tr><td>Self-Refine</td><td>56.80</td><td>73.00</td><td>55.85</td></tr><tr><td>EasyTool</td><td>51.67</td><td>68.00</td><td>58.19</td></tr><tr><td>DRAFT</td><td>54.83</td><td>72.00</td><td>57.71</td></tr><tr><td>Ours</td><td>59.33</td><td>73.00</td><td>59.93</td></tr><tr><td rowspan="5">Qwen-long</td><td>ReAct</td><td>48.30</td><td>71.00</td><td>52.00</td></tr><tr><td>Self-Refine</td><td>53.70</td><td>77.00</td><td>56.10</td></tr><tr><td>EasyTool</td><td>50.80</td><td>63.00</td><td>58.34</td></tr><tr><td>DRAFT</td><td>54.20</td><td>79.00</td><td>53.23</td></tr><tr><td>Ours</td><td>57.20</td><td>81.00</td><td>59.91</td></tr><tr><td rowspan="5">Qwen2-72B-Instruct</td><td>ReAct</td><td>49.43</td><td>55.00</td><td>50.21</td></tr><tr><td>Self-Refine</td><td>54.33</td><td>65.00</td><td>52.59</td></tr><tr><td>EasyTool</td><td>52.97</td><td>58.00</td><td>54.94</td></tr><tr><td>DRAFT</td><td>56.43</td><td>69.00</td><td>57.57</td></tr><tr><td>Ours</td><td>58.52</td><td>73.00</td><td>58.70</td></tr></table>
100
+
101
+ Table 1: Main results of SynWorld compared to other baselines on ToolBench and HotpotQA. The best results of each model are marked in bold. PASS means the pass rate and WIN means the win rate of the trajectory compared to GPT-3.5-turbo in the method of ReAct.
102
+
103
+ # 4.2 Main Results
104
+
105
+ For ToolBench, which requires the combined use of multiple tools, our approach achieves a PASS score of 59.33 and a WIN score of 73.00 (Table 1), a significant improvement over other iterative optimization methods, demonstrating the advantages of our method in tool combination and task planning optimization. For HotpotQA, where only a single tool is used but must be called over multiple hops, our method also achieves state-of-the-art results. This
106
+
107
+ ![](images/c64d5b1b6c5b0ba7b0f396de4593e73b0470d8fa5ded5304c24b215b9b585ab8.jpg)
108
+ Figure 3: The variation in the pass rate of agents on the ToolBench in relation to the number of exploration scenarios.
109
+
110
+ indicates that we have not only aligned tool descriptions with the environment, but also succeeded in generating a generalizable planning workflow.
111
+
112
+ # 4.3 Ablation Study
113
+
114
+ As shown in Table 2, independently optimizing either the Workflow or the Tool Description using MCTS has its limitations. We find that combining the optimization of both aspects leads to more effective results. An aligned Tool Description is beneficial for constructing a more reasonable Workflow, while a well-structured, general Workflow also enhances the exploration of tool usage. We believe that this synergy arises during the iterative optimization process, where the improved workflow can help identify tool usage that is closer to the correct trajectory, serving as strong negative examples to further refine the tool description. Conversely, a superior tool description enables the model to generate workflows that are more aligned with the environment.
115
+
116
+ <table><tr><td>Model</td><td>Method</td><td>Pass Rate</td></tr><tr><td rowspan="3">GPT-4-turbo</td><td>SynWorld</td><td>59.33</td></tr><tr><td>w/o. Workflow</td><td>56.33 (-3.00)</td></tr><tr><td>w/o. Description</td><td>53.16 (-6.17)</td></tr><tr><td rowspan="3">Qwen-long</td><td>SynWorld</td><td>57.20</td></tr><tr><td>w/o. Workflow</td><td>57.00 (-0.20)</td></tr><tr><td>w/o. Description</td><td>53.83 (-3.37)</td></tr></table>
117
+
118
+ Table 2: Ablation experiment results. Values in parentheses denote the change relative to the full SynWorld.
119
+
120
+ # 4.4 Further Analysis
121
+
122
+ More simulated data enable precise virtual scenario synthesis, optimized action knowledge, and ultimately improved agent performance. In our experiments, we explore action knowledge by synthesizing a varying number of virtual scenarios. As shown in Figure 3, we find that as the
123
+
124
+ ![](images/d4378e8019442984f175dbc26574900f3de0e96b65892b4dee03faee7a1e7789.jpg)
125
+ Figure 4: Changes in ToolBench pass rates in virtual and real-world scenarios with the number of iterative optimizations performed in the virtual environment.
126
+
127
+ number of scenarios synthesized increases, the performance of the Agent shows a corresponding upward trend. Specifically, within the range of 0 to 100 scenarios, the model's performance continues to improve with the increase in the scenarios, indicating that action knowledge is indeed learnable. Although the rate of performance improvement slows down as the number of scenarios increases, the model's performance remains on an upward trajectory. This phenomenon suggests that the process of learning action knowledge in the context of synthesized scenarios exhibits scalability.
128
+
129
+ Virtual scenario policies can be generalized to unseen environments and improved with iterations. By analyzing the relationship between action knowledge iterations and pass rates on ToolBench in both virtual and real environments, we find that the action knowledge gained in the virtual setting is generalizable and effective in real-world applications. Performance trends in both environments are similar, as shown in Figure 4. We observe a consistent upward trend in scores, particularly between 0 and 10 iterations, indicating that action knowledge can be optimized through environmental feedback. However, as iterations increase, the gains diminish, and we note slight declines in performance at times. This phenomenon is likely due to the limitations of exploring a fixed number of scenarios, where further iterations have less impact, and increasing complexity can hinder understanding.
130
+
131
+ # 5 Conclusion
132
+
133
+ In this paper, we propose SynWorld, a novel framework that synthesizes scenes requiring multiple action steps and enhances agent action optimization through exploration in the synthesized virtual scenarios. By systematically exploring diverse
134
+
135
+ synthetic scenarios, our model achieves precise alignment between action descriptions and environmental contexts while identifying suitable task-specific workflows.
136
+
137
+ # Limitations
138
+
139
+ We initially conduct empirical validation on two benchmarks: Toolbench (involving multi-tool calling scenarios) and HotpotQA (requiring multi-step action execution). While these demonstrate our method's effectiveness, broader validation across diverse real-world applications remains valuable. Promising candidates include web-based search tasks, simulated environments and so on.
140
+
141
+ Our approach currently incurs non-trivial computational overhead due to the token-intensive virtual scenario synthesis process. The exploration phase further compounds this by exhaustively enumerating all possible scenarios. Future research should prioritize optimizing token efficiency through 1) developing more economical synthesis mechanisms for high-quality virtual scenarios and 2) establishing effective filtering criteria to identify the most pedagogically valuable scenarios.
142
+
143
+ The current action knowledge representation employs a purely text-based format. This presents opportunities to investigate alternative structured representations that could enhance reasoning capabilities, such as tabular organization of action parameters or executable code snippets encapsulating procedural knowledge.
144
+
145
+ # Acknowledgments
146
+
147
+ This work was supported by the National Natural Science Foundation of China (No. 62206246, No. NSFCU23B2055, No. NSFCU19B2027), the Fundamental Research Funds for the Central Universities (226-2023-00138), Yongjiang Talent Introduction Programme (2021A-156-G), CIPSC-SMP-Zhipu Large Model Cross-Disciplinary Fund, Ningbo Natural Science Foundation (2024J020), Information Technology Center and State Key Lab of CAD&CG, Zhejiang University. We gratefully acknowledge the support of Zhejiang University Education Foundation Qizhen Scholar Foundation.
148
+
149
+ # References
150
+
151
+ Islem Bouzenia, Premkumar Devanbu, and Michael Pradel. 2024. Repairagent: An autonomous, llm-based agent for program repair. arXiv preprint arXiv:2403.17134.
152
+
153
+ Huajun Chen. 2023. Large knowledge model: Perspectives and challenges. arXiv preprint arXiv:2312.02706.
154
+ Yu Du, Fangyun Wei, and Hongyang Zhang. 2024. Anytool: Self-reflective, hierarchical agents for large-scale api calls. arXiv preprint arXiv:2402.04253.
155
+ Zane Durante, Qiuyuan Huang, Naoki Wake, Ran Gong, Jae Sung Park, Bidipta Sarkar, Rohan Taori, Yusuke Noda, Demetri Terzopoulos, Yejin Choi, et al. 2024. Agent ai: Surveying the horizons of multimodal interaction. arXiv preprint arXiv:2401.03568.
156
+ Wenqi Fan, Yujuan Ding, Liangbo Ning, Shijie Wang, Hengyun Li, Dawei Yin, Tat-Seng Chua, and Qing Li. 2024. A survey on RAG meeting llms: Towards retrieval-augmented large language models. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD 2024, Barcelona, Spain, August 25-29, 2024, pages 6491-6501. ACM.
157
+ Sihao Hu, Tiansheng Huang, Fatih Ilhan, Selim Tekin, Gaowen Liu, Ramana Kompella, and Ling Liu. 2024. A survey on large language model-based game agents. arXiv preprint arXiv:2404.02039.
158
+ Jerry Huang, Prasanna Parthasarathi, Mehdi Rezagholizadeh, and Sarath Chandar. 2024a. Towards practical tool usage for continually learning llms. arXiv preprint arXiv:2404.09339.
159
+ Xu Huang, Weiwen Liu, Xiaolong Chen, Xingmei Wang, Hao Wang, Defu Lian, Yasheng Wang, Ruiming Tang, and Enhong Chen. 2024b. Understanding the planning of llm agents: A survey. arXiv preprint arXiv:2402.02716.
160
+ Ziyan Jiang, Xueguang Ma, and Wenhu Chen. 2024. Longrag: Enhancing retrieval-augmented generation with long-context llms. arXiv preprint arXiv:2406.15319.
161
+ Yuanchun Li, Hao Wen, Weijun Wang, Xiangyu Li, Yizhen Yuan, Guohong Liu, Jiacheng Liu, Wenxing Xu, Xiang Wang, Yi Sun, et al. 2024. Personal llm agents: Insights and survey about the capability, efficiency and security. arXiv preprint arXiv:2401.05459.
162
+ Bang Liu, Xinfeng Li, Jiayi Zhang, Jinlin Wang, Tanjin He, Sirui Hong, Hongzhang Liu, Shaokun Zhang, Kaitao Song, Kunlun Zhu, Yuheng Cheng, Suyuchen Wang, Xiaojiang Wang, Yuyu Luo, Haibo Jin, Peiyan Zhang, Ollie Liu, Jiaqi Chen, Huan Zhang, Zhaoyang Yu, Haochen Shi, Boyan Li, Dekun Wu, Fengwei Teng, Xiaojun Jia, Jiawei Xu, Jinyu Xiang, Yizhang Lin, Tianming Liu, Tongliang Liu, Yu Su, Huan Sun, Glen Berseth, Jianyun Nie, Ian Foster, Logan Ward, Qingyun Wu, Yu Gu, Mingchen Zhuge, Xiangru Tang, Haohan Wang, Jiaxuan You, Chi Wang, Jian Pei, Qiang Yang, Xiaoliang Qi, and Chenglin Wu. 2025. Advances and challenges in foundation agents: From brain-inspired intelligence to evolutionary, collaborative, and safe systems. Preprint, arXiv:2504.01990.
163
+
164
+ Shilong Liu, Hao Cheng, Haotian Liu, Hao Zhang, Feng Li, Tianhe Ren, Xueyan Zou, Jianwei Yang, Hang Su, Jun Zhu, Lei Zhang, Jianfeng Gao, and Chunyuan Li. 2024a. Llava-plus: Learning to use tools for creating multimodal agents. In Computer Vision - ECCV 2024 - 18th European Conference, Milan, Italy, September 29-October 4, 2024, Proceedings, Part XLVII, volume 15105 of Lecture Notes in Computer Science, pages 126-142. Springer.
165
+ Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, and Jie Tang. 2024b. Agentbench: Evaluating llms as agents. In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net.
166
+ Yanming Liu, Xinyue Peng, Yuwei Zhang, Jiannan Cao, Xuhong Zhang, Sheng Cheng, Xun Wang, Jianwei Yin, and Tianyu Du. 2024c. Tool-planner: Dynamic solution tree planning for large language model with tool clustering. arXiv preprint arXiv:2406.03807.
167
+ Zeyu Leo Liu, Shrey Pandit, Xi Ye, Eunsol Choi, and Greg Durrett. 2024d. Codeupdatearena: Benchmarking knowledge editing on API updates. CoRR, abs/2407.06249.
168
+ Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, Shashank Gupta, Bodhisattwa Prasad Majumder, Katherine Hermann, Sean Welleck, Amir Yazdanbakhsh, and Peter Clark. 2023. Self-refine: Iterative refinement with self-feedback. In Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023.
169
+ Tula Masterman, Sandi Besen, Mason Sawtell, and Alex Chao. 2024. The landscape of emerging ai agent architectures for reasoning, planning, and tool calling: A survey. arXiv preprint arXiv:2404.11584.
170
+ Vaibhav Mavi, Anubhav Jangra, and Adam Jatowt. 2022. A survey on multi-hop question answering and generation. arXiv preprint arXiv:2204.09140.
171
+ Liangbo Ning, Ziran Liang, Zhuohang Jiang, Haohao Qu, Yujuan Ding, Wenqi Fan, Xiao-yong Wei, Shanru Lin, Hui Liu, Philip S Yu, et al. 2025. A survey of webagents: Towards next-generation ai agents for web automation with large foundation models. arXiv preprint arXiv:2503.23350.
172
+ Siqi Ouyang and Lei Li. 2023. Autoplan: Automatic planning of interactive decision-making tasks with large language models. In *Findings of the Association for Computational Linguistics: EMNLP* 2023, Singapore, December 6-10, 2023, pages 3114-3128. Association for Computational Linguistics.
173
+
174
+ Shuofei Qiao, Ningyu Zhang, Runnan Fang, Yujie Luo, Wangchunshu Zhou, Yuchen Eleanor Jiang, Chengfei Lv, and Huajun Chen. 2024. Autoact: Automatic agent learning from scratch via self-planning. arXiv preprint arXiv:2401.05268.
175
+ Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, and Maosong Sun. 2024. Toolllm: Facilitating large language models to master 16000+ real-world apis. In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net.
176
+ Changle Qu, Sunhao Dai, Xiaochi Wei, Hengyi Cai, Shuaiqiang Wang, Dawei Yin, Jun Xu, and Ji-Rong Wen. 2024. From exploration to mastery: Enabling llms to master tools via self-driven interactions. CoRR, abs/2410.08197.
177
+ Xiaoye Qu, Yafu Li, Zhaochen Su, Weigao Sun, Jianhao Yan, Dongrui Liu, Ganqu Cui, Daizong Liu, Shuxian Liang, Junxian He, et al. 2025. A survey of efficient reasoning for large reasoning models: Language, multimodality, and beyond. arXiv preprint arXiv:2503.21614.
178
+ Shreyas Sundara Raman, Vanya Cohen, Eric Rosen, Ifrah Idrees, David Paulius, and Stefanie Tellex. 2022. Planning with large language models via corrective re-prompting. In NeurIPS 2022 Foundation Models for Decision Making Workshop.
179
+ Weizhou Shen, Chenliang Li, Hongzhan Chen, Ming Yan, Xiaojun Quan, Hehong Chen, Ji Zhang, and Fei Huang. 2024. Small llms are weak tool learners: A multi-llm agent. arXiv preprint arXiv:2401.07324.
180
+ Yucheng Shi, Wenhao Yu, Wenlin Yao, Wenhu Chen, and Ninghao Liu. 2025. Towards trustworthy gui agents: A survey. arXiv preprint arXiv:2503.23434.
181
+ Mohit Shridhar, Xingdi Yuan, Marc-Alexandre Côté, Yonatan Bisk, Adam Trischler, and Matthew J. Hausknecht. 2021. Alfworld: Aligning text and embodied environments for interactive learning. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net.
182
+ Simranjit Singh, Andreas Karatzas, Michael Fore, Iraklis Anagnostopoulos, and Dimitrios Stamoulis. 2024. An llm-tool compiler for fused parallel function calling. arXiv preprint arXiv:2405.17438.
183
+ Chan Hee Song, Brian M. Sadler, Jiaman Wu, Wei-Lun Chao, Clayton Washington, and Yu Su. 2023a. Llmplanner: Few-shot grounded planning for embodied agents with large language models. In IEEE/CVF International Conference on Computer Vision, ICCV 2023, Paris, France, October 1-6, 2023, pages 2986-2997. IEEE.
184
+
185
+ Chan Hee Song, Jiaman Wu, Clayton Washington, Brian M Sadler, Wei-Lun Chao, and Yu Su. 2023b. Llm-planner: Few-shot grounded planning for embodied agents with large language models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2998-3009.
186
+ S Deepika Sri, Raja CSP Raman, Gopinath Rajagopal, S Taranath Chan, et al. 2024. Automating rest api postman test cases using llm. arXiv preprint arXiv:2404.10678.
187
+ Haotian Sun, Yuchen Zhuang, Lingkai Kong, Bo Dai, and Chao Zhang. 2023. Adaplanner: Adaptive planning from feedback with language models. Advances in neural information processing systems, 36:58202-58245.
188
+ Chunliang Tao, Xiaojing Fan, and Yahe Yang. 2024. Harnessing llms for api interactions: A framework for classification and synthetic data generation. arXiv preprint arXiv:2409.11703.
189
+ Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai Tang, Xu Chen, Yankai Lin, et al. 2024. A survey on large language model based autonomous agents. Frontiers of Computer Science, 18(6):186345.
190
+ Lei Wang, Wanyu Xu, Yihuai Lan, Zhiqiang Hu, Yunshi Lan, Roy Ka-Wei Lee, and Ee-Peng Lim. 2023a. Plan-and-solve prompting: Improving zero-shot chain-of-thought reasoning by large language models. arXiv preprint arXiv:2305.04091.
191
+ Ruoyao Wang, Peter A. Jansen, Marc-Alexandre Côté, and Prithviraj Ammanabrolu. 2022. Scienceworld: Is your agent smarter than a 5th grader? In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pages 11279-11298. Association for Computational Linguistics.
192
+ Zihao Wang, Shaofei Cai, Anji Liu, Xiaojian Ma, and Yitao Liang. 2023b. Describe, explain, plan and select: Interactive planning with large language models enables open-world multi-task agents. CoRR, abs/2302.01560.
193
+ Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems, 35:24824-24837.
194
+ Jialong Wu, Wenbiao Yin, Yong Jiang, Zhenglin Wang, Zekun Xi, Runnan Fang, Deyu Zhou, Pengjun Xie, and Fei Huang. 2025. Webwalker: Benchmarking llms in web traversal. arXiv preprint arXiv:2501.07572.
195
+ Zekun Xi, Wenbiao Yin, Jizhan Fang, Jialong Wu, Runnan Fang, Ningyu Zhang, Jiang Yong, Pengjun Xie, Fei Huang, and Huajun Chen. 2025. Omnithink: Expanding knowledge boundaries in machine writing through thinking. Preprint, arXiv:2501.09751.
196
+
197
+ Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W. Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. Hotpotqa: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 2369-2380. Association for Computational Linguistics.
198
+ Shunyu Yao, Howard Chen, John Yang, and Karthik Narasimhan. 2022. Webshop: Towards scalable real-world web interaction with grounded language agents. Advances in Neural Information Processing Systems, 35:20744-20757.
199
+ Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik R. Narasimhan, and Yuan Cao. 2023. React: Synergizing reasoning and acting in language models. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net.
200
+ Siyu Yuan, Kaitao Song, Jiangjie Chen, Xu Tan, Yongliang Shen, Kan Ren, Dongsheng Li, and Deqing Yang. 2024. EASYTOOL: enhancing llm-based agents with concise tool instruction. CoRR, abs/2401.06201.
201
+ Xilin Zhang, Zhixin Mao, Ziwen Chen, and Shen Gao. 2024. Effective tool augmented multi-agent framework for data analysis. Data Intelligence, 6(4):923-945.
202
+ Zhuosheng Zhang, Yao Yao, Aston Zhang, Xiangru Tang, Xinbei Ma, Zhiwei He, Yiming Wang, Mark Gerstein, Rui Wang, Gongshen Liu, et al. 2025. Igniting language intelligence: The hitchhiker's guide from chain-of-thought reasoning to language agents. ACM Computing Surveys, 57(8):1-39.
203
+ Siyun Zhao, Yuqing Yang, Zilong Wang, Zhiyuan He, Luna K Qiu, and Lili Qiu. 2024a. Retrieval augmented generation (rag) and beyond: A comprehensive survey on how to make your llms use external data more wisely. arXiv preprint arXiv:2409.14924.
204
+ Suifeng Zhao, Tong Zhou, Zhuoran Jin, Hongbang Yuan, Yubo Chen, Kang Liu, and Sujian Li. 2024b. Awecita: Generating answer with appropriate and well-grained citations using llms. Data Intelligence, 6(4):1134-1157.
205
+ Yuyue Zhao, Jiancan Wu, Xiang Wang, Wei Tang, Dingxian Wang, and Maarten De Rijke. 2024c. Let me do it for you: Towards llm empowered recommendation via tool learning. In Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1796-1806.
206
+ Wangchunshu Zhou, Yuchen Eleanor Jiang, Long Li, Jialong Wu, Tiannan Wang, Shi Qiu, Jintian Zhang, Jing Chen, Ruipu Wu, Shuai Wang, et al. 2023. Agents: An open-source framework for autonomous language agents. arXiv preprint arXiv:2309.07870.
207
+
208
+ Wangchunshu Zhou, Yixin Ou, Shengwei Ding, Long Li, Jialong Wu, Tiannan Wang, Jiamin Chen, Shuai Wang, Xiaohua Xu, Ningyu Zhang, et al. 2024. Symbolic learning enables self-evolving agents. arXiv preprint arXiv:2406.18532.
209
+ Yuqi Zhu, Shuofei Qiao, Yixin Ou, Shumin Deng, Ningyu Zhang, Shiwei Lyu, Yue Shen, Lei Liang, Jinjie Gu, and Huajun Chen. 2024. Knowagent: Knowledge-augmented planning for llm-based agents. arXiv preprint arXiv:2403.03101.
210
+
211
+ # A Related Works
212
+
213
+ # A.1 Agent Planning
214
+
215
+ Recent studies have shown that in the realm of complex task-solving (Ouyang and Li, 2023; Sun et al., 2023; Liu et al., 2024c, 2025), the capacity for planning and refinement within large models has become increasingly pivotal. It has marked a transition from early methods like CoT (Wei et al., 2022) and Plan-and-Solve (Wang et al., 2023a), which tackle tasks sequentially, to the sophisticated agentic workflows of today, where model planning is instrumental in addressing a myriad of complex tasks, including question answering (QA) (Mavi et al., 2022), embodied interaction (Yao et al., 2022), tool invocation (Masterman et al., 2024), and long-form text generation (Jiang et al., 2024).
216
+
217
+ However, initial planning efforts are fraught with deficiencies due to the complexity of environments. When faced with unfamiliar environments, relying solely on human-written task descriptions without interaction with the environment often leads to plans that are misaligned with the actual tasks, or plans that seem reasonable but fail during execution due to a lack of accurate action knowledge. Consequently, there has been a surge in research (Song et al., 2023b; Wang et al., 2023b) focused on refining plans and workflows. These efforts typically leverage direct environmental feedback or design a reward score for end-to-end plan correction, but they often lack a medium for the intermediate processes, which obscures the transparency of the plan refinement process. Moreover, the refinement of plans and the collection of feedback are usually linear and iterative (Qu et al., 2024), resulting in low efficiency and a lack of diversity.
218
+
219
+ # A.2 Knowledge-Augmented Agents
220
+
221
+ LLMs, as agents interacting with specific environments, often need to provide action signals to these environments (Zhou et al., 2023, 2024; Durante et al., 2024). These action signals can be either restricted or open actions related to the environment (Wang et al., 2024; Li et al., 2024; Hu et al., 2024; Zhang et al., 2024). For instance, they might involve specific movements in embodied task scenarios or the use of various tools like search or OCR in tool invocations. By incorporating these actions, on one hand, the Agent gains the ability to interact with the environment, allowing LLMs to transcend mere textual output (Shridhar et al., 2021; Wang et al., 2022; Zhao et al., 2024b). On
222
+
223
+ the other hand, these external actions endow the Agent with a capability similar to humans using tools, compensating for the inherent limitations of LLMs, such as search tools that can alleviate issues of knowledge hallucination or obsolescence in LLMs (Singh et al., 2024; Zhao et al., 2024c; Shen et al., 2024; Chen, 2023).
224
+
225
+ Current methods for learning action knowledge are mainly divided into two categories: one involves creating a large amount of synthetic data to construct trajectories for executing actions to train the model (Qiao et al., 2024; Huang et al., 2024b; Zhu et al., 2024), which is costly and has poor generalizability across different tasks; the other relies on prompt engineering (Raman et al., 2022), placing explicit action knowledge about how to plan to execute actions within the prompt, and then using ICL (In-Context Learning) methods to enable the model to learn to invoke these actions. While convenient, these methods can be inaccurate as the artificially constructed planning knowledge may not accurately reflect the true state of the environment, leading to potential biases.
226
+
227
+ # B Setting
228
+
229
+ # B.1 Datasets
230
+
231
+ ToolBench contains tasks using over 16,000 RapidAPI tools. It assesses a model's ability to plan and execute complex workflows.
232
+
233
+ HotpotQA is a multi-hop QA dataset with questions requiring multiple steps to answer. We employ Google Search as the search engine in the experiment.
234
+
235
+ # B.2 Evaluation
236
+
237
+ ToolBench: Evaluated using pass rate and win rate. We record planning steps and tool invocations, then submit the trajectory for assessment. Win rate is compared to ReAct's performance. HotpotQA: Evaluated using F1 score, comparing model answers to gold answers (reward 0-1). These datasets and metrics allow us to rigorously validate our approach across varied contexts.
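For HotpotQA, the 0-1 reward is a token-level F1 between the model answer and the gold answer; the standard computation (the exact answer normalization used here is not specified) looks like the sketch below.

```python
from collections import Counter


def token_f1(prediction: str, gold: str) -> float:
    """Token-level F1 between predicted and gold answers, in [0, 1]."""
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    if not pred_tokens or not gold_tokens:
        return float(pred_tokens == gold_tokens)
    overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```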
238
+
239
+ # B.3 Baselines
240
+
241
+ Several strong methods are selected as our baselines, including: ReAct (Yao et al., 2023), which interacts with the environment to reason about the next step; Self-Refine (Madaan et al., 2023), which uses feedback from the environment to refine the original prompt; Easy-Tool (Yuan et al., 2024), which
242
+
243
+ first uses an LLM to refine the tool description and then breaks down the tasks to complete them; and DRAFT (Qu et al., 2024), which synthesizes tasks on a single tool and explores them to learn how to use the tool.
244
+
245
+ # B.4 Experiment Setup
246
+
247
+ The backend model used in our experiments is Qwen-Long-0916, while the version of GPT-4 is 0613. The token usage in our method is approximately 6-8 million tokens. We configured the width of MCTS to 3 and set the similarity threshold to 0.6. After balancing effectiveness and cost, we synthesized 200 scenes and conducted 15 iterations on them during the experiment.
248
+
249
+ # C Prompt Template
250
+
251
+ See Tables 3 and 4.
252
+
253
+ # D Algorithm
254
+
255
+ See Algorithm 1.
256
+
257
+ Algorithm 1 Monte Carlo Tree Search (MCTS) for Action Knowledge Optimization
258
+ function MCTS(root_node)
259
+ Iteration $\leftarrow 0$
260
+ while Iteration < max_iteration do
261
+ $\triangleright$ Step 1: Selection with UCB algorithm and
262
+ num_child < 3
263
+ leaf_node $\leftarrow$ SELECT_NODE(root_node)
264
+ $\triangleright$ Step 2: Expansion
265
+ new_node $\leftarrow$ EXPAND(leaf_node)
266
+ $\triangleright$ Step 3: Simulation
267
+ simulation_result $\leftarrow$ SIMULATE(new_node)
268
+ $\triangleright$ Step 4: Backpropagation
269
+ BACKPROPAGATE(new_node, simulation_result)
270
+ Iteration $\leftarrow$ Iteration + 1
271
+ end while
272
+ end function
273
+ function SELECT_NODE(node)
274
+ while node.isFullyExpanded() do
275
+ node $\leftarrow$ CHOOSE_BEST_CHILD(node, exploration_parameter)
276
+ end while
277
+ return node
278
+ end function
279
+ function EXPAND(node)
280
+ optimization $\leftarrow$ CHOOSE_UNTRIED_OPTIMIZATION(node)
281
+ new_node $\leftarrow$ APPLY_OPTIMIZATION(node, optimization)
282
+ ADD_CHILD(node, new_node)
283
+ return new_node
284
+ end function
285
+ function SIMULATE(node)
286
+ optimized_score $\leftarrow$ CALCULATE_SCORE(node.current_action_knowledge)
287
+ reward $\leftarrow$ optimized_score - father_score
288
+ return reward
289
+ end function
290
+ function BACKPROPAGATE(node, result)
291
+ while node $\neq$ None do
292
+ UPDATE_STATISTICS(node, result)
293
+ node $\leftarrow$ node.parent
294
+ end while
295
+ end function
296
+ function CALCULATE_SCORE(action_knowledge)
297
+ return EVALUATE(action_knowledge)
298
+ end function
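For readers who prefer code, the sketch below mirrors Algorithm 1 in Python; the node attributes, the `expand_and_evaluate` helper (see the sketch in Section 3.2), and the fixed width of 3 children are assumptions made for illustration rather than the authors' implementation.

```python
import math


class Node:
    """One candidate version of the action knowledge in the search tree."""
    def __init__(self, action_knowledge, parent=None, modification=None):
        self.action_knowledge = action_knowledge
        self.parent = parent
        self.modification = modification  # description of the edit that produced this node
        self.children = []
        self.trajectories = []            # trajectories collected when this node was evaluated
        self.score = 0.0                  # e.g., mean pass rate of this action knowledge
        self.visits = 0
        self.value_sum = 0.0


def select(node, c=1.4, max_children=3):
    """Selection: descend with UCB until reaching a node that can still be expanded."""
    while len(node.children) >= max_children:
        node = max(node.children,
                   key=lambda ch: float("inf") if ch.visits == 0
                   else ch.value_sum / ch.visits
                        + c * math.sqrt(math.log(node.visits) / ch.visits))
    return node


def backpropagate(node, reward):
    """Backpropagation: update statistics along the path back to the root."""
    while node is not None:
        node.visits += 1
        node.value_sum += reward
        node = node.parent


def mcts(root, tasks, llm_refine, run_env, max_iterations=15):
    """Outer loop of Algorithm 1; expansion and simulation reuse expand_and_evaluate."""
    best = root
    for _ in range(max_iterations):
        leaf = select(root)
        new_ak, trajectories, score = expand_and_evaluate(leaf, tasks, llm_refine, run_env)
        child = Node(new_ak, parent=leaf, modification="LLM refinement of the parent AK")
        child.trajectories, child.score = trajectories, score
        leaf.children.append(child)
        backpropagate(child, reward=score - leaf.score)  # reward as in SIMULATE
        if score > best.score:
            best = child
    return best.action_knowledge
```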
299
+
300
+ # Prompt for Tool Description in Action knowledge
301
+
302
+ Analyze the following tool execution trajectories to improve tool interface documentation. For all trajectories:
303
+
304
+ 1. Identify functional mismatches between original description and actual usage patterns
305
+ 2. Detect parameter inefficiencies (missing/underutilized fields)
306
+ 3. Extract implicit requirements from error patterns
307
+ 4. Generate enhanced documentation with:
308
+
309
+ Clear input specifications (required vs optional)
310
+
311
+ Contextual usage guidelines
312
+
313
+ Error prevention tips
314
+
315
+ Response format expectations
316
+
317
+ Here is an example.
318
+
319
+ Now it's your turn to analyze the following tool execution trajectories to improve tool interface documentation.
320
+
321
+ tool_name: tool_name
322
+
323
+ original_description: original_description
324
+
325
+ trajectory: trajectory
326
+
327
+ Please provide your Optimize Description for the tool. Just modify the description part and do not change the parameters description.
328
+
329
+ Make Sure your description is clear and concise.
330
+
331
+ Table 3: Prompt used for tool document refinement.
332
+
333
+ # Prompt for Workflow in Action knowledge
334
+
335
+ Analyze the provided interaction trajectory and existing workflow steps to derive a generalized, reusable workflow for similar tool calling tasks.
336
+
337
+ 1. Analyzing error patterns (authentication gaps, deprecated endpoints) and tool dependencies from interaction histories.
338
+ 2. Extracting implicit requirements (authentication, sorting logic) and mandatory parameters from error responses.
339
+ 3. Structuring a generic workflow with authentication validation, parameter checks, state management between API calls, and error backups.
340
+
341
+ Here is an example.
342
+
343
+ Now it's your turn.
344
+
345
+ Existing Workflow: workflow
346
+
347
+ Trajectory: trajectory
348
+
349
+ Please provide your Optimize Workflow for the task. And make sure your workflow is clear and concise and no longer than 200 words.
350
+
351
+ Table 4: Prompt used for workflow generation.
ACL/2025/SynWorld_ Virtual Scenario Synthesis for Agentic Action Knowledge Refinement/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6262bdafa7b60cb42f5a00d626e3f3a21999d41781c8e5bdacd40e29d43a0626
3
+ size 379323
ACL/2025/SynWorld_ Virtual Scenario Synthesis for Agentic Action Knowledge Refinement/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f92b42bec411f035d60e10c62bcc56ba538f2e05a6288bc1044aa2c959128145
3
+ size 335678
ACL/2025/That doesn’t sound right_ Evaluating speech transcription quality in field linguistics corpora/91e9f449-d572-4370-add4-d86c8c7b9d45_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:292f2cb87e43ec40b01e546a10bf772f46e8b8a366cb7d150b2f9d609cf37a35
3
+ size 55300
ACL/2025/That doesn’t sound right_ Evaluating speech transcription quality in field linguistics corpora/91e9f449-d572-4370-add4-d86c8c7b9d45_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a49a62269f7f02980aafab924a53e9f815858073188c9f2e223c4f00ec22746d
3
+ size 68407
ACL/2025/That doesn’t sound right_ Evaluating speech transcription quality in field linguistics corpora/91e9f449-d572-4370-add4-d86c8c7b9d45_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:bb8bbc219ae80f71d471484556d56a0ecea9e431f82f62c347ff854342b6306e
3
+ size 702769
ACL/2025/That doesn’t sound right_ Evaluating speech transcription quality in field linguistics corpora/full.md ADDED
@@ -0,0 +1,223 @@
 
 
 
 
1
+ # That doesn’t sound right: Evaluating speech transcription quality in field linguistics corpora
2
+
3
+ Éric Le Ferrand<sup>1</sup>, Bo Jiang<sup>1</sup>, Joshua Hartshorne<sup>2</sup>, Emily Prud'hommeaux<sup>1</sup>
4
+
5
+ $^{1}$ Department of Computer Science, Boston College, USA
6
+
7
+ $^{2}$ MGH Institute of Health Professions, USA
8
+
9
+ # Abstract
10
+
11
+ Incorporating automatic speech recognition (ASR) into field linguistics workflows for language documentation has become increasingly common. While ASR performance has seen improvements in low-resource settings, obstacles remain when training models on data collected by documentary linguists. One notable challenge lies in the way that this data is curated. ASR datasets built from spontaneous speech are typically recorded in consistent settings and transcribed by native speakers following a set of well-designed guidelines. In contrast, field linguists collect data in whatever format it is delivered by their language consultants and transcribe it as best they can given their language skills and the quality of the recording. This approach to data curation, while valuable for linguistic research, does not always align with the standards required for training robust ASR models. In this paper, we explore methods for identifying speech transcriptions in fieldwork data that may be unsuitable for training ASR models. We focus on two complementary automated measures of transcription quality that can be used to identify transcripts with characteristics that are common in field data but could be detrimental to ASR training. We show that one of the metrics is highly effective at retrieving these types of transcriptions. Additionally, we find that filtering datasets using this metric of transcription quality reduces WER both in controlled experiments using simulated fieldwork with artificially corrupted data and in real fieldwork corpora.
12
+
13
+ # 1 Introduction
14
+
15
+ Automatic speech recognition (ASR) can support the creation of new linguistic resources for under-resourced and endangered languages, but such languages – which make up the vast majority of the world's $7000+$ languages – vary considerably in their quantity of transcribed speech data. While some languages have well-curated speech datasets
16
+
17
+ sourced from educational materials, mass media, or crowdsourcing efforts, many only have field recordings made by linguists as their primary source of speech data. Field linguists, whose work centers on describing languages and analyzing their linguistic properties, collect data primarily to support their academic research and the activities of the language community. Few linguists collect data with the goal of creating high-quality datasets for training speech technology models (Hanke, 2017; Le Ferrand, 2023). As a result, speech data from fieldwork may be only partially or unfaithfully transcribed due to issues of recording quality and the language skills of the linguist. Additionally, fieldwork transcripts often include ancillary information making it difficult to differentiate between transcription (word-level renderings of the speech) and annotation (glosses, translations, comments).
18
+
19
+ ASR models for widely-spoken languages are trained on enough professionally recorded and transcribed data that including a small number of inaccurately transcribed utterances is unlikely to significantly affect overall performance. For languages where data is scarce, however, even a small portion of low-quality data can severely degrade model performance. Detecting low quality transcripts can be done manually, but this process is tedious and time-consuming, underscoring the need for an automatic method to evaluate transcription quality.
20
+
21
+ In this paper, we explore two metrics for automatically assessing the transcription quality and accuracy of speech datasets: Phonetic Distance Match (PDM), a novel metric based on phoneme recognition, and the posterior probability of a Connectionist Temporal Classification (Graves and Graves, 2012) alignment (CTC). We evaluate the utility of these metrics for identifying poor transcriptions through experiments on clean datasets that we synthetically corrupt in ways that simulate common fieldwork data quality errors. We then demonstrate the ability of PDM in particular to
22
+
23
+ identify these types of errors. Finally we show that using our metrics to filter out these types of inaccurate transcripts can yield substantial improvements in ASR accuracy in both simulated and real-world fieldwork datasets.
24
+
25
+ # 2 Related work
26
+
27
+ Prior related work on filtering inaccurate transcripts has focused on leveraging ASR output itself, such as using the confidence score of an ASR model (Huang et al., 2013; Su and Xu, 2015; Koctür et al., 2016) or using multiple ASR models to generate multiple predictions for the same speech utterance (Fiscus, 1997; Cucu et al., 2014; Li et al., 2016; Jouvet and Fohr, 2014). These methods, while useful in high-resource settings, assume the existence of an ASR model for the target language, which is not applicable for our work, where no strong ASR models exist. Using universal speech recognition models shows more promise for under-resourced languages. Models such as XLS-R (Conneau et al., 2021) and MMS (Pratap et al., 2024) have demonstrated promising ASR results in low-resource settings (Macaire et al., 2022; Guillaume et al., 2022; Tapo et al., 2024; Romero et al., 2024; Jimerson et al., 2023). However, since these models are trained on raw speech and lack textual information, they cannot provide direct feedback on individual segments. A more suitable alternative is a universal phoneme recognizer (Li et al., 2020), which can generate phone transcriptions for any language.
28
+
29
+ There is a robust history of prior work in evaluating the acoustic quality of audio using measures of speech intelligibility derived from output of ASR or proto-ASR systems (Holube and Kollmeier, 1996; Sakoe and Chiba, 1978; Spille et al., 2018; Arai et al., 2019). While this work is also relevant for filtering audio for ASR datasets, it is orthogonal to our own work, which focuses on identifying poor quality transcripts rather than poor quality audio.
30
+
31
+ # 3 Data
32
+
33
+ We apply our metrics (see Section 4.1) to two distinct classes of datasets. The CURATED class consists of five well-curated, high-quality speech datasets, ranging in size from $3.5\mathrm{h}$ to $9\mathrm{h}$ , which we synthetically corrupt to simulate common fieldwork transcription errors (see Section 4.2). The languages include Bunun (bnn), Saisiyat (xsy), and Seediq (trv), three Taiwanese indigenous languages extracted from FormosanBank (Mohamed et al.,
34
+
35
+ 2024). For each language, we use a subset of the ePark (Aboriginal Language Research and Development Foundation, 2023b) and ILRDF (Aboriginal Language Research and Development Foundation, 2023a) corpora, which consist of read speech recorded by native speakers (Hartshorne et al., 2024). We also included Mboshi (mdw), a Bantu language from Congo-Brazzaville, part of the LIG-Aikuma project<sup>1</sup>, and Duoxu (ers), a critically endangered Sino-Tibetan language, included in the Pangloss collection (Michailovsky et al., 2014).
36
+
37
+ The second class consists of 2-hour fieldwork corpora from Pangloss (FIELDWORK) for Namakura (nmk), an Austronesian language spoken on Vanuatu, and Thulung Rai (tdh), a Sino-Tibetan language of Nepal. These consist exclusively of fieldwork recordings and include annotations and approximate transcripts. We use the FIELDWORK datasets to demonstrate the efficacy of our methods in a real-world fieldwork scenario.
38
+
39
+ All seven datasets $^2$ were partitioned into training $(70\%)$ , validation $(10\%)$ , and test $(20\%)$ sets. Dataset details are found in Table 1.
40
+
41
+ Several factors motivated our choice of these specific languages. First, the FormosanBank corpus contains an unusually large amount of high-quality data. These languages also posed an initial layer of complexity due to their orthographic conventions. For example, the glottal stop is typically represented with an apostrophe or straight single quote, and the voiceless alveolar affricate is denoted as $c$ . Mboshi includes two non-ASCII characters (ε and ω), while Duoxu features systematic tone marking using superscript numerals (e.g., $ja^{22}nje^{33}xe^{53}nje^{33}t\sigma i^{33}o$ ), adding another dimension of orthographic variation.
42
+
43
+ # 4 Method
44
+
45
+ # 4.1 Transcript evaluation metrics
46
+
47
+ We consider two metrics for evaluating transcription quality<sup>3</sup>. First, we present Phonetic Distance Match (PDM), a novel metric for evaluating orthographic transcriptions against their corresponding audio. PDM is calculated by transcribing an utterance recording using a phone-level transcription model and then measuring the edit distance between the resulting transcription and the manual reference transcription. Using Allosaurus (Li
48
+
49
+ et al., 2020) without fine-tuning, we automatically generate phone-level transcripts for each utterance in the corpus, which are then converted into their closest corresponding ASCII characters using the unidecode library.<sup>4</sup> The orthographic reference transcripts are also converted to ASCII to ensure a shared character set. Finally, we compute the normalized Levenshtein distance between the two transcriptions and subtract it from 1 to generate a similarity metric ranging from 0 to 1. The scoring process is illustrated in detail in Appendix Fig. 4.
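A compact sketch of PDM as described above. It assumes the `allosaurus` recognizer interface (`read_recognizer().recognize(wav)`) and the `unidecode` package, and implements the normalized edit distance directly; details such as whitespace handling and lowercasing are assumptions, since the exact text cleanup is not specified here.

```python
from allosaurus.app import read_recognizer  # universal phone recognizer (Li et al., 2020)
from unidecode import unidecode


def levenshtein(a: str, b: str) -> int:
    """Plain dynamic-programming edit distance over characters."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]


def pdm_score(wav_path: str, reference: str, model=None) -> float:
    """Phonetic Distance Match: 1 minus the normalized edit distance between the
    ASCII-ized automatic phone transcript and the ASCII-ized reference transcript."""
    model = model or read_recognizer()
    phones = model.recognize(wav_path)  # space-separated IPA phone string
    hyp = unidecode(phones).replace(" ", "").lower()
    ref = unidecode(reference).replace(" ", "").lower()
    if not hyp and not ref:
        return 1.0
    return 1.0 - levenshtein(hyp, ref) / max(len(hyp), len(ref))
```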
50
+
51
+ The rationale behind using an ASCII-ized version of IPA is as follows. We begin with the observation that many languages currently being documented are traditionally oral. Their orthographies are often introduced by outsiders who tend to adopt the Latin alphabet, with minor modifications. Although exceptions exist (e.g., Ainu written in Japanese katakana or Inuktitut written using Indigenous Canadian syllabics), the Latin script remains the prevalent standard.
52
+
53
+ While we recognize that linguists and community members who use the Latin alphabet are free to use the characters as they wish in their writing systems, these newly devised orthographies are not arbitrary. They are frequently influenced by existing Latin-based writing systems and the International Phonetic Alphabet (IPA). For example, a voiced velar nasal is typically represented as ŋ or $ng$ , and rarely as unrelated letters like $p$ or $r$ . Naturally, inconsistencies can occur—such as c representing /s/, /k/, or /ʃ/ in French, or /ts/ in Seediq, but overall, we expect the ASCII-ized forms to retain at least phonemic consistency.
54
+
55
+ There are two advantages to our approach. First, it does not require any prior knowledge about the language or its phonetic inventory, which might not be easily available for a poorly documented language. Second, it does not require additional effort or resources to create a rule-based or learned G2P transformation of the data. In short, the method can be applied to any language that uses at least a partially ASCII-based transcription system without requiring additional model training or in-depth research into the phonetic properties of the language.
56
+
57
+ The second metric is the Connectionist Temporal Classification (CTC) alignment posterior probability. We use a large wav2vec (Baevski et al., 2020) model<sup>5</sup> to extract a speech representation
58
+
59
+ from each utterance, again without fine-tuning; we then apply CTC alignment (Graves et al., 2006) between the speech features and the manual transcription and output the alignment posterior probability. It is entirely independent of the PDM metric.
60
+
61
+ # 4.2 Synthetic dataset corruption
62
+
63
+ To simulate a dataset containing typical fieldwork transcription errors, we arbitrarily select $20\%$ of each training set of the 5 CURATED datasets and introduce transcription errors using three different corruption methods: (1) Deleted: three random words are removed from the transcription; (2) Cropped: the final $50\%$ of words in the transcription are removed; (3) Swapped: the transcription is randomly replaced with another from the training set. For each language, we create three corrupted datasets, each containing $20\%$ of the utterances corrupted in one of these three ways. Each utterance/transcript pair in the three datasets is then scored with the two metrics described in Sec. 4.1. We then evaluate how accurately each metric identifies these corrupted utterances. Examples of corrupted utterances can be found in Table 2.
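The three corruption operations are simple string manipulations; a sketch is below (function names are ours).

```python
import random


def corrupt_deleted(transcript: str, n_words: int = 3) -> str:
    """Deleted: remove three random words from the transcription."""
    words = transcript.split()
    drop = set(random.sample(range(len(words)), k=min(n_words, len(words))))
    return " ".join(w for i, w in enumerate(words) if i not in drop)


def corrupt_cropped(transcript: str) -> str:
    """Cropped: remove the final 50% of words from the transcription."""
    words = transcript.split()
    return " ".join(words[: max(1, len(words) // 2)])


def corrupt_swapped(transcript: str, all_transcripts: list) -> str:
    """Swapped: replace the transcription with another one drawn from the training set."""
    candidates = [t for t in all_transcripts if t != transcript]
    return random.choice(candidates) if candidates else transcript
```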
64
+
65
+ # 4.3 ASR model building
66
+
67
+ All experiments are conducted with XLSR-53 (Conneau et al., 2021), a multilingual model based on the wav2vec architecture. We train a CTC layer for 30 epochs, selecting the best model with the validation set. Decoding is performed using a trigram LM trained on the training set for each language and corruption setting. We follow the popular XLSR tutorial but do not freeze the feature extractor.
68
+
69
+ In our simulated scenario, we use the three corrupted versions of each CURATED dataset (cf. Sec. 4.2). For each corrupted dataset, as well as for the uncorrupted dataset, we train an ASR model to determine the impact of each corruption on WER. For each corrupted dataset, we then create three filtered datasets: one in which we filter out $20\%$ of the utterances according to the strength of the PDM metric; one in which we do the same according to the CTC metric; and, because removing utterances from a limited training set can itself hurt performance, one in which the same percentage of utterances is removed from the training data at random, to ensure a fair comparison.
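Filtering by either metric then amounts to scoring every training pair and dropping the lowest-scoring fraction; a minimal sketch under that assumption:

```python
def filter_by_score(dataset, scores, drop_fraction=0.20):
    """Keep the best-scoring (1 - drop_fraction) of utterance/transcript pairs.
    `dataset` and `scores` are parallel lists; higher scores mean better transcripts."""
    ranked = sorted(zip(scores, dataset), key=lambda pair: pair[0])
    n_drop = int(len(ranked) * drop_fraction)
    return [item for _, item in ranked[n_drop:]]
```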
70
+
71
+ In our real-world scenario, we calculate the two
72
+
73
+ ![](images/d3883386903f843a9411925e2906056c946f29d47c74347c65bc3749cfc7c07a.jpg)
74
+ Figure 1: WER across corruption configurations.
75
+
76
+ metrics on the utterances of the two FIELDWORK datasets. For each dataset, we train ASR models on the unfiltered dataset and on filtered datasets, removing $5\%$ , $10\%$ , and $20\%$ of the utterances using the more promising PDM metric and via random selection.
77
+
78
+ # 5 Results
79
+
80
+ # 5.1 Detecting corrupted transcripts
81
+
82
+ Figure 5 shows the full ROC curves and AUC values for all combinations of corruption type, dataset, and metric. We see that PDM achieves near-perfect AUC scores (0.89-0.98) for detecting utterances in the Swapped configuration, very high scores for Cropped (0.77-0.94), and strong scores for Deleted (0.64-0.85). In contrast, CTC is consistently and substantially less effective for all languages in all three corruption settings, with some AUC scores performing at chance in the Deleted and Cropped setting. Notably, Duoxu and Mboshi exhibit lower AUCs perhaps due to weak overlap in character set with the English wav2vec model used (cf. Table 1). The Deleted configuration appears to be the most challenging to detect for both metrics, but with PDM showing a clear advantage over CTC.
83
+
84
+ # 5.2 ASR evaluation: Simulated fieldwork
85
+
86
+ The baseline results for both the uncorrupted and corrupted CURATED datasets, shown in Figure 1, reveal a clear trend. The Deleted configuration causes the least degradation in WER. The Cropped configuration generally yields the second-worst results, except for Mboshi, where Deleted performs worse perhaps because of Mboshi's shorter average utterance length. Finally, the Swapped configuration consistently produces the weakest WER.
87
+
88
+ Figure 2 shows changes in WER in the corrupted datasets with and without filtering using the two metrics, PDM and CTC, as well as the random setting where $20\%$ of the data is removed from a corrupted dataset at random. In the Deleted setting,
89
+
90
+ PDM filtering has minimal impact, while CTC filtering generally degrades WER. In the Cropped setting, filtering with PDM improves WER except for Duoxu, while CTC filtering again generally degrades WER. In the Swapped setting, filtering with PDM systematically and often dramatically improves WER, while CTC filtering has little impact except for Bunun and Saisiyat. Overall, the superior performance of PDM filtering is quite consistent, yielding better results than CTC filtering in 14 out of 15 cases. The exception is Duoxu in the Cropped setting. As already noted, Duoxu's writing system contains many non-ASCII characters, which may limit the performance of the PDM metric in some cases.
91
+
92
+ In a few rare cases, ASR models trained on a corrupted dataset outperform those trained on data filtered using one of the two metrics. This typically occurs when the filtering metric lacks sufficient accuracy, as is observed in some languages with the Deleted and Cropped configurations (see Figure 5), leading to the unintended removal of clean data while allowing corrupted data to remain, ultimately degrading performance. The Deleted setting, which has a minor impact when utterances are relatively long, may also serve unintentionally as a form of corruption-based regularization.
93
+
94
+ # 5.3 ASR evaluation: Real-world fieldwork
95
+
96
+ Figure 3 shows the results of different thresholds of PDM filtering and random filtering on the two FIELDWORK real-world datasets. (We do not report results for CTC given the weak utility observed in the simulated fieldwork scenario both for corruption detection and as a filter.) For Thulung Rai, a $5\%$ filtering threshold proved the most effective, resulting in a decrease of several points in WER, while higher thresholds and random filtering resulted in WER increases. With the Namakura dataset, WER consistently decreased as more data was filtered using the PDM score, suggesting that a significant portion of the corpus may contain transcription errors. Filtering randomly for Namakura yielded slight random variations in WER.
97
+
98
+ To better understand the utility of the PDM metric for identifying poor transcripts, we manually inspected the transcriptions of the lowest and highest $5\%$ of utterances based on PDM scores for both corpora. In Thulung Rai, $61\%$ of the lowest scoring utterances showed no issues, while $14\%$ had mismatched transcriptions and $23\%$ contained cropped transcriptions. In contrast, $93\%$ of the top scoring
99
+
100
+ ![](images/ad38d32e1c082b297f129f8df3cb7b867010b0814fad38f9c8ef954346bef8ee.jpg)
101
+ (a) Deleted
102
+
103
+ ![](images/09b6f3a9fe494e174925c2c9be70396a553b74b38966df085ed3bfc48df74466.jpg)
104
+ (b) Cropped
105
+
106
+ ![](images/a73a92d2d9e1950cc334af40770601bb8b86b304835ba2e04391573059bf6431.jpg)
107
+ (c) Swapped
108
+
109
+ ![](images/10f28f7e24599056ee4a84055d982685c37606911bc1a426aeef5cc552e4c7f0.jpg)
110
+ Figure 2: WER for corrupted and filtered CURATED datasets in the simulated fieldwork scenario.
111
+
112
+ ![](images/af652c4adfb9b46e38ff6b0baa654a6ea1e11c3cec5b7a36ca18f5616526fbb2.jpg)
113
+ (a) PDM
114
+ (b) Random
115
+ Figure 3: WER for unfiltered and filtered FIELDWORK datasets in the real-world fieldwork scenario.
116
+
117
+ utterances had correct transcriptions, with $6\%$ missing a few words. For Namakura, only $11\%$ of the lowest scoring utterances had accurate transcriptions, with $55\%$ mismatched, $29\%$ cropped, $3\%$ with cropped audio, and $1\%$ missing some words. Conversely, the highest scoring utterances had $97\%$ correct transcriptions, with $1.5\%$ cropped and another $1.5\%$ missing words.
118
+
119
+ # 6 Conclusions and Future Work
120
+
121
+ This paper explores two metrics for identifying unsuitable and inaccurate speech transcriptions to improve ASR training from linguistic fieldwork data with the goal of supporting language documentation. We find that our novel PDM metric and, to a lesser extent, a CTC confidence metric are effective in identifying erroneous transcriptions in both simulated and real-world fieldwork datasets. Moreover, filtering data using the PDM metric consistently reduces WER in both simulated and real-world
122
+
123
+ fieldwork scenarios. In our future work, we plan to investigate additional methods for identifying poor transcriptions and to explore the relationship between audio quality and transcription quality.
124
+
125
+ # Limitations
126
+
127
+ Experimental results demonstrate that our PDM method is highly effective for languages with a limited number of non-ASCII characters. However, further experiments are needed to evaluate its performance on languages with a larger set of non-ASCII characters and non-Latin writing systems. The proposed metrics efficiently identify major errors, such as missing or mismatched transcripts, but are less likely to detect spelling mistakes or inconsistent transcription of specific speech sounds, which could also significantly impact WER. While these methods could be applied to high-resource languages like French or German, such languages may benefit more from approaches leveraging existing G2P models or pre-trained ASR systems trained specifically for these languages.
128
+
129
+ # Ethics Statement
130
+
131
+ Researchers must always be respectful of language community concerns about data ownership when working with Indigenous language data. All of our data is gathered from public sources. In the case of the Formosan languages, the two organizations providing the data, the Indigenous Languages Research and Development Foundation and the ePark educational research organization, actively seek out collaborations with computational researchers. The other datasets are also made available on the Web by their creators specifically with the goal of furthering research in these languages, both linguistic and computational. We have permission from the creators and owners to redistribute the data in the form of ASR datasets.
132
+
133
+ # Acknowledgments
134
+
135
+ The authors thank Yuyang Liu, Li-May Sung, and the Indigenous Languages Research and Development Foundation, especially Akiw and Lowking Nowbucyang, for generously providing data. This material is based upon work supported by the National Science Foundation under Grant #2319296. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
136
+
137
+ # References
138
+
139
+ Aboriginal Language Research and Development Foundation. 2023a. Online dictionary of aboriginal languages. https://e-dictionary.ilrdf.org.tw.
140
+ Aboriginal Language Research and Development Foundation. 2023b. yuanzhumin yuan leyuan (epark). https://web.klokeh.tw/.
141
+ Kenichi Arai, Shoko Araki, Atsunori Ogawa, Keisuke Kinoshita, Tomohiro Nakatani, Katsuhiko Yamamoto, and Toshio Irino. 2019. Predicting speech intelligibility of enhanced speech using phone accuracy of DNN-based ASR system. In *Interspeech*, pages 4275–4279.
142
+ Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli. 2020. wav2vec 2.0: A framework for self-supervised learning of speech representations. Advances in neural information processing systems, 33:12449-12460.
143
+ Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, and Michael Auli. 2021. Unsupervised cross-lingual representation learning for speech recognition. Interspeech 2021.
144
+ Horia Cucu, Andi Buzo, and Corneliu Burileanu. 2014. Unsupervised acoustic model training using multiple seed asr systems. In Spoken Language Technologies for Under-Resourced Languages.
145
+ Jonathan G Fiscus. 1997. A post-processing system to yield reduced word error rates: Recognizer output voting error reduction (ROVER). In 1997 IEEE Workshop on Automatic Speech Recognition and Understanding Proceedings, pages 347-354. IEEE.
146
+ Alex Graves, Santiago Fernández, Faustino Gomez, and Jürgen Schmidhuber. 2006. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In Proceedings of the 23rd international conference on Machine learning, pages 369-376.
147
+ Alex Graves. 2012. Connectionist temporal classification. Supervised sequence labelling with recurrent neural networks, pages 61-93.
148
+
149
+ Severine Guillaume, Guillaume Wisniewski, Benjamin Galliot, Minh-Chau Nguyen, Maxime Fily, Guillaume Jacques, and Alexis Michaud. 2022. Plugging a neural phoneme recognizer into a simple language model: a workflow for low-resource setting. In Proceedings of Interspeech, pages 4905-4909.
150
+ Florian Hanke. 2017. Computer-Supported Cooperative Language Documentation. Ph.D. thesis, Ph.D. thesis, University of Melbourne.
151
+ Joshua K. Hartshorne, Éric Le Ferrand, Li-May Sung, and Emily Prud'hommeaux. 2024. Formosanbank and why you should use it. In Architectures and Mechanisms in Language Processing (AMLaP) Poster.
152
+ Inga Holube and Birger Kollmeier. 1996. Speech intelligibility prediction in hearing-impaired listeners based on a psychoacoustically motivated perception model. The Journal of the Acoustical Society of America, 100(3):1703-1716.
153
+ Yan Huang, Dong Yu, Yifan Gong, and Chaojun Liu. 2013. Semi-supervised GMM and DNN acoustic model training with multi-system combination and confidence re-calibration. In Interspeech, pages 2360-2364.
154
+ Robert Jimerson, Zoey Liu, and Emily Prud'hommeaux. 2023. An (unhelpful) guide to selecting the best ASR architecture for your under-resourced language. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL), pages 1008-1016.
155
+ Denis Jouvet and Dominique Fohr. 2014. About combining forward and backward-based decoders for selecting data for unsupervised training of acoustic models. In INTERSPEECH 2014, 15th Annual Conference of the International Speech Communication Association.
156
+ Tomáš Koctúr, Ján Staš, and Jozef Juhár. 2016. Unsupervised acoustic corpora building based on variable confidence measure thresholding. In 2016 International Symposium ELMAR, pages 31-34. IEEE.
157
+ Éric Le Ferrand. 2023. Leveraging Speech Recognition for Interactive Transcription in Australian Aboriginal Communities. Ph.D. thesis, Charles Darwin University.
158
+ Sheng Li, Yuya Akita, and Tatsuya Kawahara. 2016. Data selection from multiple ASR systems' hypotheses for unsupervised acoustic model training. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5875-5879. IEEE.
159
+ Xinjian Li, Siddharth Dalmia, Juncheng Li, Matthew Lee, Patrick Littell, Jiali Yao, Antonios Anastasopoulos, David R Mortensen, Graham Neubig, Alan W Black, et al. 2020. Universal phone recognition with a multilingual allophone system. In ICASSP 2020-2020 IEEE International Conference on Acoustics,
160
+
161
+ Speech and Signal Processing (ICASSP), pages 8249-8253. IEEE.
162
+ Cécile Macaire, Didier Schwab, Benjamin Lecouteux, and Emmanuel Schang. 2022. Automatic speech recognition and query by example for Creole languages documentation. In *Findings of the Association for Computational Linguistics: ACL* 2022.
163
+ Boyd Michailovsky, Martine Mazaudon, Alexis Michaud, Séverine Guillaume, Alexandre François, and Evangelia Adamou. 2014. Documenting and researching endangered languages: The Pangloss Collection. Language Documentation and Conservation, 8:119-135.
164
+ Wael Mohamed, Éric Le Ferrand, Li-May Sung, Emily Prud'hommeaux, and Joshua Hartshorne. 2024. Formosanbank. https://ai4commsci.gitbook.io/formosanbank.
165
+ Vineel Pratap, Andros Tjandra, Bowen Shi, Paden Tomasello, Arun Babu, Sayani Kundu, Ali Elkahky, Zhaoheng Ni, Apoorv Vyas, Maryam Fazel-Zarandi, et al. 2024. Scaling speech technology to 1,000+ languages. Journal of Machine Learning Research, 25(97):1-52.
166
+ Monica Romero, Sandra Gómez-Canaval, and Ivan G Torre. 2024. Automatic speech recognition advancements for Indigenous languages of the Americas. Applied Sciences, 14(15):6497.
167
+ Hiroaki Sakoe and Seibi Chiba. 1978. Dynamic programming algorithm optimization for spoken word recognition. IEEE Transactions on Acoustics, Speech, and Signal Processing, 26(1):43-49.
168
+ Constantin Spille, Stephan D Ewert, Birger Kollmeier, and Bernd T Meyer. 2018. Predicting speech intelligibility with deep neural networks. Computer Speech & Language, 48:51-66.
169
+ H Su and H Xu. 2015. Multi-softmax deep neural network for semi-supervised training. In Proceedings of Interspeech, pages 3239-3243.
170
+ Allahsera Tapo, Éric Le Ferrand, Zoey Liu, Christopher Homan, and Emily Prud'hommeaux. 2024. Leveraging speech data diversity to document Indigenous heritage and culture. In Proceedings of Interspeech 2024, pages 5088-5092.
171
+
172
+ # A Appendix
173
+
174
+ Table 1 shows the durations, token and type counts, and non-ASCII character proportions of each of the 7 datasets for reference purposes. As noted in the paper, we have released these corpora and their partitions for research purposes. We note that they are derived in their entirety from publicly available sources with licensing that permits redistribution in other formats.
175
+
176
+ Table 2 provides examples of the three types of corruption designed to mimic the kinds of errors observed in fieldwork transcripts.
177
+
178
+ Figure 4 provides a walk-through of the PDM calculation process with three example utterances.
179
+
180
+ Figure 5 plots all six ROC curves and reports AUC measures for using each of the two metrics, PDM and CTC, to identify corruptions for each CURATED dataset under each of the three corruption settings.
181
+
182
+ Table 3 reports, in tabular form, the WER results that are shown graphically in Figure 2.
183
+
184
+ <table><tr><td></td><td>Bunun</td><td>Duoxu</td><td>Mboshi</td><td>Saisiyat</td><td>Seediq</td><td>Namakura</td><td>Thulung Rai</td></tr><tr><td>Duration</td><td>8h34</td><td>7h57</td><td>3h28</td><td>8h14</td><td>8h53</td><td>1h53</td><td>2h18</td></tr><tr><td>Token</td><td>40166</td><td>61564</td><td>25671</td><td>39644</td><td>50123</td><td>18566</td><td>16296</td></tr><tr><td>Type</td><td>6846</td><td>2557</td><td>5621</td><td>4723</td><td>5608</td><td>1065</td><td>3965</td></tr><tr><td>Non-ASCII</td><td>0%</td><td>44%</td><td>23%</td><td>0%</td><td>2%</td><td>1%</td><td>8%</td></tr></table>
185
+
186
+ Table 1: Corpus size and token/type count for all datasets. We also provide the percentage of non-ASCII characters which may have an impact on the utility of the PDM metric.
187
+
188
+ <table><tr><td>Config.</td><td>Original</td><td>Corrupted</td></tr><tr><td>Deleted</td><td>wa adi pósá bábará wa kaá
189
+ kobhá epωrωbaá óyálá mwána anyωω</td><td>wa pósá wa
190
+ epωrωó yáála mwána</td></tr><tr><td>Cropped</td><td>maqasmav a abus malaitaz a savi
191
+ to seediq msgelu sa seediq mneyah alang kiya</td><td>maqasmav a abus
192
+ to seediq msgelu sa</td></tr><tr><td>Swapped</td><td>supah a samah sia humacia
193
+ tai hari niqan rebuq watan dao su trebuq hii</td><td>anak anak sa ia maupacia minhanglas
194
+ slii hini kanna nnapa namu bunga</td></tr></table>
195
+
196
+ Table 2: Examples of input utterances and their corruptions from the three corruption configurations.
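+
+ To make the three configurations concrete, the sketch below mimics the corruption types illustrated in Table 2. The specific parameters (how many words are deleted, where transcripts are cut, and how swap partners are paired) follow Sec. 4.2 and are assumptions here.
+
+ ```python
+ import random
+
+ rng = random.Random(0)
+
+ def delete_words(transcript: str, p: float = 0.3) -> str:
+     """Deleted: randomly drop individual words from the transcript."""
+     words = transcript.split()
+     kept = [w for w in words if rng.random() > p]
+     return " ".join(kept) if kept else transcript
+
+ def crop(transcript: str, keep_frac: float = 0.5) -> str:
+     """Cropped: keep only an initial portion of the transcript."""
+     words = transcript.split()
+     return " ".join(words[: max(1, int(len(words) * keep_frac))])
+
+ def swap(transcripts: list) -> list:
+     """Swapped: pair utterances and exchange their transcripts."""
+     out = list(transcripts)
+     for i in range(0, len(out) - 1, 2):
+         out[i], out[i + 1] = out[i + 1], out[i]
+     return out
+ ```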
197
+
198
+ <table><tr><td>IPA transcript</td><td>Orthographic transcription</td></tr><tr><td>mibiji:xbuəmūprɛskyrztəheə</td><td>Meyah bgihur, mqaras ka dheya</td></tr><tr><td>amiōikalawasuəkjietaɪgwan</td><td>'amilika' ra:waS ki taywan</td></tr><tr><td>tsuəbaɪaɪmaɪsəhumatɪə</td><td>Supah a samah sia humacia</td></tr><tr><td>Conversion to ASCII</td><td>Orthography normalized</td></tr><tr><td>mibiji:xRu@rmoareskuzt@he@</td><td>meyahbgihurmqaraskadheya</td></tr><tr><td>amidikalawasu@kjetalgwan</td><td>'amilika'ra:waskitaywan</td></tr><tr><td>tsu@baRaSamaRc@humatSjl@</td><td>supahasamahsiahumacia</td></tr></table>
199
+
200
+ Figure 4: Demonstration of the PDM calculation method. In the upper left we see IPA transcripts generated from audio by Allosaurus. In the upper right we see the corresponding reference orthographic transcriptions for the three sample utterances. In the lower left are the phone-level transcripts converted to the ASCII equivalents often used to represent those IPA symbols (e.g., with SAMPA). In the lower right are the reference orthographic transcriptions converted to ASCII, with spaces removed. We calculate the normalized Levenshtein distance between the two strings in the lower panels and subtract it from 1 to create the PDM metric.
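+
+ The PDM computation walked through in Figure 4 reduces to an edit-distance calculation; the sketch below implements that final step. How IPA symbols and orthography are folded to ASCII is only assumed here (the strings below are taken from the figure), and normalizing by the longer string's length is likewise an assumption.
+
+ ```python
+ def levenshtein(a: str, b: str) -> int:
+     """Standard dynamic-programming edit distance."""
+     prev = list(range(len(b) + 1))
+     for i, ca in enumerate(a, 1):
+         cur = [i]
+         for j, cb in enumerate(b, 1):
+             cur.append(min(prev[j] + 1,                  # deletion
+                            cur[j - 1] + 1,               # insertion
+                            prev[j - 1] + (ca != cb)))    # substitution
+         prev = cur
+     return prev[-1]
+
+ def pdm(phone_ascii: str, ortho_ascii: str) -> float:
+     """1 minus the normalized edit distance between the two space-stripped ASCII strings."""
+     a = phone_ascii.replace(" ", "")
+     b = ortho_ascii.replace(" ", "")
+     return 1.0 - levenshtein(a, b) / max(len(a), len(b), 1)
+
+ # Third example utterance from Figure 4.
+ print(pdm("tsu@baRaSamaRc@humatSjl@", "supahasamahsiahumacia"))
+ ```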
201
+
202
+ ![](images/3b656e14b24c6b3fde028f3f5a91803ec90a02a9858687f2244aeca49769d801.jpg)
203
+ (a) Deleted PDM
204
+
205
+ ![](images/3eb87e13fe84456fcf87fb4bf6040541c875d8cc3ffd344f8a1c885be99c92b3.jpg)
206
+ (b) Cropped PDM
207
+
208
+ ![](images/bf5c080a952d7877e103d27633c27c27822b3d0324ce4bd5522cf8721f1f1fa5.jpg)
209
+ (c) Swapped PDM
210
+
211
+ ![](images/82331efbb152b3a7919c500a9b4fbe9d8ebc72559169e3804da4b3903f381188.jpg)
212
+ (d) Deleted CTC
213
+
214
+ ![](images/891ebea2bdae193b36a2d2eebda959c72474674d9e5de878f4b5904dc8310e63.jpg)
215
+ (e) Cropped CTC
216
+
217
+ ![](images/85f36c30eab216588c546f2c0e483e5e2d3e7d55f9b961d47cc77b0d301b319f.jpg)
218
+ (f) Swapped CTC
219
+ Figure 5: ROC curves comparing performance of PDM and CTC for retrieving corrupted transcriptions under the three corruption settings for all five of the CURATED datasets.
220
+
221
+ <table><tr><td>Corruption Setting</td><td>Filtering Method</td><td>Bunun</td><td>Duoxu</td><td>Mboshi</td><td>Saisiyat</td><td>Seediq</td></tr><tr><td rowspan="4">Deleted</td><td>Unfiltered</td><td>0.2573</td><td>0.4464</td><td>0.4717</td><td>0.2600</td><td>0.1899</td></tr><tr><td>Random</td><td>0.4620</td><td>0.5008</td><td>0.5121</td><td>0.3943</td><td>0.3075</td></tr><tr><td>PDM</td><td>0.2754</td><td>0.4478</td><td>0.4446</td><td>0.2779</td><td>0.1947</td></tr><tr><td>CTC</td><td>0.3025</td><td>0.4562</td><td>0.6048</td><td>0.3002</td><td>0.2325</td></tr><tr><td rowspan="4">Cropped</td><td>Unfiltered</td><td>0.3065</td><td>0.4514</td><td>0.4268</td><td>0.3700</td><td>0.2692</td></tr><tr><td>Random</td><td>0.3372</td><td>0.4911</td><td>0.5402</td><td>0.3428</td><td>0.2473</td></tr><tr><td>PDM</td><td>0.3062</td><td>0.5311</td><td>0.4198</td><td>0.2908</td><td>0.2049</td></tr><tr><td>CTC</td><td>0.3351</td><td>0.4806</td><td>0.7046</td><td>0.3387</td><td>0.2034</td></tr><tr><td rowspan="4">Swapped</td><td>Unfiltered</td><td>0.4743</td><td>0.5759</td><td>0.5562</td><td>0.4500</td><td>0.3356</td></tr><tr><td>Random</td><td>0.4584</td><td>0.6538</td><td>0.5513</td><td>0.3994</td><td>0.4881</td></tr><tr><td>PDM</td><td>0.2940</td><td>0.4691</td><td>0.4951</td><td>0.2106</td><td>0.2036</td></tr><tr><td>CTC</td><td>0.3014</td><td>0.5800</td><td>0.5396</td><td>0.3421</td><td>0.3207</td></tr></table>
222
+
223
+ Table 3: WER for each combination of simulated corruption setting and filtering method for each of the five CURATED datasets. This same information is visualized in bar graph format in Figure 2. The lowest WER for each language/corruption setting is boldfaced.
ACL/2025/That doesn’t sound right_ Evaluating speech transcription quality in field linguistics corpora/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2be908474465788a108841bdf819f65b21e0b4bb4bac7b5da172d1aaa5303ace
3
+ size 478449
ACL/2025/That doesn’t sound right_ Evaluating speech transcription quality in field linguistics corpora/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c51ffab3bef04e59b32668a03267da6846523ae3a97e17f0a7bf4f3795c8429b
3
+ size 258497
ACL/2025/The Role of Abstract Representations and Observed Preferences in the Ordering of Binomials in Large Language Models/fd8ebbd1-41f1-4e17-ad72-97ad56e52db3_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f5a2542c9e30eb38991c30a4ecf789571dfdfbd60bab6106da77988183d6c4f2
3
+ size 46618
ACL/2025/The Role of Abstract Representations and Observed Preferences in the Ordering of Binomials in Large Language Models/fd8ebbd1-41f1-4e17-ad72-97ad56e52db3_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:930bfb6dcc28938d6ec332b7f5c50f58a4847ce617b68027b36fea39a8321e6a
3
+ size 54749
ACL/2025/The Role of Abstract Representations and Observed Preferences in the Ordering of Binomials in Large Language Models/fd8ebbd1-41f1-4e17-ad72-97ad56e52db3_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d0a1a6e3eb5ec912c52bf4fdfb2f1d67bba1ef290b4b78fea5d1da48eed4df46
3
+ size 185677
ACL/2025/The Role of Abstract Representations and Observed Preferences in the Ordering of Binomials in Large Language Models/full.md ADDED
@@ -0,0 +1,166 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # The Role of Abstract Representations and Observed Preferences in the Ordering of Binomials in Large Language Models
2
+
3
+ Zachary Houghton and Kenji Sagae and Emily Morgan
4
+ University of California, Davis / 1 Shields Ave, Davis, CA 95616
5
+ znhoughton@ucdavis.edu
6
+
7
+ # Abstract
8
+
9
+ To what extent do large language models learn abstract representations as opposed to more superficial aspects of their very large training corpora? We examine this question in the context of binomial ordering preferences involving two conjoined nouns in English. When choosing a binomial ordering (radio and television vs television and radio), humans rely on more than simply the observed frequency of each option. Humans also rely on abstract ordering preferences (e.g., preferences for short words before long words). We investigate whether large language models simply rely on the observed preference in their training data, or whether they are capable of learning the abstract ordering preferences (i.e., abstract representations) that humans rely on. Our results suggest that both smaller and larger models' ordering preferences are driven exclusively by their experience with that item in the training data. Our study provides further insights into differences between how large language models represent and use language and how humans do it, particularly with respect to the use of abstract representations versus observed preferences.
10
+
11
+ # 1 Introduction
12
+
13
+ Large language models have progressed at an incredible rate in the last few years. Their rise in popularity and sometimes surprising capabilities have raised many questions about what exactly these models learn and how they represent linguistic knowledge. One interesting question that has been examined is whether certain capabilities emerge once models reach a certain size. Although models of different sizes appear to generate fluent language, it is unclear to what extent different models rely on superficial characteristics of their immense training corpora, such as word frequency and co-occurrences, and to what extent they learn abstract representations that generalize in ways that are similar to what humans do with far less linguistic input.
14
+
15
+ For example, in addition to learning that some binomial orderings are more frequent than others (e.g., bread and butter is more frequent than butter and bread), humans also learn abstract ordering preferences (e.g., short words before long words; Morgan and Levy, 2016a).
16
+
17
+ In the present study we examine binomial ordering preferences in English in eight large language models with number of parameters ranging from 124M to 70B. Specifically, we ask whether ordering preferences in these models are determined entirely by the observed preferences of binomials in corpus data, or whether the language models also learn abstract ordering preferences. Further, we examine whether large language models, similar to humans, show stronger effects of observed ordering preferences in high frequency items. If large language models are just reproducing superficial characteristics of the training data, we should see no effects of abstract ordering preferences, and only see effects of observed ordering preferences. On the other hand, if language models are doing more than just memorization, then we may see effects of abstract ordering preferences in addition to effects of observed ordering preferences, and these may change as a function of the binomial's frequency.
18
+
19
+ Our specific contribution is an investigation of how large language models use abstract knowledge vs. observed preferences through a binomial ordering preference task, along with a discussion about how this differs from language use by humans. We show that language models rely more on the surface-level statistics of their input (e.g., n-gram frequency) than humans do, adding to our understanding of how large language models represent and generate language.
20
+
21
+ # 1.1 Evidence for Abstractions in LLMs
22
+
23
+ Large language models have demonstrated incredible breakthroughs in the last few years, showing impressive capabilities across a wide variety of
24
+
25
+ tasks. Despite this, previous research has demonstrated mixed results with respect to their abilities to learn abstract representations (e.g., McCoy et al., 2023; LeBrun et al., 2022; Pan and Bergen, 2025). Specifically, it remains unclear to what extent large language models are simply copying their training data as opposed to learning something more abstract. For example, Haley (2020) demonstrated that many of the BERT models are not able to reliably determine the plurality of novel words at the same level as humans.
26
+
27
+ On the other hand, Wei et al. (2021) demonstrated that BERT can generalize well to novel subject-verb pairs. Specifically, they tested BERT's subject-verb agreement ability on novel sentences that it's never seen before and found that BERT seems to learn abstract representations of subject-verb agreement (as evidenced by the fact that it performs well on items it wasn't trained on).
28
+
29
+ Additionally, there's evidence that transformer models trained on an amount of data comparable to humans can also learn abstract knowledge about the language (Misra and Mahowald, 2024; Yao et al., 2025). For example, Misra and Mahowald (2024) examined whether a language model trained on a comparable amount of data as humans can learn article-adjective-numeral-noun expressions (a beautiful five days). Specifically, without having a great deal of experience with them, humans learn that a beautiful five days is perfectly natural, but a five beautiful days is not. Misra and Mahowald (2024) demonstrated that language models learn this even if they have no AANNs in their training data. They further demonstrated that they do this by generalizing across similar constructions, such as a few days.
30
+
31
+ Further, Yao et al. (2025) examined whether language models trained on a comparable amount to humans can learn the length and animacy preferences that drive dative alternations (e.g., give the ball to her vs give her the ball) in humans. Specifically, dative alternations show a length and animacy bias (Yao et al., 2025). In order to examine whether language models can learn these biases from other constructions, they manipulated the training data to remove the length and animacy bias from the dative alternations in the training data of the language model. They found that the model can learn these biases even without exposure to them in the dative alternation. These results suggest that in some cases language models can learn generalizations without a great amount of data.
32
+
33
+ In order to investigate large language models' ability to learn abstract representations, it is useful to compare them to human psycholinguistic data. Unlike large language models, humans don't have access to corpora with trillions of tokens. Despite this, humans' capacity for language is unparalleled, in part due to our incredible ability to learn abstract representations (Berko, 1958; Kapatsinski, 2018).
34
+
35
+ # 1.2 Evidence for Abstractions in Humans
36
+
37
+ Humans are remarkable in our ability to learn and produce language, often producing and processing sentences that we've never encountered before. This is largely enabled by our unique ability to not simply memorize language, but to learn more abstract generalizations. For example, humans develop abstract ordering preferences for how to linearize the message we want to convey (i.e., deciding on which order to say the words that convey the meaning we want to express). One illustration of this comes from the literature on binomial constructions, where there are two conjoined nouns (e.g., cats and dogs, Morgan and Levy, 2015, 2016a,b; Benor and Levy, 2006). Binomial constructions often convey the same meaning regardless of the order (e.g., radio and television vs television and radio). Despite this, however, humans sometimes have very strong preferences for one order over the other (e.g., bread and butter overwhelmingly preferred over butter and bread).
38
+
39
+ While these preferences are driven in part by experience with the binomial (i.e., which binomial ordering is encountered more often), there are also other factors, such as phonological or semantic constraints, that affect ordering preferences. In other words, human ordering preferences are driven in part by observed preferences in corpus data (i.e., the observed preference in their previous language experience, Morgan and Levy, 2016a) and in part driven by abstract ordering preferences based on abstract constraints (e.g., a preference for short words before long words, or a preference for male-coded words before female-coded words, Benor and Levy, 2006).
40
+
41
+ In order to capture the abstract ordering preferences of humans across binomial constructions, Morgan and Levy (2016a) developed a model to quantify the abstract ordering preference of a given binomial in English. They demonstrated that the model's predicted abstract ordering preferences are not the same as the observed preferences in corpus data. The model combines multiple phonological
42
+
43
+ and semantic constraints that have been shown to affect binomial ordering preferences into a single abstract ordering preference value for each binomial. They further demonstrated that human ordering preferences for low-frequency items are primarily driven by this abstract ordering preference value, and preferences for high-frequency items are driven primarily by the observed preferences in corpus data. They operationalized frequency using the overall frequency of a binomial, i.e. the total frequency in both possible orders (i.e., the number of times the binomial occurs in alphabetical ordering plus the number of times the binomial occurs in nonalphabetical ordering). This provides a measure of expression frequency that is not confounded with the frequency of a specific order.
44
+
45
+ Since human ordering preferences deviate from the observed preferences (i.e., humans aren't simply reproducing binomials in the same order that they heard them; Morgan and Levy, 2024), ordering preferences thus present a useful test case for large language models. If large language models learn representations beyond simply memorizing the training dataset or superficially reproducing word co-occurrences, they may learn abstract ordering preferences similar to humans, and this may be reflected in their binomial ordering preferences.
46
+
47
+ # 2 Methods
48
+
49
+ # 2.1 Dataset
50
+
51
+ In order to examine the ordering preferences of binomial constructions in large language models, we use a corpus of binomials from Morgan and Levy (2015). The corpus contains 594 binomial expressions which have been annotated for various phonological, semantic, and lexical constraints that are known to affect binomial ordering preferences. The corpus also includes:
52
+
53
+ 1. The estimated abstract ordering preference for each binomial representing the ordering preference for the alphabetical ordering (a relatively unbiased reference form), estimated from the above constraints (independent of frequency). The abstract ordering preferences take a value between 0 and 1, with 0 being a stronger preference for the nonalphabetical form, and 1 being a stronger preference for the alphabetical form. The abstract ordering preferences were calculated using Morgan and Levy (2015)'s model.
54
+
55
+ 2. The observed binomial orderings, which are the proportion of binomial orderings that are in alphabetical order for a given binomial, gathered from the Google $n$ -grams corpus (Lin et al., 2012). The Google $n$ -grams corpus is orders of magnitude larger than the language experience of an individual speaker and thus provides reliable frequency estimates. A value of 1 indicates the binomial occurs exclusively in the alphabetical ordering while a value of 0 indicates that the binomial occurs exclusively in the nonalphabetical ordering.
56
+ 3. The overall frequency of a binomial expression (the number of times the binomial occurs in either alphabetical or non-alphabetical order). Overall frequencies were also obtained from the Google $n$ -grams corpus (Lin et al., 2012).
57
+
58
+ # 2.2 Language Model Predictions
59
+
60
+ In order to derive predictions for large language models, we used models from the GPT-2 (Radford et al., 2019) family, the Llama-2 (Touvron et al., 2023) family, the Llama-3 family (https://github.com/meta-llama/llama3), and the OLMo (Groeneveld et al., 2024) family. From smallest to largest in number of parameters: GPT-2 (124M parameters), OLMo 1B (1B parameters), GPT-2 XL (1.5B parameters), Llama-2 7B (7B parameters), OLMo 7B (7B parameters), Llama-3 8B (8B parameters), Llama-2 13B (13B parameters), and Llama-3 70B (70B parameters). For each model, we calculated the ordering preference for the alphabetical form of each binomial in the dataset. The predicted probability of the alphabetical form was calculated as the product of the model's predicted probability of each word in the binomial. In order to accurately calculate the probability of the first word in the binomial, each binomial was prepended with the prefix "Next item: ". Thus the probability of the alphabetical form, $A$ and $B$, is:
61
+
62
+ $$
63
+ \begin{aligned} P_{\text{alphabetical}} = {} & P(A \mid \text{Next item:}) \\ & \times P(\text{and} \mid \text{Next item: } A) \\ & \times P(B \mid \text{Next item: } A \text{ and}) \end{aligned} \tag{1}
64
+ $$
65
+
66
+ where $A$ is the alphabetically first word in the binomial and $B$ is the other word. Additionally, the probability of the nonalphabetical form, $B$ and $A$
67
+
68
+ is:
69
+
70
+ $$
71
+ \begin{aligned} P_{\text{nonalphabetical}} = {} & P(B \mid \text{Next item:}) \\ & \times P(\text{and} \mid \text{Next item: } B) \\ & \times P(A \mid \text{Next item: } B \text{ and}) \end{aligned} \tag{2}
72
+ $$
73
+
74
+ Finally, to get an overall ordering preference for the alphabetical form, we calculated the (log) odds ratio of the probability of the alphabetical form to the probability of the nonalphabetical form:
75
+
76
+ $$
77
+ \operatorname{LogOdds}(A \text{ and } B) = \log \left( \frac{P_{\text{alphabetical}}}{P_{\text{nonalphabetical}}} \right) \tag{3}
78
+ $$
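+
+ A hedged sketch of how these quantities can be obtained from a causal LM with the HuggingFace API is shown below, using "gpt2" as a stand-in checkpoint. Because the prefix "Next item: " is identical for the two orders, scoring the full strings and taking the difference gives the same log odds as Equations (1)-(3), since the prefix terms cancel.
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ tok = AutoTokenizer.from_pretrained("gpt2")
+ lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()
+
+ def seq_logprob(text: str) -> float:
+     """Sum of log-probabilities of the tokens in `text` under the causal LM."""
+     ids = tok(text, return_tensors="pt").input_ids
+     with torch.no_grad():
+         logits = lm(ids).logits
+     logprobs = torch.log_softmax(logits, dim=-1)
+     targets = ids[0, 1:]                       # token t is predicted from position t-1
+     return logprobs[0, :-1, :].gather(1, targets.unsqueeze(1)).sum().item()
+
+ def binomial_log_odds(word_a: str, word_b: str) -> float:
+     """Log odds of the alphabetical order A-and-B over B-and-A (Equation 3)."""
+     alpha = seq_logprob(f"Next item: {word_a} and {word_b}")
+     nonalpha = seq_logprob(f"Next item: {word_b} and {word_a}")
+     return alpha - nonalpha
+
+ print(binomial_log_odds("radio", "television"))   # > 0 means the alphabetical order is preferred
+ ```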
79
+
80
+ # 2.3 Analysis
81
+
82
+ The data were analyzed using Bayesian linear regression models, implemented in brms (Bürkner, 2017) with weak, uninformative priors. For each model, the dependent variable was the log odds of the alphabetical form to the nonalphabetical form. The fixed effects were abstract ordering preference (represented as AbsPref below), observed preference (ObservedPref), overall frequency (Freq), an interaction between overall frequency and abstract ordering preference (Freq:AbsPref), and an interaction between overall frequency and observed preference (Freq:ObservedPref). The model equation is presented below:
83
+
84
+ $$
85
+ \begin{aligned} \operatorname{LogOdds}(A \text{ and } B) \sim {} & \text{AbsPref} + \text{ObservedPref} + \text{Freq} \\ & + \text{Freq:AbsPref} + \text{Freq:ObservedPref} \end{aligned} \tag{4}
86
+ $$
87
+
88
+ Frequency was logged and centered, and abstract ordering preference and observed preference were centered such that they ranged from -0.5 to 0.5 (instead of from 0 to 1). Note that since abstract ordering preference and observed preference are on the same scale, we can directly draw comparisons between the coefficient estimates for these fixed-effects in our regression model.
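+
+ For readers who prefer Python, a rough analogue of this brms specification can be written with bambi (a brms-like interface over PyMC); this is a sketch under assumed column and file names, with bambi's default weakly informative priors standing in for the priors used in the paper.
+
+ ```python
+ import numpy as np
+ import pandas as pd
+ import bambi as bmb
+
+ # Hypothetical file with one row per binomial: log_odds, abs_pref, observed_pref, freq.
+ df = pd.read_csv("binomial_logodds.csv")
+
+ # Centering as described above: log and center frequency; shift the two preference
+ # measures from the [0, 1] scale to the [-0.5, 0.5] scale.
+ df["freq"] = np.log(df["freq"])
+ df["freq"] -= df["freq"].mean()
+ df["abs_pref"] -= 0.5
+ df["observed_pref"] -= 0.5
+
+ model = bmb.Model(
+     "log_odds ~ abs_pref + observed_pref + freq + freq:abs_pref + freq:observed_pref",
+     df,
+ )
+ idata = model.fit(draws=2000)   # posterior samples; summarize e.g. with arviz.summary(idata)
+ ```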
89
+
90
+ # 3 Results
91
+
92
+ Our full model results are presented in the appendix (Table 1) and visualized in Figure 1. For each model, the figure shows the values for each of the coefficients from the model in Equation 4, representing how strongly each language model relies on observed preference, abstract ordering preference, overall frequency, the interaction between
93
+
94
+ abstract ordering preference and overall frequency, and the interaction between observed preference and overall frequency.
95
+
96
+ Our results are similar across all the large language models we tested. Specifically, we find no effect of abstract ordering preferences and no interaction effect between abstract ordering preference and overall frequency. We do find an effect of observed preference, suggesting that the models are mostly reproducing the ordering preferences found in their training data. We also find an interaction effect between observed preference and overall frequency, suggesting that the effect of observed preference is stronger for high-frequency items.
97
+
98
+ # 4 Conclusion
99
+
100
+ In the present study we examined the extent to which abstract ordering preferences and observed preferences drive binomial ordering preferences in large language models. We find that their ordering preferences are driven primarily by the observed preferences. Further, they rely more on observed preferences for higher frequency items than lower frequency items. Finally, they don't seem to be using abstract ordering preferences at all in their ordering of binomials.
101
+
102
+ Our results give us insight into the differences between humans and large language models with respect to the ways in which they trade off between abstract and observed preferences. For example, our dataset contains low-frequency binomials (e.g. alibis and excuses), including binomials that a college-age speaker would have heard only once in their life. Due to their low frequency, humans rely substantially on abstract ordering preferences to process these lower frequency items (Morgan and Levy, 2024). This is not the case, however, for large language models, which rely exclusively on observed preferences for these items. This is true even for the smallest models we tested, such as GPT-2. We conclude that, although large language models can produce human-like language, they accomplish this in a quantitatively different way than humans do: they rely on observed statistics from the input in at least some cases when humans would rely on abstract representations.
103
+
104
+ # 5 Limitations
105
+
106
+ There are a few important limitations in our study. The first limitation is that we don't know exactly how many times each of the large language models
107
+
108
+ ![](images/4a17c3557e8353807c8336cb9509f854f602d644c634f4ead4b2ecdc248ed887.jpg)
109
+ Figure 1: Results for each beta coefficient estimate from each model. Models are arranged from smallest to largest from left to right. The x-axis contains each coefficient and the y-axis contains the predicted beta coefficient of the respective model. Error bars indicate $95\%$ credible intervals.
110
+
111
+ has seen each binomial tested. We can approximate a binomial's frequency using corpus data, which gives us an indication of its frequency in a language model's training set, but it is possible that the large language models saw the binomials more often than we expect. Thus, the current study cannot differentiate between a model that has learned abstract ordering preferences but does not use them for binomials it has seen, and a model that simply has not learned abstract ordering preferences. There is, however, some hope with the recent development of open-access large language models, such as OLMo (Groeneveld et al., 2024), for which the training data is publicly available. In future work, we plan to examine the ordering preferences of novel binomials in the OLMo series of models to determine whether LLMs have learned abstract ordering preferences at all.
112
+
113
+ Additionally, the binomials tested here are only 3 words and relatively fixed in the sense that variations such as bread and also butter are not very common. Thus these are potentially easier for the large language models to memorize compared to longer or less-fixed strings, which could be tested in future work.
114
+
115
+ Further, while we examined language models of
116
+
117
+ various sizes and determined that the number of parameters does not seem to play a role in whether these models employ abstract ordering preferences for binomials, our analysis was not designed to investigate the effect of training set size.
118
+
119
+ Finally, our experiments deal only with binomials in English.
120
+
121
+ # References
122
+
123
+ Sarah Bunin Benor and Roger Levy. 2006. The chicken or the egg? a probabilistic analysis of english binomials. Language, pages 233-278.
124
+ Jean Berko. 1958. The child's learning of english morphology. Word, 14(2-3):150-177.
125
+ Paul-Christian Bürkner. 2017. brms: An r package for bayesian multilevel models using stan. Journal of statistical software, 80:1-28.
126
+ Dirk Groeneveld, Iz Beltagy, Pete Walsh, Akshitaa Bhagia, Rodney Kinney, Oyvind Tafjord, Ananya Harsh Jha, Hamish Ivison, Ian Magnusson, Yizhong Wang, et al. 2024. Olmo: Accelerating the science of language models. arXiv preprint arXiv:2402.00838.
127
+ Coleman Haley. 2020. This is a bert. now there are several of them. can they generalize to novel words? In Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 333-341.
128
+
129
+ Vsevolod Kapatsinski. 2018. Changing minds changing tools: From learning theory to language acquisition to language change. MIT Press.
130
+ Benjamin LeBrun, Alessandro Sordoni, and Timothy J O'Donnell. 2022. Evaluating distributional distortion in neural language modeling. arXiv preprint arXiv:2203.12788.
131
+ Yuri Lin, Jean-Baptiste Michel, Erez Aiden Lieberman, Jon Orwant, Will Brockman, and Slav Petrov. 2012. Syntactic annotations for the google books ngram corpus. In Proceedings of the ACL 2012 system demonstrations, pages 169-174.
132
+ R Thomas McCoy, Paul Smolensky, Tal Linzen, Jianfeng Gao, and Asli Celikyilmaz. 2023. How much do language models copy from their training data? evaluating linguistic novelty in text generation using raven. Transactions of the Association for Computational Linguistics, 11:652-670.
133
+ Kanishka Misra and Kyle Mahowald. 2024. Language models learn rare phenomena from less rare phenomena: The case of the missing aanns. arXiv preprint arXiv:2403.19827.
134
+ Emily Morgan and Roger Levy. 2015. Modeling idiosyncratic preferences: How generative knowledge and expression frequency jointly determine language structure. In CogSci. Citeseer.
135
+ Emily Morgan and Roger Levy. 2016a. Abstract knowledge versus direct experience in processing of binomial expressions. Cognition, 157:384-402.
136
+ Emily Morgan and Roger Levy. 2016b. Frequency-dependent regularization in iterated learning. In *The Evolution of Language: Proceedings of the 11th international conference (EVOLANG 2016)*.
137
+ Emily Morgan and Roger Levy. 2024. Productive knowledge and item-specific knowledge trade off as a function of frequency in multiword expression processing. Language, 100(4):e195-e224.
138
+ Dingyi Pan and Ben Bergen. 2025. Are explicit belief representations necessary? a comparison between large language models and Bayesian probabilistic models. In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 11483-11498, Albuquerque, New Mexico. Association for Computational Linguistics.
139
+ Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
140
+ Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller,
141
+
142
+ Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023. Llama 2: Open foundation and finetuned chat models.
143
+ Jason Wei, Dan Garrette, Tal Linzen, and Ellie Pavlick. 2021. Frequency effects on syntactic rule learning in transformers. arXiv preprint arXiv:2109.07020.
144
+ Qing Yao, Kanishka Misra, Leonie Weissweiler, and Kyle Mahowald. 2025. Both direct and indirect evidence contribute to dative alternation preferences in language models. arXiv preprint arXiv:2503.20850.
145
+
146
+ # A Full Model Results
147
+
148
+ <table><tr><td>GPT-2</td><td colspan="4"></td><td colspan="4">GPT-2XL</td></tr><tr><td></td><td>Est.</td><td>Err.</td><td>2.5</td><td>97.5</td><td>Est.</td><td>Err.</td><td>2.5</td><td>97.5</td></tr><tr><td>Intercept</td><td>-0.10</td><td>0.10</td><td>-0.30</td><td>0.10</td><td>0.05</td><td>0.09</td><td>-0.13</td><td>0.23</td></tr><tr><td>AbsPref</td><td>-0.52</td><td>0.64</td><td>-1.81</td><td>0.69</td><td>-0.89</td><td>0.63</td><td>-2.17</td><td>0.29</td></tr><tr><td>Observed</td><td>4.62</td><td>0.50</td><td>3.66</td><td>5.59</td><td>5.34</td><td>0.46</td><td>4.45</td><td>6.25</td></tr><tr><td>Freq</td><td>-0.04</td><td>0.06</td><td>-0.15</td><td>0.07</td><td>-0.01</td><td>0.05</td><td>-0.11</td><td>0.09</td></tr><tr><td>AbsPref:Freq</td><td>0.10</td><td>0.39</td><td>-0.66</td><td>0.86</td><td>-0.17</td><td>0.36</td><td>-0.87</td><td>0.53</td></tr><tr><td>Observed:Freq</td><td>0.96</td><td>0.24</td><td>0.49</td><td>1.43</td><td>1.01</td><td>0.21</td><td>0.59</td><td>1.43</td></tr><tr><td>Llama-2 7B</td><td colspan="4"></td><td colspan="4">Llama-2 13B</td></tr><tr><td></td><td>Est.</td><td>Err.</td><td>2.5</td><td>97.5</td><td>Est.</td><td>Err.</td><td>2.5</td><td>97.5</td></tr><tr><td>Intercept</td><td>0.22</td><td>0.13</td><td>-0.03</td><td>0.47</td><td>0.12</td><td>0.08</td><td>-0.04</td><td>0.27</td></tr><tr><td>AbsPref</td><td>1.11</td><td>0.84</td><td>-0.40</td><td>2.91</td><td>0.32</td><td>0.54</td><td>-0.72</td><td>1.38</td></tr><tr><td>Observed</td><td>3.07</td><td>0.64</td><td>1.81</td><td>4.31</td><td>5.25</td><td>0.40</td><td>4.46</td><td>6.05</td></tr><tr><td>Freq</td><td>0.04</td><td>0.07</td><td>-0.10</td><td>0.17</td><td>-0.08</td><td>0.04</td><td>-0.16</td><td>0.01</td></tr><tr><td>AbsPref:Freq</td><td>-0.32</td><td>0.47</td><td>-1.24</td><td>0.59</td><td>-0.02</td><td>0.32</td><td>-0.64</td><td>0.60</td></tr><tr><td>Observed:Freq</td><td>0.23</td><td>0.28</td><td>-0.33</td><td>0.78</td><td>0.72</td><td>0.19</td><td>0.34</td><td>1.09</td></tr><tr><td>Llama-3 8B</td><td colspan="4"></td><td colspan="4">Llama-3 70B</td></tr><tr><td></td><td>Est.</td><td>Err.</td><td>2.5</td><td>97.5</td><td>Est.</td><td>Err.</td><td>2.5</td><td>97.5</td></tr><tr><td>Intercept</td><td>0.15</td><td>0.09</td><td>-0.03</td><td>0.33</td><td>0.04</td><td>0.05</td><td>-0.06</td><td>0.14</td></tr><tr><td>AbsPref</td><td>0.23</td><td>0.59</td><td>-0.92</td><td>1.42</td><td>0.10</td><td>0.38</td><td>-0.63</td><td>0.85</td></tr><tr><td>Observed</td><td>5.64</td><td>0.46</td><td>4.75</td><td>6.54</td><td>5.00</td><td>0.27</td><td>4.49</td><td>5.52</td></tr><tr><td>Freq</td><td>-0.07</td><td>0.05</td><td>-0.17</td><td>0.03</td><td>-0.05</td><td>0.03</td><td>-0.11</td><td>0.00</td></tr><tr><td>AbsPref:Freq</td><td>0.07</td><td>0.36</td><td>-0.63</td><td>0.78</td><td>-0.11</td><td>0.21</td><td>-0.52</td><td>0.30</td></tr><tr><td>Observed:Freq</td><td>0.60</td><td>0.22</td><td>0.18</td><td>1.03</td><td>0.65</td><td>0.12</td><td>0.41</td><td>0.89</td></tr><tr><td>OLMo 1B</td><td colspan="4"></td><td colspan="4">OLMo 
7B</td></tr><tr><td></td><td>Est.</td><td>Err.</td><td>2.5</td><td>97.5</td><td>Est.</td><td>Err.</td><td>2.5</td><td>97.5</td></tr><tr><td>Intercept</td><td>0.06</td><td>0.08</td><td>-0.09</td><td>0.22</td><td>0.04</td><td>0.07</td><td>-0.10</td><td>0.18</td></tr><tr><td>AbsPref</td><td>0.69</td><td>0.54</td><td>-0.33</td><td>1.79</td><td>-0.86</td><td>0.51</td><td>-1.88</td><td>0.11</td></tr><tr><td>Observed</td><td>4.36</td><td>0.39</td><td>3.58</td><td>5.12</td><td>5.37</td><td>0.36</td><td>4.67</td><td>6.08</td></tr><tr><td>Freq</td><td>0.06</td><td>0.04</td><td>-0.02</td><td>0.14</td><td>0.01</td><td>0.04</td><td>-0.07</td><td>0.08</td></tr><tr><td>AbsPref:Freq</td><td>-0.12</td><td>0.31</td><td>-0.73</td><td>0.47</td><td>0.10</td><td>0.28</td><td>-0.47</td><td>0.64</td></tr><tr><td>Observed:Freq</td><td>0.81</td><td>0.19</td><td>0.44</td><td>1.17</td><td>0.70</td><td>0.17</td><td>0.37</td><td>1.04</td></tr></table>
149
+
150
+ Table 1: Model results for each language model. The estimate is given in the "Est." column, and the standard deviation of the posterior is given in the "Err." column. The columns labeled 2.5 and 97.5 represent the lower and upper boundaries of the 95% credible interval. AbsPref is the abstract ordering preference, Observed is the observed preference in corpus data, and Freq is the overall frequency of the binomial.
151
+
152
+ # B Quantization Issue
153
+
154
+ In addition to these results, we did find a meaningful effect of abstract ordering preferences for a quantized model of Llama-2 13B (https://huggingface.co/TheBloke/ Llama-2-13B-GPTQ). However, upon further inspection, the model's preferences did not match the preferences of the non-quantized model. For example, the quantized model's strongest preference was for schools and synagogues which had an estimated log odds of over 33. Further, the estimated log odds for error and trial was about 1. In other words, the model had a slight preference
155
+
156
+ for error and trial over trial and error, and had a strong preference for schools and synagogues over synagogues and schools. Upon inspecting the non-quantized model, we found that the original model showed different (but expected) preferences, with a strong preference for trial and error (log odds of -15) and no real preference for schools and synagogues (log odds of 1).
157
+
158
+ Further, in assessing the quality of the quantized model, text generation revealed poor performance. For example, given the prompt "Describe your dream house", the model returned this response:
159
+
160
+ <s> Tell me about your dream house. The
161
+
162
+ house I grew up in was on the edge of a forest. It was a big, old house with a big, old house with a big, old house with a big, old house with a big, old house with a big, old house with a big, old house with a big, old house with a big, old house with a big, old house with a big, old house with a big, old house with a big, old house with a big, old house with a big, old house with a big, old house with a big, old house with a big,old house with a big, old house with a big, old house with a big, old house with a big, old house with a big, old house with a big, old house with a big, old house with a big, old house with a big, old house with a big, old house with a big, old house with a big, old house with a big, old house with a big, old house with a big, old house with a big, old house with abig, old house with a big, old house with a big, old house with a big, old house with a big, old house with a big, old house with a big, old house with a big, old house with a big, old house with a big, old house with a big, old house with a big, old house with a big, old house with a big, old house with a big, old house with a big, old house with a big, old house
163
+
164
+ Given the output of the quantized model, we suspected an issue occurred during the quantization process, resulting in a poorly performing model. We thus decided to exclude the quantized model
165
+
166
+ and use the results for the non-quantized model.
ACL/2025/The Role of Abstract Representations and Observed Preferences in the Ordering of Binomials in Large Language Models/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:31eb54a1932380253dc5874f3e422e447073bc3856f19f89066bf7dd45c7d6d9
3
+ size 321143
ACL/2025/The Role of Abstract Representations and Observed Preferences in the Ordering of Binomials in Large Language Models/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4267305c558a2f0ca017afab93d9553abed8345483474e3837e92bfa5bcd02e7
3
+ size 161493
ACL/2025/TigerLLM - A Family of Bangla Large Language Models/17573e20-04bf-4561-9f01-9d5833ffb0bb_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:608ce6ab763bb14814e738220f943911d3a6aa49fedc3fa9e2a58e1db472a453
3
+ size 61044
ACL/2025/TigerLLM - A Family of Bangla Large Language Models/17573e20-04bf-4561-9f01-9d5833ffb0bb_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:548010b8920d197741625fe91447d0541103081c72bbe11bec5afd36a6b53fbf
3
+ size 81488
ACL/2025/TigerLLM - A Family of Bangla Large Language Models/17573e20-04bf-4561-9f01-9d5833ffb0bb_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:fac06370c47e89d98433325903f1d4e70f7e0a5c9d3a49f51d1b634eb3f7e73e
3
+ size 836041
ACL/2025/TigerLLM - A Family of Bangla Large Language Models/full.md ADDED
@@ -0,0 +1,287 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # TigerLLM - A Family of Bangla Large Language Models
2
+
3
+ Nishat Raihan
4
+
5
+ George Mason University
6
+
7
+ Fairfax, VA, USA
8
+
9
+ mraihan2@gmu.edu
10
+
11
+ Marcos Zampieri
12
+
13
+ George Mason University
14
+
15
+ Fairfax, VA, USA
16
+
17
+ mzampier@gmu.edu
18
+
19
+ # Abstract
20
+
21
+ The development of Large Language Models (LLMs) remains heavily skewed towards English and a few other high-resource languages. This linguistic disparity is particularly evident for Bangla - the $5^{th}$ most spoken language. A few initiatives have attempted to create open-source Bangla LLMs, but their performance remains behind that of high-resource languages and their results are difficult to reproduce. To address this gap, we introduce TigerLLM - a family of Bangla LLMs. Our results demonstrate that these models surpass all open-source alternatives and also outperform larger proprietary models like GPT-3.5 across standard benchmarks, establishing TigerLLM as the new baseline for future Bangla language modeling.
22
+
23
+ # 1 Introduction
24
+
25
+ LLMs have fundamentally transformed NLP by achieving exceptional performance across a broad range of tasks (Brown et al., 2020; Chowdhery et al., 2022; Raihan et al., 2025c). While these models exhibit unprecedented capabilities in language understanding, generation, reasoning, and specialized applications, their advancements predominantly benefit high-resource languages (Alam et al., 2024). This inequality is particularly noticeable for Bangla. Despite having about 237 million native speakers, $^{1}$ Bangla remains quite underserved in modern NLP advancements.
26
+
27
+ This under-representation stems primarily from the limited availability of high-quality training data. While proprietary models like GPT-4 (Brown et al., 2023) and Claude-3.5 (Bai et al., 2024) demonstrate reasonable Bangla capabilities, open-source alternatives consistently underperform. Recent multilingual models such as Gemma-2 (Gemma et al., 2024) and LLaMA 3.1 (Dubey et al., 2024), despite leveraging diverse training corpora and advanced tokenization systems like TikTokenizer
28
+
29
+ (Corso et al., 2024), also fail to deliver satisfactory performance for Bangla.
30
+
31
+ # 1.1 Limitations of Bangla LLM Initiatives
32
+
33
+ Training Recent attempts at developing Bangla LLMs (see Table 1) through continual pretraining (titu-Gemma) and model distillation (Zehady et al., 2024) have yielded weak, non-reproducible results (see Table 2), often performing worse than their base models. The absence of technical documentation and academic publications compounds this issue by making result reproduction impossible. Our investigation into these models' performance reveals the need for improvements in the training process. While the unavailability of pretraining corpora limits our analysis of that phase, the finetuning approach shows consistently problematic patterns.
34
+
35
+ Data Most Bangla LLM initiatives rely on translated versions of synthetic datasets like Alpaca-Instruct (Taori et al., 2023) and OpenOrca (Mitra et al., 2023), which are generated through model distillation (Hinton et al., 2015). This approach suffers from two fundamental limitations: (1) the datasets are generated by early GPT-3.5 (Brown et al., 2020) releases, a model with limited Bangla support, resulting in suboptimal instruction quality, and (2) these English datasets are translated to Bangla using machine translation systems like Google Translate with limited quality checks, further degrading the training data quality. These cascading compromises in training data ultimately result in poor model performance.
36
+
37
+ # 1.2 Contributions
38
+
39
+ To address the recurring challenges in Bangla LLM development, we introduce three fundamental contributions:
40
+
41
+ 1. The Bangla-TextBook corpus, comprising 10 million tokens of carefully curated educational content across multiple domains, prioritizing content quality over scale.
42
+
43
+ <table><tr><td></td><td>Base-LLM</td><td>Size</td><td>pt</td><td>corpora</td><td>ft</td><td>ft-dataset</td><td>Paper/Report?</td><td>Reproducibility?</td></tr><tr><td>titu-Gemma</td><td>Gemma-2</td><td>2B</td><td>4.4B</td><td>X</td><td>X</td><td>X</td><td>X</td><td>X</td></tr><tr><td>titu-LLaMA</td><td>LLaMA-3.1</td><td>3B</td><td>37B</td><td>X</td><td>X</td><td>X</td><td>X</td><td>X</td></tr><tr><td>Bangla-LLaMA</td><td>LLaMA-3.2</td><td>3B</td><td>✓</td><td>X</td><td>172K</td><td>Orca-translated</td><td>✓</td><td>X</td></tr><tr><td>G2B</td><td>Gemma-2</td><td>9B</td><td>X</td><td>X</td><td>145K</td><td>Alpaca-translated</td><td>X</td><td>X</td></tr><tr><td>Bangla-LLaMA</td><td>LLaMA-2</td><td>13B</td><td>✓</td><td>X</td><td>145K</td><td>Alpaca-translated</td><td>X</td><td>X</td></tr><tr><td>TigerLLM</td><td>LLaMA-3.2</td><td>1B</td><td>10M</td><td>Bangla-TextBook</td><td>100K</td><td>Bangla-Instruct</td><td>✓</td><td>✓</td></tr><tr><td>TigerLLM</td><td>Gemma-2</td><td>9B</td><td>10M</td><td>Bangla-TextBook</td><td>100K</td><td>Bangla-Instruct</td><td>✓</td><td>✓</td></tr></table>
44
+
45
+ Table 1: Comparative analysis of Bangla LLM initiatives and their methodological approaches. The pretraining $(pt)$ and finetuning $(ft)$ columns indicate corpus size in tokens and instruction count respectively.
46
+
47
48
+
49
+ 2. A high-quality Bangla-Instruct dataset of 100 thousand instruction-response pairs, generated through self-instruct (Wang et al., 2023) and model distillation using state-of-the-art teacher models (GPT-4o and Claude-3.5-Sonnet).
50
+ 3. The TigerLLM family (1B and 9B parameters), featuring models pretrained and finetuned on our high-quality datasets, achieving $30 - 55\%$ performance improvements over existing benchmarks.
51
+
52
+ All components are open-sourced to establish robust foundations for future Bangla language modeling research.<sup>2</sup>
53
+
54
+ # 2 Related Work
55
+
56
+ Early transformer-based encoder-only pre-trained language models such as BERT (Devlin et al., 2019) concentrate on high-resource languages like English. Subsequent work adapts them to mid- and low-resource contexts through continued pre-training and task-specific finetuning. In Bangla, for instance, Sami et al. (2022) present BANGLABERT, demonstrating that a dedicated monolingual encoder markedly improves downstream classification and QA relative to multilingual baselines.
57
+
58
+ The shift to decoder-only models has produced large multilingual models - e.g. BLOOM (Le Scao et al., 2022), LLAMA3 (Dubey et al., 2024), and AYA (Ustun et al., 2024)—that cover dozens of under-represented languages. Yet empirical analyses reveal that these models still perform best when prompted in high-resource languages, with significant degradation for languages such as Bangla or Swahili (Raihan et al., 2025a; Jin et al., 2024).
59
+
60
+ As discussed in the previous section, dedicated Bangla decoder models remain scarce and fragmented. GPT2-Bangla (Bhattacharjee et al., 2023) continues GPT-2 pre-training on a 4GB Bangla corpus, while Bong-LLAMA (Zehady et al., 2024) and the titu-Gemma<sup>3</sup> checkpoint attempt instruction tuning on translated datasets. These efforts often lack rigorous evaluation protocols, transparent data curation, or reproducible training pipelines—as reflected in the inconsistent results summarized in Table 1. Consequently, a clear methodological gap persists in developing open, reproducible decoder-only LLMs that natively support Bangla and other low-resource languages.
61
+
62
+ # 3 Bangla-TextBook Corpus
63
+
64
+ Previous Bangla LLMs rely predominantly on corpora sourced from OSCAR (Ortiz Suárez et al.) and Common Crawl (Bhattacharjee et al., 2022; Zehady et al., 2024), despite quality control challenges. While alternative Bangla corpora have emerged (Bhattacharyya et al., 2023), the absence of curated educational content remains a critical gap. This emphasis on data quality is particularly significant given recent findings by Gunasekar et al. (2023) and Raihan et al. (2025b), which demonstrate that LLMs achieve superior performance through high-quality training data, even with reduced volume.
65
+
66
+ To bridge this gap, we present the Bangla-TextBook corpus, constructed exclusively from high-quality open-source educational materials published by the National Curriculum and Textbook Board of Bangladesh. We collect texts from 163 textbooks for Grades 6-12, resulting in a total of 9,897,623 tokens and 697,903 sentences.
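
The paper does not spell out how these counts were produced; the following is a minimal sketch of how such corpus statistics could be computed, assuming the collected textbook texts are stored as plain UTF-8 files under a hypothetical `bangla-textbook/` directory, tokens are approximated by whitespace splitting, and sentences by splitting on the Bangla danda and terminal punctuation.

```python
import re
from pathlib import Path

def corpus_stats(corpus_dir: str) -> tuple[int, int]:
    """Return (token_count, sentence_count) for all .txt files under corpus_dir."""
    n_tokens = n_sentences = 0
    for path in Path(corpus_dir).rglob("*.txt"):
        text = path.read_text(encoding="utf-8")
        n_tokens += len(text.split())  # whitespace tokens (an approximation)
        # Sentence boundaries: Bangla danda (U+0964), '?', '!'
        n_sentences += sum(1 for s in re.split(r"[\u0964?!]", text) if s.strip())
    return n_tokens, n_sentences

if __name__ == "__main__":
    tokens, sentences = corpus_stats("bangla-textbook/")
    print(f"{tokens:,} tokens, {sentences:,} sentences")
```
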
67
+
68
+ ![](images/0e9b386ff12a7b64110a77498a994db981465d32f622bd56bc54c58dee0e590d.jpg)
69
+ Figure 1: The Bangla-Instruct generation pipeline. With 500 seed tasks, we employ a multi-step process using GPT-4o and Claude-3.5-Sonnet as teacher models to generate instruction-response pairs in Bangla.
70
+
71
+ # 4 Bangla-Instruct
72
+
73
+ To address the limitations described in Section 1.1, we introduce Bangla-Instruct, a collection of 100,000 native Bangla instruction-response pairs bootstrapped using self-instruct (Wang et al., 2023). While instruction datasets like Alpaca (Taori et al., 2023) and OpenOrca (Mitra et al., 2023) utilized GPT-3 and GPT-3.5 respectively, we significantly improve upon their approach by employing GPT-4o and Claude-3.5-Sonnet as our teacher models, leveraging their superior instruction-following capabilities.
74
+
75
+ Our dataset creation begins with 500 diverse seed tasks carefully curated by a team of 50 undergraduate and graduate students from leading Bangladeshi universities (Appendix A.1). These volunteers, spanning various academic disciplines and geographical regions of Bangladesh, ensure our seed tasks capture authentic linguistic patterns and cultural contexts. Each seed task undergoes multiple rounds of peer review to maintain quality and cultural sensitivity. Further information on quality control is presented in Appendix (Appendix A.3).
76
+
77
+ Our generation pipeline consists of four primary steps, each designed to maintain data quality and cultural authenticity (see Figure 1).
78
+
79
+ (1) Seed & Instruction Generation: We begin with a human-curated seed pool $\mathcal{T}_s = \{t_1,\dots ,t_{500}\}$ drawn from 50 volunteers representing five academic disciplines across Bangladesh (see Appendix A.1). At every generation round $i$, we sample $k = 8$ seed tasks and prompt Claude to create a candidate batch of instructions $\mathcal{I}_n$, expanding coverage of the ten seed categories $c_{1\dots 10}$ listed in Appendix A.2 while preserving authentic linguistic patterns.
82
+
83
+ (2) Task Typing: Each instruction $i \in \mathcal{I}_n$ is classified by GPT-4o into $\tau(i) \in \{\text{open-ended, classification, generation}\}$ , providing the expected answer style and the minimum-length threshold $l_{\min}(\tau)$ used in subsequent filtering.
84
+
85
+ (3) Response Drafting: Conditioned on $(i,\tau (i))$, Claude produces a comprehensive response $r_i$. We retain the highest-scoring draft according to an internal coherence metric $c(i,r)$.
86
+ (4) Multi-stage Filtering: GPT-4o applies the four-criteria filter $\mathcal{F}$ — Language $(\mathcal{L})$ , Cultural $(\mathcal{C})$ , Quality $(\mathcal{Q})$ , and Novelty $(\mathcal{N})$ (see Appendix A.3). On average, $\sim 63\%$ of $(i,r)$ pairs pass $\mathcal{F}$ , yielding a balanced complexity mix (40% basic, 40% intermediate, 20% advanced). Valid pairs are appended to $\mathcal{T}_s$ , and the loop continues until 100K high-quality instruction-response pairs are reached.
87
+
88
+ By coupling two complementary LLMs with strict verification and a human-seeded, domain-balanced task pool, our pipeline mitigates error propagation and preserves cultural nuance—addressing shortcomings observed in earlier Bengali instruction datasets (see Appendix A for full statistics).
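
To make the four-step loop concrete, here is a minimal sketch of the generation procedure. The teacher-model calls and scoring functions are passed in as callables because the exact prompts and API wrappers are not specified here; all names and defaults below are illustrative assumptions rather than the released implementation.

```python
import random
from typing import Callable, Iterable

def generate_bangla_instruct(
    seed_tasks: Iterable[str],
    generate_instructions: Callable[[list[str]], list[str]],   # teacher 1 (e.g. Claude)
    classify_task: Callable[[str], str],                        # teacher 2 (e.g. GPT-4o)
    draft_responses: Callable[[str, str], list[str]],
    coherence: Callable[[str, str], float],
    passes_filter: Callable[[str, str, str], bool],             # the four-criteria filter F
    target_size: int = 100_000,
    k: int = 8,
) -> list[dict]:
    """Sketch of the four-step loop: sample k seeds, generate candidate
    instructions, type each task, draft responses, keep the most coherent
    draft, and append accepted pairs back into the task pool."""
    pool = list(seed_tasks)
    dataset: list[dict] = []
    while len(dataset) < target_size:
        sampled = random.sample(pool, k)                         # step 1
        for instruction in generate_instructions(sampled):       # step 1
            task_type = classify_task(instruction)               # step 2
            drafts = draft_responses(instruction, task_type)     # step 3
            if not drafts:
                continue
            response = max(drafts, key=lambda r: coherence(instruction, r))
            if passes_filter(instruction, response, task_type):  # step 4
                dataset.append({"instruction": instruction, "response": response})
                pool.append(instruction)  # accepted pairs rejoin the pool
    return dataset
```
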
89
+
90
+ # 5 TigerLLM
91
+
92
+ As candidate base models, we consider 3 families of multilingual LLMs - LLaMA 3.2 (1B, 3B) (Dubey et al., 2024), Gemma-2 (2B, 9B) (Gemma et al., 2024) and Pangea (7B) (Yue et al., 2024).
93
+
94
+ ![](images/1668a568ab4563c9b2165c9bea6c8d10cd9f23a70a74b7fb7ab8eda9c4a326f1.jpg)
95
+ Figure 2 depicts the final selection of the models and a high-level overview of the process.
96
+ Figure 2: Evolution of TigerLLM.
97
+
98
+ Following the selection phase, we finalize two pretrained language models—LLaMA 3.2 (1B) and Gemma 2 (9B)—chosen for their robust foundational capabilities. These models then undergo continual pretraining (see Figure 3) on a specialized Bangla-TextBook corpus, which infuses them with a richer understanding of the Bangla language, including its context-specific nuances, stylistic variations, and domain-specific terminology.
99
+
100
+ Pretraining We utilize a computing cluster with 8 NVIDIA A100 GPUs (40GB each), 512GB RAM, and 2TB storage. The distributed training setup enables efficient parallel processing, completing the pretraining in approximately 120 hours on this high-performance configuration with gradient checkpointing enabled.
101
+
102
+ Continual Pretraining We continually pretrain the selected models on the Bangla-TextBook corpus so that they learn cultural and language-specific nuances and acquire sufficient, reliable knowledge from a set of high-quality texts. The pretraining phase was carried out multiple times with empirically chosen hyperparameters (see Appendix B.1).
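
As a rough illustration of this stage, the sketch below prepares the corpus for causal-LM continual pretraining by tokenizing the raw text and packing it into fixed-length blocks, which is the standard preprocessing for this kind of run. The block size, base checkpoint, and file layout are assumptions made for illustration; the hyperparameters actually used are listed in Appendix B.1.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

BLOCK_SIZE = 2048  # assumed context length for packing; not stated for this stage
BASE_MODEL = "meta-llama/Llama-3.2-1B"  # one of the two selected base models

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
raw = load_dataset("text", data_files={"train": "bangla-textbook/*.txt"})

def tokenize(batch):
    return tokenizer(batch["text"])

def group_texts(examples):
    # Concatenate all token ids, then split into fixed-size blocks so that every
    # training example is a full context window of textbook text.
    concatenated = {k: sum(examples[k], []) for k in examples.keys()}
    total = (len(concatenated["input_ids"]) // BLOCK_SIZE) * BLOCK_SIZE
    blocks = {
        k: [v[i : i + BLOCK_SIZE] for i in range(0, total, BLOCK_SIZE)]
        for k, v in concatenated.items()
    }
    blocks["labels"] = [ids.copy() for ids in blocks["input_ids"]]
    return blocks

tokenized = raw.map(tokenize, batched=True, remove_columns=["text"])
lm_dataset = tokenized.map(group_texts, batched=True)
print(lm_dataset)
```
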
103
+
104
+ ![](images/560d8464f5262f39f06b2b30865f8bf9fde32e3c0e23739e66943746be18bce7.jpg)
105
+ Figure 3: Continual Pretraining - Loss per Step.
106
+
107
+ Finetuning We conduct finetuning on a single NVIDIA A100 (40GB) through Google Colab<sup>4</sup>, supported by 80GB RAM and 256GB storage. The process completes in approximately 96 hours, proving sufficient for model adaptation and task-specific optimization with minimal computational overhead.
108
+
109
+ Model Distillation Following this continual pretraining step, the models are finetuned on the carefully curated Bangla-Instruct dataset (Figure 4). Instead of LoRA (Hu et al., 2021), we implement full finetuning for better learning, and we use Flash Attention (Dao et al., 2022) to speed up training. We set key parameters: a 2048-token maximum sequence length, a batch size of 8, 4 gradient accumulation steps, and 3 epochs. The learning rate $(5 \times 10^{-5})$, weight decay (0.02), and $10\%$ warm-up steps ensure stable convergence. Tables 4 and 5 in Appendix B list the complete hyperparameters.
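
A minimal sketch of this finetuning setup is shown below, assuming the Bangla-Instruct pairs have already been rendered into a tokenized causal-LM dataset (the prompt template is not specified here). The checkpoint id, output directory, and `train_dataset` argument are placeholders; the values mirror those quoted above, while Tables 4 and 5 give the final settings.

```python
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

def full_finetune(train_dataset, model_id="meta-llama/Llama-3.2-1B",
                  output_dir="tigerllm-1b-sft"):
    """Full-parameter finetuning (no LoRA) with Flash Attention 2."""
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,
        attn_implementation="flash_attention_2",  # requires flash-attn to be installed
    )
    args = TrainingArguments(
        output_dir=output_dir,
        per_device_train_batch_size=8,
        gradient_accumulation_steps=4,
        num_train_epochs=3,
        learning_rate=5e-5,
        weight_decay=0.02,
        warmup_ratio=0.10,
        bf16=True,
        logging_steps=50,
    )
    trainer = Trainer(
        model=model,
        args=args,
        train_dataset=train_dataset,  # tokenized Bangla-Instruct pairs (max 2048 tokens)
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()
    trainer.save_model(output_dir)
```

In practice this would be launched on the A100 node described above, with the prepared Bangla-Instruct dataset passed in as `train_dataset`.
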
110
+
111
+ ![](images/8820e03b567750ba8343ae72c1c32f2f1bd933d07743966ed20a7558037d428c.jpg)
112
+ Figure 4: Finetuning - Loss per Step.
113
+
114
+ By blending the foundational strengths of LLaMA and Gemma with specialized Bangla corpora and instruction-oriented finetuning, the final TigerLLM models emerge as optimized solutions capable of delivering high-quality, instruction-following responses tailored to Bangla-language tasks.
115
+
116
+ <table><tr><td></td><td>MMLU-bn</td><td>PangBench-bn</td><td>BanglaQuaD</td><td>mHumanEval-bn</td><td>BEnQA</td><td>BanglaRQA</td></tr><tr><td></td><td>understanding</td><td>multitasking</td><td>question answering</td><td>coding</td><td>knowledge</td><td>reasoning</td></tr><tr><td>GPT3.5</td><td>0.55</td><td>0.55</td><td>0.50</td><td>0.56</td><td>0.50</td><td>0.49</td></tr><tr><td>Gemini-Flash1.5</td><td>0.66</td><td>0.57</td><td>0.62</td><td>0.58</td><td>0.56</td><td>0.61</td></tr><tr><td>GPT4o-mini</td><td>0.67</td><td>0.62</td><td>0.65</td><td>0.56</td><td>0.60</td><td>0.60</td></tr><tr><td>LLaMA3.2 (11B)</td><td>0.22</td><td>0.19</td><td>0.21</td><td>0.15</td><td>0.18</td><td>0.20</td></tr><tr><td>Gemma 2 (27B)</td><td>0.35</td><td>0.51</td><td>0.43</td><td>0.64</td><td>0.50</td><td>0.56</td></tr><tr><td>Pangea (7B)</td><td>0.18</td><td>0.15</td><td>0.17</td><td>0.10</td><td>0.14</td><td>0.16</td></tr><tr><td>Titu-LLM</td><td>0.06</td><td>0.19</td><td>0.08</td><td>0.02</td><td>0.17</td><td>0.21</td></tr><tr><td>Bong-LLaMA</td><td>0.05</td><td>0.12</td><td>0.08</td><td>0.02</td><td>0.15</td><td>0.13</td></tr><tr><td>Bangla-LLaMA</td><td>0.02</td><td>0.08</td><td>0.05</td><td>0.10</td><td>0.11</td><td>0.09</td></tr><tr><td>Bangla-Gemma</td><td>0.18</td><td>0.15</td><td>0.12</td><td>0.10</td><td>0.22</td><td>0.19</td></tr><tr><td>TigerLLM (1B)</td><td>0.61</td><td>0.55</td><td>0.68</td><td>0.61</td><td>0.59</td><td>0.62</td></tr><tr><td>TigerLLM (9B)</td><td>0.72</td><td>0.68</td><td>0.70</td><td>0.63</td><td>0.65</td><td>0.68</td></tr></table>
117
+
118
+ Table 2: Performance comparison of TigerLLM with other models on various Bangla-specific benchmarks. All values are Pass@1 scores, where higher scores indicate better performance.
119
+
120
121
+
122
+ # 6 Evaluation
123
+
124
+ Bangla LLM Benchmarks Although there has been limited research on Bangla LLMs, several benchmarks have been established to assess their performance. We focus on five benchmarks specifically curated to evaluate Bangla LLMs across a diverse set of tasks. For multitask understanding, we use the Bangla subset of MMLU-Pro (Wang et al., 2024) and PangBench (Yue et al., 2024). For question answering, we consider BanglaQuaD (Rony et al., 2024), while for general knowledge, we use BEnQA (Shafayat et al., 2024). For reasoning tasks, we refer to BanglaRQA (Ekram et al., 2022).
125
+
126
+ As shown in the survey of Raihan et al. (2024), most coding benchmarks like HumanEval (Chen et al., 2021) do not support Bangla, so we utilize the Bangla subset of mHumanEval (Raihan et al., 2025a).
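
The scoring protocol is not detailed in this excerpt; with a single generation per problem, Pass@1 reduces to the fraction of problems whose one answer is judged correct (unit tests for mHumanEval-bn, answer matching for the QA-style benchmarks). A trivial sketch:

```python
def pass_at_1(is_correct: list[bool]) -> float:
    """Pass@1 with one sample per problem: the share of problems solved on the
    first (and only) attempt."""
    return sum(is_correct) / len(is_correct) if is_correct else 0.0

# e.g. 3 of 4 problems solved on the first attempt -> 0.75
assert pass_at_1([True, True, False, True]) == 0.75
```
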
127
+
128
+ Results We present the results obtained by the two TigerLLM models compared to a variety of strong LLM baselines in Table 2. The performance comparison on Bangla-specific benchmarks reveals a common trend: existing finetuned Bangla models generally perform worse than their base counterparts across most tasks, and the results reported by their authors are not reproducible, as mentioned in Section 1.1. TigerLLM is the only finetuned model that consistently outperforms both its base model and the other finetuned alternatives across all tasks. Even the 1B variant does better than most models, falling short only of its 9B counterpart, further validating our emphasis on high-quality data (Section 4).
131
+
132
+ Takeaways TigerLLM demonstrates that carefully curated, high-quality datasets can yield superior performance even with smaller model sizes. Our results show that the 1B parameter model outperforms larger alternatives across multiple benchmarks, emphasizing the importance of data quality over quantity. The success of our Bangla-TextBook corpus and Bangla-Instruct dataset establishes a new paradigm for low-resource language model development.
133
+
134
+ # 7 Conclusion and Future Work
135
+
136
+ This paper introduces TigerLLM, a family of state-of-the-art Bangla language models that outperforms existing alternatives across six benchmarks. TigerLLM's success stems from two key innovations: (1) the high-quality Bangla-TextBook corpus derived from educational materials and (2) the carefully curated Bangla-Instruct dataset generated using advanced teacher models.
137
+
138
+ The three resources introduced here (corpus, instruction dataset, and models) establish a robust foundation for future Bangla language modeling research. Together, they will contribute to speeding up advances in Bangla language modeling.
139
+
140
+ In future work we will conduct a deeper qualitative analysis of the model's behavior, broaden the corpus to cover a wider array of domains, scale the model to larger parameter counts without compromising quality, and devise richer evaluation metrics tailored specifically to Bangla tasks.
141
+
142
+ # Limitations
143
+
144
+ While TigerLLM delivers state-of-the-art performance, several limitations warrant acknowledgment. First, our Bangla-TextBook corpus, though carefully curated, is limited to educational materials from grades 6-12, potentially missing broader linguistic patterns present in other domains. The 10 million token size, while sufficient for our current models, may constrain scaling to larger architectures. Additionally, our Bangla-Instruct dataset, despite its quality-focused generation process, covers only a subset of possible instruction types and may not fully capture the complexity of real-world Bangla language use cases.
145
+
146
+ Furthermore, our models are currently limited to 1B and 9B parameters, primarily due to computational constraints and our emphasis on thorough experimentation with smaller computationally efficient architectures. While this approach enabled rapid iteration and quality-focused development, it may not fully exploit the potential benefits of larger model scales.
147
+
148
+ # Ethical Considerations
149
+
150
+ Our work prioritizes ethical considerations throughout the development process. The Bangla-TextBook corpus uses open-source publicly available educational materials from the National Curriculum and Textbook Board of Bangladesh. The volunteer-driven seed task creation process incorporated diverse perspectives while maintaining cultural sensitivity and avoiding harmful biases.
151
+
152
+ We implemented rigorous filtering mechanisms to ensure cultural appropriateness, gender neutrality, and religious sensitivity in our instruction dataset. The multi-stage review process, involving both automated checks and human verification, helps prevent the propagation of harmful stereotypes or biases. Additionally, our open-source approach promotes transparency and enables community oversight of model behavior.
153
+
154
+ We strongly recommend that users implement appropriate safeguards when deploying TigerLLM in production environments, particularly for applications involving sensitive information or critical decision-making.
155
+
156
+ # References
157
+
158
+ Firoj Alam, Shammur Absar Chowdhury, Sabri Boughorbel, and Maram Hasanain. 2024. Llms for
159
+
160
+ low resource languages in multilingual, multimodal and dialectal settings. In Proceedings of EACL.
161
+ Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Tyler Conerly, et al. 2024. Claude 3.5 sonnet technical report.
162
+ Abhik Bhattacharjee, Tahmid Hasan, Wasi Ahmad, Kazi Samin Mubasshir, and Md Saiful Islam. 2022. BanglaBERT: Language model pretraining and benchmarks for low-resource language understanding evaluation in Bangla. In Findings of the ACL (NAACL-2022).
163
+ Abhik Bhattacharjee, Tahmid Hasan, and Md Saiful Islam. 2023. Banglagpt: A gpt-2 language model continued pre-training for bangla.
164
+ Pramit Bhattacharyya, Joydeep Mondal, Subhadip Maji, and Arnab Bhattacharya. 2023. Vacaspati: A diverse corpus of bangla literature. In Proceedings of the 13th International Joint Conference on Natural Language Processing.
165
+ Tom Brown, Ben Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, et al. 2023. Gpt-4 technical report.
166
+ Tom Brown, Benjamin Mann, Nick Ryder, et al. 2020. Language models are few-shot learners. In Proceedings of NeurIPS.
167
+ Mark Chen, Jerry Tworek, Heewoo Jun, et al. 2021. Evaluating large language models trained on code. arXiv preprint, arXiv:2107.03374.
168
+ Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, et al. 2022. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311.
169
+ Francesco Corso, Francesco Pierri, and Gianmarco De Francisci Morales. 2024. What we can learn from tiktok through its research api. In Proceedings of WebSci.
170
+ Tri Dao, Daniel Y. Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. 2022. FlashAttention: Fast and memory-efficient exact attention with IO-awareness. In Proceedings of NeurIPS.
171
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL.
172
+ Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, et al. 2024. The Llama 3 herd of models. arXiv preprint arXiv:2407.21783.
173
+ Syed Mohammed Sartaj Ekram, Adham Arik Rahman, Md Sajid Altaf, Mohammed Saidul Islam, Mehrab Mustafy Rahman, Md Mezbaur Rahman, Md Azam Hossain, and Abu Raihan Mostofa Kamal. 2022. Banglarqa: A benchmark dataset for under-resourced bangla language reading comprehension-based question answering with diverse question-answer types. In Findings of the ACL (EMNLP-2022).
174
+
175
+ Team Gemma, Morgane Riviere, Shreya Pathak, et al. 2024. Gemma 2: Improving open language models at a practical size. arXiv preprint arXiv:2408.00118.
176
+ Suriya Gunasekar, Yi Zhang, Jyoti Aneja, et al. 2023. Textbooks are all you need. arXiv preprint arXiv:2306.11644.
177
+ Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. In NIPS Deep Learning and Representation Learning Workshop.
178
+ Edward J Hu, Phillip Wallis, Zeyuan Allen-Zhu, Yanzhi Li, et al. 2021. Lora: Low-rank adaptation of large language models. In Proceedings of ICLR.
179
+ Yiqiao Jin, Mohit Chandra, Gaurav Verma, Yibo Hu, Munmun De Choudhury, and Srijan Kumar. 2024. Better to ask in english: Cross-lingual evaluation of large language models for healthcare queries. In Proceedings of the ACM Web Conference 2024.
180
+ Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, et al. 2022. BLOOM: A 176b-parameter open-access multilingual language model. arXiv preprint arXiv:2211.05100.
181
+ Arindam Mitra, Luciano Del Corro, Shweti Mahajan, et al. 2023. Orca 2: Teaching small language models how to reason. arXiv preprint arXiv:2311.11045.
182
+ Pedro Javier Ortiz Suárez, Laurent Romary, and Benoit Sagot. A monolingual approach to contextualized word embeddings for mid-resource languages. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics.
183
+ Nishat Raihan, Antonios Anastasopoulos, and Marcos Zampieri. 2025a. mHumanEval - a multilingual benchmark to evaluate large language models for code generation. In Proceedings of NAACL.
184
+ Nishat Raihan, Christian Newman, and Marcos Zampieri. 2024. Code llms: A taxonomy-based survey. In Proceedings of IEEE BigData.
185
+ Nishat Raihan, Joanna C. S. Santos, and Marcos Zampieri. 2025b. MojoBench: Language modeling and benchmarks for proto. In Findings of the ACL (NAACL-2025).
186
+ Nishat Raihan, Mohammed Latif Siddiq, Joanna CS Santos, and Marcos Zampieri. 2025c. Large language models in computer science education: A systematic literature review. In Proceedings of SIGCSE.
187
+ Md. Rashad Al Hasan Rony, Sudipto Kumar Shaha, Rakib Al Hasan, Sumon Kanti Dey, Amzad Hossein Rafi, Ashraf Hasan Sirajee, and Jens Lehmann. 2024. Banglaquad: A bengali open-domain question answering dataset.
188
+
189
+ Abdullah As Sami, Nusrat Jahan Prottasha, Mohammad Shamsul Arefin, Pranab Kumar Dhar, and Takeshi Koshiba. 2022. Bangla-bert: transformer-based efficient model for transfer learning and language understanding. IEEE Access.
190
+ Sheikh Shafayat, H M Quamran Hasan, Minhajur Rahman Chowdhury Mahim, Rifki Afina Putri, James Thorne, and Alice Oh. 2024. Benqa: A question answering and reasoning benchmark for bengali and english.
191
+ Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, and Yann Dubois. 2023. Alpaca: A strong, replicable instruction-following model. Stanford Center for Research on Foundation Models.
192
+ Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, and Noah A Smith. 2023. Self-instruct: Aligning language models with self-generated instructions. In Proceedings of ACL.
193
+ Yubo Wang, Xueguang Ma, Ge Zhang, Yuansheng Ni, Abhranil Chandra, et al. 2024. Mmlu-pro: A more robust and challenging multi-task language understanding benchmark. arXiv preprint arXiv:2406.01574.
194
+ Xiang Yue, Yueqi Song, Akari Asai, Seungone Kim, et al. 2024. Pangea: A fully open multilingual multimodal llm for 39 languages. arXiv preprint arXiv:2410.16153.
195
+ Abdullah Khan Zehady, Safi Al Mamun, Naymul Islam, and Santu Karmaker. 2024. Bongllama: Llama for bangla language. arXiv preprint arXiv:2410.21200.
196
+ Ahmet Üstün, Viraat Aryabumi, Zheng-Xin Yong, et al. 2024. Aya model: An instruction finetuned open-access multilingual language model. arXiv preprint arXiv:2402.07827.
197
+
198
+ # A Bangla-Instruct Curation
199
+
200
+ # A.1 Volunteer Information
201
+
202
+ The seed tasks were created by 50 undergraduate and graduate students from various universities across Bangladesh, ensuring geographical and academic diversity:
203
+
204
+ - 15 students from Computer Science and Engineering.
205
+ - 10 students from Bengali Literature.
206
+ - 10 students from Business Administration.
207
+ - 8 students from Science and Engineering.
208
+ - 7 students from Social Sciences.
209
+
210
+ Each volunteer contributed 10 diverse instructions, resulting in our initial pool of 500 seed tasks. The distribution ensured coverage across multiple domains while preserving authentic Bengali linguistic patterns and cultural contexts.
211
+
212
+ # A.2 The Seed Dataset
213
+
214
+ Our seed dataset comprises 10 distinct categories, carefully chosen to cover a broad spectrum of tasks relevant to Bengali language and culture:
215
+
216
+ 1. Cultural Knowledge and Heritage $(c_{1})$ : Tasks focusing on Bengali traditions, festivals, folk tales, and historical events. These include explaining cultural practices, describing traditional ceremonies, and discussing historical significance of various customs.
217
+ 2. Academic Writing $(c_{2})$ : Structured writing tasks ranging from essay outlines to full academic compositions. Topics cover various academic disciplines while maintaining Bengali writing conventions and scholarly standards.
218
+ 3. Mathematical Problem Solving $(c_{3})$ : Tasks involving mathematical concepts explained in Bengali, including algebra, geometry, and arithmetic. Special attention is given to Bengali mathematical terminology and local problem-solving contexts.
219
+ 4. Programming and Technical $(c_{4})$ : Programming problems described in Bengali with solutions in standard programming languages. Includes algorithm explanation, code documentation, and technical concept elaboration in Bengali.
220
+ 5. Creative Writing $(c_{5})$ : Open-ended creative tasks including story writing, poetry composition, and descriptive passages. Emphasizes Bengali literary devices, metaphors, and cultural storytelling elements.
221
+
222
223
+
224
+ 6. Scientific Explanation $(c_{6})$ : Tasks requiring clear explanation of scientific concepts in Bengali, focusing on making complex ideas accessible while maintaining technical accuracy. Covers physics, chemistry, biology, and environmental science.
225
+ 7. Business and Economics $(c_{7})$ : Professional writing tasks including business case analyses, market reports, and economic concept explanations. Incorporates local business contexts and Bengali business terminology.
226
+ 8. Social Issues Analysis $(c_{8})$ : Critical analysis tasks addressing contemporary social issues in Bangladesh and Bengali society. Includes problem identification, cause analysis, and solution proposition.
227
+ 9. Data Analysis and Statistics $(c_{9})$ : Tasks involving interpretation and analysis of data presented in Bengali, including statistical concepts explanation, data visualization description, and numerical analysis.
228
+ 10. Language and Translation $(c_{10})$ : Tasks focused on Bengali language mastery, including idiom explanation, translation between Bengali and English, and linguistic analysis of Bengali texts.
229
+
230
+ Each category accounts for approximately $10\%$ of the seed dataset ( $50 \pm 5$ tasks per category), ensuring balanced representation across domains. The tasks within each category vary in complexity level: $40\%$ basic, $40\%$ intermediate, and $20\%$ advanced, based on linguistic complexity and cognitive demand.
231
+
232
+ # A.3 Filtering Methodology
233
+
234
+ Our filtering process $\mathcal{F}:(\mathcal{I},\mathcal{R})\to \{0,1\}$ implements the following criteria:
235
+
236
+ # 1. Language Adherence $(\mathcal{L})$
237
+
238
+ - Bengali Word Ratio: $\frac{|\text{Bengali Words}|}{|\text{Total Words}|} \geq 0.95$
239
+ - Unicode Consistency: $\forall c \in \text{text}, c \in \text{Bengali-UTF8}$
240
+ - Grammar Check: Using GPT-4o's Bengali grammar scoring function $g(x) \geq 0.8$
241
+
242
+ # 2. Cultural Sensitivity $(\mathcal{C})$
243
+
244
+ - Religious Neutrality: $r(x) \in [-0.1, 0.1]$ on our bias scale
245
+ - Regional Inclusivity: No specific region/dialect preference
246
+ - Gender Representation: Balanced pronouns and roles
247
+ - Political Neutrality: Avoidance of partisan content
248
+
249
+ # 3. Content Quality $(\mathcal{Q})$
250
+
251
+ - Minimum Length: $l(x) \geq l_{\min}(\tau)$ where $\tau$ is task type
252
+ - Coherence Score: $c(i, r) \geq 0.8$ between instruction $i$ and response $r$
253
+ - Factual Accuracy: Verified against Bengali Wikipedia
254
+ - Format Adherence: Proper paragraph breaks, lists, or code blocks
255
+
256
+ # 4. Novelty Verification $(\mathcal{N})$
257
+
258
+ - Similarity Threshold: $\forall j \in \mathcal{D}, \operatorname{sim}(i, j) \leq 0.7$
259
+ - Lexical Diversity: Minimum Token Ratio of 0.4
260
+ - Response Uniqueness: No duplicate responses within same category
261
+ - Task Format Variation: Ensure uniform distribution across formats
262
+
263
+ A pair $(i,r)$ is accepted if and only if:
264
+
265
+ $$
266
+ \mathcal {F} (i, r) = \mathbb {1} [ \mathcal {L} (i, r) \wedge \mathcal {C} (i, r) \wedge \mathcal {Q} (i, r) \wedge \mathcal {N} (i, r) ] = 1
267
+ $$
268
+
269
+ This rigorous filtering ensures the quality and diversity of our final dataset while maintaining Bengali linguistic and cultural authenticity.
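
A compact sketch of the acceptance rule is given below. The individual scorers (grammar, bias, coherence, similarity, minimum length) are placeholders for the GPT-4o-based checks described above and would need to be supplied; only the Bengali-script word ratio is implemented concretely, using the Bengali Unicode block as an approximation. It covers the main numeric thresholds rather than every listed sub-criterion.

```python
from typing import Callable

def bengali_word_ratio(text: str) -> float:
    """Share of whitespace tokens containing at least one Bengali-block character."""
    words = text.split()
    bengali = [w for w in words if any("\u0980" <= ch <= "\u09ff" for ch in w)]
    return len(bengali) / len(words) if words else 0.0

def accept_pair(
    instruction: str,
    response: str,
    task_type: str,
    existing_instructions: list[str],
    *,
    grammar_score: Callable[[str], float],      # g(x), teacher-model grammar check
    bias_score: Callable[[str], float],         # r(x), religious/political bias
    coherence: Callable[[str, str], float],     # c(i, r)
    similarity: Callable[[str, str], float],    # sim(i, j)
    min_length: Callable[[str], int],           # l_min(tau)
) -> bool:
    """F(i, r) = 1[L and C and Q and N], using the thresholds stated above."""
    text = f"{instruction} {response}"
    L = bengali_word_ratio(text) >= 0.95 and grammar_score(text) >= 0.8
    C = -0.1 <= bias_score(text) <= 0.1
    Q = (len(response.split()) >= min_length(task_type)
         and coherence(instruction, response) >= 0.8)
    N = all(similarity(instruction, j) <= 0.7 for j in existing_instructions)
    return L and C and Q and N
```
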
270
+
271
+ # B Experimentation Details
272
+
273
+ # B.1 Pretraining Hyperparameters
274
+
275
+ <table><tr><td>Hyperparameter</td><td>Value</td></tr><tr><td>Per device train batch size</td><td>64</td></tr><tr><td>Gradient accumulation steps</td><td>16</td></tr><tr><td>Number of training epochs</td><td>4</td></tr><tr><td>Learning rate</td><td>5 × 10<sup>-6</sup></td></tr><tr><td>FP16</td><td>False</td></tr><tr><td>BF16</td><td>True</td></tr><tr><td>Dataloader num workers</td><td>8</td></tr><tr><td>Gradient checkpointing</td><td>True</td></tr><tr><td>Logging steps</td><td>1,000</td></tr><tr><td>DDP find unused parameters</td><td>False</td></tr><tr><td>Max gradient norm</td><td>1.0</td></tr><tr><td>Warmup steps</td><td>1,000</td></tr><tr><td>Evaluation strategy</td><td>steps</td></tr><tr><td>Evaluation steps</td><td>1,000</td></tr><tr><td>Save strategy</td><td>steps</td></tr><tr><td>Save steps</td><td>1,000</td></tr><tr><td>Save total limit</td><td>3</td></tr><tr><td>Load best model at end</td><td>True</td></tr><tr><td>Metric for best model</td><td>loss</td></tr><tr><td>Greater is better</td><td>False</td></tr></table>
276
+
277
+ Table 3: Final set of hyperparameters, chosen empirically after several iterations of trial and error, for pretraining on the Bangla-TextBook corpus.
278
+
279
+ # B.2 Finetuning Hyperparameters
280
+
281
+ <table><tr><td>Parameter</td><td>Value</td></tr><tr><td>Max Sequence Length</td><td>2048</td></tr><tr><td>Batch Size (Train/Eval)</td><td>16</td></tr><tr><td>Gradient Accumulation Steps</td><td>4</td></tr><tr><td>Number of Epochs</td><td>3</td></tr><tr><td>Learning Rate</td><td>1e-5</td></tr><tr><td>Weight Decay</td><td>0.02</td></tr><tr><td>Warmup Steps</td><td>10%</td></tr><tr><td>Optimizer</td><td>AdamW (8-bit)</td></tr><tr><td>LR Scheduler</td><td>Cosine</td></tr><tr><td>Precision</td><td>BF16</td></tr><tr><td>Evaluation Strategy</td><td>Steps</td></tr><tr><td>Evaluation Steps</td><td>50</td></tr><tr><td>Save Strategy</td><td>Steps</td></tr><tr><td>Save Steps</td><td>Varies</td></tr><tr><td>Seed</td><td>42</td></tr></table>
282
+
283
+ Table 4: Final set of hyperparameters, chosen empirically after several iterations of trial and error, for finetuning TigerLLM (1B).
284
+
285
+ <table><tr><td>Parameter</td><td>Value</td></tr><tr><td>Max Sequence Length</td><td>2048</td></tr><tr><td>Batch Size (Train/Eval)</td><td>32</td></tr><tr><td>Gradient Accumulation Steps</td><td>8</td></tr><tr><td>Number of Epochs</td><td>3</td></tr><tr><td>Learning Rate</td><td>1e-6</td></tr><tr><td>Weight Decay</td><td>0.04</td></tr><tr><td>Warmup Steps</td><td>15%</td></tr><tr><td>Optimizer</td><td>AdamW (8-bit)</td></tr><tr><td>LR Scheduler</td><td>Cosine</td></tr><tr><td>Precision</td><td>BF16</td></tr><tr><td>Evaluation Strategy</td><td>Steps</td></tr><tr><td>Evaluation Steps</td><td>250</td></tr><tr><td>Save Strategy</td><td>Steps</td></tr><tr><td>Save Steps</td><td>Varies</td></tr><tr><td>Seed</td><td>42</td></tr></table>
286
+
287
+ Table 5: Final set of hyperparameters, chosen empirically after several iterations of trial and error, for finetuning TigerLLM (9B).
ACL/2025/TigerLLM - A Family of Bangla Large Language Models/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5b0418ed8e5e3fbc97395ffc96cefb1f8ad76d73c2e4b6f7666ac08f4fe66052
3
+ size 409899
ACL/2025/TigerLLM - A Family of Bangla Large Language Models/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:521fc7892c2e4b5c650b6c1b75f4c03ff7c81e430e540b6940c3ed7a2257b5c8
3
+ size 335069
ACL/2025/Towards Geo-Culturally Grounded LLM Generations/d3edfe9c-9bbe-4d2a-ae91-434e7c94ee08_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1e9a975a307162437e18fa5e031a04f0c06b8648b70c5c19f6421676b38855e9
3
+ size 147122
ACL/2025/Towards Geo-Culturally Grounded LLM Generations/d3edfe9c-9bbe-4d2a-ae91-434e7c94ee08_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:edf63ba3392d9591d4965ad75a9f01af240a82961aa88217500e494306523d0e
3
+ size 178271