a fixed set of pruning factors. As these computations are independent, they can be performed in parallel. The GA initializes and mutates chromosomes by selecting from this precomputed set. During fitness evaluation, each chromosome is used to instantiate a separate LLM with the corresponding matrices, enabling parallel evaluation of candidate solutions.

5 Evaluation

We evaluate LLM-Sieve on three models of varying sizes—Phi-3-mini (3.8B), LLaMA-3.1 (8B), and LLaMA-3.1 (70B)—across three tasks: i) Generic RAG and ii) Medical RAG, both of which involve answering a question using provided context passages, and iii) Sentiment Analysis, which involves classifying a given text as having either positive or negative sentiment. In all cases, the model's predicted output is compared against a reference answer for evaluation.

Datasets. For each task we use two different well-known public datasets. For Generic RAG, we use HotpotQA [21] (Gen RAG-I) and NaturalQA [22] (Gen RAG-II). For Medical RAG, we use PubMedQA [23] (Med RAG-I) and MedMCQA [24] (Med RAG-II). For Sentiment Analysis we use IMDB [25] (Sentiment-I) and SST2 [26] (Sentiment-II).

Performance – Accuracy Measurement. We use GPT-4o-as-a-judge [27] to measure one-shot task accuracy, since it is known to be more robust than commonly used metrics such as Exact Match, Quasi-Exact Match, and F1 [28]. To evaluate each prompt, GPT-4o is instructed in the query to judge each answer as either "correct" or "incorrect" by comparing the response to the ground truth. Accuracy is measured as:

\[ \text{Accuracy} = \frac{\text{Number of correct answers}}{\text{Number of prompts}} \]

Figure 5: Pruning performance of LLM-Sieve for different models and tasks. On the right, the layer-wise fraction of parameters retained for LLaMA-3.1-70B on Sentiment-I shows a clear pattern.

Figure 6: The fraction of parameters retained in each matrix of various models (more details in A.4).

Calibration Datasets.
For each task, calibration data was generated by randomly sampling prompts to a total of 200K tokens from Gen RAG-I, Med RAG-I, and Sentiment-I. We use only 200K tokens because our sensitivity analyses indicated that larger calibration datasets provide no additional benefit (see Appendix A.1). Note that we do not use Gen RAG-II, Med RAG-II, or Sentiment-II as calibration datasets, as our goal is to use them to evaluate cross-dataset generalization performance.

State-of-the-Art Comparisons. We compare LLM-Sieve-GA (differentiated pruning using a Genetic Algorithm) and LLM-Sieve-UP (uniform pruning using binary search) against the three best-performing techniques we found. LASER [3] employs Singular Value Decomposition (SVD) to derive low-rank approximations of weights, while SliceGPT [2] applies low-rank projections to input representations. LLM-Pruner [9], in contrast, uses gradient-based structural pruning to remove less important parameter groups. Both LLM-Pruner and SliceGPT use calibration data, and we replace their default datasets with our own task datasets.

Platform Setup. LLM-Sieve is implemented on vLLM [29] for improved memory efficiency and scalable inference. Experiments run on 96 A100 GPUs across 12 VMs. To scale pruning, we enable pipeline parallelism but disable tensor parallelism, as the latter requires additional synchronization when collecting the input-output pairs of each matrix.

https://arxiv.org/abs/2505.18350v1

5.1 How many parameters can LLM-Sieve remove?

Figure 5 reports the percentage reduction in model size across three models and three datasets, for two tolerance thresholds ϵ = 1% (orange/red) and ϵ = 5% (blue). The baseline accuracy of the original uncompressed model is noted in parentheses. Since the performance of the Genetic Algorithm (GA)-based pruning is strictly superior to uniform pruning, the figure stacks their contributions to highlight the differential gain. The total reduction can be substantial—ranging from 20% to 75% depending on the model size, the narrowness of the task (based on its input/output space and reasoning complexity), and the tolerated performance degradation. Larger models and narrower tasks (e.g., sentiment analysis) tend to admit more aggressive pruning. For instance, the LLaMA-3.1-70B model on the Sentiment-I task—which only requires a binary true/false output—can be pruned by up to 75%.

How Much Does Differentiated Pruning Help—and Why? Figure 5 shows that differentiated pruning consistently removes an additional 10–50% of the total parameters beyond what uniform pruning achieves, as indicated by the dominant contribution of the dark red/blue regions. In the case of LLaMA-3.1-70B on Sentiment-I, the majority of the 75% reduction is due to differentiated pruning, while uniform pruning contributes <5%. To understand this disparity, Figure 5 also shows the fraction of parameters retained per layer. Strikingly, layers 41–79 (the latter half of the model) retain less than 10% of their parameters on average, whereas layers 1–40 preserve over 30%. Some layers, including the first and last, retain as much as 70–80%, illustrating the uneven importance of different layers.

Bottleneck Matrices. Figure 6 (more details in Appendix A.4) presents a layer-by-layer and matrix-by-matrix heatmap of parameter retention for each matrix type for Sentiment-I across all three models, as well as for LLaMA-3.1-70B on Med RAG-I.
Dark blue cells denote matrices that were retained almost entirely—these are the so-called bottleneck matrices. In LLaMA-3.1-70B for Sentiment-I and Med RAG-I, several matrices such as WQKV in the first and last few attention layers resist pruning. The presence of bottleneck matrices limits the effectiveness of uniform pruning, as it applies global thresholds indiscriminately across all matrices. In contrast, differentiated pruning can circumvent the bottlenecks by effectively "pruning around" them.

Figure 7: LLM-Sieve compared to state-of-the-art pruning techniques.

Figure 8: Effect of LoRA fine-tuning for LLaMA-3.1-8B.

Figure 9: Effect of LoRA fine-tuning on unseen datasets for LLaMA-3.1-8B.

5.2 How does LLM-Sieve compare to the state-of-the-art?

As shown in Figure 7, both LLM-Sieve-GA and LLM-Sieve-UP significantly outperform the three state-of-the-art baselines—LASER, SliceGPT, and LLM-Pruner—all of which prune fewer than 5% of parameters. The strong performance of LLM-Sieve-UP suggests that jointly considering both the input and weight matrix subspaces enables the discovery of more effective task-specific projections. Furthermore, differentiated pruning via the Genetic Algorithm (GA) yields even greater parameter reduction by selectively pruning matrices that are less critical to downstream task performance.

5.3 Does Pruning Generalize to New Datasets of the Same Task?

Table 1: Generalization of LLM-Sieve-GA across datasets for the same task (LLaMA-3.1-8B).

Task                 Test Dataset   Uncompressed    LLM-Sieve-GA
                                    Accuracy [%]    Accuracy [%]
General RAG          Gen RAG-II     66.8            67.0
Medical Q&A          Med RAG-II     70.5            35.0
Sentiment Analysis   Sentiment-II   91.2            83.5

As seen in Table 1, LLM-Sieve generalizes well across datasets in the same task domain for Gen RAG-II (no loss) and Sentiment-II (≈7% loss). However, performance drops significantly on Med RAG-II. We found that this stems from a mismatch in output format: Med RAG-I requires True/False answers with rationale, whereas Med RAG-II involves multiple-choice selection. This suggests that task-specific pruning generalizes well when output structures are aligned; diverging formats may limit transferability.

5.4 Can LoRA Fine-Tuning Improve Accuracy of Pruned Models?

LoRA fine-tuning [4] is commonly used to specialize LLMs for specific tasks. While it often improves accuracy, we find that over-reliance on LoRA can compromise generalization across datasets within the same task domain. Figure 8 shows the effect of LoRA when fine-tuned on the same dataset used for calibration. As seen from the figure, LoRA improves LLM-Sieve-GA's performance on all three tasks by about as much as it improves the original model. In contrast, methods like LASER and LLM-Pruner, which show near-zero accuracy before LoRA, jump to 50–90% accuracy after LoRA fine-tuning—highlighting their strong dependence on post-pruning adaptation. As we discuss next, however, this over-reliance undermines dataset generalization.

As shown in Figure 9, when pruned models are evaluated on unseen datasets within the same task category, LLM-Sieve shows minimal benefit from LoRA, yet its accuracy remains close to that of the original model on Gen RAG-II and Sentiment-II. This aligns with expectations, as LoRA fine-tuning tends to specialize models to the calibration dataset. Interestingly, on Med RAG-II, LoRA actually reduces LLM-Sieve's accuracy. Upon inspection, we found that Med RAG-I uses true/false answers, while Med RAG-II requires multiple-choice responses—yet LoRA-tuned models continued producing true/false outputs despite prompt instructions. In contrast, methods like LASER and LLM-Pruner, which depend heavily on LoRA for recovery, fail to match LLM-Sieve's task-level generalization.

Figure 10: Accuracy vs. pruning % with and without 8-bit quantization.

Figure 11: Reduction in parameters vis-à-vis reduction in inference time (speedup for LLM-Sieve).

Figure 12: How Genetic Algorithm parameter reduction evolves across generations.

5.5 How much does pruning reduce inference latency?

Figure 11 shows the reduction in inference latency (wall-clock time) across three models and pruning levels, using the CUTLASS [30] library to accelerate tensor operations. All pruned matrix dimensions were constrained to powers of two. We observe that latency speedup scales approximately linearly with the fraction of parameters removed.

5.6 How does quantization affect LLM-Sieve?

Quantization reduces memory and latency by encoding weights in lower-precision formats such as 8-bit integers. As shown in Figure 10, the accuracy vs. model size curves for Gen RAG-I using LLM-Sieve-GA (with and without quantization) on Phi-3-3.8B, LLaMA-3.1-8B, and LLaMA-3.1-70B show a plateau followed by a sharp accuracy drop—indicating that many parameters are not task-critical. Notably, the quantized models (marked with "-Q") closely track the unquantized ones, suggesting that quantization can be applied after pruning to halve memory costs with minimal accuracy loss.

5.7 Running time for LLM-Sieve

Table 2: GPU Hours Spent

Model           Pruning   Sieve-UP   Sieve-GA
Phi-3-3.8B      2         5          144
LLaMA-3.1-8B    3         8          270
LLaMA-3.1-70B   35        85         891

Running LLM-Sieve incurs a one-time cost comparable to LoRA fine-tuning. As shown in Table 2, pruning itself requires 1–36 GPU hours depending on the model. The search for pruning factors is more expensive, as it resembles hyperparameter tuning and involves multiple pruning iterations. Uniform pruning via binary search (Sieve-UP) is more efficient, typically converging within 2–4 steps. As shown in Figure 12, the Genetic Algorithm (GA) used in differentiated pruning converges within 10–15 generations, with total cost ranging from 144–900 GPU hours.

6 Conclusion, Limitations & Future Work

We introduced LLM-Sieve, a task-specific pruning framework that significantly reduces LLM size while preserving accuracy. By combining joint low-rank projections with a Genetic Algorithm for differentiated pruning, LLM-Sieve enables fine-grained compression tailored to task structure. It outperforms prior methods across models and tasks, and remains compatible with LoRA fine-tuning and quantization—supporting a practical pipeline for efficient, task-adapted LLMs. A key limitation is that LLM-Sieve retains the full model architecture, including layer count, which caps compression potential. In contrast, distillation can yield much smaller models (e.g., LLaMA-8B from LLaMA-70B) but at significantly higher cost. Currently, pruning is done per-matrix to avoid backpropagation through non-linearities, and the Genetic Algorithm, while effective, may be replaced by faster pruning-factor search methods. We believe LLM-Sieve also opens avenues for interpretability by helping researchers identify which components are essential for each task—potentially shedding light on how knowledge and reasoning are organized within LLMs.

References

[1] Xunyu Zhu, Jian Li, Yong Liu, Can Ma, and Weiping Wang. A survey on model compression for large language models. Transactions of the Association for Computational Linguistics, 12:1556–1577, 2024.
[2] Saleh Ashkboos, Maximilian L Croci, Marcelo Gennari do Nascimento, Torsten Hoefler, and James Hensman. SliceGPT: Compress large language models by deleting rows and columns. arXiv preprint arXiv:2401.15024, 2024.

[3] Pratyusha Sharma, Jordan T Ash, and Dipendra Misra. The truth is in there: Improving reasoning in language models with layer-selective rank reduction. arXiv preprint arXiv:2312.13558, 2023.

[4] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. LoRA: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685, 2021.

[5] Ofir Zafrir, Ariel Larey, Guy Boudoukh, Haihao Shen, and Moshe Wasserblat. Prune once for all: Sparse pre-trained language models. arXiv preprint arXiv:2111.05754, 2021.

[6] Jonathan Frankle and Michael Carbin. The lottery ticket hypothesis: Finding sparse, trainable neural networks. arXiv preprint arXiv:1803.03635, 2018.

[7] Elias Frantar and Dan Alistarh. SparseGPT: Massive language models can be accurately pruned in one-shot. In International Conference on Machine Learning, pages 10323–10337. PMLR, 2023.

[8] Mingjie Sun, Zhuang Liu, Anna Bair, and J Zico Kolter. A simple and effective pruning approach for large language models. arXiv preprint arXiv:2306.11695, 2023.

[9] Xinyin Ma, Gongfan Fang, and Xinchao Wang. LLM-Pruner: On the structural pruning of large language models. Advances in Neural Information Processing Systems, 36:21702–21720, 2023.

[10] Matan Ben Noach and Yoav Goldberg. Compressing pre-trained language models by matrix decomposition. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational
Linguistics and the 10th International Joint Conference on Natural Language Processing, pages 884–889, 2020.

[11] Elias Frantar, Saleh Ashkboos, Torsten Hoefler, and Dan Alistarh. GPTQ: Accurate post-training quantization for generative pre-trained transformers. arXiv preprint arXiv:2210.17323, 2022.

[12] Guangxuan Xiao, Ji Lin, Mickael Seznec, Hao Wu, Julien Demouth, and Song Han. SmoothQuant: Accurate and efficient post-training quantization for large language models. In International Conference on Machine Learning, pages 38087–38099. PMLR, 2023.

[13] Saleh Ashkboos, Amirkeivan Mohtashami, Maximilian Croci, Bo Li, Pashmina Cameron, Martin Jaggi, Dan Alistarh, Torsten Hoefler, and James Hensman. QuaRot: Outlier-free 4-bit inference in rotated LLMs. Advances in Neural Information Processing Systems, 37:100213–100240, 2024.

[14] Zechun Liu, Changsheng Zhao, Igor Fedorov, Bilge Soran, Dhruv Choudhary, Raghuraman Krishnamoorthi, Vikas Chandra, Yuandong Tian, and Tijmen Blankevoort. SpinQuant: LLM quantization with learned rotations. arXiv preprint arXiv:2405.16406, 2024.

[15] Amir Gholami, Sehoon Kim, Zhen Dong, Zhewei Yao, Michael W Mahoney, and Kurt Keutzer. A survey of quantization methods for efficient neural network inference. In Low-Power Computer Vision, pages 291–326. Chapman and Hall/CRC, 2022.

[16] Torsten Hoefler, Dan Alistarh, Tal Ben-Nun, Nikoli Dryden, and Alexandra Peste. Sparsity in deep learning: Pruning and growth for efficient inference and training in neural networks. Journal of Machine Learning Research, 22(241):1–124, 2021.

[17] Geoffrey Hinton. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.

[18] A Vaswani. Attention is all you need. Advances in Neural Information Processing Systems, 2017.

[19] Jimmy Lei Ba. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.

[20] Biao Zhang and Rico Sennrich. Root mean square layer normalization.
Advances in Neural Information Processing Systems, 32, 2019.

[21] Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W Cohen, Ruslan Salakhutdinov, and Christopher D Manning. HotpotQA: A dataset for diverse, explainable multi-hop question answering. arXiv preprint arXiv:1809.09600, 2018.

[22] Natural questions - short. https://huggingface.co/datasets/cjlovering/natural-questions-short. [Accessed 16-05-2025].

[23] Qiao Jin, Bhuwan Dhingra, Zhengping Liu, William W Cohen, and Xinghua Lu. PubMedQA: A dataset for biomedical research question answering. arXiv preprint arXiv:1909.06146, 2019.

[24] MedMCQA. https://huggingface.co/datasets/openlifescienceai/medmcqa. [Accessed 16-05-2025].

[25] Andrew Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142–150, 2011.

[26] Stanford sentiment treebank. https://huggingface.co/datasets/stanfordnlp/sst. [Accessed 16-05-2025].

[27] Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. Judging LLM-as-a-judge with MT-Bench and Chatbot Arena. Advances in Neural Information Processing Systems, 36:46595–46623, 2023.

[28] C. J. Van Rijsbergen. Information Retrieval. Butterworth-Heinemann, 2nd edition, 1979.

[29] Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph Gonzalez, Hao Zhang, and Ion Stoica. Efficient memory management for large language model serving with PagedAttention. In Proceedings of the 29th Symposium on Operating Systems Principles, pages 611–626, 2023.

[30] CUTLASS. https://docs.nvidia.com/cutlass/. [Accessed 16-05-2025].

A Appendix

A.1 Calibration Dataset Sensitivity. To
assess the impact of calibration dataset size, we ran a sensitivity analysis on Phi-3-mini for all our tasks using LLM-Sieve configured with uniform pruning. As seen in Fig. 13, we observe that accuracy plateaus after approximately 150K tokens. Based on this, we use a fixed calibration set of 200K tokens for all our pruning experiments.

Figure 13: Sensitivity to different calibration dataset sizes for Phi-3-mini. The accuracy benefits start to plateau after 150K tokens.

A.2 Compressibility of different matrix types.

In our genetic algorithm (GA) hyperparameter search, we observed that different matrix types within the feedforward and attention components exhibit varying degrees of compressibility.
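As a minimal sketch of the per-matrix-type compressibility comparison that follows, the averages can be computed from a mapping of (layer, matrix type) to retained-parameter fraction. The data layout and the values below are illustrative assumptions, not the paper's actual measurements.

```python
from collections import defaultdict

# Hypothetical retention data: (layer index, matrix type) -> fraction of
# parameters kept after pruning. Matrix-type names follow the paper's labels.
retained = {
    (0, "w1"): 0.70, (0, "wqkv"): 0.80,
    (1, "w1"): 0.20, (1, "wqkv"): 0.35,
    (2, "w1"): 0.10, (2, "wqkv"): 0.25,
}

def average_by_matrix_type(retained):
    """Average the retained-parameter fraction over layers, per matrix type."""
    sums, counts = defaultdict(float), defaultdict(int)
    for (_, mtype), frac in retained.items():
        sums[mtype] += frac
        counts[mtype] += 1
    return {m: sums[m] / counts[m] for m in sums}

print(average_by_matrix_type(retained))
# With this toy data, w1 averages roughly 0.33 and wqkv roughly 0.47,
# i.e. the feedforward matrix would be the more compressible of the two.
```

A lower average indicates a more compressible matrix type, matching the "lower values = higher compression" convention of Figure 14.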
Figure 14 presents the average fraction of parameters retained per matrix type, averaged across all layers, for the best-performing GA configuration. These results cover all model and dataset combinations. We find that for LLaMA-3.1 8B and 70B, feedforward matrices such as W1 and W2 are generally more compressible than their attention counterparts. This pattern is not observed in Phi-3-mini.

Figure 14: Fraction of remaining parameters for each matrix type found using our Genetic Algorithm search (lower values = higher compression). Panels: (a) Phi-3-mini+GenRAG-I, (b) LLaMA-3.1-8B+GenRAG-I, (c) LLaMA-3.1-70B+GenRAG-I, (d) Phi-3-mini+MedRAG-I, (e) LLaMA-3.1-8B+MedRAG-I, (f) LLaMA-3.1-70B+MedRAG-I, (g) Phi-3-mini+Sentiment-I, (h) LLaMA-3.1-8B+Sentiment-I, (i) LLaMA-3.1-70B+Sentiment-I.

A.3 Compressibility of different Layers.

Figure 15 presents the average fraction of parameters retained per layer. For larger models such as LLaMA-3.1-70B, we observe a trend where the second half of layers tends to be significantly more compressible than the first half, with the exception of the final few layers. This pattern is especially pronounced in narrower, more specialized tasks like sentiment analysis (Fig. 15i).
In contrast, for smaller models such as Phi-3-mini and LLaMA-3.1-8B, no consistent compressibility trend emerges across the different layers.

Figure 15: Fraction of remaining parameters for each layer in the model after conducting a Genetic Algorithm search (lower numbers imply higher compression). Panels: (a) Phi-3-mini+GenRAG-I, (b) LLaMA-3.1-8B+GenRAG-I, (c) LLaMA-3.1-70B+GenRAG-I, (d) Phi-3-mini+MedRAG-I, (e) LLaMA-3.1-8B+MedRAG-I, (f) LLaMA-3.1-70B+MedRAG-I, (g) Phi-3-mini+Sentiment-I, (h) LLaMA-3.1-8B+Sentiment-I, (i) LLaMA-3.1-70B+Sentiment-I.

A.4 Presence of Bottleneck Matrices.

We perform a cross-population analysis of the top-performing individuals—defined as those within 20% of the best recorded fitness—during our genetic algorithm search. For each matrix in every layer, we compute the probability that it remains unpruned (i.e., receives zero compression) across these individuals. As seen in Fig. 16, for LLaMA-3.1-70B, we consistently find matrices that are never pruned across all top individuals (highlighted in red), indicating that pruning these matrices leads to significant performance degradation. We refer to these as bottleneck matrices. This phenomenon suggests that larger models tend to hyperspecialize certain matrices for critical sub-tasks and exhibit limited tolerance to pruning in those areas.

Figure 16: Heat map showing the bottleneck matrices during GA for different models/datasets. Boxes highlighted in red indicate that the matrix is unprunable (i.e., pruning results in significant loss). Panels: (a) LLaMA-3.1-70B+GenRAG-I, (b) Phi-3-mini+GenRAG-I, (c) LLaMA-3.1-8B+GenRAG-I, (d) LLaMA-3.1-70B+MedRAG-I, (e) Phi-3-mini+MedRAG-I, (f) LLaMA-3.1-8B+MedRAG-I, (g) LLaMA-3.1-70B+Sentiment-I, (h) Phi-3-mini+Sentiment-I, (i) LLaMA-3.1-8B+Sentiment-I.
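The cross-population analysis described in A.4 can be sketched as follows. The population representation is an assumption: each individual is a (fitness, pruning-factor dict) pair, where a factor of 0.0 means the matrix is left entirely unpruned, and higher fitness is better.

```python
def bottleneck_probabilities(population, slack=0.20):
    """For individuals within `slack` of the best (positive) fitness, return
    the fraction of them that leave each matrix completely unpruned."""
    best = max(fitness for fitness, _ in population)
    top = [factors for fitness, factors in population
           if fitness >= best * (1 - slack)]
    return {
        name: sum(1 for factors in top if factors[name] == 0.0) / len(top)
        for name in top[0]
    }

# Toy population: matrix names are hypothetical layer/matrix labels.
population = [
    (0.90, {"L0.wqkv": 0.0, "L40.w1": 0.6}),
    (0.88, {"L0.wqkv": 0.0, "L40.w1": 0.8}),
    (0.50, {"L0.wqkv": 0.5, "L40.w1": 0.9}),  # below the 20% cutoff; ignored
]
probs = bottleneck_probabilities(population)
# L0.wqkv is unpruned in every top individual (probability 1.0), so it would
# be flagged as a bottleneck matrix; L40.w1 is pruned by all (probability 0.0).
```

A matrix whose probability is 1.0 across the top individuals corresponds to the red-highlighted cells in Figure 16.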
arXiv:2505.18356v1 [cs.CL] 23 May 2025

The Unreasonable Effectiveness of Model Merging for Cross-Lingual Transfer in LLMs

Lucas Bandarkar*  Nanyun Peng
University of California, Los Angeles

Abstract

Large language models (LLMs) still struggle across tasks outside of high-resource languages. In this work, we investigate cross-lingual transfer to lower-resource languages where task-specific post-training data is scarce. Building on prior work, we first validate that the subsets of model parameters that matter most for mathematical reasoning and multilingual capabilities are distinctly non-overlapping. To exploit this implicit separability between task and target language parameterization, we develop and analyze numerous modular frameworks to improve the composition of the two during fine-tuning. These methods generally employ freezing parameters or post hoc model merging to assign math and language improvement to different key parts of the LLM. In the absence of in-language math data, we demonstrate that the modular approaches successfully improve upon baselines across three languages, four models, and two fine-tuning paradigms (full and LoRA). Furthermore, we identify the most consistently successful modular method to be fine-tuning separate language and math experts and model merging via Layer-Swapping (Bandarkar et al., 2025), somewhat surprisingly. We offer possible explanations for this result via recent works on the linearity of task vectors. We further explain this by empirically showing that reverting less useful fine-tuning updates after training often outperforms freezing them from the start.

1 Introduction

Post-training large language models (LLMs) on labeled text data is a critical step in developing models for real-world applications. However, when these LLMs are fine-tuned for lower-resource languages, significant challenges arise due to the pretrained model's limited capabilities.
*Correspondence: lucasbandarkar@cs.ucla.edu

Although in recent years the broader scaling of pretraining and increased investment in additional languages (Dang et al., 2024b; Llama et al., 2024) have led to major improvements, pretrained LLMs still struggle to understand and generate text in all but a few languages (Romanou et al., 2025; Qin et al., 2025). This pretraining disparity is further exacerbated by the lack of available high-quality multilingual fine-tuning data (Singh et al., 2024) and the significant cost to procure such annotated data (even through machine translation). For many of the capabilities developers target during post-training (e.g., instruction-following, reasoning, or safety) there are only sufficient open-source data available in English, Chinese, and a handful of other languages. This motivates the need for better cross-lingual transfer: the generalization of learned capacities from high-resource languages to lower-resource ones (Hu et al., 2020; Philippy et al., 2023).

Despite recent releases of massive mixture-of-expert LLMs (Team, 2024b; DeepSeek-AI et al., 2025; Team, 2025), a large majority of modern LLMs are dense, meaning that all parameters are active during training and inference. However, even within dense LLMs, recent works have found separability in where and how varying capabilities are represented (Yin et al., 2024; Yao et al., 2024). For example, multilingual capabilities are typically concentrated in the top and bottom transformer layers and multi-head attention parameters of an LLM (Chang et al., 2022; Choenni
et al., 2024). This notably contrasts with mathematical reasoning capabilities, which are encoded mainly in the middle transformer layers (Hanna et al., 2023; Stolfo et al., 2023). In the context of cross-lingual transfer, this functional separation motivates modular approaches to fine-tuning, in which distinct model components can be trained, swapped, or merged (Bengio et al., 2020; Pfeiffer et al., 2023) for efficient and flexible multi-objective optimization.

In this work, we explore several modular approaches for composing target task and target language capabilities in off-the-shelf dense LLMs.

Figure 1: Illustration of the three methods that induce modularity by imposing target language capabilities (brown) and mathematical reasoning (blue) on separate LLM parameters. [1] is from Bandarkar et al. (2025). The settings compared are: [baseline] Same Training, Same Parameters: the entire model is trained on a mixed dataset of English math & generic target language data; [1] Separate Training, Same Parameters: separate "experts" are fine-tuned and then merged via Layer-Swapping; [2] Separate Training, Separate Parameters: separate "experts" are partially fine-tuned and then merged; [3] Same Training, Separate Parameters: two allocated parameter sets are simultaneously fine-tuned on the two datasets.

Our goal is to induce modularity by exploiting the differences in parameters that are most relevant to mathematical reasoning versus multilingual capabilities. We focus on the prevalent scenario where task-specific data is scarce in the target language but readily available in English. We address this by working with two datasets: one English math dataset for supervised fine-tuning (SFT) and one general, multi-task SFT dataset in the target language.
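The "separate parameters" settings above rely on a partition of the model into language-allocated and math-allocated parameters, with the opposite partition frozen during fine-tuning. A minimal sketch of such a partition, assuming the pattern from the cited prior work (outer layers for language, middle layers for math); the layer counts and the four-layer boundary are illustrative assumptions, not the paper's exact allocation:

```python
def partition_parameters(num_layers, n_outer=4):
    """Split layer indices into a 'language' set (the first and last n_outer
    layers) and a 'math' set (the middle layers)."""
    language = set(range(n_outer)) | set(range(num_layers - n_outer, num_layers))
    math = set(range(num_layers)) - language
    return language, math

def trainable_layers(num_layers, expert):
    """Layers left unfrozen when fine-tuning the given expert; in an actual
    training loop, the remaining layers' parameters would be frozen
    (e.g., requires_grad=False in PyTorch)."""
    language, math = partition_parameters(num_layers)
    return sorted(language if expert == "language" else math)

print(trainable_layers(32, "language"))  # [0, 1, 2, 3, 28, 29, 30, 31]
print(trainable_layers(32, "math"))      # [4, 5, ..., 27]
```

The same partition can drive the joint-training variant by alternating which set is unfrozen between batches.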
Using the target languages of Bengali, Swahili, and Telugu, we evaluate the methods on the multilingual math benchmark, MGSM (Shi et al., 2023).

With these datasets, we evaluate numerous training paradigms that incentivize the model, to varying degrees, to learn multilingual or math capabilities in specific parameters. We organize the settings along two axes: (1) whether the models are optimized separately or together over the two SFT datasets and (2) whether the same or separate model parameters are trained on the datasets. When the models are trained separately, we combine the learned capabilities using model merging methods such as variants of Layer-Swapping (Bandarkar et al., 2025). To train separate model parameters, we start by dividing all parameters into two partitions according to prior work: (1) one set allocated to target language training and (2) one set to English math. Only allocated parameters are fine-tuned, while the opposite partition is frozen. We additionally develop a method to train separate parameters in a single, joint training by frequently freezing and unfreezing parameters to simulate simultaneous training.

Despite the strong starting capabilities of the four LLMs and the data-constrained setting, our experimental results show that all of the modular solutions outperform our baselines, despite being subject to varying training constraints. This implies that intentional separation of parameters and/or training improves the compositionality of task and language
capabilities.

Amongst our modular solutions, we surprisingly find that post hoc model merging via Layer-Swapping outperforms more coordinated multi-task fine-tuning approaches. To contextualize this counterintuitive result, we explore recent academic literature that helps explain the phenomenon. We provide empirical evidence for training all model parameters, even if large portions will be discarded during Layer-Swapping. While these subsets of task vectors are unproductive, freezing them during fine-tuning leads to less optimal updates to the target parameters. Notably, we rationalize that the fine-tuning task vectors (∆s) are quite linear within individual parameter blocks (Dai et al., 2025), meaning they can be added, scaled, or interpolated as linear components (Adilova et al., 2024).

Overall, we enumerate the following principal contributions of this work:

• We develop and synthesize a number of modular solutions that each increase compositionality for cross-lingual transfer compared to non-modular baselines, demonstrated through extensive experiments.

• Of the modular methods, we find that fine-tuning all parameters and then merging via Layer-Swapping performs best on average.

• We provide a mix of theoretical and empirical explanations for the surprising success of Layer-Swapping relative to alternatives.

2 Background

2.1 Cross-Lingual Transfer

The relative abundance of textual data available in English in comparison to other languages has long motivated research in developing methods to efficiently transfer learned capabilities across languages (Koehn and Knight, 2002). Typically, some capabilities transfer naturally across languages, as evidenced by the superior performance of multilingual models on low-resource languages compared to monolingual models (Firat et al., 2016; Pires et al., 2019; Artetxe et al., 2020).
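The Layer-Swapping merge discussed above can be sketched concretely: after fully fine-tuning a math expert and a language expert from the same base model, the merged model takes its outer (top and bottom) transformer layers from the language expert and its middle layers from the math expert. Models are represented here as simple {layer index: weights} dicts, and the number of swapped outer layers is an illustrative assumption rather than the paper's tuned value.

```python
def layer_swap(math_expert, language_expert, n_outer=4):
    """Merge two fine-tuned experts by swapping the language expert's first
    and last n_outer layers into the math expert."""
    num_layers = len(math_expert)
    merged = dict(math_expert)  # start from the math expert's weights
    outer = list(range(n_outer)) + list(range(num_layers - n_outer, num_layers))
    for i in outer:  # take the outer layers from the language expert
        merged[i] = language_expert[i]
    return merged

# Toy "checkpoints": layer weights stand in as strings for illustration.
math_expert = {i: f"math_{i}" for i in range(32)}
lang_expert = {i: f"lang_{i}" for i in range(32)}
merged = layer_swap(math_expert, lang_expert)
# merged[0] comes from the language expert; merged[16] from the math expert.
```

Note that both experts are trained without any frozen parameters; the discarded middle-layer language updates (and outer-layer math updates) are simply reverted at merge time, which the paper argues outperforms freezing them from the start.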
In encoder models, text embeddings could be aligned across languages to improve transfer using methods such as contrastive learning (Mikolov et al., 2013; Artetxe et al., 2018; Muller et al., 2021). However, cross-lingual alignment in more modern decoder-only models has become less methodical because of the lack of a universal embedding (Kargaran et al., 2024). Since most popular LLMs have been trained on majority-English corpora, recent works have examined how much intrinsic cross-lingual transfer occurs at different training stages (Choenni et al., 2023; Wang et al., 2024). These large models have broader generalization and robustness, but still fail to transfer much of their capabilities across languages (Philippy et al., 2023). Recent works have identified prompting methods (Shi et al., 2023; Zhang et al., 2024) or post-training data augmentation (Dang et al., 2024a; She et al., 2024; Lai et al., 2024) that help generalization.

2.2 Modularity in Multilingual NLP

A major constraint on models being able to process many languages has been the number of parameters available to represent them. As a result, improving a language model in one language risks undermining its knowledge of another, termed the curse of multilinguality (Conneau et al., 2020; Pfeiffer et al., 2022). Naturally, numerous methods have been proposed to increase a model's parametric capacity without increasing the inference cost, such as mixture-of-experts architectures (Fedus et al., 2022) that route tokens according to their language (NLLB et al., 2022). Methods that leverage
modular parameters were developed to compose capabilities for transfer learning by inserting trainable adapters within model layers (Houlsby et al., 2019; Pfeiffer et al., 2021). These methods were modified for multilinguality by allocating adapters to particular languages and switching them in or out depending on the input (Bapna and Firat, 2019; Pfeiffer et al., 2020). Pfeiffer et al. (2022) extended these methods by pretraining an adapter-based multilingual model from scratch. In decoder models, cross-lingual adapters have also been proposed at the token embedding level (Jiang et al., 2025).

Even in dense LLMs, however, interpretability research has identified the emergence of effective modularity (Csordás et al., 2021) as LLM parameters scale (Zhang et al., 2022; Qiu et al., 2024). Principally, numerous recent works have identified that just a few transformer layers at the top and bottom of English-centric LLMs are responsible for multilingual capabilities, notably by mapping input and output into a universal representation (Kojima et al., 2024; Wendler et al., 2024; Tang et al., 2024b; Alabi et al., 2024).

2.3 Model Merging

Model merging is the practice of combining the weights of multiple checkpoints of the same model architecture into a single model. While averaging models is a fundamental machine learning approach to increase statistical robustness (Breiman, 1996), the averaging of model checkpoints, dubbed a model soup by Wortsman et al. (2022), has re-emerged in large-scale LLMs as a method to increase model robustness. More importantly, it also increases the search space of valid model variants at any given training step without additional costly training runs (Llama et al., 2024). However, simple weight averaging is vulnerable to negative transfer, or interference, between checkpoints, so numerous methods have been presented to selectively merge parameters (Ilharco et al., 2023a; Yadav et al., 2023; Yu et al., 2024a).
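As a minimal illustration of the model-soup idea described above, uniform averaging reduces to a per-parameter mean across checkpoints. This is a toy sketch (dicts of floats stand in for tensor state dicts), not code from any of the cited works:

```python
def model_soup(state_dicts):
    """Uniform "soup" (Wortsman et al., 2022): average each parameter across
    checkpoints of the same architecture. Toy version over dicts of floats."""
    n = len(state_dicts)
    return {key: sum(sd[key] for sd in state_dicts) / n for key in state_dicts[0]}

print(model_soup([{"w": 1.0}, {"w": 3.0}]))  # {'w': 2.0}
```

The interference problem mentioned above arises exactly because this mean is taken blindly over every parameter; the selective-merging methods cited replace the uniform mean with sign- or magnitude-aware rules.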
Surprisingly, training models on separate data and then merging can often outperform a single training run on mixed data (Tang et al., 2024a; Aakanksha et al., 2024) and has been shown to be highly effective in large-scale multilingual pretraining (Dang et al., 2024b). For cross-lingual transfer in particular, Ansell et al. (2022) showed that sparse fine-tuning can lead to better composition. Bandarkar et al. (2025) extended this by notably identifying that mathematical reasoning was concentrated in parameters different from multilingual capabilities. As a result, model variants trained on English math data and multilingual data can be combined by Layer-Swapping, or swapping the transformer layers most important to each.

Training Description | Base Model | Partial LoRA | Partial SFT | LoRA  | Full SFT
Math-only            | 19.0%      | 18.0%        | 19.5%       | 18.9% | 19.6%
Language-only        | 19.0%      | 19.2%        | 19.8%       | 19.7% | 20.3%
Data mixing          | 19.0%      | -            | -           | 19.7% | 20.4%
Simultaneous SFT     | 19.0%      | 20.4%        | 21.0%       | -     | -
Layer-Swapping       | 19.0%      | 20.0%        | 20.4%       | 20.8% | 21.5%

Table 1: Summary Table of Results. Each value represents the average across four models, three languages, and multiple training runs on MGSM in 2-shot evaluations. The last row represents "Separate Training" while the "Partial" trainings correspond to "Separate Parameters" trainings. All results shown here and in all other tables of this paper
display exact-match (EM) accuracy (↑) as a percentage.

3 Experimental Setup

3.1 Evaluation

Limited by the lack of task-specific benchmarks for medium- and low-resource languages, we focus on MGSM (Shi et al., 2023) as the target task of this project. MGSM is a mathematical reasoning benchmark parallel across 10 languages as a result of high-quality translations from the popular English benchmark GSM8K (Cobbe et al., 2021). For MGSM, we report exact-match accuracy in two-shot, as one- and zero-shot led to inconsistent results. More few-shot examples did not display substantial gains. For target languages, we choose the languages in MGSM where the four LLMs perform the worst: Bengali, Telugu, and Swahili. In addition, the lack of open-source math SFT data available in these languages motivates the need for more effective cross-lingual transfer. For a given fine-tuned model, we also evaluate the two-shot MGSM performance in English to evaluate its math performance irrespective of target language capability. Conversely, we use the multilingual MCQA benchmarks GLOBAL MMLU (Singh et al., 2025) and BELEBELE (Bandarkar et al., 2024) as pure language understanding signals, independent of math.

3.2 Models

We run experiments on four state-of-the-art instruction-finetuned LLMs: FALCON 3 7B (Team, 2024a), QWEN 2.5 7B Instruct (Yang et al., 2024), LLAMA 3.1 8B Instruct (Llama et al., 2024), and AYA Expanse 8B (Dang et al., 2024b). All have similarly high performance on MGSM in English. LLAMA 3.1 and FALCON 3 are English-centric, QWEN 2.5 is bilingual with Chinese, and AYA Expanse is explicitly multilingual. However, all officially cover numerous other languages (up to 23 for AYA) and perform reasonably on such languages, which we verify using BELEBELE and GLOBAL MMLU. Bengali, Swahili, and Telugu are amongst the official languages for none of these models.
As a result, the four models all score low on MGSM in these languages, with the exception of LLAMA on Swahili (see Appendix A.8).

3.3 Parameter Allocation

To determine which parameters to "allocate" to each capability, we rely on a mix of interpretability papers and small-scale empirical tests. As mentioned in Section 2.2, numerous papers have identified the most important parameters for multilingual capabilities to be the first few and last few transformer layers of LLMs. These works, however, typically discuss mostly English-centric models (such as LLAMA 3.1 and FALCON 3). We therefore need to evaluate this for bilingual and multilingual models like QWEN 2.5 and AYA Expanse. For mathematical reasoning, we note that Bandarkar et al. (2025) identifies the middle and late-middle transformer layers as being the most important. This work, and numerous others (Voita et al., 2019; Ma et al., 2021; Zhao et al., 2024), similarly identifies multi-head attention parameters as critical to multilingual capabilities, as opposed to multi-layer perceptron parameters.

To empirically verify these assumptions on our selected models, we run SFT over our datasets with different subsets frozen. We evaluated numerous ways to partition the parameters and find a number of splits that enable improvements on English math and on language-specific signals (e.g., BELEBELE). To validate that the good performance
when freezing parameters is because the trainable parameters are particularly useful for a target task, we also run experiments with the opposite allocation (e.g., middle layers frozen during mathematical reasoning training) and find that it works poorly.

Parameters that are frozen or reset                                  | Frozen during | Reset after
base (no SFT)                                                        | 78.4%         | 78.4%
[Z] only top-4 and bottom-8 layers (inverse of intuition)            | 78.2%         | 78.9%
[A] all MHA parameters + MLP parameters in top-2 and bottom-6 layers | 79.4%         | 79.8%
[B] only top-4 and bottom-8 layers                                   | 79.8%         | 79.8%
[C] only top-2 and bottom-6 layers                                   | 79.7%         | 80.0%
None                                                                 | 80.1%         | 80.1%

Table 2: MGSM 2-shot results (↑) on the English split after SFT on the English math data, averaged across four models. These results (1) validate that the intuition leading to our parameter allocations [A, B, C] is reasonable, seeing as results are close to full fine-tuning and significantly higher than the inverse allocation [Z]. Additionally, (2) these results demonstrate that full fine-tuning and then reverting parameters (second column) is more effective than freezing those parameters from the start (first column).

While the search space of which parameters to freeze is large, we settle on three partitions that show sufficient empirical success:

• [A] All multi-head attention parameters allocated to the target language. Then, amongst the multi-layer perceptron parameters, those in the first six and last two transformer layers are still allocated to language, while those in the rest of the 32- or 36-layer LLM are for math.

• [B] The first eight and last four transformer layers allocated to language, the rest for math.

• [C] The first six and last two transformer layers allocated to language, the rest for math.

In these three settings, both mathematical reasoning and target language capabilities improve similarly to full SFT with a fraction of trainable parameters (see Table 2 for math results).
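A partition like [B] can be expressed as a trainability mask over parameter names. The sketch below is our own illustration, not code from the paper: it assumes HF-style parameter names (`model.layers.{i}.…`), and `trainable_mask` is a hypothetical helper; in a real run the mask would be applied by setting each parameter's `requires_grad`.

```python
import re

def trainable_mask(param_names, n_layers, bottom=8, top=4):
    """Partition [B]: the first `bottom` and last `top` transformer layers are
    allocated to the target language, so for English-math SFT they are frozen
    and only the middle layers train. Returns {param_name: trainable}."""
    language_layers = set(range(bottom)) | set(range(n_layers - top, n_layers))
    mask = {}
    for name in param_names:
        m = re.search(r"layers\.(\d+)\.", name)
        # Parameters without a layer index (embeddings, lm_head) stay frozen here.
        mask[name] = m is not None and int(m.group(1)) not in language_layers
    return mask

names = [f"model.layers.{i}.mlp.up_proj.weight" for i in range(32)]
mask = trainable_mask(names, n_layers=32)
print(sum(mask.values()))  # 20 middle layers remain trainable
```

Inverting the boolean gives the complementary allocation used for the target-language training run.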
We evaluate all three for each of our experimental settings and, unless noted, report the highest scoring.

3.4 Training

For SFT data, we create four datasets: one for math in English and one instruction dataset for each of the three target languages. The math instruction dataset consists of English math word problems from the Orca-Math synthetic dataset (Mitra et al., 2024). For the language datasets, we replicate the creation of "generic" instruction fine-tuning datasets from Bandarkar et al. (2025) by combining samples from open-source instruction and task-specific datasets. Importantly, there are no math samples in these multi-task language datasets. We provide specific details and citations for these data collections in Appendix A.6.

Due to constraints on the amount of verifiable-quality data available in each of the target languages, our datasets are controlled at 80k samples, 2k of which are reserved for validation. Because of significantly diminishing returns exhibited by the validation loss and downstream evaluations, we only train for one epoch in each of our settings.

We additionally duplicate all experiments using Low-Rank Adapters (LoRA) (Hu et al., 2022). Specifically, we use rank-stabilized LoRA (Kalajdzievski, 2023) applied to both multi-layer perceptron and multi-head attention parameters. In general, the adjustments of our methods to be compatible with
LoRA were minor unless noted otherwise. With four models, three languages, and two fine-tuning approaches (full and LoRA), we have a total of 24 experimental settings. For each, we do a hyperparameter search over several runs to ensure comparability (see Appendix A.4 for details).

4 Experiments

We describe numerous methods that modularize off-the-shelf, dense LLMs in different ways. We describe separate training as when we conduct separate SFT runs on different datasets, albeit starting from the same off-the-shelf model. As previously mentioned, the separately trained checkpoints are then merged via Layer-Swapping. Separate parameters implies that only the partition of parameters allocated (see Section 3.3) to that dataset is trained while the rest remain frozen.

4.1 Baselines (Math-only and Language-only)

For comparison, we evaluate a number of straightforward SFT setups to serve as baselines. We do full-parameter training runs for each of the target language generic SFT datasets and the English math SFT dataset. For further baselines, we rerun the above while leaving only parameters allocated to that capability trainable, with the rest frozen. In addition, we replicate both full training and partial training in LoRA, where parameters are "frozen" if no adapter is added for that parameter.

4.2 Data Mixing (Same Training, Same Parameters)

As an additional baseline, we randomly mix the two datasets together and jointly optimize over the two disjoint tasks with all parameters left trainable.

4.3 Layer-Swapping (Separate Training, Same Parameters)

For this setting, we exactly recreate the method presented by Bandarkar et al. (2025). Starting from the same base model, separate variants, dubbed "experts", are trained on different tasks. Concretely, one expert is trained on the English math data, and the other on the target language instruction dataset.
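The Layer-Swapping recomposition can be sketched as a state-dict operation. This is our own illustration under stated assumptions, not the authors' code: the layer counts default to partition [B], HF-style parameter names are assumed, and keeping non-layer parameters (embeddings, lm_head) with the math expert is a choice we make here for simplicity.

```python
import re

def layer_swap(math_sd, lang_sd, n_layers, bottom=8, top=4):
    """Recompose one model from two experts: transformer layers in the first
    `bottom` and last `top` positions come from the target-language expert,
    while everything else keeps the math expert's weights."""
    swap_layers = set(range(bottom)) | set(range(n_layers - top, n_layers))
    merged = dict(math_sd)
    for name, weight in lang_sd.items():
        m = re.search(r"layers\.(\d+)\.", name)
        if m and int(m.group(1)) in swap_layers:
            merged[name] = weight
    return merged

# Toy state dicts: scalar "weights" stand in for tensors.
math_sd = {f"model.layers.{i}.mlp.w": 0.0 for i in range(32)}
lang_sd = {f"model.layers.{i}.mlp.w": 1.0 for i in range(32)}
merged = layer_swap(math_sd, lang_sd, n_layers=32)
print(sum(merged.values()))  # 12 layers taken from the language expert
```

Because the merge is a pure post hoc selection over checkpoints, different (bottom, top) configurations can be tried without retraining either expert.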
To recompose a single model, the top and bottom transformer layers from the target language expert replace those in the math expert, while the math expert's middle layers remain. We additionally implement the equivalent of this methodology with LoRA, where the set of adapters is merged by combining the adapters corresponding to parameters that would be swapped. Note that we do not retrain these experts and simply use the checkpoints from our baseline trainings.

4.4 Layer-Swapping with Partial SFT (Separate Training, Separate Parameters)

We modify Layer-Swapping so that only the parameters involved in the model merging are trained, and all those eventually ignored are kept frozen during training. The idea is that no parameters are unnecessarily trained, and we can incentivize the training to focus the learned capabilities into the desired parameters. Similar to above, we do not retrain experts and simply merge checkpoints from our frozen-parameter baselines.

4.5 Simultaneous Partition SFT (Same Training, Separate Parameters)

We design a methodology to "simultaneously" fine-tune two partitions of LLM parameters on two different datasets. To do so, we apply a gradient step on a batch from one dataset on the corresponding partition of parameters. Then, we switch which parameters are frozen and sample a batch from the other dataset for the next gradient step.
This frequent back-and-forth is intended to ensure the coordination of parameter updates during multi-task optimization. The validation set contains an equal amount from each dataset.

Switching. We default to a single step before switching to best simulate fully simultaneous training, but additionally experiment with more steps between switches. We set the effective batch size¹ to 64. At the end of each step, all parameters just updated are frozen for the next step and, conversely, all frozen parameters are unfrozen. In addition, a flag for the data iterator is switched to ensure the next batch of data will be sampled from the appropriate dataset. For LoRA training, the same logic is implemented.

Optimizer. We consider numerous approaches to adapt the AdamW optimizer (Loshchilov and Hutter, 2019) used in all previous experiments. Although we technically employ a single optimizer initialized on all parameters during training, we configure it to function as two independent optimizers, each exclusively managing its own separate subset of parameters. Namely, when a subset of parameters A is frozen, the corresponding AdamW optimizer states Ω_A (momentum and variance estimates) are also frozen in time. As a result, when the parameters in A are unfrozen, the corresponding momentum and variance estimates of Ω_A still reflect only the gradient steps previously applied to A. However, the other parameters Aᶜ have been updated in the meantime, meaning Ω_A risks being outdated given the modified loss landscape. To test the impact of this inconsistency, we ablate over different numbers of steps between switches and find that the differences are negligible (see Appendix A.3). We conclude that the optimizer restarting on an outdated loss landscape is of minimal concern, presumably because of the smoothness of the loss topology. Since there is a single optimizer, the learning rate schedule is the same for all parameters (constant with warmup).
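The switching logic can be sketched as a schedule generator. This is an illustration with hypothetical names (`"middle_layers"`, `"outer_layers"`), not the paper's implementation; the real loop would apply one optimizer step to the indicated partition on a batch from the indicated dataset, leaving the frozen partition's AdamW states untouched.

```python
from itertools import islice

def alternation_schedule(steps_per_switch=1):
    """Yield (dataset, trainable_partition) for each optimizer step. With the
    default of one step per switch, every gradient step alternates between a
    math batch on the middle layers and a language batch on the outer layers."""
    phases = [("math", "middle_layers"), ("language", "outer_layers")]
    i = 0
    while True:
        for _ in range(steps_per_switch):
            yield phases[i]
        i = 1 - i

print(list(islice(alternation_schedule(), 4)))
# [('math', 'middle_layers'), ('language', 'outer_layers'),
#  ('math', 'middle_layers'), ('language', 'outer_layers')]
```

Raising `steps_per_switch` reproduces the ablation over longer runs between switches.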
And while the gradients tend to be larger for the multilingual data, we set a maximum gradient norm of 1.0 for clipping.

¹Effective batch size is the product of the batch size per GPU, the number of GPUs, and the gradient accumulation steps.

5 Results

Our experimental setting was designed to replicate a real-world scenario where multilingual LLM developers take a post-trained LLM and are limited by the amount of in-language post-training data. This constrained scenario means only modest improvements are achievable. However, we do observe several conclusive patterns. Across our four models and three languages (12 conditions), we can summarize the results into six treatments.

Performance Comparison of Modular Solutions

             |       |       | Simultaneous SFT | Layer-Swapping
SFT Type     | Base  | Full  | Full  | LoRA     | Full SFT | LoRA  | Part. SFT | Part. LoRA
Swahili      | 23.5% | 25.1% | 25.9% | 25.2%    | 26.7%    | 25.8% | 25.1%     | 24.8%
Bengali      | 25.6% | 27.9% | 27.9% | 26.9%    | 28.7%    | 27.5% | 27.0%     | 26.7%
Telugu       |  7.9% |  8.2% |  9.3% |  9.0%    |  9.2%    |  9.2% |  9.0%     |  8.6%
English      | 78.4% | 80.4% | 81.8% | 80.5%    | 80.9%    | 80.8% | 79.9%     | 80.0%
sw,bn,te AVG | 19.0% | 20.4% | 21.0% | 20.4%    | 21.5%    | 20.8% | 20.4%     | 20.0%

Table 3: All values presented above are MGSM 2-shot EM accuracy (↑), averaged across four models. The baseline presented for comparison in the 3rd column is the full SFT on the mix of the two
datasets.

These six treatments are discussed in Sections 4.1 to 4.5. Despite the small magnitude of the differences, the rank-based Friedman test (non-parametric) shows statistically significant differences between the treatments at the 0.05 significance level.

In our setting, we find that training only on the language dataset is more effective at improving the target language MGSM score than training only on the math dataset (details in Appendix A.1). This implies, perhaps, that what our four models need most is improved Swahili, Bengali, or Telugu abilities, as opposed to math improvement.

We validate the lack of need for full-parameter training when doing both language adaptation and math SFT. Once the most useful parameters have been identified for a skill, as discussed in Section 3.3, comparable performance to full SFT can be achieved with a fraction of the trainable parameters. Beyond potentially contributing to compositionality, this leads to faster and more memory-efficient training. More details on these baselines can be seen in Appendix A.1. We do note, however, that in the absence of resource limitations, SFT with fewer trainable parameters converged a bit more slowly and full fine-tuning still performed best. This is also true for LoRA, which has far fewer trainable parameters by nature.

A significant result is that all our modular solutions perform statistically significantly better than the non-modular baselines, as can be seen in Table 1. This is strongly the case for Telugu and Swahili in the displayed four-model averages, but varies more by specific modular method for Bengali in comparison to the top baseline, data mixing (see Appendix A.5 for per-language results).

Within our modular solutions, however, we find numerous surprising results. First, freezing the unused parameters when training experts before Layer-Swapping does not improve upon full training.
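The rank-based Friedman test used in this section ranks the treatments within each block (one model-language condition) and tests whether mean ranks differ. A minimal stdlib sketch of the statistic follows; the example scores are an illustrative subset of the averages in Tables 1 and 3, not the full per-run data the paper tests on.

```python
def friedman_statistic(scores):
    """Friedman chi-square: rows are blocks (model-language conditions), columns
    are treatments (training setups). Ranks within each row; ties share the
    average rank."""
    n, k = len(scores), len(scores[0])
    rank_sums = [0.0] * k
    for row in scores:
        order = sorted(range(k), key=lambda j: row[j])
        ranks = [0.0] * k
        i = 0
        while i < k:
            j = i
            while j + 1 < k and row[order[j + 1]] == row[order[i]]:
                j += 1  # extend the run of tied values
            avg_rank = (i + j) / 2 + 1  # 1-based average rank for the tied run
            for t in range(i, j + 1):
                ranks[order[t]] = avg_rank
            i = j + 1
        for col in range(k):
            rank_sums[col] += ranks[col]
    return 12.0 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) - 3.0 * n * (k + 1)

# Two blocks ranking three treatments identically (Base < Data mixing < Layer-Swapping).
print(friedman_statistic([[19.0, 20.4, 21.5], [7.9, 8.2, 9.2]]))  # 4.0
```

The statistic is then compared against a chi-square distribution with k−1 degrees of freedom; `scipy.stats.friedmanchisquare` computes both the statistic and the p-value in practice.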
As detailed in the last four columns of Table 3, performance is better when all parameters are fine-tuned, for both LoRA and full-parameter SFT (statistically significant). This is counterintuitive because the layers eventually merged are potentially dependent on parameter changes that are being replaced. Second, Layer-Swapping surprisingly outperforms the simultaneous SFT. This is surprising because in our simultaneous SFT, the modularity is imposed cohesively, as opposed to the ad hoc merging of layers from separate training runs. We note, however, that the simultaneous SFT performs second-best.

To validate results further, we also evaluate more expensive Continual Pretraining (CPT) for QWEN 2.5 in Bengali across the experimental designs and find agreement with our SFT results (see details in Appendices A.2 and A.7). However, we limit discussion of these results because of the small scale of the experiments.

We additionally analyze the composability of individual experts under Layer-Swapping. We define a good merging indicator as an evaluation signal of an expert that correlates with the performance of the merged model. We find that performance on general NLU benchmarks (BELEBELE and GLOBAL MMLU) is a stronger indicator of a language expert's merge quality than MGSM results in the target language. Similarly, MGSM performance in English is a better predictor for a math expert than
MGSM in the target language. This is notable because MGSM in the target language is, of course, the target task, yet signals more directly related to the training data tend to be more important for proper task composition.

6 Discussion

Given the rejection of our hypothesis that simultaneous fine-tuning would most effectively compose task and language capabilities, we discuss potential explanations for this outcome.

Train-then-Revert vs. Freeze-then-Train. Intuition may dictate that fine-tuning parameters and then later reverting part of them should be less effective than simply freezing those parameters from the start. In the former, the fine-tuning is unaware of future edits, while the latter provides hard constraints during optimization. However, empirically, we find that across models, training-then-resetting outperforms freezing-then-training. We display this for our English math fine-tuning in Table 2. This explains why Layer-Swapping with full training (Section 4.3) may be preferable to solutions involving freezing parameters. We conclude that while a large portion of fine-tuning weight updates are not needed in the end, either because they are noisy or redundant (Yu et al., 2024b), they enable optimization in a very high-dimensional space. This is analogous to recent papers discussing the Lottery Ticket Hypothesis (Frankle and Carbin, 2019), where it has been concluded that training a full neural network and then pruning it leads to stronger models than applying the same pruning before training (Frankle et al., 2021).

Concatenating Components in Layer-Swapping. We seek to explain why concatenating transformer layers from separately fine-tuned "experts" is so seamless. Task vectors (Ilharco et al., 2023b) are the ∆s that result from fine-tuning (i.e., θ_FT − θ_0). Task vector linearity refers to the property that linear combinations of such task vectors form a coherent, effective model. Ortiz-Jimenez et al. (2023) identify that linearized task vectors exhibit better mergeability. Meanwhile, when fine-tuning heavily post-trained models like those used in our experiments, recent works show that updates to individual model layers exhibit significant linearity (Zhou et al., 2024; Razzhigaev et al., 2024; Dai et al., 2025). Furthermore, research on mode connectivity (Frankle et al., 2020; Garipov et al., 2018) shows individual transformer layers can be smoothly interpolated (Zhou et al., 2023; Adilova et al., 2024). These works provide an explanation for why ad hoc Layer-Swapping is not more degradative.

Further Considerations. We note that model merging is convenient because the configuration (e.g., which parameters to swap) can be determined after training. This enables fast iteration through configurations without retraining. This flexibility is sacrificed for our "separate parameters" methods, which require fixing parameter allocations. However, an inconvenience of merging methods is the need to train two experts, potentially doubling the number of training runs for hyperparameter search.

7 Conclusions

Our results demonstrate that imposing modularity on dense LLMs for cross-lingual transfer is quite effective in low-data scenarios. We empirically validate this with numerous ways to impose such modularity through fine-tuning with frozen parameters or model merging, all of which prove more effective than non-modular baselines. Furthermore, we discover the surprising success of Layer-Swapping over other modular
methods that fine-tune task and language together or that do not revert parameter updates ad hoc. We conjecture that the success of this ad hoc merging method is because the math and language experts, when represented as task vectors, exhibit a high degree of linearity. As a result, this method benefits from more robust training over all parameters while also leading to effective compositionality. We also empirically demonstrate that the success of Layer-Swapping is in part due to frozen-parameter fine-tunings underperforming full fine-tunings followed by parameter resets.

8 Future Work

We encourage further work in multilingual NLP that leverages implicit modularity in LLMs, induces it during training, or designs explicitly modular architectures. Our parameter allocation strategy relied on previous interpretability work and limited empirical evidence, and the search space of modular configurations is largely unexplored. With post hoc model merging, iterating through many ablations can be quick. Although we focused on mathematical reasoning, due to limited multilingual task-specific datasets, future work should examine other tasks that may warrant different parameter allocations. More broadly, these results underscore the importance of improving interpretability around how capabilities are parameterized in LLMs, such as multilinguality. If we can better localize and separate parameters by function, our findings suggest that modularization may yield significant improvements.

Limitations

Small ∆s. Our decision to use the instruction fine-tuned version of each of the open-source LLMs for our experiments was a conscious one that came with many considerations. We prioritized replicating a real-life practical scenario, where model developers would start from already fine-tuned LLM versions because of their broader capabilities.
However, as a result, this meant that our fine-tuning experiments only led to relatively small performance improvements with respect to the starting checkpoint. Such checkpoints have undergone extensive post-training, notably with significant mathematical reasoning samples and varying amounts of multilingual samples. Therefore, the possible model improvements with these small datasets were small, risking results that were not statistically significant. Nevertheless, this allowed us to control for the amount of improvement on benchmarks that was simply a result of the LLMs' improved ability to follow instructions after SFT, in addition to reflecting a more practical setting.

Acknowledgement

The authors acknowledge the support provided by Tanmay Parekh and Mohsen Fayyaz for this project.

References

Aakanksha, Arash Ahmadian, Seraphina Goldfarb-Tarrant, Beyza Ermis, Marzieh Fadaee, and Sara Hooker. 2024. Mix data or merge models? Optimizing for diverse multi-task learning. Preprint, arXiv:2410.10801.

Linara Adilova, Maksym Andriushchenko, Michael Kamp, Asja Fischer, and Martin Jaggi. 2024. Layer-wise linear mode connectivity. In The Twelfth International Conference on Learning Representations.

Jesujoba Alabi, Marius Mosbach, Matan Eyal, Dietrich Klakow, and Mor Geva. 2024. The hidden space of transformer language adapters. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6588–6607, Bangkok, Thailand. Association for Computational Linguistics.

Alan Ansell, Edoardo Ponti, Anna Korhonen, and Ivan Vulić. 2022. Composable sparse fine-tuning for cross-lingual transfer. In Proceedings of the 60th
Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1778–1796, Dublin, Ireland. Association for Computational Linguistics.

Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018. Unsupervised neural machine translation. In International Conference on Learning Representations.

Mikel Artetxe, Sebastian Ruder, and Dani Yogatama. 2020. On the cross-lingual transferability of monolingual representations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4623–4637, Online. Association for Computational Linguistics.

Lucas Bandarkar, Davis Liang, Benjamin Muller, Mikel Artetxe, Satya Narayan Shukla, Donald Husa, Naman Goyal, Abhinandan Krishnan, Luke Zettlemoyer, and Madian Khabsa. 2024. The Belebele benchmark: a parallel reading comprehension dataset in 122 language variants. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 749–775, Bangkok, Thailand. Association for Computational Linguistics.

Lucas Bandarkar, Benjamin Muller, Pritish Yuvraj, Rui Hou, Nayan Singhal, Hongjiang Lv, and Bing Liu. 2025. Layer swapping for zero-shot cross-lingual transfer in large language models. In The Thirteenth International Conference on Learning Representations.

Ankur Bapna and Orhan Firat. 2019. Simple, scalable adaptation for neural machine translation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1538–1548, Hong Kong, China. Association for Computational Linguistics.

Yoshua Bengio, Tristan Deleu, Nasim Rahaman, Nan Rosemary Ke, Sebastien Lachapelle, Olexa Bilaniuk, Anirudh Goyal, and Christopher Pal. 2020. A meta-transfer objective for learning to disentangle causal mechanisms. In International Conference on Learning Representations.
Leo Breiman. 1996. Bagging predictors. Machine Learning, 24(2):123–140.

Tyler Chang, Zhuowen Tu, and Benjamin Bergen. 2022. The geometry of multilingual language model representations. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 119–136, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.

Rochelle Choenni, Dan Garrette, and Ekaterina Shutova. 2023. How do languages influence each other? Studying cross-lingual data sharing during LM fine-tuning. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 13244–13257, Singapore. Association for Computational Linguistics.

Rochelle Choenni, Ekaterina Shutova, and Dan Garrette. 2024. Examining modularity in multilingual LMs via language-specialized subnetworks. In Findings of the Association for Computational Linguistics: NAACL 2024, pages 287–301, Mexico City, Mexico. Association for Computational Linguistics.

Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168.

Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440–8451, Online. Association for Computational Linguistics.

Róbert Csordás, Sjoerd van Steenkiste, and Jürgen Schmidhuber. 2021. Are neural nets modular? Inspecting functional modularity through differentiable weight masks.
https://arxiv.org/abs/2505.18356v1
In International Conference on Learning Representations.

Rui Dai, Sile Hu, Xu Shen, Yonggang Zhang, Xinmei Tian, and Jieping Ye. 2025. Leveraging submodule linearity enhances task arithmetic performance in LLMs. In The Thirteenth International Conference on Learning Representations.

John Dang, Arash Ahmadian, Kelly Marchisio, Julia Kreutzer, Ahmet Üstün, and Sara Hooker. 2024a. RLHF can speak many languages: Unlocking multilingual preference optimization for LLMs. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 13134–13156, Miami, Florida, USA. Association for Computational Linguistics.

John Dang, Shivalika Singh, Daniel D'souza, Arash Ahmadian, Alejandro Salamanca, Madeline Smith, Aidan Peppin, Sungjin Hong, Manoj Govindassamy, Terrence Zhao, Sandra Kublik, Meor Amer, Viraat Aryabumi, Jon Ander Campos, Yi-Chern Tan, Tom Kocmi, Florian Strub, Nathan Grinsztajn, Yannis Flet-Berliac, and 26 others. 2024b. Aya expanse: Combining research breakthroughs for a new multilingual frontier. Preprint, arXiv:2412.04261.

DeepSeek-AI, Aixin Liu, Bei Feng, Bing Xue, Bingxuan Wang, Bochao Wu, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, Damai Dai, Daya Guo, Dejian Yang, Deli Chen, Dongjie Ji, Erhang Li, Fangyun Lin, Fucong Dai, and 181 others. 2025. Deepseek-v3 technical report. Preprint, arXiv:2412.19437.

William Fedus, Barret Zoph, and Noam Shazeer. 2022. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. The Journal of Machine Learning Research, 23(1).

Orhan Firat, Kyunghyun Cho, and Yoshua Bengio. 2016. Multi-way, multilingual neural machine translation with a shared attention mechanism. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 866–875, San Diego, California. Association for Computational Linguistics.
Jonathan Frankle and Michael Carbin. 2019. The lottery ticket hypothesis: Finding sparse, trainable neural networks. In International Conference on Learning Representations.

Jonathan Frankle, Gintare Karolina Dziugaite, Daniel Roy, and Michael Carbin. 2020. Linear mode connectivity and the lottery ticket hypothesis. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 3259–3269. PMLR.

Jonathan Frankle, Gintare Karolina Dziugaite, Daniel Roy, and Michael Carbin. 2021. Pruning neural networks at initialization: Why are we missing the mark? In International Conference on Learning Representations.

Leo Gao, Jonathan Tow, Baber Abbasi, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Alain Le Noac'h, Haonan Li, Kyle McDonell, Niklas Muennighoff, Chris Ociepa, Jason Phang, Laria Reynolds, Hailey Schoelkopf, Aviya Skowron, Lintang Sutawika, and 5 others. 2024. The language model evaluation harness.

Timur Garipov, Pavel Izmailov, Dmitrii Podoprikhin, Dmitry P Vetrov, and Andrew G Wilson. 2018. Loss surfaces, mode connectivity, and fast ensembling of DNNs. In Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc.

Michael Hanna, Ollie Liu, and Alexandre Variengien. 2023. How does GPT-2 compute greater-than?: Interpreting mathematical abilities in a pre-trained language model. In Thirty-seventh Conference on Neural Information Processing Systems.

Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer
learning for NLP. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 2790–2799. PMLR.

Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2022. LoRA: Low-rank adaptation of large language models. In International Conference on Learning Representations.

Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson. 2020. XTREME: A massively multilingual multi-task benchmark for evaluating cross-lingual generalisation. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 4411–4421. PMLR.

Gabriel Ilharco, Marco Tulio Ribeiro, Mitchell Wortsman, Ludwig Schmidt, Hannaneh Hajishirzi, and Ali Farhadi. 2023a. Editing models with task arithmetic. In The Eleventh International Conference on Learning Representations.

Gabriel Ilharco, Marco Tulio Ribeiro, Mitchell Wortsman, Ludwig Schmidt, Hannaneh Hajishirzi, and Ali Farhadi. 2023b. Editing models with task arithmetic. In The Eleventh International Conference on Learning Representations.

Fan Jiang, Honglin Yu, Grace Chung, and Trevor Cohn. 2025. Franken-adapter: Cross-lingual adaptation of llms by embedding surgery. Preprint, arXiv:2502.08037.

Damjan Kalajdzievski. 2023. A rank stabilization scaling factor for fine-tuning with lora. Preprint, arXiv:2312.03732.

Amir Hossein Kargaran, Ali Modarressi, Nafiseh Nikeghbal, Jana Diesner, François Yvon, and Hinrich Schütze. 2024. Mexa: Multilingual evaluation of english-centric llms via cross-lingual alignment. Preprint, arXiv:2410.05873.

Mohammed Safi Ur Rahman Khan, Priyam Mehta, Ananth Sankar, Umashankar Kumaravelan, Sumanth Doddapaneni, Suriyaprasaad B, Varun G, Sparsh Jain, Anoop Kunchukuttan, Pratyush Kumar, Raj Dabre, and Mitesh M. Khapra. 2024.
IndicLLMSuite: A blueprint for creating pre-training and fine-tuning datasets for Indian languages. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 15831–15879, Bangkok, Thailand. Association for Computational Linguistics.

Philipp Koehn and Kevin Knight. 2002. Learning a translation lexicon from monolingual corpora. In Proceedings of the ACL-02 Workshop on Unsupervised Lexical Acquisition, pages 9–16, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.

Takeshi Kojima, Itsuki Okimura, Yusuke Iwasawa, Hitomi Yanaka, and Yutaka Matsuo. 2024. On the multilingual ability of decoder-based pre-trained language models: Finding and controlling language-specific neurons. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 6919–6971, Mexico City, Mexico. Association for Computational Linguistics.

Wen Lai, Mohsen Mesgar, and Alexander Fraser. 2024. LLMs beyond English: Scaling the multilingual capability of LLMs with cross-lingual feedback. In Findings of the Association for Computational Linguistics: ACL 2024, pages 8186–8213, Bangkok, Thailand. Association for Computational Linguistics.

Hugo Laurençon, Lucile Saulnier, Thomas Wang, Christopher Akiki, Albert Villanova del Moral, Teven Le Scao, Leandro von Werra, Chenghao Mou, Eduardo González Ponferrada, Huu Nguyen, Jörg Frohberg, Mario Šaško, Quentin Lhoest, Angelina McMillan-Major, Gérard Dupont, Stella Biderman, Anna Rogers, Loubna Ben Allal, Francesco De Toni, and 35 others. 2022. The bigscience ROOTS corpus: A 1.6TB composite multilingual dataset. In Thirty-sixth Conference on Neural Information Processing Systems Datasets and
Benchmarks Track.

Team Llama, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, and et al. 2024. The llama 3 herd of models. Meta Research.

Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In International Conference on Learning Representations.

Weicheng Ma, Kai Zhang, Renze Lou, Lili Wang, and Soroush Vosoughi. 2021. Contributions of transformer attention heads in multi- and cross-lingual tasks. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1956–1966, Online. Association for Computational Linguistics.

Tomas Mikolov, Quoc V. Le, and Ilya Sutskever. 2013. Exploiting similarities among languages for machine translation. Preprint, arXiv:1309.4168.

Arindam Mitra, Hamed Khanpour, Corby Rosset, and Ahmed Awadallah. 2024. Orca-math: Unlocking the potential of slms in grade school math. Preprint, arXiv:2402.14830.

Niklas Muennighoff, Thomas Wang, Lintang Sutawika, Adam Roberts, Stella Biderman, Teven Le Scao, M Saiful Bari, Sheng Shen, Zheng-Xin Yong, Hailey Schoelkopf, Xiangru Tang, Dragomir Radev, Alham Fikri Aji, Khalid Almubarak, Samuel Albanie, Zaid Alyafeai, Albert Webson, Edward Raff, and Colin Raffel. 2022. Crosslingual generalization through multitask finetuning. Preprint, arXiv:2211.01786.

Benjamin Muller, Yanai Elazar, Benoît Sagot, and Djamé Seddah. 2021. First align, then predict: Understanding the cross-lingual ability of multilingual BERT. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2214–2231, Online. Association for Computational Linguistics.

Team NLLB, Marta R.
Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula, Loic Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, and 20 others. 2022. No language left behind: Scaling human-centered machine translation. Meta Research.

Guillermo Ortiz-Jimenez, Alessandro Favero, and Pascal Frossard. 2023. Task arithmetic in the tangent space: Improved editing of pre-trained models. In Advances in Neural Information Processing Systems, volume 36, pages 66727–66754. Curran Associates, Inc.

Keiran Paster, Marco Dos Santos, Zhangir Azerbayev, and Jimmy Ba. 2024. Openwebmath: An open dataset of high-quality mathematical web text. In The Twelfth International Conference on Learning Representations.

Jonas Pfeiffer, Naman Goyal, Xi Lin, Xian Li, James Cross, Sebastian Riedel, and Mikel Artetxe. 2022. Lifting the curse of multilinguality by pre-training modular transformers. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3479–3495, Seattle, United States. Association for Computational Linguistics.

Jonas Pfeiffer, Aishwarya Kamath, Andreas Rücklé, Kyunghyun Cho, and Iryna Gurevych. 2021. AdapterFusion: Non-destructive task composition for transfer learning. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 487–503, Online. Association for Computational Linguistics.

Jonas Pfeiffer, Sebastian Ruder, Ivan Vulić, and Edoardo Ponti. 2023. Modular deep learning. Transactions on Machine Learning Research. Survey Certification.

Jonas Pfeiffer,
Ivan Vulić, Iryna Gurevych, and Sebastian Ruder. 2020. MAD-X: An Adapter-Based Framework for Multi-Task Cross-Lingual Transfer. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7654–7673, Online. Association for Computational Linguistics.

Fred Philippy, Siwen Guo, and Shohreh Haddadan. 2023. Towards a common understanding of contributing factors for cross-lingual transfer in multilingual language models: A review. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5877–5891, Toronto, Canada. Association for Computational Linguistics.

Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual BERT? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4996–5001, Florence, Italy. Association for Computational Linguistics.

Libo Qin, Qiguang Chen, Yuhang Zhou, Zhi Chen, Yinghui Li, Lizi Liao, Min Li, Wanxiang Che, and Philip S. Yu. 2025. A survey of multilingual large language models. Patterns, 6(1):101118.

Zihan Qiu, Zeyu Huang, and Jie Fu. 2024. Unlocking emergent modularity in large language models. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 2638–2660, Mexico City, Mexico. Association for Computational Linguistics.

Anton Razzhigaev, Matvey Mikhalchuk, Elizaveta Goncharova, Nikolai Gerasimenko, Ivan Oseledets, Denis Dimitrov, and Andrey Kuznetsov. 2024. Your transformer is secretly linear. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5376–5384, Bangkok, Thailand. Association for Computational Linguistics.
Angelika Romanou, Negar Foroutan, Anna Sotnikova, Sree Harsha Nelaturu, Shivalika Singh, Rishabh Maheshwary, Micol Altomare, Zeming Chen, Mohamed A. Haggag, Snegha A, Alfonso Amayuelas, Azril Hafizi Amirudin, Danylo Boiko, Michael Chang, Jenny Chim, Gal Cohen, Aditya Kumar Dalmia, Abraham Diress, Sharad Duwal, and 38 others. 2025. INCLUDE: Evaluating multilingual language understanding with regional knowledge. In The Thirteenth International Conference on Learning Representations.

Shuaijie She, Wei Zou, Shujian Huang, Wenhao Zhu, Xiang Liu, Xiang Geng, and Jiajun Chen. 2024. MAPO: Advancing multilingual reasoning through multilingual-alignment-as-preference optimization. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 10015–10027, Bangkok, Thailand. Association for Computational Linguistics.

Freda Shi, Mirac Suzgun, Markus Freitag, Xuezhi Wang, Suraj Srivats, Soroush Vosoughi, Hyung Won Chung, Yi Tay, Sebastian Ruder, Denny Zhou, Dipanjan Das, and Jason Wei. 2023. Language models are multilingual chain-of-thought reasoners. In The Eleventh International Conference on Learning Representations.

Shivalika Singh, Angelika Romanou, Clémentine Fourrier, David I. Adelani, Jian Gang Ngui, Daniel Vila-Suero, Peerat Limkonchotiwat, Kelly Marchisio, Wei Qi Leong, Yosephine Susanto, Raymond Ng, Shayne Longpre, Wei-Yin Ko, Sebastian Ruder, Madeline Smith, Antoine Bosselut, Alice Oh, Andre F. T. Martins, Leshem Choshen, and 5 others. 2025. Global mmlu: Understanding and addressing cultural and linguistic biases in multilingual evaluation. Preprint, arXiv:2412.03304.

Shivalika Singh, Freddie Vargus, Daniel D'souza, Börje Karlsson, Abinaya Mahendiran, Wei-Yin Ko, Herumb Shandilya, Jay Patel, Deividas Mataciunas, Laura O'Mahony, Mike Zhang, Ramith Hettiarachchi, Joseph Wilson, Marina Machado, Luisa Moura, Dominik Krzemiński, Hakimeh Fadaei, Irem Ergun, Ifeoma Okoh, and 14 others. 2024. Aya dataset: An open-access collection for multilingual instruction tuning. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 11521–11567, Bangkok, Thailand. Association for Computational Linguistics.

Alessandro Stolfo, Yonatan Belinkov, and Mrinmaya Sachan. 2023. A mechanistic interpretation of arithmetic reasoning in language models using causal mediation analysis. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 7035–7052, Singapore. Association for Computational Linguistics.

Anke Tang, Li Shen, Yong Luo, Nan Yin, Lefei Zhang, and Dacheng Tao. 2024a. Merging multi-task models via weight-ensembling mixture of experts. In Forty-first International Conference on Machine Learning.

Tianyi Tang, Wenyang Luo, Haoyang Huang, Dongdong Zhang, Xiaolei Wang, Xin Zhao, Furu Wei, and Ji-Rong Wen. 2024b. Language-specific neurons: The key to multilingual capabilities in large language models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5701–5715, Bangkok, Thailand. Association for Computational Linguistics.

Falcon-LLM Team. 2024a. The falcon 3 family of open models.

Qwen Team. 2025. Qwen3.

The Mosaic Research Team. 2024b. Introducing dbrx: A new state-of-the-art open llm. Mosaic AI Research.

Atnafu Lambebo Tonja, Bonaventure FP Dossou, Jessica Ojo, Jenalea Rajab, Fadel Thior, Eric Peter Wairagala, Aremu Anuoluwapo, Pelonomi Moiloa, Jade Abbott, Vukosi Marivate, and 1 others. 2024. Inkubalm: A small language model for low-resource african languages. arXiv preprint arXiv:2408.17024.

Elena Voita, David Talbot, Fedor Moiseev, Rico Sennrich, and Ivan Titov. 2019.
Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5797–5808, Florence, Italy. Association for Computational Linguistics.

Leandro von Werra, Younes Belkada, Lewis Tunstall, Edward Beeching, Tristan Thrush, Nathan Lambert, Shengyi Huang, Kashif Rasul, and Quentin Gallouédec. 2020. Trl: Transformer reinforcement learning. https://github.com/huggingface/trl.

Hetong Wang, Pasquale Minervini, and Edoardo Ponti. 2024. Probing the emergence of cross-lingual alignment during LLM training. In Findings of the Association for Computational Linguistics: ACL 2024, pages 12159–12173, Bangkok, Thailand. Association for Computational Linguistics.

Chris Wendler, Veniamin Veselovsky, Giovanni Monea, and Robert West. 2024. Do llamas work in English? On the latent language of multilingual transformers. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 15366–15394, Bangkok, Thailand. Association for Computational Linguistics.

Mitchell Wortsman, Gabriel Ilharco, Samir Ya Gadre, Rebecca Roelofs, Raphael Gontijo-Lopes, Ari S Morcos, Hongseok Namkoong, Ali Farhadi, Yair Carmon, Simon Kornblith, and Ludwig Schmidt. 2022. Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time. In Proceedings of the 39th International Conference on Machine Learning, volume 162, pages 23965–23998. PMLR.

Prateek Yadav, Derek Tam, Leshem Choshen, Colin Raffel, and Mohit Bansal. 2023. Ties-merging: Resolving interference when merging models. In Thirty-seventh Conference on Neural
Information Processing Systems.

An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, Huan Lin, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jingren Zhou, Junyang Lin, Kai Dang, and 22 others. 2024. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115.

Yunzhi Yao, Ningyu Zhang, Zekun Xi, Mengru Wang, Ziwen Xu, Shumin Deng, and Huajun Chen. 2024. Knowledge circuits in pretrained transformers. In Advances in Neural Information Processing Systems, volume 37, pages 118571–118602. Curran Associates, Inc.

Fangcong Yin, Xi Ye, and Greg Durrett. 2024. Lofit: Localized fine-tuning on LLM representations. In The Thirty-eighth Annual Conference on Neural Information Processing Systems.

Le Yu, Bowen Yu, Haiyang Yu, Fei Huang, and Yongbin Li. 2024a. Language models are super mario: Absorbing abilities from homologous models as a free lunch. In International Conference on Machine Learning. PMLR.

Le Yu, Bowen Yu, Haiyang Yu, Fei Huang, and Yongbin Li. 2024b. Language models are super mario: Absorbing abilities from homologous models as a free lunch. In ICML.

Zhengyan Zhang, Yankai Lin, Zhiyuan Liu, Peng Li, Maosong Sun, and Jie Zhou. 2022. MoEfication: Transformer feed-forward layers are mixtures of experts. In Findings of the Association for Computational Linguistics: ACL 2022, pages 877–890, Dublin, Ireland. Association for Computational Linguistics.

Zhihan Zhang, Dong-Ho Lee, Yuwei Fang, Wenhao Yu, Mengzhao Jia, Meng Jiang, and Francesco Barbieri. 2024. PLUG: Leveraging pivot language in cross-lingual instruction tuning. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7025–7046, Bangkok, Thailand. Association for Computational Linguistics.

Yiran Zhao, Wenxuan Zhang, Guizhen Chen, Kenji Kawaguchi, and Lidong Bing. 2024. How do large language models handle multilingualism?
In The Thirty-eighth Annual Conference on Neural Information Processing Systems.

Zhanpeng Zhou, Zijun Chen, Yilan Chen, Bo Zhang, and Junchi Yan. 2024. On the emergence of cross-task linearity in pretraining-finetuning paradigm. In Forty-first International Conference on Machine Learning.

Zhanpeng Zhou, Yongyi Yang, Xiaojiang Yang, Junchi Yan, and Wei Hu. 2023. Going beyond linear mode connectivity: The layerwise linear feature connectivity. In Thirty-seventh Conference on Neural Information Processing Systems.

A Appendix

A.1 Detailed Baseline Results

Detailed Performance of Non-Modular Baselines

SFT Dataset    None  | Data-Mixing  | Math-Only              | Language-Only
SFT Type       Base  | Full   LoRA  | Full   LoRA   Part. FT | Full   LoRA   Part. FT
Swahili        23.5% | 25.1%  24.8% | 25.2%  24.4%  25.0%    | 24.8%  23.8%  24.3%
Bengali        25.6% | 27.9%  26.0% | 26.1%  24.8%  25.6%    | 28.3%  26.6%  26.9%
Telugu          7.9% |  8.2%   8.4% |  7.4%   7.4%   8.0%    |  7.9%   8.6%   8.2%
English        78.4% | 80.4%  80.0% | 81.3%  81.0%  80.6%    | 79.9%  78.8%  79.0%
sw,bn,te AVG   19.0% | 20.4%  19.7% | 19.6%  18.9%  19.5%    | 20.3%  19.7%  19.8%

Table 4: All values presented above are MGSM 2-shot EM accuracy (↑), averaged across four models. Generally, we find that data mixing is the most effective, but with a very small difference in comparison to language-only SFT. We exclude Partial LoRA results for space considerations, but report here that the results were, for all numbers,
0-1% lower than LoRA results.

A.2 CPT Results for QWEN 2.5 in Bengali

Detailed Performance of CPT Experiments

SFT Dataset   None  | Mix   | Math-Only      | Lang-Only      | Simult. | Layer-Swapping
SFT Type      Base  | Full  | Full   Part.FT | Full   Part.FT | Part.FT | Full   Part.FT
Bengali       37.6% | 38.2% | 33.2%  34.2%   | 37.6%  37.8%   | 38.8%   | 39.4%  38.8%
English       76.8% | 77.6% | 80.0%  79.8%   | 74.0%  73.8%   | 80.2%   | 79.2%  79.6%

Table 5: All values presented above are MGSM 2-shot EM accuracy (↑), averaged across two runs. We find that our main results from SFT mostly stand, but we limit our conclusions, as the small number of runs prevents the findings from being statistically significant. We note that CPT trainings degrade performance in the opposite capability more substantially than SFT does. "Mix" is "Data-Mixing" and "Simult." is "Simultaneous FT", shortened for space.

A.3 Number of Gradient Steps Between Switches

Table 6: Ablation over the number of gradient steps to take on a single dataset and single partition of model parameters before switching back to the other data and parameters. All runs were controlled to have exactly the same hyperparameter settings on QWEN 2.5 7B Instruct with the target language Swahili. Four upper layers and eight lower layers were allocated for the target language, with a learning rate of 1.2e-06.

Gradient Steps per Switch | Starting Validation Loss | Ending Validation Loss | Δ for MGSM, Swahili
1                         | 2.301                    | 1.605                  | +3.2%
5                         | 2.301                    | 1.612                  | +2.4%
10                        | 2.301                    | 1.613                  | +2.8%
50                        | 2.301                    | 1.613                  | +2.0%
200                       | 2.301                    | 1.602                  | +0.8%
500                       | 2.301                    | 1.565                  | +1.2%
1171                      | 2.301                    | 1.536                  | −1.2%

These results indicate negligible differences in validation loss between the tested step counts. This implies that the concern discussed in Section 4.5, that the optimizer unfreezes with an outdated loss landscape, is minimal. Or, at least, it implies that the ability to take numerous steps without interruption in the same setting outweighs this concern.
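The alternating schedule ablated in Table 6 can be sketched in a few lines. This is an illustrative reconstruction, not the paper's implementation: the function names (`switch_schedule`, `train_skeleton`), the task labels, and the loop structure are our own assumptions.

```python
# Illustrative sketch of alternating training between two (dataset,
# parameter-partition) pairs every `steps_per_switch` gradient steps.
# Not the paper's code: names and structure here are our own assumptions.

def switch_schedule(total_steps, steps_per_switch):
    """Yield the active task ('math' or 'lang') for each gradient step,
    switching after every `steps_per_switch` consecutive steps."""
    tasks = ("math", "lang")
    for step in range(total_steps):
        yield tasks[(step // steps_per_switch) % 2]

def train_skeleton(total_steps, steps_per_switch):
    """Skeleton loop: at each step, only the partition allocated to the
    active task would be unfrozen and updated on that task's data."""
    history = []
    for step, task in enumerate(switch_schedule(total_steps, steps_per_switch)):
        # A real run would unfreeze the layers allocated to `task`,
        # draw a batch from `task`'s dataset, and take one optimizer step.
        history.append((step, task))
    return history
```

With `steps_per_switch=1` the loop alternates every step, closest to truly simultaneous training; with `steps_per_switch=1171` each capability is trained in one long uninterrupted phase.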
While increasing the gradient steps per switch makes a negligible difference in validation loss, it intuitively leads to a training paradigm farther from truly simultaneous training. We find that on the target task, MGSM in Swahili, performance goes down progressively as the gradient steps per switch increases. This implies that the composition of math and Swahili capabilities works less effectively.

A.4 Details for Reproducibility

For reproducibility, we detail our implementation and hyperparameters for training. The datasets themselves are outlined in Sections A.6 and A.7.

• Training is run on a single cluster of A100s, typically with only one GPU per training run.
• Training methods are developed using the trl python package (von Werra et al., 2020), and models are accessed via HuggingFace.
• The learning rate varied across training runs, but was typically in the range [1.0, 2.0] × 10^-6.
• For LoRA, it ranged over [4.0, 9.0] × 10^-6. The rank and alpha parameters were either (64, 16) or (32, 8).
• Sequence length was either 512 or 1024. Effective batch size was typically 32, except for an effective batch size of 64 for simultaneous training, as described in Section 4.5.
• Evaluation is performed using the Language Model Evaluation Harness (Gao et al., 2024).

A.5 Bar Graph of Per-Language Results

Figure 2: Per-language breakdown of the average performance gain seen during our different types of training,
averaged across four models. We see that while math-only SFT (green) does well for Swahili and mixed-data SFT (red) does well for Bengali, our two modular solutions work consistently well across the three languages. Note: the y-axis is a percentage because the evaluation score is accuracy, not because this figure displays percent change.

A.6 SFT Datasets

Table 7: Datasets used for supervised fine-tuning (SFT) in this project

Math:
- Orca Math word problems dataset from Microsoft (Mitra et al., 2024): https://huggingface.co/datasets/microsoft/orca-math-word-problems-200k

Telugu:
- Aya Dataset from Cohere for AI (Singh et al., 2024): https://huggingface.co/datasets/CohereForAI/aya_dataset
- NLLB English-Telugu translation data from FAIR (NLLB et al., 2022): https://huggingface.co/datasets/allenai/nllb
- Synthetic English instruction dataset, machine translated to Telugu by Telugu-LLM-Labs: https://huggingface.co/collections/Telugu-LLM-Labs/indic-alpaca-datasets-65f2a3687d5cdbce8880c581

Bengali:
- Aya Dataset from Cohere for AI (Singh et al., 2024): https://huggingface.co/datasets/CohereForAI/aya_dataset
- NLLB English-Bengali translation data from FAIR (NLLB et al., 2022): https://huggingface.co/datasets/allenai/nllb
- IndicShareLlama dataset from AI4Bharat (Khan et al., 2024): https://huggingface.co/datasets/ai4bharat/indic-align
- BongChat dataset from Lumatic AI: https://huggingface.co/datasets/lumatic-ai/BongChat-v1-253k

Swahili:
- Aya Dataset from Cohere for AI (Singh et al., 2024): https://huggingface.co/datasets/CohereForAI/aya_dataset
- NLLB English-Swahili translation data from FAIR (NLLB et al., 2022): https://huggingface.co/datasets/allenai/nllb
- Inkuba dataset from Lelapa (Tonja et al., 2024): https://huggingface.co/datasets/lelapa/Inkuba-instruct
- xP3 MT dataset from BigScience, with FLORES samples removed (Muennighoff et al., 2022): https://huggingface.co/datasets/bigscience/xP3mt

All datasets listed above were verified to be used in compliance with their respective
licenses. Each dataset was properly attributed according to its license requirements.

A.7 CPT Datasets

Table 8: Datasets used for continual pretraining (CPT) in this project

Math:
- Open Web mathematical texts collected by the University of Toronto and Cambridge (Paster et al., 2024): https://huggingface.co/datasets/open-web-math/open-web-math

Bengali:
- The ROOTS corpus subset of Bengali Wikipedia from BigScience (Laurençon et al., 2022): https://huggingface.co/datasets/bigscience-data/roots_indic-bn_wikisource

All datasets listed above were verified to be used in compliance with their respective licenses. Each dataset was properly attributed according to its license requirements.

A.8 Off-the-shelf Model Results

To motivate the use of our four models and the three target languages, we provide preliminary results for these models prior to any fine-tuning.

Model        Size | MGSM: EN    SW     BN     TE     | BELEBELE: EN    SW     BN     TE
LLAMA 3.1    8B   |       79.6% 52.0%  32.8%  11.2%  |           88.6% 56.1%  59.3%  53.6%
QWEN 2.5     7B   |       76.8% 12.8%  37.6%  13.6%  |           91.1% 37.2%  64.7%  41.3%
AYA Expanse  8B   |       78.8% 10.8%  21.6%   3.2%  |           81.6% 32.3%  42.3%  29.9%
FALCON 3     7B   |       78.4% 14.4%  10.4%   3.6%  |           85.9% 36.3%  34.8%  30.1%

Table 9: The results on the MGSM (2-shot, EM accuracy (↑)) and BELEBELE (0-shot accuracy (↑)) benchmarks for the four models used in our experiments. We note that for all models, we use the instruction-finetuned version.
arXiv:2505.18363v1 [cs.CL] 23 May 2025

SchemaGraphSQL: Efficient Schema Linking with Pathfinding Graph Algorithms for Text-to-SQL on Large-Scale Databases

AmirHossein Safdarian1, Milad Mohammadi1, Ehsan Jahanbakhsh1, Mona Shahamat Naderi2, Heshaam Faili1
1University of Tehran, Iran  2Sharif University of Technology, Iran
{a.safdarian, miladmohammadi, ehsan.jahanbakhsh, hfaili}@ut.ac.ir, mona.shahamat@sharif.edu

Abstract

Text-to-SQL systems translate natural language questions into executable SQL queries, and recent progress with large language models (LLMs) has driven substantial improvements in this task. Schema linking remains a critical component in Text-to-SQL systems, reducing prompt size for models with narrow context windows and sharpening model focus even when the entire schema fits. We present a zero-shot, training-free schema linking approach that first constructs a schema graph based on foreign key relations, then uses a single prompt to Gemini 2.5 Flash to extract source and destination tables from the user query, followed by applying classical path-finding algorithms and post-processing to identify the optimal sequence of tables and columns that should be joined, enabling the LLM to generate more accurate SQL queries. Despite being simple, cost-effective, and highly scalable, our method achieves state-of-the-art results on the BIRD benchmark, outperforming previous specialized, fine-tuned, and complex multi-step LLM-based approaches. We conduct detailed ablation studies to examine the precision–recall trade-off in our framework. Additionally, we evaluate the execution accuracy of our schema filtering method compared to other approaches across various model sizes.

1 Introduction

Relational databases are foundational to modern data infrastructure, powering analytics, reporting, and decision-making across domains. Yet, querying these databases typically requires fluency in SQL—a barrier for many users.
Text-to-SQL systems aim to democratize access by translating natural language (NL) questions into executable SQL queries (Zhu et al., 2024; Zhang et al., 2024). Enabled by large language models (LLMs), recent systems achieve impressive performance across complex cross-domain settings.

Algorithm 1: Graph-Based Schema Linking
  Input: Question q; schema graph G
  Output: Relevant table set T⋆ ⊆ T
  Step 1: Identify source/destination tables
    (T_src, T_dst) ← LLM_call(q)
  Step 2: Build candidate path set
    C ← ∅
    foreach t_src ∈ T_src, t_dst ∈ T_dst do
      C ← C ∪ ShortestPaths(t_src, t_dst)
  Step 3: Build union path
    U ← ⋃_{p ∈ C} p
  return U

However, bringing these systems to real-world applications introduces new challenges. Enterprise databases often contain hundreds of tables and thousands of columns—far beyond the scale of academic benchmarks. Supplying the entire schema to the model risks exceeding token limits and introduces considerable noise, which can hinder SQL generation and inflate inference cost (Cao et al., 2024; Li et al., 2023c). In practice, user queries typically touch only a small subset of the schema, making it crucial to identify and extract the relevant part—a process known as schema linking (Lei et al., 2020).

Schema linking aims to determine which tables or columns are needed to answer a user question. While early methods relied on exact string matches (Yu et al., 2018), recent work has proposed neural linkers (Gan et al., 2023), retrieval-based modules (Pourreza and Rafiei, 2024), and prompt-based systems (Wang and Liu, 2025). These can
capture semantic signals beyond surface overlap, but typically require supervised training, complex multi-stage pipelines, or brittle prompt engineering. They also struggle with the core trade-off: being precise enough to reduce noise, yet broad enough not to miss critical context (Liu et al., 2024; Wang et al., 2025).

In this work, we ask: Can we perform effective schema linking without relying on specialized fine-tuned models or complex prompting strategies? Our answer is affirmative. We introduce SchemaGraphSQL, a zero-shot schema linking framework that revisits classical algorithmic tools.

Our key idea is to model schema linking as a graph search problem. We treat the database schema as a graph where nodes are tables and edges reflect foreign-key connections. Given a user query, we make a single LLM call to predict coarse-grained source and destination tables, then apply deterministic path-finding algorithms to enumerate all shortest join paths between them. The union of these paths forms a compact sub-schema—guaranteed to be connected and grounded in the query.

This perspective is both simple and surprisingly powerful. To our knowledge, SchemaGraphSQL is the first Text-to-SQL system to rely exclusively on classical graph algorithms for schema linking, using LLMs only for coarse guidance. It requires no training, incurs minimal inference cost, and integrates easily into any downstream parser or LLM-based SQL generator.

Empirical results on the BIRD benchmark show that SchemaGraphSQL achieves new state-of-the-art scores on recall-focused schema linking metrics and improves execution accuracy across multiple SQL generators. We also conduct ablations demonstrating that even this minimal linking method outperforms specialized neural or prompt-based systems in robustness and cost-efficiency.

Main Contributions:

•We introduce a zero-shot schema linking approach that models database schemas as graphs and applies classical path-finding algorithms.
Our method achieves state-of-the-art performance without requiring any training—either for fine-tuning or inference—making it highly suitable for low-resource, real-world scenarios where training data is unavailable or difficult to obtain.

•Our system uses only a single lightweight LLM call (Gemini 2.5 Flash) per query, with minimal token usage (averaging 4593 input and 14 output tokens), significantly reducing inference cost while maintaining ease of integration and deployment.

•We conduct comprehensive empirical evaluations, demonstrating superior schema linking performance compared to fine-tuned and specialized methods. Additionally, we perform detailed ablation studies to examine precision–recall trade-offs and assess the downstream impact on Text-to-SQL execution accuracy across a range of open-source and closed-source models.

2 Related Work

Text-to-SQL systems aim to automatically translate natural language questions into executable SQL queries, thereby enabling non-experts to interact with relational databases. The advent of large language models (LLMs) has significantly advanced this task (Zhang et al., 2024; Zhu et al., 2024), with models like GPT-3.5/4, Gemini, and their open-source variants demonstrating impressive performance across benchmarks. However, as schema size increases, providing the entire schema as input may exceed the model's context window, especially in large-scale databases. Even when using recent LLMs with extended context lengths, supplying the full schema can introduce noise and hinder the
model's ability to focus on relevant elements.

2.1 Schema Linking in Text-to-SQL

Schema linking—the process of aligning natural language mentions to corresponding tables and columns in a database—is a crucial component of Text-to-SQL systems (Lei et al., 2020; Liu et al., 2022; Li et al., 2023c). Early approaches relied on exact string matching or type-based heuristics (Yu et al., 2018), which struggled with synonyms, paraphrases, and complex cross-domain schemas. Recent methods have increasingly leveraged pretrained LLMs and neural encoders to improve linking accuracy (Gan et al., 2023; Glass et al., 2025). Schema linking has proven particularly important for LLM pipelines that operate on large or multi-database environments, where prompt space is limited and precision in schema filtering directly affects SQL generation quality (Cao et al., 2024; Liu et al., 2025).

2.2 Neural and Prompt-Based Linking Strategies

Numerous methods have been proposed to handle schema linking within LLM-based Text-to-SQL systems. Some decouple schema linking as a separate module before SQL generation (Pourreza and Rafiei, 2024; Li et al., 2023a), while others incorporate schema selection as a prompt-driven or retrieval-augmented step (Wang and Liu, 2025). Extractive methods, such as Glass et al. (2025), directly prompt LLMs to list relevant schema items, trading generation flexibility for interpretability and control. RSL-SQL (Cao et al., 2024) proposes a bidirectional pruning mechanism with self-correction to boost recall, while Solid-SQL (Liu et al., 2025) augments training data to improve linking robustness. Despite variations in architecture, a common trend across these systems is the effort to balance schema coverage (recall) with relevance filtering (precision) to avoid overloading the LLM or omitting critical elements.

[Figure 1: Overview of our graph-based schema linking pipeline.]
2.3 Graph-Based Approaches for Schema Linking

A parallel line of work models the database schema as a graph structure, where tables and columns are nodes, and foreign-key or semantic relations form edges. These methods primarily leverage graph neural networks (GNNs) or relation-aware transformers to propagate information across schema components. RAT-SQL (Wang et al., 2020) pioneered relation-aware attention over a joint question–schema graph, inspiring successors such as LGESQL (Cao et al., 2021) (line-graph encoding of meta-relations) and ShadowGNN (Chen et al., 2021) (delexicalised projection for cross-schema generalisation). Later hybrids integrate graph reasoning directly into pretrained LMs, e.g. Graphix-T5 (Li et al., 2023b) and GRL-SQL (Gong and Sun, 2024). Most recently, SQLformer (Bazaga et al., 2024) embeds schema structure as inductive bias in a Transformer encoder and autoregressively generates SQL ASTs as graphs. While graph-enhanced models capture rich global relations, they typically require substantial fine-tuning or architectural changes—an obstacle in low-resource, real-time deployments. Graph-based schema linking methods have recently declined in popularity as LLM-driven approaches have become dominant.

2.4 Classical Graph Algorithms in Schema Linking

In contrast to learned graph encoders, only a handful of systems reuse classical graph algorithms to aid LLMs. DBCopilot (Wang et al., 2025) constructs a directed schema graph and performs depth-first traversal to linearise the sub-schema passed to a lightweight "router"
model. Interactive-T2S (Xiong et al., 2024) equips an LLM agent with a FINDSHORTESTPATH tool that runs breadth-first search over the foreign-key graph to supply valid join chains during multi-turn dialogue. These works demonstrate the practicality of DFS/BFS as auxiliary helpers, but the graph search remains peripheral—responsible only for join validation or routing—rather than serving as the core schema-linking engine.

2.5 Positioning Our Work

While prior literature has thoroughly explored neural and graph-enhanced architectures for schema linking, the explicit use of classical graph algorithms—particularly as the core mechanism for schema linking in LLM-based Text-to-SQL systems—remains rare. Our approach, SchemaGraphSQL, revisits this paradigm by operationalizing schema linking as a path-selection problem on the schema graph. To our knowledge, this is the first work to systematically evaluate and ablate classic path-finding algorithms for schema linking in LLM-driven Text-to-SQL pipelines on real-world benchmarks.

3 Methodology

Notation

Databases. A relational database is represented as D = ⟨T, A, K⟩, where:

•T = {T1, ..., Tn}: set of tables.
•A(Ti): attributes (columns) of table Ti; A = ⋃_{Ti ∈ T} A(Ti) is the global set of attributes.
•K ⊆ T × T: set of foreign key (FK) relations.

The schema graph is the undirected graph G = (T, K), with nodes as tables and edges as FK links. For sparse schemas (fewer than two edges), we further augment the schema graph by adding edges between tables that share a column containing "id" in its name, thus ensuring that the schema graph is sufficiently connected for path enumeration.

Languages.

•L: set of well-formed natural language questions.
•S: set of valid SQL queries.

Given q ∈ L, the objective is to generate Q ∈ S that answers q over D.
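As an illustration, the schema-graph construction just described (FK edges, plus the "id"-column fallback for sparse schemas) might be sketched as follows; the toy tables and helper names are hypothetical, not from the paper, and a real implementation would read them from the database catalog:

```python
from itertools import combinations

def build_schema_graph(tables, fk_relations):
    """Build an undirected schema graph as an adjacency map.

    tables: dict mapping table name -> list of column names
    fk_relations: iterable of (table_a, table_b) foreign-key pairs
    """
    graph = {t: set() for t in tables}
    edges = set()
    for a, b in fk_relations:
        graph[a].add(b)
        graph[b].add(a)
        edges.add(frozenset((a, b)))

    # Sparse-schema fallback: if the graph has fewer than two FK edges,
    # connect tables that share a column whose name contains "id".
    if len(edges) < 2:
        for a, b in combinations(tables, 2):
            shared = {c.lower() for c in tables[a]} & {c.lower() for c in tables[b]}
            if any("id" in c for c in shared):
                graph[a].add(b)
                graph[b].add(a)
    return graph

# Hypothetical toy schema with a single FK edge (the sparse case).
tables = {
    "orders": ["order_id", "customer_id"],
    "customers": ["customer_id", "name"],
    "payments": ["payment_id", "order_id"],
}
graph = build_schema_graph(tables, [("orders", "customers")])
```

Here the single FK edge triggers the fallback, so `orders` and `payments` become connected through their shared `order_id` column.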
This section formalizes the schema linking problem and describes our graph-based, training-free approach for selecting minimal connected sub-schemas to facilitate Text-to-SQL generation. We begin by introducing notation and the problem formulation, then present our graph-based schema linking procedure, and finally detail the configuration space of our approach.

3.1 Problem Formulation

We first introduce the notation used throughout this paper:

Definition 3.1 (Text-to-SQL). Given q and D, Text-to-SQL seeks a function f_NL2SQL : L × D → S that returns an executable SQL query Q = f_NL2SQL(q, D) that answers the user question q on the database D.

Definition 3.2 (Schema Linking). Let G = (T, K) be the schema graph of D. Schema linking selects a connected sub-schema S = ⟨T*, K*⟩ with T* ⊆ T and K* ⊆ K sufficient to express the SQL query answering q. Formally,

g_SL : L × G → P(T),  T* = g_SL(q, G)

Here, K* = {(Ti, Tj) ∈ K | Ti, Tj ∈ T*}. The output sub-schema S defines the smallest set of tables and links needed to answer q while remaining connected within the schema graph.

3.2 Graph-Based Schema Linking as Path Selection

Step 1: Extracting Source and Destination Tables. A single LLM call extracts two subsets of tables from the schema:

•T_s (sources): tables whose columns appear in query conditions or filtering predicates;
•T_d (destinations): tables containing the columns requested as output.

Both sets are guaranteed to be non-empty and may overlap, reflecting cases where the same table is used for both filtering
and output.

We operationalize schema linking as a path-selection task on the schema graph G, which enables systematic and efficient sub-schema identification. This extraction is performed via a single call to Gemini 2.5 Flash, guided by a dedicated system prompt designed to elicit precise identification of source and destination tables from the question and schema. The full prompt is shown in Prompt 1.

Prompt 1: System prompt for source and destination extraction

ROLE & OBJECTIVE
You are a senior data engineer who analyses SQL schemas and maps user questions precisely to source tables (filtering) and destination tables (final result columns).

TASK
Identify:
•Source table(s) (src): contain columns used in filters/conditions.
•Destination table(s) (dst): contain columns returned in the answer.

INSTRUCTIONS
1. Internally inspect every table to determine
   • which tables participate in filtering, and
   • which tables supply the requested output columns.
   Briefly justify your choice internally but do not include that justification in the final answer.
2. Output exactly one line in the following format:
   src=TableA,TableB, dst=TableC,TableD

Step 2: Candidate Path Enumeration. For every pair (Ts, Td) ∈ T_s × T_d, we enumerate all shortest simple paths connecting them in G:

SP(Ts, Td) = { p | p is a simple path Ts ⇝ Td, |p| = dist_G(Ts, Td) }

This set SP(Ts, Td) contains all minimal-length paths in the schema graph between each source and destination table pair. The global candidate set and their union are defined as:

C = ⋃_{Ts ∈ T_s} ⋃_{Td ∈ T_d} SP(Ts, Td),   U = ⋃_{p ∈ C} p

Here, C enumerates all candidate paths, and U is the union of all tables appearing in any candidate path—representing the maximal connected subgraph that could be relevant for the query.

Step 3: Path Selection and Sub-schema Construction. Depending on the configuration (detailed below), the set U is optionally appended to C.
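The enumeration in Step 2 and the union U reduce to a plain BFS that tracks every shortest predecessor of each node. The sketch below is an illustrative implementation over an adjacency-map graph, not the authors' code:

```python
from collections import deque

def all_shortest_paths(graph, src, dst):
    """Enumerate every shortest path src -> dst via BFS with parent tracking."""
    if src == dst:
        return [[src]]
    dist, parents = {src: 0}, {src: []}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        for nxt in graph[node]:
            if nxt not in dist:                  # first visit: record distance
                dist[nxt] = dist[node] + 1
                parents[nxt] = [node]
                queue.append(nxt)
            elif dist[nxt] == dist[node] + 1:    # another shortest predecessor
                parents[nxt].append(node)
    if dst not in dist:
        return []

    def unwind(node):                            # rebuild paths from parent links
        if node == src:
            return [[src]]
        return [p + [node] for par in parents[node] for p in unwind(par)]

    return unwind(dst)

def linked_tables(graph, sources, destinations):
    """Union U of all tables on any shortest path between (src, dst) pairs."""
    candidates = [p for s in sources for d in destinations
                  for p in all_shortest_paths(graph, s, d)]
    return set().union(*candidates) if candidates else set()
```

On a diamond-shaped schema graph (two equally short join routes between a source and a destination), both routes are enumerated and their union keeps every intermediate table, which is exactly the high-recall behavior the force-union configuration relies on.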
A second LLM call (or a deterministic rule) selects a candidate path p* ∈ C, and we set T* := p* as the chosen subset of relevant tables for downstream SQL generation.

3.3 Configurations

To provide flexibility and support empirical analysis, we define a family of selection strategies parameterized by the following flags: let ks = |T_s| > 0, kd = |T_d| > 0, LONGEST ∈ {false, true}, UNION ∈ {false, true}. Table 3.3 summarizes the seven configurations we evaluate, spanning single-source/single-destination and union-based settings.

#   (ks, kd)  LONGEST  UNION
1   (1, 1)    false    true
2   (1, *)    false    true
3   (*, 1)    false    true
4   (*, *)    false    true
5   (*, *)    true     true
6   (*, *)    false    false
7   (*, *)    false    always select U

Here, * means any positive integer. Mode 5 chooses the longest among the shortest paths; Mode 6 excludes U from C; Mode 7 bypasses path selection and deterministically returns the union U. This design enables ablation studies to assess the effect of schema coverage and path selection criteria on final Text-to-SQL accuracy.

3.4 End-to-End Objective

Given configuration Θ, our full pipeline is:

f^Θ_NL2SQL(q, D) = h_GEN(q, g^Θ_SL(q, G))

where g^Θ_SL is our graph-based schema linker and h_GEN is any downstream SQL generator, constrained to use only the filtered schema T*. All pipeline steps operate in a single pass, are fully automatic, and require no training data or domain adaptation.

4 Experimental Setup

4.1 Dataset

All experiments are conducted
on the BIRD development split, which comprises 1,534 natural-language questions over 11 heterogeneous relational databases. For schema linking precision, recall, and exact match rate, we use the BIRD dev set gold queries by extracting the referenced tables. For execution accuracy, we follow the official evaluation script provided by BIRD without modification.

4.2 Compared Methods

SchemaGraphSQL (Ours). Unless otherwise noted, results correspond to Mode 7 in Table 3.3, i.e., we deterministically return the union U of all shortest paths connecting the LLM-identified source and destination tables (cf. Section 3.2). The src/dst extraction prompt (Prompt 1) is executed using google/gemini-2.5-flash-preview at temperature 0.2, while downstream SQL generation is performed at temperature 0.3.

LLM as Schema Linker (Baseline). A single Gemini 2.5 Flash call is prompted to list all tables that must appear in the FROM/JOIN clause given the user question. This mirrors prior "single-step" schema linking approaches while controlling for model and prompt length.

Dense Retriever. We embed each table name (along with its column names) using the multilingual-E5-large-instruct encoder. For each question, the top-k tables (k = 1...6) retrieved via cosine similarity form the predicted schema.

For completeness, we also include published BIRD dev results from recent schema-linking systems such as Extractive Schema Linking for Text-to-SQL (Glass et al., 2025) and LinkAlign (Wang and Liu, 2025). We did not re-run these systems; hence, they are excluded from execution accuracy comparisons.

4.3 LLMs for SQL Generation

Following schema filtering, we evaluate four LLMs for SQL generation:

•google/gemini-2.5-flash-preview;
•google/gemma-3-27b-it;
•google/gemma-3-12b-it;
•google/gemma-3-4b-it.

All calls are made through the respective provider APIs using identical configurations and prompting templates.

4.4 Evaluation Metrics

Schema-level Metrics.
Let G be the gold table set and P the predicted set.

•Precision: The percentage of predicted tables that are actually present in the gold SQL query:

  Precision = |P ∩ G| / |P|

•Recall: The percentage of gold tables that are successfully predicted:

  Recall = |P ∩ G| / |G|

•Fβ Score: The generalized F-score that weights recall β times more than precision:

  Fβ = (1 + β²) |P ∩ G| / (β² |G| + |P|),  β ∈ {1, 6}

•Exact Match Rate (EMR): The percentage of examples where the predicted schema exactly matches the gold schema:

  EMR = (1/N) Σ_{i=1}^{N} I[Pi = Gi]

End-to-End Metric. Execution accuracy is computed using the official BIRD evaluation script: the generated SQL query is executed against the database, and its result must exactly match that of the reference query.

4.5 Implementation Notes

All experiments are conducted via hosted API endpoints; no on-premise hardware is used. Each query incurs (i) one Gemini 2.5 Flash call for schema linking, and (ii) one model call for SQL generation (Gemini 2.5 or Gemma 3). Code, prompts, and outputs will be released to support reproducibility.

5 Results

5.1 Schema Linking Evaluation

Table 1 shows that our primary configuration, SchemaGraphSQL force-union, attains Recall = 95.71% and an F6 = 95.43% on the BIRD development split—surpassing all published systems, including the previous recall-centric leader ExSL f (F6 = 93.92%). Prior work has argued that recall-weighted metrics such as F6 are the most reliable indicator of downstream success, because omitting a relevant table is
far more damaging than including extras (Glass et al., 2025). By pushing both recall and F6 to new highs without any supervised training, SchemaGraphSQL force-union establishes a new performance bar for zero-shot schema linking.

Table 1: Schema Linking Results in Dev Mode

Method                      Exact Match Rate (%)  Precision (%)  Recall (%)  F1 (%)  F6 (%)
LLM as Schema Linker        75.88                 91.79          89.90       90.83   89.95
Retrieval (Top1)            20.08                 86.70          44.46       58.78   45.05
Retrieval (Top2)            26.79                 66.59          67.80       67.19   67.77
Retrieval (Top3)            4.63                  53.67          80.91       64.54   79.82
Retrieval (Top4)            1.24                  45.79          87.64       60.15   85.52
Retrieval (Top5)            1.04                  39.89          91.11       55.49   88.06
Retrieval (Top6)            1.04                  35.43          93.31       51.36   89.37
DIN-SQL                     -                     79.90          55.70       65.64   56.16
PET-SQL                     -                     81.60          64.90       72.30   65.26
MAC-SQL                     -                     76.30          56.20       64.73   56.60
MCS-SQL                     -                     79.60          76.90       78.23   76.97
RSL-SQL                     -                     78.10          77.50       77.80   77.52
LinkAlign Agent             -                     77.10          79.40       78.23   79.34
DTS-SQL                     -                     95.07          92.74       93.89   92.80
Gen                         -                     90.40          95.50       92.88   95.35
ExSL c                      -                     95.86          93.94       94.89   93.99
ExSL f                      -                     96.35          93.85       95.08   93.92
SchemaGraphSQL 1-1          71.06                 94.89          84.02       89.12   84.28
SchemaGraphSQL force-union  76.60                 86.21          95.71       90.71   95.43

For users who require a tighter schema, our balanced SchemaGraphSQL n-n variant delivers the best F1 (92.93%) with only a modest drop in recall (95.10%). Exact-match rate also improves over the single-step LLM baseline (75.88%)—rising to 78.29% for n-n and 76.60% for force-union—demonstrating that classical graph search repairs connectivity errors that an LLM alone often misses.

5.2 Ablation Insights

The configuration sweep in Table 2 highlights two actionable lessons:

•Union is essential. Removing the union step (no-union) drops both F1 and EMR, confirming that coverage matters more than compactness.
•Avoid unnecessary hops. Forcing the longest path (force-longest) harms all metrics, indicating that extra intermediate tables add noise without benefit.
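For concreteness, the schema-level metrics of Section 4.4 can be computed directly from predicted and gold table sets. A minimal sketch (the example sets are made up for illustration):

```python
def schema_metrics(predicted, gold, beta=1):
    """Precision, recall, and F_beta over predicted vs. gold table sets."""
    overlap = len(predicted & gold)
    precision = overlap / len(predicted)
    recall = overlap / len(gold)
    b2 = beta * beta
    # Algebraically equal to (1 + b2) * P * R / (b2 * P + R)
    f_beta = (1 + b2) * overlap / (b2 * len(gold) + len(predicted))
    return precision, recall, f_beta

def exact_match_rate(pred_sets, gold_sets):
    """Fraction of examples whose predicted schema equals the gold schema."""
    hits = sum(p == g for p, g in zip(pred_sets, gold_sets))
    return hits / len(gold_sets)

# Hypothetical example: three predicted tables, two of them correct.
pred, gold = {"orders", "customers", "payments"}, {"orders", "customers"}
p, r, f6 = schema_metrics(pred, gold, beta=6)
```

With β = 6 the extra predicted table barely dents the score (F6 ≈ 0.987 despite precision of 2/3), which is why the recall-heavy force-union variant dominates on F6.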
Together, these results validate our design choice: merge all shortest paths for maximum recall, then optionally down-select (e.g., n-n) when higher precision is required.

Table 2: Schema-linking results across graph settings on BIRD-Dev.

Method                        EMR (%)  Prec. (%)  Rec. (%)  F1 (%)  F6 (%)
SchemaGraphSQL 1-1            71.06    94.89      84.02     89.12   84.28
SchemaGraphSQL 1-n            78.16    93.29      91.55     92.41   91.60
SchemaGraphSQL n-1            78.23    90.99      94.86     92.89   94.76
SchemaGraphSQL n-n            78.29    90.87      95.10     92.93   94.98
SchemaGraphSQL force-longest  71.64    89.47      88.45     88.96   88.47
SchemaGraphSQL no-union       73.73    91.39      90.03     90.71   90.07
SchemaGraphSQL force-union    76.60    86.21      95.71     90.71   95.43

5.3 End-to-End Execution Accuracy

Table 3 reports execution accuracy for four LLM generators. Across the board, SchemaGraphSQL yields gains of 6–12% over the single-step baseline. Using Gemini-2.5-Flash, SchemaGraphSQL force-union attains 62.91% total accuracy—only 1.5% short of the oracle "ideal schema linking" setting, implying that most residual errors stem from SQL generation rather than linking.

Improvements concentrate on the Moderate and Challenging subsets: Gemini-2.5-Flash sees a +15% boost on challenging questions, reflecting SchemaGraphSQL's advantage on multi-join queries. For every generator, the high-recall force-union variant outperforms the high-precision 1-1 variant on execution accuracy by 2–7% (Dev) and 4–12% (MiniDev). This affirms that omitting a table
is far more damaging than including extras—LLMs can ignore noise but cannot guess missing joins. Among schema metrics, F6 correlates best with end-to-end success: the highest-F6 model is invariably the highest-accuracy model, whereas precision alone can be misleading.

Table 3: SQL Execution Accuracy Results - Dev

LLM               Method                      Simple (%)  Moderate (%)  Challenging (%)  Total (%)
Gemma-3-4B        Ideal Schema Linking        42.49       21.94         16.67            33.83
                  Baseline                    30.05       13.76         7.64             23.01
                  Retrieval                   33.51       17.20         13.19            26.66
                  SchemaGraphSQL n-n          35.46       17.63         12.50            27.90
                  SchemaGraphSQL 1-1          28.76       11.61         8.33             21.64
                  SchemaGraphSQL force-union  35.35       18.92         20.83            29.01
Gemma-3-12B       Ideal Schema Linking        58.38       41.08         29.86            50.46
                  Baseline                    42.59       22.15         16.67            33.96
                  Retrieval                   46.38       30.97         27.08            39.90
                  SchemaGraphSQL n-n          52.00       35.05         27.78            44.59
                  SchemaGraphSQL 1-1          50.59       29.03         23.61            41.53
                  SchemaGraphSQL force-union  54.38       35.27         26.39            45.96
Gemma-3-27B       Ideal Schema Linking        63.14       47.96         38.19            56.19
                  Baseline                    49.41       31.40         25.69            41.72
                  Retrieval                   52.22       41.51         33.33            47.20
                  SchemaGraphSQL n-n          59.68       45.16         34.03            52.87
                  SchemaGraphSQL 1-1          58.38       41.08         31.94            50.65
                  SchemaGraphSQL force-union  61.19       44.73         37.50            53.98
Gemini-2.5-Flash  Ideal Schema Linking        71.46       55.48         47.92            64.41
                  Baseline                    59.35       41.08         34.72            51.50
                  Retrieval                   64.11       50.97         45.83            58.41
                  SchemaGraphSQL n-n          68.22       53.33         44.44            61.47
                  SchemaGraphSQL 1-1          66.81       51.61         43.06            59.97
                  SchemaGraphSQL force-union  68.32       56.13         50.00            62.91

5.4 Efficiency

Our pipeline adds negligible latency: one Gemini-Flash call consumes on average 4.6K input and 14 output tokens, and the subsequent O(|E|) shortest-path search completes in under 15 ms on commodity hardware. Thus SchemaGraphSQL is compatible with real-time database interfaces and low-resource deployments.

6 Conclusion

We have presented SchemaGraphSQL, a lightweight, zero-shot schema linking framework that integrates classical path-finding algorithms into modern LLM-based Text-to-SQL systems.
Unlike prior work that often relies on heavy prompting techniques or supervised fine-tuning, our method achieves stronger schema linking with minimal computational overhead. Beyond accuracy gains, SchemaGraphSQL offers a transparent and interpretable mechanism for schema filtering, making it well-suited for practical deployment in real-world text-to-SQL systems.

Limitations

While SchemaGraphSQL delivers strong performance on large-scale databases with well-structured foreign key relations, it has several limitations. First, our approach is not optimized for deeply nested or compositional queries that require complex subquery reasoning. Second, on dense schema graphs with excessive or noisy foreign key links, the shortest-path enumeration may yield overly broad candidate sets, affecting precision. Lastly, we treat all join paths equally and do not incorporate heuristics or weights for foreign key importance or estimated join costs, which could further improve path selection and SQL execution quality.

References

Adrián Bazaga, Pietro Liò, and Gos Micklem. 2024. SQLformer: Deep auto-regressive query graph generation for text-to-SQL translation. Preprint, arXiv:2310.18376.

Ruisheng Cao, Lu Chen, Zhi Chen, Yanbin Zhao, Su Zhu, and Kai Yu. 2021. LGESQL: Line graph enhanced text-to-SQL model with mixed local and non-local relations. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2541–2555, Online. Association for Computational Linguistics.

Zhenbiao Cao, Yuanlei Zheng, Zhihao Fan, Xiaojin Zhang, Wei Chen, and
Xiang Bai. 2024. RSL-SQL: Robust schema linking in text-to-SQL generation. Preprint, arXiv:2411.00073.

Zhi Chen, Lu Chen, Yanbin Zhao, Ruisheng Cao, Zihan Xu, Su Zhu, and Kai Yu. 2021. ShadowGNN: Graph projection neural network for text-to-SQL parser. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5567–5577, Online. Association for Computational Linguistics.

Yujian Gan, Xinyun Chen, and Matthew Purver. 2023. Re-appraising the schema linking for text-to-SQL. In Findings of the Association for Computational Linguistics: ACL 2023, pages 835–852, Toronto, Canada. Association for Computational Linguistics.

Michael Glass, Mustafa Eyceoz, Dharmashankar Subramanian, Gaetano Rossiello, Long Vu, and Alfio Gliozzo. 2025. Extractive schema linking for text-to-SQL. Preprint, arXiv:2501.17174.

Zheng Gong and Ying Sun. 2024. Graph reasoning enhanced language models for text-to-SQL. In Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '24, pages 2447–2451, New York, NY, USA. Association for Computing Machinery.

Wenqiang Lei, Weixin Wang, Zhixin Ma, Tian Gan, Wei Lu, Min-Yen Kan, and Tat-Seng Chua. 2020. Re-examining the role of schema linking in text-to-SQL. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6943–6954, Online. Association for Computational Linguistics.

Haoyang Li, Jing Zhang, Cuiping Li, and Hong Chen. 2023a. RESDSQL: Decoupling schema linking and skeleton parsing for text-to-SQL. In AAAI Conference on Artificial Intelligence.

Jinyang Li, Binyuan Hui, Reynold Cheng, Bowen Qin, Chenhao Ma, Nan Huo, Fei Huang, Wenyu Du, Luo Si, and Yongbin Li. 2023b. Graphix-T5: Mixing pre-trained transformers with graph-aware layers for text-to-SQL parsing. Preprint, arXiv:2301.07507.
Jinyang Li, Binyuan Hui, Ge Qu, Jiaxi Yang, Binhua Li, Bowen Li, Bailin Wang, Bowen Qin, Rongyu Cao, Ruiying Geng, Nan Huo, Xuanhe Zhou, Chenhao Ma, Guoliang Li, Kevin C. C. Chang, Fei Huang, Reynold Cheng, and Yongbin Li. 2023c. Can LLM already serve as a database interface? A big bench for large-scale database grounded text-to-SQLs. Preprint, arXiv:2305.03111.

Aiwei Liu, Xuming Hu, Li Lin, and Lijie Wen. 2022. Semantic enhanced text-to-SQL parsing via iteratively learning schema linking graph. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD '22, pages 1021–1030, New York, NY, USA. Association for Computing Machinery.

Geling Liu, Yunzhi Tan, Ruichao Zhong, Yuanzhen Xie, Lingchen Zhao, Qian Wang, Bo Hu, and Zang Li. 2024. Solid-SQL: Enhanced schema-linking based in-context learning for robust text-to-SQL. Preprint, arXiv:2412.12522.

Geling Liu, Yunzhi Tan, Ruichao Zhong, Yuanzhen Xie, Lingchen Zhao, Qian Wang, Bo Hu, and Zang Li. 2025. Solid-SQL: Enhanced schema-linking based in-context learning for robust text-to-SQL. In Proceedings of the 31st International Conference on Computational Linguistics, pages 9793–9803, Abu Dhabi, UAE. Association for Computational Linguistics.

Mohammadreza Pourreza and Davood Rafiei. 2024. DTS-SQL: Decomposed text-to-SQL with small large language models. Preprint, arXiv:2402.01117.

Bailin Wang, Richard Shin, Xiaodong Liu, Oleksandr Polozov, and Matthew Richardson. 2020. RAT-SQL: Relation-aware schema encoding and linking for text-to-SQL parsers. In Proceedings of the 58th Annual Meeting of the
Association for Computational Linguistics, pages 7567–7578, Online. Association for Computational Linguistics.

Tianshu Wang, Xiaoyang Chen, Hongyu Lin, Xianpei Han, Le Sun, Hao Wang, and Zhenyu Zeng. 2025. DBCopilot: Natural language querying over massive databases via schema routing. Preprint, arXiv:2312.03463.

Yihan Wang and Peiyu Liu. 2025. LinkAlign: Scalable schema linking for real-world large-scale multi-database text-to-SQL. Preprint, arXiv:2503.18596.

Guanming Xiong, Junwei Bao, Hongfei Jiang, Yang Song, and Wen Zhao. 2024. Interactive-T2S: Multi-turn interactions for text-to-SQL with large language models. Preprint, arXiv:2408.11062.

Tao Yu, Zifan Li, Zilin Zhang, Rui Zhang, and Dragomir Radev. 2018. TypeSQL: Knowledge-based type-aware neural text-to-SQL generation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 588–594, New Orleans, Louisiana. Association for Computational Linguistics.

Bin Zhang, Yuxiao Ye, Guoqing Du, Xiaoru Hu, Zhishuai Li, Sun Yang, Chi Harold Liu, Rui Zhao, Ziyue Li, and Hangyu Mao. 2024. Benchmarking the text-to-SQL capability of large language models: A comprehensive evaluation. Preprint, arXiv:2403.02951.

Xiaohu Zhu, Qian Li, Lizhen Cui, and Yongkang Liu. 2024. Large language model enhanced text-to-SQL generation: A survey. Preprint, arXiv:2410.06011.

A Prompts

This section includes all system prompts used throughout the SchemaGraphSQL pipeline. These prompts are designed to be modular and reusable across different configurations and model sizes.

•Prompt 2: Selection of the most appropriate join path among candidate schema paths.
•Prompt 3: SQL query generation using the filtered schema and join path.
•Prompt 4: Baseline SQL generation prompt using the full schema without schema linking.
These prompts are issued via Gemini 2.5 Flash with low temperature settings to ensure stability and determinism during inference.

Prompt 2: System prompt for join path selection

ROLE & OBJECTIVE
You are a database expert tasked with selecting the optimal join path to answer user questions using a provided SQL schema.

TASK
Choose the single most appropriate join path from a list of candidates that correctly connects the relevant tables.

INSTRUCTIONS
1. Internally inspect each path to determine:
   • whether it connects all necessary tables,
   • whether joins are complete and valid,
   • and whether it satisfies the intent of the question.
   Briefly justify your decision internally but do not include any reasoning in the final output.
2. Output one line in the following format:
   Final Answer: path_id: <ID>

Prompt 3: System prompt for SQLite query generation after schema linking

ROLE & OBJECTIVE
You are an expert in SQLite query generation. Your task is to generate a valid query to answer a user question based on the given schema and join path.

INPUTS
•Schema: {schema}
•Join Path: {join_path_string}
•Question Context: {evidence_string}

INSTRUCTIONS
1. Use the provided schema and join path to construct a valid SQLite query.
2. Ensure the query correctly answers the user's question.
3. Format the query clearly and confirm it adheres to SQLite syntax.

Prompt 4: Baseline prompt for SQLite query generation

ROLE & OBJECTIVE
You are an expert in SQLite query generation. Your task is to produce a valid
query that answers a user's question using the provided schema.

INPUTS
•Schema: {schema}
•Question Context: {evidence_string}

INSTRUCTIONS
1. Generate a correct SQLite query that answers the user question.
2. Ensure the query is syntactically valid and aligns with the schema.
3. Format the query clearly and cleanly.

B Additional Results

This section presents extended evaluation results that complement those in the main text. We report schema linking scores and execution accuracy on the MINIDEV split of the BIRD dataset to validate robustness and generalization.

•Table 4: Comparison of schema linking methods on MiniDev, including LLM baselines, dense retrievers, and SchemaGraphSQL.
•Table 5: SchemaGraphSQL ablation results across different graph configurations on MiniDev.
•Table 6: End-to-end SQL execution accuracy for all models and schema linking variants on MiniDev, broken down by question difficulty.

These extended results reinforce the strong recall and execution performance of SchemaGraphSQL, especially on complex and multi-table SQL queries.
Table 4: Schema Linking Results on the MiniDev Dataset

Method                 | Exact Match Rate (%) | Precision (%) | Recall (%) | F1 (%) | F6 (%)
LLM as Schema Linker   | 75.70                | 92.82         | 90.56      | 91.68  | 90.62
Retrieval (Top1)       | 14.40                | 86.40         | 41.24      | 55.83  | 41.83
Retrieval (Top2)       | 28.00                | 68.30         | 64.67      | 66.43  | 64.76
Retrieval (Top3)       |  4.80                | 55.00         | 77.73      | 64.42  | 76.88
Retrieval (Top4)       |  1.00                | 47.29         | 85.00      | 60.77  | 83.20
Retrieval (Top5)       |  0.80                | 41.52         | 89.64      | 56.75  | 86.92
Retrieval (Top6)       |  0.80                | 37.06         | 92.26      | 52.87  | 88.69
SchemaGraphSQL (Ours)  | 82.33                | 94.80         | 93.97      | 94.38  | 93.99

Table 5: Schema Linking Results Across Different Graph Settings (MiniDev)

Method                        | Exact Match Rate (%) | Precision (%) | Recall (%) | F1 (%) | F6 (%)
SchemaGraphSQL 1-1            | 64.86                | 96.47         | 79.67      | 87.27  | 80.05
SchemaGraphSQL 1-n            | 74.10                | 95.93         | 87.16      | 91.34  | 87.38
SchemaGraphSQL n-1            | 82.13                | 95.81         | 93.39      | 94.58  | 93.45
SchemaGraphSQL n-n            | 82.33                | 94.80         | 93.97      | 94.38  | 93.99
SchemaGraphSQL force-longest  | 72.29                | 92.97         | 86.19      | 89.45  | 86.36
SchemaGraphSQL no-union       | 74.90                | 95.16         | 87.94      | 91.41  | 88.12
SchemaGraphSQL force-union    | 80.72                | 89.36         | 94.75      | 91.97  | 94.59

Table 6: SQL Execution Accuracy Results (MiniDev)

LLM              | Method                     | Simple (%) | Moderate (%) | Challenging (%) | Total (%)
Gemma-3-4B       | Ideal Schema Linking       | 47.97      | 21.37        | 18.63           | 28.71
                 | Baseline                   | 32.43      | 10.08        |  6.86           | 16.06
                 | Retrieval                  | 36.49      | 18.55        | 13.73           | 22.89
                 | SchemaGraphSQL n-n         | 42.57      | 18.15        | 12.75           | 24.30
                 | SchemaGraphSQL 1-1         | 31.08      | 10.08        |  6.86           | 15.66
                 | SchemaGraphSQL force-union | 41.89      | 18.95        | 15.69           | 25.10
Gemma-3-12B      | Ideal Schema Linking       | 63.51      | 45.56        | 34.31           | 48.59
                 | Baseline                   | 38.51      | 18.95        | 16.67           | 24.30
                 | Retrieval                  | 50.68      | 35.08        | 28.43           | 38.35
                 | SchemaGraphSQL n-n         | 57.43      | 37.50        | 30.39           | 41.97
                 | SchemaGraphSQL 1-1         | 54.73      | 29.03        | 23.53           | 35.54
                 | SchemaGraphSQL force-union | 60.14      | 42.34        | 33.33           | 45.78
Gemma-3-27B      | Ideal Schema Linking       | 72.97      | 53.63        | 43.14           | 57.23
                 | Baseline                   | 50.00      | 27.82        | 21.57           | 33.13
                 | Retrieval                  | 60.81      | 44.76        | 36.27           | 47.79
                 | SchemaGraphSQL n-n         | 66.22      | 50.81        | 35.29           | 52.21
                 | SchemaGraphSQL 1-1         | 61.49      | 38.71        | 27.45           | 43.17
                 | SchemaGraphSQL force-union | 68.92      | 52.02        | 44.12           | 55.42
Gemini-2.5-Flash | Ideal Schema Linking       | 83.78      | 66.13        | 56.86           | 69.48
                 | Baseline                   | 58.78      | 43.95        | 36.27           | 46.79
                 | Retrieval                  | 75.00      | 53.63        | 53.92           | 60.04
                 | SchemaGraphSQL n-n         | 77.03      | 58.87        | 50.98           | 62.65
                 | SchemaGraphSQL 1-1         | 76.35      | 56.85        | 41.18           | 59.44
                 | SchemaGraphSQL force-union | 77.70      | 62.50        | 50.98           | 64.66
arXiv:2505.18366v1 [cs.IR] 23 May 2025

Accepted in ACL 2025

Hard Negative Mining for Domain-Specific Retrieval in Enterprise Systems

Hansa Meghwani*, Amit Agarwal*, Priyaranjan Pattnayak, Hitesh Laxmichand Patel, Srikant Panda
Oracle AI
Correspondence: hansa.meghwani@oracle.com; amit.h.agarwal@oracle.com*

Abstract

Enterprise search systems often struggle to retrieve accurate, domain-specific information due to semantic mismatches and overlapping terminologies. These issues can degrade the performance of downstream applications such as knowledge management, customer support, and retrieval-augmented generation agents. To address this challenge, we propose a scalable hard-negative mining framework tailored specifically for domain-specific enterprise data. Our approach dynamically selects semantically challenging but contextually irrelevant documents to enhance deployed re-ranking models. Our method integrates diverse embedding models, performs dimensionality reduction, and uniquely selects hard negatives, ensuring computational efficiency and semantic precision. Evaluation on our proprietary enterprise corpus (cloud services domain) demonstrates substantial improvements of 15% in MRR@3 and 19% in MRR@10 compared to state-of-the-art baselines and other negative sampling techniques. Further validation on public domain-specific datasets (FiQA, Climate Fever, TechQA) confirms our method's generalizability and readiness for real-world applications.

1 Introduction

Accurate retrieval of domain-specific information significantly impacts critical enterprise processes, such as knowledge management, customer support, and Retrieval Augmented Generation (RAG) agents. However, achieving precise retrieval remains challenging due to semantic mismatches, overlapping terminologies, and ambiguous abbreviations common in specialized fields like finance and cloud computing.
Traditional lexical retrieval techniques, such as BM25 (Robertson and Walker, 1994), struggle due to vocabulary mismatches, leading to irrelevant results and poor user experience.

*The authors contributed equally to this work.

Recent dense retrieval approaches leveraging pre-trained language models, like BERT-based encoders (Karpukhin et al., 2020; Xiong et al., 2020; Guu et al., 2020), mitigate lexical limitations by capturing semantic relevance. Nevertheless, their performance heavily relies on the quality of negative samples: documents incorrectly retrieved due to semantic similarity but lacking contextual relevance. Models trained with existing negative sampling methods (e.g., random sampling, BM25-based static sampling, or dynamic methods like ANCE (Xiong et al., 2020) and STAR (Zhan et al., 2021)) either lack sufficient semantic discrimination or incur high computational costs, thus limiting scalability and practical enterprise deployment. For instance, given a query such as "Steps to deploy a MySQL database on Cloud Infrastructure," most negative sampling techniques select documents discussing non-MySQL database deployments. Conversely, our method strategically selects a hard negative discussing MySQL deployment on-premises, which, despite semantic overlap, is contextually distinct and thus poses a stronger training challenge for the retrieval and re-ranking models.

Our proposed framework addresses these limitations by introducing a novel semantic selection criterion explicitly designed to curate high-quality hard negatives. By uniquely formulating two semantic conditions that select negatives closely resembling query semantics while remaining contextually irrelevant, we significantly reduce the false negatives encountered by existing techniques.
The main contributions of this paper are:
1. A negative mining framework for dynamically selecting semantically challenging hard negatives, leveraging diverse embedding models and semantic filtering criteria to significantly improve re-ranking models in domain-specific retrieval scenarios.
2. Comprehensive evaluations demonstrating consistent and
significant improvements across both proprietary and publicly available datasets, verifying our method's impact and broad applicability across domain-specific use cases.
3. In-depth analysis of critical challenges in handling both short and long-form enterprise documents, laying a clear foundation for targeted future improvements.

Our work directly enhances the semantic discrimination capabilities of re-ranking models, resulting in a 15% improvement in MRR@3 and a 19% improvement in MRR@10 on our in-house cloud-services domain dataset. Further evaluations on public domain-specific benchmarks (FiQA, Climate Fever, TechQA) confirm the generalizability and tangible improvements of our proposed negative mining framework.

2 Related Work

2.1 Hard Negatives in Retrieval Models

The role of hard negatives in training dense retrieval models has been widely studied. Static negatives, such as BM25 (Robertson and Walker, 1994), provide lexical similarity but fail to capture semantic relevance, often leading to overfitting (Qu et al., 2020). Dynamic negatives, introduced in ANCE (Xiong et al., 2020) and STAR (Zhan et al., 2021), adapt during training to provide more challenging contrasts but require significant computational resources due to periodic re-indexing. Our framework addresses these limitations by dynamically identifying semantically challenging negatives using clustering and dimensionality reduction, ensuring scalability and adaptability.

Further studies have explored advanced methods for negative sampling in cross-encoder models (Meghwani, 2024). Localized Contrastive Estimation (LCE) (Guo et al., 2023) integrates hard negatives into cross-encoder training, improving reranking performance when negatives align with the output of the retriever. Similarly, Pradeep et al. (2022) demonstrated the importance of hard negatives even when models undergo advanced pre-training techniques, such as Condenser (Gao and Callan, 2021).
Our work builds on these efforts by offering a scalable approach that can be applied to any domain-heavy enterprise data.

2.2 Negative Sampling Strategies

Effective negative sampling significantly affects retrieval model performance by challenging the model to differentiate between relevant and irrelevant examples. Common strategies include:
• Random Negatives: Efficient but lacking semantic contrast, leading to suboptimal performance (Karpukhin et al., 2020).
• BM25 Negatives: Leverage lexical similarity, but often introduce biases, particularly in semantically rich domains (Robertson and Walker, 1994).
• In-Batch Negatives: Computationally efficient but limited to local semantic contrasts, often underperforming in dense retrieval tasks (Xiong et al., 2020).

Our framework complements these approaches by dynamically generating negatives that balance semantic similarity and contextual irrelevance, avoiding the pitfalls of static or random methods.

2.3 Domain-Specific Retrieval Challenges

Enterprise retrieval systems face unique challenges, such as ambiguous terminology, overlapping concepts, and private datasets (Meghwani, 2024). General-purpose methods such as BM25 or dense retrieval models (Qu et al., 2020) fail to capture domain-specific complexities effectively. Our approach addresses these gaps by curating hard negatives that align with enterprise-specific semantics, improving retrieval precision and robustness for proprietary datasets. We further discuss negative sampling techniques in Appendix A.1.

3 Methodology

To effectively train and fine-tune reranker models for domain-specific retrieval, it is essential to systematically handle technical ambiguities stemming from specialized terminologies, overlapping concepts, and abbreviations prevalent within enterprise domains. We propose a
structured, modular framework that integrates diverse embedding models, dimensionality reduction, and a novel semantic criterion for hard-negative selection. Figure 1 illustrates the high-level pipeline, its components, and their interactions.

Figure 1: Overview of the methodology pipeline for training reranker models, including embedding generation, PCA-based dimensionality reduction, and hard negative selection for fine-tuning.

The re-ranking models fine-tuned using the hard negatives generated by our framework are directly deployed in downstream applications, such as RAG, significantly improving the resolution of customer queries through enhanced retrieval.

Our approach begins by encoding queries and documents into semantically rich vector representations using an ensemble of state-of-the-art bi-encoder embedding models. These embeddings are strategically selected based on multilingual support, embedding quality, training data diversity, context length handling, and performance (details provided in Appendix A.2). To manage embedding dimensionality and improve computational efficiency, Principal Component Analysis (PCA) (Maćkiewicz and Ratajczak, 1993) is utilized to project the concatenated embeddings onto a lower-dimensional space, maintaining 95% of the original variance. We then define two semantic conditions (Eq. 5 and Eq. 6) to dynamically select high-quality hard negatives, addressing semantic similarity challenges and minimizing false negatives. Together, these two conditions ensure that a selected hard negative is not only close to the query (Eq. 5) but also contextually distinct from the true positive, minimizing the risk of selecting topic duplicates or noisy positives (Eq. 6). For example, consider a query Q about deploying MySQL on Oracle Cloud: PD is a guide on that topic, and D is a document about MySQL on-premises, semantically close to Q but distant from PD.
Below we detail each methodological component, emphasizing their contributions to enhancing retrieval precision in domain-specific or enterprise retrieval tasks.

Table 1: Dataset distribution of queries (Q) and positive documents (PD).

         | Total | Train | Test
<Q, PD>  | 5250  | 1000  | 4250

3.1 Dataset Statistics

Our experiments leverage a proprietary corpus containing 36,871 unannotated documents sourced from over 30 enterprise cloud services. Additionally, we prepared 5250 annotated query-positive document pairs (<Q, PD>) for training and testing. Notably, we adopted a non-standard train-test split (as summarized in Table 1), allocating four times more data to testing than training to rigorously evaluate model robustness against varying training data volumes (additional analyses in Appendix A.4). To further validate generalizability, we conduct evaluations on publicly available domain-specific benchmarks: FiQA (finance) (TheFinAI, 2018), Climate Fever (climate science) (Diggelmann et al., 2021), and TechQA (technology) (Castelli et al., 2019). Detailed dataset statistics are provided in Appendix A.2.1.

3.2 Embedding Generation

Embeddings for queries, positive documents, and the corpus are computed via six diverse, high-performance bi-encoder models E_1, E_2, ..., E_6, each selected strategically for capturing complementary semantic perspectives:

E_k(x) ∈ R^{d_k}    (1)

where d_k is the embedding dimension of the k-th model for textual input x. Concatenating these embeddings yields a comprehensive representation:

X_concat = [E_1(x); E_2(x); ...; E_6(x)]    (2)

where X_concat ∈ R^{Σ_{k=1}^{6} d_k} represents the concatenated embedding for the input
x.

3.3 Dimensionality Reduction

To alleviate the computational overhead arising from high-dimensional concatenated embeddings, we apply PCA to reduce dimensionality while preserving semantic richness:

X_PCA = X_concat P    (3)

where P represents the PCA projection matrix. We specifically select PCA for its computational efficiency and scalability, essential given our large enterprise corpus and high-dimensional embedding space. While we empirically evaluated nonlinear dimensionality reduction methods such as UMAP (McInnes et al., 2020) and t-SNE (Van der Maaten and Hinton, 2008), they offered negligible performance improvements over PCA but incurred substantially higher computational costs, making them impractical for deployment at scale in enterprise systems.

3.4 Hard Negative Selection Criteria

We propose two semantic criteria to identify high-quality hard negatives. PCA-reduced embeddings X_PCA are organized around each query Q. For each query-positive document pair (Q, PD), candidate documents D from the corpus are evaluated via cosine distances:

d(Q, PD), d(Q, D), d(PD, D)    (4)

A document D is selected as a hard negative only if it satisfies both criteria:

d(Q, D) < d(Q, PD)    (5)
d(Q, D) < d(PD, D)    (6)

Equation (5) ensures that the candidate negative document is semantically closer to the query than the actual positive document, making it a challenging negative example that can confuse the reranking model. Equation (6) ensures that the selected hard negative is not just query-confusing but also sufficiently dissimilar from the actual positive, avoiding near-duplicates and false negatives. The candidate document D_HN with minimal d(Q, D) satisfying these conditions is chosen as the primary hard negative. Additional hard negatives can similarly be selected based on semantic proximity rankings.

Figure 2: Hard negative selection on the first two PCA components (78% variance).
Q act as centroids, and PD guides the selection of hard negatives, which are chosen based on semantic proximity.

Figure 2 illustrates an example embedding space, clearly depicting the query Q, positive document PD, and selected hard negative D_HN, visualizing the semantic selection criteria. In cases where no documents satisfy these conditions, no hard negatives are selected for that particular query. Further details on our embedding models and fine-tuning using these hard negatives are provided in Appendix A.2.

4 Experiments & Results

To evaluate the effectiveness of our proposed hard-negative selection framework, we conduct extensive experiments on our internal cloud-specific enterprise dataset, as well as domain-specific open-source benchmarks. We systematically compare our approach against multiple competitive negative sampling methods and perform detailed ablation studies to understand the contribution of individual framework components. Complete details on experimental setups and hyperparameters are provided in Appendix A.3.

4.1 Results & Discussion

Comparative Analysis of Negative Sampling Strategies. Table 3 presents a detailed comparison of our negative sampling technique against several established methods, including Random, BM25, In-batch, STAR, and ADORE+STAR.

Table 2: Comparative performance benchmarking of our in-house reranker across multiple domain-specific datasets. The reranker is fine-tuned (FT) with different negative sampling techniques, highlighting the effectiveness of our proposed hard-negative mining method (HN).

Re-ranker (Fine-tuned w/)  |   Internal    |     FiQA      | Climate-FEVER |    TechQA
                           | MRR@3  MRR@10 | MRR@3  MRR@10 | MRR@3  MRR@10 | MRR@3  MRR@10
Baseline (No Fine-tuning)  | 0.42   0.45   | 0.45   0.48   | 0.44   0.46   | 0.57   0.61
In-batch Negatives         | 0.47   0.52   | 0.46   0.52   | 0.44   0.47   | 0.57   0.62
STAR                       | 0.53   0.56   | 0.51   0.54   | 0.47   0.49   | 0.61   0.63
ADORE+STAR                 | 0.54   0.57   | 0.52   0.54   | 0.48   0.52   | 0.63   0.66
Our Proposed HN            | 0.57   0.64   | 0.54   0.56   | 0.52   0.55   | 0.65   0.69

Table 3: Comparison of negative sampling methods for fine-tuning (FT) the in-house cross-encoder reranker model. The proposed framework achieves 15% and 19% improvements in MRR@3 and MRR@10, respectively, over baseline methods.

Negative Sampling Method   | MRR@3 | MRR@10
Baseline                   | 0.42  | 0.45
FT with Random Neg         | 0.47  | 0.51
FT with BM25 Neg           | 0.49  | 0.54
FT with In-batch Neg       | 0.47  | 0.52
FT with BM25+In-batch Neg  | 0.52  | 0.54
FT with STAR               | 0.53  | 0.56
FT with ADORE+STAR         | 0.54  | 0.57
FT with our HN             | 0.57  | 0.64

The baseline is defined as the performance of our internal reranker model without any fine-tuning. Our method achieves notable relative improvements of 15% in MRR@3 and 19% in MRR@10 over this baseline. The semantic nature of our hard negatives allows the reranker to effectively distinguish contextually irrelevant but semantically similar documents. In contrast, simpler baselines like Random or BM25 negatives suffer from the lack of semantic consideration, while advanced methods like STAR and ADORE+STAR occasionally miss subtle semantic nuances that our selection criteria address effectively.

Generalization Across Open-source Models. To validate the robustness and versatility of our framework, we evaluated various open-source embedding and reranker models (Table 4), clearly demonstrating improvements across all models when fine-tuned using our proposed negative sampling compared to ADORE+STAR and the baseline (no fine-tuning).
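The hard-negative criteria of Section 3.4 (Eqs. 5 and 6) that underlie these gains reduce to a simple filter over cosine distances. A minimal numpy sketch, assuming embeddings are already PCA-reduced; function names and the toy vectors are illustrative, not the paper's code:

```python
import numpy as np

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    """d(x, y) = 1 - cosine similarity, the distance used in Eq. (4)."""
    return 1.0 - float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_hard_negative(q, pd_vec, corpus):
    """Return the index of the candidate minimizing d(Q, D) subject to
    Eq. (5): d(Q, D) < d(Q, PD) and Eq. (6): d(Q, D) < d(PD, D).
    Returns None when no candidate qualifies (as in the paper, no hard
    negative is selected for that query)."""
    d_q_pd = cosine_distance(q, pd_vec)
    best_idx, best_dist = None, float("inf")
    for i, d_vec in enumerate(corpus):
        d_q_d = cosine_distance(q, d_vec)
        d_pd_d = cosine_distance(pd_vec, d_vec)
        if d_q_d < d_q_pd and d_q_d < d_pd_d and d_q_d < best_dist:
            best_idx, best_dist = i, d_q_d
    return best_idx

# Toy 2-D example: the first candidate is close to Q but distinct from PD,
# so it satisfies both criteria; the second is far from Q and fails Eq. (5).
q = np.array([1.0, 0.0])
pd_vec = np.array([0.8, 0.6])
corpus = [np.array([0.95, 0.31]), np.array([0.0, 1.0])]
idx = select_hard_negative(q, pd_vec, corpus)
```

Scanning the full corpus per query is shown for clarity; in practice an approximate nearest-neighbor index over d(Q, D) would shortlist candidates first.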
Notably, rerankers with multilingual capabilities, such as the BGE-Reranker and Jina Reranker, demonstrated pronounced improvements, likely benefiting from our embedding ensemble's multilingual semantic richness. Similarly, larger models like e5-mistral exhibit significant gains, reflecting their capacity to exploit nuanced semantic differences provided by our negative samples. This analysis underscores the general applicability and model-agnostic benefits of our approach.

Table 4: Performance benchmarking (MRR@3) of reranker and embedding models using the proposed hard negative selection framework, compared with ADORE+STAR and baseline methods.

Model                                                         | Baseline | ADORE+STAR | Ours
Alibaba-NLP (gte-multilingual-reranker-base)                  | 0.39     | 0.42       | 0.45
BGE-Reranker (bge-reranker-large)                             | 0.44     | 0.47       | 0.52
Cohere Embed English Light (Cohere-embed-english-light-v3.0)  | 0.32     | 0.34       | 0.38
Cohere Embed Multilingual (Cohere-embed-multilingual-v3.0)    | 0.34     | 0.37       | 0.40
Cohere Reranker (rerank-multilingual-v2.0)                    | 0.42     | 0.45       | 0.49
IBM Reranker (re2g-reranker-nq)                               | 0.40     | 0.43       | 0.46
Infloat Reranker (e5-mistral-7b-instruct)                     | 0.35     | 0.38       | 0.42
Jina Reranker v2 (jina-reranker-v2-base-multilingual)         | 0.45     | 0.48       | 0.53
MS-MARCO (ms-marco-MiniLM-L-6-v2)                             | 0.41     | 0.43       | 0.46
Nomic AI Embed Text (nomic-embed-text-v1.5)                   | 0.33     | 0.36       | 0.39
NVIDIA NV-Embed-v2                                            | 0.38     | 0.41       | 0.44
Salesforce SFR-Embedding-2_R                                  | 0.37     | 0.40       | 0.43
Salesforce SFR-Embedding-Mistral                              | 0.36     | 0.39       | 0.42
T5-Large                                                      | 0.41     | 0.44       | 0.47

Effectiveness on Domain-specific Public Datasets. We further tested our method's adaptability across diverse public domain-specific datasets (FiQA, Climate-FEVER, TechQA), as shown in Table 2. Each dataset presents distinct retrieval challenges, ranging from technical jargon in TechQA to complex domain-specific reasoning in Climate-FEVER. Fine-tuning with our generated hard negatives consistently improved retrieval across these varied datasets. FiQA exhibited significant gains, likely due to the semantic differentiation required in finance-specific queries.
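All comparisons above are reported in MRR@k. For reference, a standard implementation of the metric; the function and variable names are ours, not from the paper's evaluation code:

```python
def mrr_at_k(ranked_relevance: list, k: int) -> float:
    """Mean Reciprocal Rank at cutoff k.
    ranked_relevance[i] is the binary relevance (1 = the positive document)
    of the ranked results for query i, in rank order."""
    total = 0.0
    for rels in ranked_relevance:
        for rank, rel in enumerate(rels[:k], start=1):
            if rel:
                total += 1.0 / rank
                break  # only the first relevant hit contributes
    return total / len(ranked_relevance)

# Three queries: positive at rank 2, rank 1, and rank 4 respectively.
runs = [[0, 1, 0], [1, 0, 0], [0, 0, 0, 1]]
mrr3 = mrr_at_k(runs, 3)    # (1/2 + 1 + 0) / 3 = 0.5
mrr10 = mrr_at_k(runs, 10)  # (1/2 + 1 + 1/4) / 3
```

The cutoff k truncates the ranking before looking for the positive, which is why the third query contributes 0 at k=3 but 1/4 at k=10.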
These results demonstrate that our negative
sampling method is not only effective within our internal enterprise corpus but also valuable across diverse, domain-specific public datasets, indicating broad applicability and domain independence.

Table 5: Performance comparison of the in-house reranker without fine-tuning (Baseline) versus fine-tuned (FT) with our proposed hard negatives (HN), evaluated separately on short and long documents.

Document Type    | Model              | MRR@3 | MRR@10
Short Documents  | Baseline           | 0.481 | 0.526
                 | FT w/ proposed HN  | 0.610 | 0.662
Long Documents   | Baseline           | 0.423 | 0.477
                 | FT w/ proposed HN  | 0.475 | 0.521

Performance Analysis on Short vs. Long Documents. An explicit analysis of short versus long documents (Table 5) revealed differential performance gains. Short documents (under 1024 tokens) experienced substantial improvements (MRR@3 improving from 0.481 to 0.61), attributed to minimal semantic redundancy and tokenization constraints. Conversely, long documents showed more moderate improvements (MRR@3 from 0.423 to 0.475), primarily due to embedding truncation that causes loss of context and increased semantic complexity. Future research should focus explicitly on developing hierarchical or segment-based embedding methods to address these limitations.

Ablation Studies. To clearly understand the impact of the individual components of the framework, we conducted systematic ablation studies (Table 6). Training with positive documents alone produced only slight gains (+0.03 MRR@3), reaffirming the critical role of high-quality hard negatives. Evaluating individual embedding models separately indicated varying performance due to their differing semantic representations and underlying training. However, the concatenation of diverse embeddings provided significant performance improvements (+0.15 MRR@3), clearly highlighting the advantages of capturing semantic diversity.

Additionally, PCA-based dimensionality reduction analysis identified the optimal variance threshold at 95%.
Lower thresholds resulted in marked semantic degradation, reducing retrieval performance. This trade-off highlights PCA as an essential efficiency-enhancing step for the framework. Collectively, these detailed analyses underscore our method's strengths, limitations, and methodological rationale, providing clear empirical justification for each design decision.

Table 6: Results of the ablation study showing the impact of embeddings, PCA variance thresholds, and positive documents on MRR, on the in-house re-ranker model.

#   | Proposed Strategy                | MRR@3 | MRR@10
1   | Baseline                         | 0.42  | 0.45
    Positive Document (PD) Only
2   | Fine-tuning with PD Only         | 0.45  | 0.51
    Hard Negative (HN) with Embedding E_k
3a  | HN with E_1 + PD                 | 0.45  | 0.51
3b  | HN with E_2 + PD                 | 0.47  | 0.53
3c  | HN with E_3 + PD                 | 0.51  | 0.55
3d  | HN with E_4 + PD                 | 0.45  | 0.52
3e  | HN with E_5 + PD                 | 0.48  | 0.51
3f  | HN with E_6 + PD                 | 0.49  | 0.52
3g  | HN with X_concat + PD            | 0.57  | 0.64
    X_PCA Variance Impact + PD
4a  | HN with X_PCA (99% Variance)     | 0.57  | 0.64
4b  | HN with X_PCA (95% Variance)     | 0.57  | 0.64
4c  | HN with X_PCA (90% Variance)     | 0.55  | 0.63
4d  | HN with X_PCA (80% Variance)     | 0.51  | 0.58
4e  | HN with X_PCA (70% Variance)     | 0.49  | 0.56

4.2 Case Studies: Examples of Hard Negative Impact

Figure 3 illustrates similar topics in the domain of cloud computing. To demonstrate the qualitative benefits of the proposed framework, we present two case studies where the baseline and fine-tuned models produce different ranking results. These examples highlight the significance of hard negatives in distinguishing semantically similar
but contextually irrelevant documents.

Figure 3: Illustrations of similar topics in the domain of Cloud Computing.

Case Study 1: Disambiguating Technical Acronyms.
• Query (Q): "What is VCN in Cloud Infrastructure?"
• Positive Document (PD): A document explaining "Virtual Cloud Network (VCN)" in Cloud Infrastructure, detailing its setup and usage.
• Hard Negative (HN): A document discussing "Virtual Network Interface Card (VNIC)" in the context of networking hardware.

Baseline Result: The baseline model incorrectly ranks the hard negative above the positive document due to overlapping terms such as "virtual" and "network."
Proposed Method Result: The fine-tuned model ranks the positive document higher, correctly identifying the contextual match between the query and the description of VCN. This improvement is attributed to the triplet loss training with hard negatives.

Case Study 2: Domain-Specific Terminology.
• Query (Q): "How does the CI WAF handle incoming traffic?"
• Positive Document (PD): A document explaining the Web Application Firewall (WAF) in CI, its configuration, and traffic filtering mechanisms.
• Hard Negative (HN): A document discussing general firewall configurations in networking.

Baseline Result: The baseline model ranks the hard negative higher due to lexical overlap between the terms "firewall" and "traffic."
Proposed Method Result: The proposed framework ranks the positive document higher, leveraging domain-specific semantic representations.

These case studies illustrate the practical advantages of training with hard negatives, especially in domains with overlapping terminology or acronyms. Additional detailed analyses, illustrative practical implications for enterprise applications, and explicit future directions are discussed in detail in Appendices A.4 and A.5.
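The triplet loss training mentioned in Case Study 1 pushes the query embedding toward the positive and away from the hard negative. A minimal numpy sketch of the standard triplet margin loss over a (query, positive, hard negative) triple; the margin value and names are illustrative, not the paper's training configuration:

```python
import numpy as np

def triplet_margin_loss(q, pos, neg, margin=0.2):
    """L = max(0, d(q, pos) - d(q, neg) + margin), with d = cosine distance.
    The loss is zero once the positive is closer to the query than the
    hard negative by at least the margin."""
    def cos_dist(a, b):
        return 1.0 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(0.0, cos_dist(q, pos) - cos_dist(q, neg) + margin)

# Satisfied triplet: positive aligned with the query, negative orthogonal.
loss_ok = triplet_margin_loss(
    np.array([1.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0]))
# Violated triplet: the hard negative sits where the positive should be,
# so the loss is large and produces a useful gradient during fine-tuning.
loss_bad = triplet_margin_loss(
    np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 0.0]))
```

Well-chosen hard negatives keep this loss non-zero early in training, which is precisely why they provide a stronger signal than random negatives.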
5 Conclusion

We introduced a scalable, modular framework leveraging dynamic ensemble-based hard-negative mining to significantly enhance re-ranking models in enterprise and domain-specific retrieval scenarios. Our method dynamically curates semantically challenging yet contextually irrelevant negatives, allowing re-ranking models to effectively discriminate subtle semantic differences. Empirical evaluations on proprietary enterprise data and diverse public domain-specific benchmarks demonstrated substantial improvements of up to 15% in MRR@3 and 19% in MRR@10 over state-of-the-art negative sampling techniques, including BM25, In-Batch Negatives, STAR, and ADORE+STAR.

Our approach offers clear practical benefits in real-world deployments, benefiting downstream applications such as knowledge management, customer support systems, and Retrieval-Augmented Generation (RAG), where retrieval precision directly influences user satisfaction and Generative AI effectiveness. The strong performance and generalizability across various domains further underscore the framework's readiness for industry-scale deployment.

Future work will focus on extending our framework to handle incremental updates of enterprise knowledge bases and exploring real-time negative sampling strategies for continuously evolving corpora, further enhancing the adaptability and robustness required in practical industry settings.

6 Limitations

While our approach advances the state of hard negative mining and encoder-based retrieval, several limitations remain that open avenues for future research. One key challenge is the performance disparity between short and long documents. Addressing this requires more effective document chunking strategies and the development of hierarchical representations to preserve context across segments. Additionally, the retrieval of long documents is complicated by semantic redundancy and truncation, warranting deeper
analysis of their structural complexity. Our current use of embedding concatenation for ensembling could also be refined; future work should evaluate alternative fusion techniques such as weighted averaging or attention-based mechanisms. Moreover, extending the retrieval framework to support cross-lingual and multilingual scenarios would enhance its utility in globally distributed applications.

References

AMIT AGARWAL. 2021. Evaluate generalisation & robustness of visual features from images to video. ResearchGate. Available at https://doi.org/10.13140/RG.2.2.33887.53928.

Amit Agarwal, Srikant Panda, and Kulbhushan Pachauri. 2024a. Synthetic document generation pipeline for training artificial intelligence models. US Patent App. 17/994,712.

Amit Agarwal, Srikant Panda, and Kulbhushan Pachauri. 2025. FS-DAG: Few shot domain adapting graph networks for visually rich document understanding. In Proceedings of the 31st International Conference on Computational Linguistics: Industry Track, pages 100–114, Abu Dhabi, UAE. Association for Computational Linguistics.

Amit Agarwal, Hitesh Patel, Priyaranjan Pattnayak, Srikant Panda, Bhargava Kumar, and Tejaswini Kumar. 2024b. Enhancing document AI data generation through graph-based synthetic layouts. arXiv preprint arXiv:2412.03590.

Jina AI. 2023. jina-reranker-v2-base-multilingual.

Arian Askari, Mohammad Aliannejadi, Evangelos Kanoulas, and Suzan Verberne. 2023. Generating synthetic documents for cross-encoder re-rankers: A comparative study of ChatGPT and human experts.

Jiaqi Bai, Hongcheng Guo, Jiaheng Liu, Jian Yang, Xinnian Liang, Zhao Yan, and Zhoujun Li. 2023. GripRank: Bridging the gap between retrieval and generation via the generative knowledge improved passage ranking. Preprint, arXiv:2305.18144.
Vittorio Castelli, Rishav Chakravarti, Saswati Dana, Anthony Ferritto, Radu Florian, Martin Franz, Dinesh Garg, Dinesh Khandelwal, Scott McCarley, Mike McCawley, Mohamed Nasr, Lin Pan, Cezar Pendus, John Pitrelli, Saurabh Pujar, Salim Roukos, Andrzej Sakrajda, Avirup Sil, Rosario Uceda-Sosa, Todd Ward, and Rong Zhang. 2019. The TechQA dataset. Preprint, arXiv:1911.02984.

Cohere. 2023a. Cohere-embed-multilingual-v3.0. Available at: https://cohere.com/blog/introducing-embed-v3.

Cohere. 2023b. Reranker model. Available at: https://docs.cohere.com/v2/docs/reranking-with-cohere.

Gabriel de Souza P. Moreira, Radek Osmulski, Mengyao Xu, Ronay Ak, Benedikt Schifferer, and Even Oldridge. 2024. NV-Retriever: Improving text embedding models with effective hard-negative mining. Preprint, arXiv:2407.15831.

Thomas Diggelmann, Jordan Boyd-Graber, Jannis Bulian, Massimiliano Ciaramita, and Markus Leippold. 2021. Climate-FEVER: A dataset for verification of real-world climate claims. Preprint, arXiv:2012.00614.

Karan Dua, Praneet Pabolu, and Mengqing Guo. 2024. Generating templates for use in synthetic document generation processes. US Patent App. 18/295,765.

Karan Dua, Praneet Pabolu, and Ranjeet Kumar Gupta. 2025. Generation of synthetic doctor-patient conversations. US Patent App. 18/495,966.

Fangxiaoyu Feng, Yinfei Yang, Daniel Cer, Naveen Arivazhagan, and Wei Wang. 2022. Language-agnostic BERT sentence embedding. Preprint, arXiv:2007.01852.

Luyu Gao and Jamie Callan. 2021. Condenser: A pre-training architecture for dense retrieval. In EMNLP 2021 - 2021 Conference on Empirical Methods in Natural Language Processing, Proceedings, pages 981–993.

Michael Glass, Gaetano Rossiello, Md Faisal Mahbub Chowdhury, Ankita Naik, Pengshan Cai, and Alfio Gliozzo. 2022. Re2G: Retrieve, rerank, generate.
In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2701–2715, Seattle, United States. Association for Computational Linguistics.

Biyang Guo, Xin Zhang, Ziyuan Wang, Minqi Jiang, Jinran Nie, Yuxuan Ding, Jianwei Yue, and Yupeng Wu. 2023. How close is ChatGPT to human experts? Comparison corpus, evaluation, and detection.

https://arxiv.org/abs/2505.18366v1

Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020. REALM: Retrieval-augmented language model pre-training.

E. K. Jasila, N. Saleena, and K. A. Abdul Nazeer. 2023. An efficient document clustering approach for devising semantic clusters. Cybernetics and Systems, pages 1–18.

Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, pages 6769–6781.

Chankyu Lee, Rajarshi Roy, Mengyao Xu, Jonathan Raiman, Mohammad Shoeybi, Bryan Catanzaro, and Wei Ping. 2024. NV-Embed: Improved techniques for training LLMs as generalist embedding models. arXiv preprint arXiv:2405.17428.

Fulu Li, Zhiwen Xie, and Guangyou Zhou. 2024. Theme-enhanced hard negative sample mining for open-domain question answering. In ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 12436–12440.

Xianming Li and Jing Li. 2023. Angle-optimized text embeddings. arXiv preprint arXiv:2309.12871.

Ye Liu, Kazuma Hashimoto, Yingbo Zhou, Semih Yavuz, Caiming Xiong, and Philip S. Yu. 2021. Dense hierarchical retrieval for open-domain question answering. In Conference on Empirical Methods in Natural Language Processing.

Sean MacAvaney, Andrew Yates, Arman Cohan, and Nazli Goharian. 2019. CEDR: Contextualized embeddings for document ranking. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1101–1104.

Andrzej Maćkiewicz and Waldemar Ratajczak. 1993. Principal components analysis (PCA). Computers & Geosciences, 19(3):303–342.

Leland McInnes, John Healy, and James Melville. 2020. UMAP: Uniform manifold approximation and projection for dimension reduction. Preprint, arXiv:1802.03426.
Hansa Meghwani. 2024. Enhancing retrieval performance: An ensemble approach for hard negative mining. Preprint, arXiv:2411.02404.

Vivek Mehta, Mohit Agarwal, and Rohit Kumar Kaliyar. 2024. A comprehensive and analytical review of text clustering techniques. International Journal of Data Science and Analytics, pages 1–20.

Rui Meng, Ye Liu, Shafiq Rayhan Joty, Caiming Xiong, Yingbo Zhou, and Semih Yavuz. 2024a. SFR-Embedding-2: Advanced text embedding with multi-stage training.

Rui Meng, Ye Liu, Shafiq Rayhan Joty, Caiming Xiong, Yingbo Zhou, and Semih Yavuz. 2024b. SFR-Embedding-Mistral: Enhance text retrieval with transfer learning. Salesforce AI Research Blog.

Thanh-Do Nguyen, Chi Minh Bui, Thi-Hai-Yen Vuong, and Xuan-Hieu Phan. 2022. Passage-based BM25 hard negatives: A simple and effective negative sampling strategy for dense retrieval.

Rodrigo Nogueira and Kyunghyun Cho. 2019. Passage re-ranking with BERT.

Rodrigo Nogueira, Wei Yang, Kyunghyun Cho, and Jimmy Lin. 2019. Multi-stage document ranking with BERT.

Zach Nussbaum, John X. Morris, Brandon Duderstadt, and Andriy Mulyar. 2024. Nomic Embed: Training a reproducible long context text embedder. Preprint, arXiv:2402.01613.

Praneet Pabolu, Karan Dua, and Sriram Chaudhury. 2024a. Multi-lingual natural language generation. US Patent App. 18/318,315.

Praneet Pabolu, Karan Dua, and Sriram Chaudhury. 2024b. Multi-lingual natural language generation. US Patent App. 18/318,327.

Srikant Panda, Amit Agarwal, Gouttham Nambirajan, and Kulbhushan Pachauri. 2025a. Out of distribution element detection for information extraction. US Patent App. 18/347,983.

Srikant Panda, Amit Agarwal, and Kulbhushan Pachauri. 2025b. Techniques of information extraction for selection marks. US Patent App. 18/240,344.

Hitesh Laxmichand Patel, Amit Agarwal, Arion Das, Bhargava
Kumar, Srikant Panda, Priyaranjan Pattnayak, Taki Hasan Rafi, Tejaswini Kumar, and Dong-Kyu Chae. 2025. SweEval: Do LLMs really swear? A safety benchmark for testing limits for enterprise use. In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 3: Industry Track), pages 558–582.

Hitesh Laxmichand Patel, Amit Agarwal, Bhargava Kumar, Karan Gupta, and Priyaranjan Pattnayak. 2024. LLM for barcodes: Generating diverse synthetic data for identity documents. arXiv preprint arXiv:2411.14962.

Priyaranjan Pattnayak, Amit Agarwal, Hansa Meghwani, Hitesh Laxmichand Patel, and Srikant Panda. 2025a. Hybrid AI for responsive multi-turn online conversations with novel dynamic routing and feedback adaptation. In Proceedings of the 4th International Workshop on Knowledge-Augmented Methods for Natural Language Processing, pages 215–229.

Priyaranjan Pattnayak, Hitesh Laxmichand Patel, and Amit Agarwal. 2025b. Tokenization matters: Improving zero-shot NER for Indic languages. Preprint, arXiv:2504.16977.

Priyaranjan Pattnayak, Hitesh Laxmichand Patel, Amit Agarwal, Bhargava Kumar, Srikant Panda, and Tejaswini Kumar. 2025c. Clinical QA 2.0: Multi-task learning for answer extraction and categorization. Preprint, arXiv:2502.13108.

Ronak Pradeep, Yuqi Liu, Xinyu Zhang, Yilin Li, Andrew Yates, and Jimmy Lin. 2022. Squeezing water from a stone: A bag of tricks for further improving cross-encoder effectiveness for reranking. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), volume 13185 LNCS, pages 655–670. Springer Science and Business Media Deutschland GmbH.

Yingqi Qu, Yuchen Ding, Jing Liu, Kai Liu, Ruiyang Ren, Wayne Xin Zhao, Daxiang Dong, Hua Wu, and Haifeng Wang. 2020. RocketQA: An optimized training approach to dense passage retrieval for open-domain question answering.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1–67.

Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing.

S. E. Robertson and S. Walker. 1994. Some simple effective approximations to the 2-Poisson model for probabilistic weighted retrieval, pages 232–241. Springer London.

Saba Sturua, Isabelle Mohr, Mohammad Kalim Akram, Michael Günther, Bo Wang, Markus Krimmel, Feng Wang, Georgios Mastrapas, Andreas Koukounas, Nan Wang, and Han Xiao. 2024. jina-embeddings-v3: Multilingual embeddings with task LoRA. Preprint, arXiv:2409.10173.

TheFinAI. 2018. FiQA: A financial question answering dataset. Available at Hugging Face.

Laurens Van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(11).

Liang Wang, Nan Yang, Xiaolong Huang, Linjun Yang, Rangan Majumder, and Furu Wei. 2023. Improving text embeddings with large language models. arXiv preprint arXiv:2401.00368.

Svante Wold, Kim H. Esbensen, and Paul Geladi. 1987. Principal component analysis. Chemometrics and Intelligent Laboratory Systems, 2:37–52.

Shitao Xiao, Zheng Liu, Peitian Zhang, and Niklas Muennighoff. 2023. C-Pack: Packaged resources to advance general Chinese embedding. Preprint, arXiv:2309.07597.

Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul N. Bennett, Junaid Ahmed, and Arnold Overwijk. 2020. Approximate nearest neighbor negative contrastive learning for dense text retrieval.

Zhen Yang, Zhou Shao, Yuxiao Dong, and Jie Tang. 2024. TriSampler: A better negative sampling principle for dense retrieval. Preprint, arXiv:2402.11855.

Jingtao Zhan, Jiaxin Mao, Yiqun Liu, Jiafeng Guo, Min Zhang, and Shaoping Ma. 2021. Optimizing dense retrieval model training with hard negatives. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1503–1512.

Dun Zhang. 2024. stella-embedding-model-2024.

Xin Zhang, Yanzhao Zhang, Dingkun Long, Wen Xie, Ziqi Dai, Jialong Tang, Huan Lin, Baosong Yang, Pengjun Xie, Fei Huang, et al. 2024. mGTE: Generalized long-context text representation and reranking models for multilingual text retrieval. arXiv preprint arXiv:2407.19669.

A Appendix

A.1 Extended Related Work

Hard Negatives in Retrieval Models Static and dynamic hard negatives have been extensively studied. Static negatives, such as those generated by BM25 (Robertson and Walker, 1994) or PassageBM25 (Nguyen et al., 2022), provide challenging lexical contrasts but risk overfitting due to their fixed nature (Qu et al., 2020). Dynamic negatives, as introduced in ANCE (Xiong et al., 2020) and ADORE (Zhan et al., 2021), adapt during training; other effective methods, such as positive-aware mining (de Souza P. Moreira et al., 2024) and theme-enhanced negatives (Li et al., 2024), offer relevant challenges but incur high computational costs due to periodic re-indexing and larger embedding dimensions. Our framework mitigates these issues by leveraging clustering and dimensionality reduction to dynamically identify negatives without requiring re-indexing.

Localized Contrastive Estimation (LCE) (Guo et al., 2023; AGARWAL, 2021) further demonstrated the effectiveness of incorporating hard negatives into cross-encoder training, improving reranking accuracy when negatives align with retriever outputs.
Additionally, Pradeep et al. (2022) highlighted the importance of hard negatives even in advanced pretraining setups like Condenser (Gao and Callan, 2021), which emphasizes their necessity for robust optimization.

Advances in Dense Retrieval and Cross-Encoders Dense retrieval models like DPR (Karpukhin et al., 2020) and REALM (Guu et al., 2020) encode queries and documents into dense embeddings, enabling semantic matching. Recent advances in dense retrieval and ranking include GripRank's generative knowledge-driven passage ranking (Bai et al., 2023), Dense Hierarchical Retrieval's multi-stage framework for efficient question answering (Liu et al., 2021; Pattnayak et al., 2025a,c,b; Patel et al., 2025), and TriSampler's optimized negative sampling for dense retrieval (Yang et al., 2024), collectively enhancing retrieval performance. Cross-encoders, such as monoBERT (Nogueira et al., 2019; Nogueira and Cho, 2019), further improve retrieval precision by jointly encoding query-document pairs but require high-quality training data, particularly challenging negatives (MacAvaney et al., 2019; Panda et al., 2025b). Techniques such as synthetic data generation (Askari et al., 2023; Agarwal et al., 2024a, 2025) augment training datasets but lack the realism and semantic depth provided by our hard negative mining approach.

Dimensionality Reduction in IR Clustering methods have been used to group semantically similar documents, improving retrieval efficiency and training data organization (Mehta et al., 2024; Jasila et al., 2023; Dua et al., 2025; Panda et al., 2025a). Dimensionality reduction techniques like PCA (Wold et al., 1987) enhance scalability by reducing computational complexity. Our framework uniquely combines
these techniques to dynamically identify negatives that challenge retrieval models in a scalable manner.

Synthetic Data in Retrieval Recent work (Askari et al., 2023; Agarwal et al., 2024a,b; Patel et al., 2024; Dua et al., 2024; Pabolu et al., 2024a,b) has explored using large language models to generate synthetic training data for retrieval tasks. While effective in low-resource settings, synthetic data often struggles with factual inaccuracies and domain-specific relevance. In contrast, our framework relies on real-world data to curate semantically challenging negatives, ensuring high-quality training samples without introducing synthetic biases.

Summary of Contributions While previous works address various aspects of negative sampling, hard negatives, and synthetic data, our approach bridges the gap between static and dynamic strategies. By dynamically curating negatives using clustering and dimensionality reduction, we achieve a scalable and semantically precise methodology tailored to domain-specific retrieval tasks.

A.2 Extended Methodology

A.2.1 Dataset Statistics

Queries Length Distribution In this section we analyze the distribution of query lengths in our enterprise dataset. Figure 4 shows that query lengths range from 1 to 25 words, with some queries having very few words. This highlights that user queries can sometimes be just 2-3 words about a topic, increasing the probability of retrieving documents that mention those topics or concepts but are contextually different. Therefore, when we select hard negatives, it is crucial to consider not only the relationship between the query and documents but also the relationship between the positive document and other documents, ensuring a comparison with texts on similar topics and of similar lengths.

Figure 4: Length distribution of queries in the dataset.
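This selection criterion can be illustrated with a small sketch: corpus documents are scored by similarity to both the query and the positive document, and candidates whose length differs too much from the positive's are filtered out. This is a toy illustration with hand-built embeddings, not the framework's implementation; the function name, weighting `alpha`, and length tolerance are assumptions.

```python
import numpy as np

def select_hard_negatives(q_emb, pos_emb, doc_embs, doc_lens, pos_len,
                          k=2, alpha=0.5, len_tol=0.5):
    """Score corpus documents by similarity to BOTH the query and the
    positive document, keeping only documents of comparable length.
    (In practice the positive document itself is excluded by its id.)"""
    def cos(m, v):
        return m @ v / (np.linalg.norm(m, axis=1) * np.linalg.norm(v) + 1e-9)
    score = alpha * cos(doc_embs, pos_emb) + (1 - alpha) * cos(doc_embs, q_emb)
    # drop documents whose relative length gap to the positive exceeds len_tol
    ok = np.abs(doc_lens - pos_len) / max(pos_len, 1) <= len_tol
    return np.argsort(-np.where(ok, score, -np.inf))[:k]

doc_embs = np.array([[1.0, 0.0],     # near-duplicate of the positive
                     [0.8, 0.6],     # topically related
                     [0.0, 1.0],     # unrelated topic
                     [0.95, 0.05]])  # related, but a much longer document
doc_lens = np.array([120, 100, 110, 5000])
q_emb = np.array([1.0, 0.0])
pos_emb = np.array([1.0, 0.0])
negs = select_hard_negatives(q_emb, pos_emb, doc_embs, doc_lens, pos_len=100)
# doc 3 is topically close but rejected by the length constraint
```

Note how the length filter removes document 3 even though it is semantically close, mirroring the point above that hard negatives should be compared against texts of similar length.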
Table 7: Embedding models used to construct Xconcat, combining diverse semantic representations for queries (Q), positive documents (PD), and corpus documents (D).

Model (Ek)             Params (M)  Dimension  Max Tokens
stella_en_400M_v5         435        8192       8192
jina-embeddings-v3        572        1024       8194   (multilingual)
mxbai-embed-large-v1      335        1024        512
bge-large-en-v1.5         335        1024        512
LaBSE                     471         768        256   (multilingual)
all-mpnet-base-v2         110         768        514   (multilingual)

Document Length Distribution As shown in Figure 5, document lengths are significantly longer than query lengths. This disparity in context length affects similarity scores, potentially reducing the accuracy of retrieval systems. In our in-house dataset, each query is paired with a single correct document (though the framework is not limited to one positive or negative document per query). This positive document is crucial for identifying challenging hard negatives and is therefore helpful for encoder-based model training.

Figure 5: Document length distribution in the Enterprise corpus.

A.2.2 Embedding Models

Table 7 lists the embedding models (Zhang, 2024; Sturua et al., 2024; Li and Li, 2023; Xiao et al., 2023; Feng et al., 2022; Reimers and Gurevych, 2019; Zhang et al., 2024) used to construct Xconcat, combining diverse semantic representations for queries (Q), positive documents (PD), and corpus documents (D). These models were selected for their performance, model size, and ability to handle multilingual context, providing complementary strengths in dimensionality and token coverage. By integrating embeddings from these models, the framework captures nuanced semantic relationships crucial for
reranker training.

A.2.3 Unified Contrastive Loss

The unified contrastive loss is designed to improve ranking precision for both bi-encoders and cross-encoders by ensuring that positive documents (PD) are ranked closer to the query (Q) than hard negatives (DHN) by a margin m. The loss is defined as:

L = sum_{i=1}^{N} max(0, m + d(Q_i, PD_i) - d(Q_i, DHN_i))    (7)

where:
• PD_i: positive document associated with query Q_i.
• DHN_i: hard negative document, semantically similar to PD_i but contextually irrelevant.
• d(Q_i, D_i): distance metric measuring relevance between Q_i and D_i.
• m: margin ensuring PD_i is closer to Q_i than DHN_i by at least m, encouraging the model to distinguish between relevant and irrelevant documents effectively.

For bi-encoders, the distance metric is defined as:

d(Q_i, D_i) = 1 - cosine(e_{Q_i}, e_{D_i}),    (8)

where e_{Q_i} and e_{D_i} are the embeddings of the query and document, respectively. For cross-encoders, the distance metric is:

d(Q_i, D_i) = -s(Q_i, D_i),    (9)

where s(Q_i, D_i) is the cross-encoder's relevance score for the query-document pair. This formulation leverages the triplet (Q, PD, DHN) to minimize d(Q_i, PD_i), pulling positive documents closer to the query, while maximizing d(Q_i, DHN_i), pushing hard negatives further away. By emphasizing semantically challenging examples, the model learns sharper decision boundaries for improved ranking precision.

A.3 Experimental Setup

Datasets We evaluate our framework extensively using both proprietary and public datasets:

• Internal Proprietary Dataset: approximately 5,250 query-document pairs on cloud services such as computing, networking, firewall, and AI services. It includes both short (< 1024 tokens) and long (>= 1024 tokens) documents.
• FiQA Dataset: a financial domain-specific dataset widely used for retrieval benchmarking.
• Climate-FEVER Dataset: an environment-specific fact-checking dataset focused on climate-related information retrieval.
• TechQA Dataset: a technical question-answering dataset emphasizing software engineering and technology-related queries.

Training and Fine-tuning All re-ranking models are fine-tuned using a triplet loss with margin, with the same hyper-parameters. Early stopping is employed based on validation MRR@10 scores to prevent overfitting.

Evaluation Metrics Model performance is evaluated using standard retrieval metrics: Mean Reciprocal Rank (MRR) at positions 3 and 10 (MRR@3 and MRR@10), which measure retrieval quality and ranking precision. Each reported metric is averaged across three experimental runs for robustness.

Table 8: Comparison of strategies with varying training data sizes.

Strategy                               Training Data  MRR@3  MRR@10
Baseline                                     0         0.42   0.45
Finetuned with Hard Negatives (Ours)       100         0.46   0.49
                                           200         0.48   0.51
                                           300         0.50   0.53
                                           400         0.52   0.56
                                           500         0.52   0.58
                                           600         0.54   0.60
                                           700         0.54   0.62
                                           800         0.56   0.63
                                           900         0.57   0.64
                                          1000         0.57   0.64

A.4 Extended Results & Ablation

Impact of Training Data Size As shown in Table 8, both MRR@3 and MRR@10 improve as the training data size increases, with more pronounced gains in MRR@10. MRR@3 shows gradual improvement, from 0.42 at the baseline to 0.57 with 1000 examples, highlighting the model's enhanced ability to rank relevant documents within the top 3. MRR@10, on the other hand, shows more significant improvement, from 0.45 to 0.64, indicating that the model benefits more from additional data when considering the top
10 ranked documents.

Our method shows promising results even with smaller training sets, demonstrating the effectiveness of incorporating hard negatives early in the training process. This suggests that hard negatives significantly enhance the model's ability to distinguish relevant from irrelevant documents for a given query, even when data is limited. This approach is particularly beneficial in enterprise contexts, where annotated data may be scarce, enabling quicker improvements in domain-specific retrieval performance.

Models in the Study In our study we compared the performance of other fine-tuned re-ranker (Glass et al., 2022; Wang et al., 2023; Raffel et al., 2020) and embedding models (Zhang et al., 2024; Nussbaum et al., 2024) using hard negatives generated by our proposed framework in Table 4. We benchmarked the BGE-Reranker (Xiao et al., 2023), NV-Embed (Lee et al., 2024), Salesforce-SFR (Meng et al., 2024a,b), jina-reranker (AI, 2023), and Cohere-Reranker (Cohere, 2023a,b).

A.4.1 Analysis of Long vs. Short Documents

Table 5 reveals a consistent disparity in MRR scores between short and long documents, with long documents showing lower performance. Here, we analyze potential reasons and propose mitigation strategies.

Challenges with Long Documents.

• Semantic Redundancy: Long documents often contain repetitive or tangential content, diluting their relevance to a specific query.
• Context Truncation: Fixed-length tokenization (e.g., 512 or 1024 tokens) truncates long documents, potentially discarding critical information.
• Query-to-Document Mismatch: Short queries may not provide sufficient context to match the nuanced information spread across a lengthy document.

Potential Solutions.

• Chunk-Based Retrieval: Split long documents into smaller, semantically coherent chunks and rank them individually.
• Hierarchical Embeddings: Use hierarchical models to aggregate sentence- or paragraph-level embeddings for better context representation.
• Query Expansion: Enhance short queries with additional context using techniques like query rewriting or pseudo-relevance feedback.

This analysis highlights the need for future work to address the inherent challenges of ranking long documents effectively.

A.5 Practical Implications for Enterprise Applications

The proposed framework has significant practical implications for enterprise information retrieval systems, particularly in retrieval-augmented generation (RAG) pipelines.

Improved Ranking Precision. By training with hard negatives, the model ensures that the most relevant documents are retrieved for each query. This is particularly critical for enterprise use cases such as:

• Technical Support: Retrieving precise documentation for customer queries, reducing resolution times.
• Knowledge Management: Ensuring that employees access the most relevant internal resources quickly.

Enhanced Generative Quality. High-quality retrieval directly improves the factual accuracy and coherence of outputs generated by large language models in RAG pipelines. For example:

• Documentation Summarization: Summaries generated by models like GPT are more reliable when based on top-ranked, accurate sources.
• Customer Interaction: Chatbots generate more contextually relevant responses when fed precise retrieved documents.

Scalability and Adaptability. The framework's modular design, including the use of diverse embeddings and clustering-based hard negative selection, allows it to adapt to:

• Different industries (e.g., healthcare, finance, manufacturing).
• Multi-lingual or cross-lingual retrieval tasks.

These practical implications underscore the versatility and enterprise readiness of the proposed framework.
arXiv:2505.18374v1 [cs.CL] 23 May 2025

ShIOEnv: A CLI Behavior-Capturing Environment Enabling Grammar-Guided Command Synthesis for Dataset Curation

Jarrod Ragsdale, The University of Texas at San Antonio, One UTSA Circle, San Antonio, TX, USA, jarrod.ragsdale@utsa.edu
Rajendra Boppana, The University of Texas at San Antonio, One UTSA Circle, San Antonio, TX, USA, rajendra.boppana@utsa.edu

Abstract

Command-line interfaces (CLIs) provide structured textual environments for system administration. Explorations have been performed using pre-trained language models (PLMs) to simulate these environments for safe interaction in high-risk settings. However, their use has been constrained to frozen, large-parameter models like GPT. For smaller architectures to reach a similar level of believability, a rich dataset of CLI interactions is required. Existing public datasets focus on mapping natural-language tasks to commands, omitting crucial execution data such as exit codes, outputs, and environmental side effects, which limits their usability for behavioral modeling. We introduce a Shell Input-Output Environment (ShIOEnv), which casts command construction as a Markov decision process whose state is the partially built sequence and whose actions append arguments. After each action, ShIOEnv executes the candidate and returns its exit status, output, and progress toward a minimal-length behavioral objective. Because the combinatorial argument state-action space is intractable, we derive a context-free grammar from man pages to mask invalid arguments from being emitted. We explore random and proximal-policy-optimization (PPO)-optimized sampling of unrestricted and grammar-masked action spaces to produce four exploration strategies. We observe that grammar masking and PPO significantly improve sample efficiency, producing a higher-quality dataset (maximizing the number of arguments while minimizing redundancies).
Policy-generated datasets of shell input-output behavior pairs are used to fine-tune CodeT5, where we observe an 85% improvement in BLEU-4 when constraining the action space to grammar productions, with an additional 26% improvement when applying PPO. The ShIOEnv environment and curated command behavior datasets are released for use in future research.

1 Introduction

Networked systems typically provide a command-line interface (CLI) as a text environment for efficient administration. Common CLIs include PowerShell, Cisco IOS, and Linux shell variants such as Bash [1, 2, 3]. These CLIs accept input in a well-structured format, which is parsed and executed by the underlying operating system to produce feedback returned to the user. Recent advances in large pre-trained language models (PLMs) have motivated the simulation of such CLIs through language modeling methods to facilitate safe experimentation in contexts such as honeypots, which could be exploited if a functional Linux machine were used [4]. While vanilla PLMs can imitate surface syntax, they require large-scale implementations for accurate modeling and lack grounding in system semantics, having been observed to hallucinate outputs or ignore non-output-producing behaviors [5].

Preprint.

A large and diverse set of samples is required to represent CLI behavior holistically for language modeling, where each input is mapped to its captured execution metadata, such as exit codes, textual output, and observable state changes. Such behavioral datasets remain scarce, with existing resources emphasizing natural language (NL) to command translation [6, 7] or narrowly scoped session logs [8], omitting systematic coverage of argument-behavior relationships. To curate
https://arxiv.org/abs/2505.18374v1
such a dataset requires enumerating a command's arguments to find novel combinations that demonstrate each possible behavior for that command. A naive exhaustive enumeration of command-argument combinations without considering syntactic validity is inefficient, as a substantial subset of combinations will be syntactically invalid or will duplicate the behavior of simpler invocations. An environment capable of evaluating candidate command-argument combinations, identifying which behaviors are already represented and which introduce new patterns, is essential for guiding the selection of representative samples during dataset curation. Environments have been proposed to explore NL-command equivalency or test-case satisfaction in program synthesis tasks [9, 10], though none have investigated their use for command-behavior feedback in isolation. To this end, we introduce ShIOEnv, a Linux shell input-output environment that returns observed execution behaviors and provides execution feedback based on progress toward satisfying an objective specification of irreducibility for observed behaviors. ShIOEnv presents the synthesis of commands as a Markov decision process (MDP) in which the partially constructed sequence constitutes the state, to which an argument (an action) is appended. To address the intractability of naive argument-combination exploration, we investigate efficient exploration methods that constrain a generating agent's action space with a context-free grammar (CFG) derived from man-page specifications to prevent unproductive token selection. Within this environment and exploration paradigm, we optimize an example policy using proximal policy optimization (PPO), not as an end in itself but as a mechanism for guided traversal of the state space to populate a diverse dataset. Datasets generated using this method are used to fine-tune a CodeT5 model, demonstrating that minimizing argument redundancies improves downstream simulation fidelity.
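The idea of grammar-derived masking can be sketched in a few lines: only arguments licensed by the command's productions (and not already used) remain available to the sampling policy. This is a toy sketch, not the paper's CFG implementation; the flag sets shown are illustrative, not complete man-page productions.

```python
import random

# Toy productions: each command maps to the argument terminals it may
# take (illustrative flag subsets, not full man-page grammars).
GRAMMAR = {
    "ls": ["-l", "-a", "-h", "-t"],
    "df": ["-h", "-i", "-T"],
}
TERMINATE = "<eos>"

def valid_actions(state):
    """Grammar mask: only licensed, not-yet-used arguments (plus the
    terminating action) survive; everything else is never emitted."""
    cmd, *args = state
    return [a for a in GRAMMAR[cmd] if a not in args] + [TERMINATE]

def rollout(cmd, rng):
    """Random policy restricted to the grammar-masked action space."""
    state = [cmd]
    while (action := rng.choice(valid_actions(state))) != TERMINATE:
        state.append(action)
    return " ".join(state)

sample = rollout("ls", random.Random(0))
```

Every rollout produced this way is syntactically plausible by construction, which is the source of the sample-efficiency gain: no budget is spent executing invalid flag combinations.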
Our contributions are:

• We present ShIOEnv, a shell command environment providing execution feedback for combinatorial argument state spaces with fine-grained behavioral logging to enable systematic command execution behavior collection [11].
• We formulate a behavior redundancy reward and show that a grammar-constrained approach to constructing arguments outperforms unconstrained action spaces, and that a carefully tuned PPO policy further improves over random command-argument construction.
• We release seven datasets (ShIOEnv-40c) totalling 71K Bash input-output behaviors for 40 commands, generated from six policies and an adapted NL2Bash baseline [12].

2 Background & Related Work

This work aims to curate semantically faithful shell behavior datasets by learning directly from execution feedback inside an instrumented container environment. We review prior work on terminal simulation, existing shell datasets, execution-feedback environments, and reinforcement learning (RL) for syntax-guided exploration.

Terminal Simulation Recent studies on attacker engagement via honeypots, systems whose value derives from their ability to be interacted with by malicious users [13], have explored replacing risky, high-interaction shells with simulated terminals in which a PLM generates output from the provided input [5, 4]. In doing so, a similar range of capabilities can be emulated as on a fully implemented system without exposing a potentially exploitable machine. However, the absence of labeled datasets detailing command behavior forces current approaches to rely on in-context learning from large-scale PLMs, as the range of behaviors is too diverse for local implementation without specialization from a large amount
of data.

CLI Datasets Existing CLI command datasets such as NL2Bash [6] and NL2CMD [7] present natural-language (NL) descriptions paired with commands for transductive tasks. Because these corpora treat the CLI as a textual target rather than an executable program, they neglect to provide that command's effect on the executing environment, e.g., output generation or modification of the filesystem. Additionally, each token's impact on the observed behavior cannot be readily substantiated, given that these datasets lack execution traces or environment state logs tying individual arguments to their induced effects. Alternatively, command traces from training sandboxes and honeypots have been presented in which CLI commands are available [8, 4]. Although the logs record genuine user behavior, they are gathered opportunistically. As a result, the use of each command is constrained to a few frequently used combinations collected in non-persistent environments, making it difficult to reproduce the observed behavior.

Table 1: Terminal command datasets

Dataset                 Input  Output  Context  Env  # Entries  Commands
NL2Bash [6]               ✓                            9,305       135
Svabensky et al. [8]      ✓                           13,446       107
Ragsdale & Boppana [4]    ✓                           32,229        55
NL2CMD [7]                ✓                           71,705        36
ShIOEnv-40c               ✓     ✓       ✓       ✓     71,794        40

Table 1 summarizes the scope of these datasets and their included execution behaviors, where none provide inputs with their corresponding execution behavior. This lack of data motivates our implementation of ShIOEnv to systematically explore each utility's argument space, record behavioral signatures from their execution, and do so in an environment that can be extended to other commands or operating systems.

Execution-Feedback Environments The value of execution traces for code generation is now well established, with environments like InterCode offering a unified framework in which an agent issues generated commands to an executing environment to benchmark operationally equivalent sequences [9].
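The kind of behavioral signature such an environment records can be approximated in a few lines of Python. This is a simplified stand-in, not ShIOEnv's actual API; a real instrumented environment would additionally track filesystem and environment state changes inside a container.

```python
import subprocess

def behavior_signature(command, timeout=5):
    """Execute a candidate command and capture its observable behavior:
    exit status, stdout, and stderr. A full environment would also diff
    filesystem state before and after execution."""
    proc = subprocess.run(command, shell=True, capture_output=True,
                          text=True, timeout=timeout)
    return {"exit": proc.returncode,
            "stdout": proc.stdout,
            "stderr": proc.stderr}

sig = behavior_signature("echo hello")
```

Comparing such signatures across argument subsequences is what allows redundant arguments (those whose removal leaves the signature unchanged) to be detected.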
Similar interactive environments have framed the generation process as a Markov decision process (MDP) in which execution feedback, as a scalar reward obtained from executing test code, guides PLMs towards an objective specification [14]. These works demonstrate that execution feedback can provide a supervision signal to enable the discovery of specification-satisfying or operationally equivalent behaviors. However, such environments require grounding in a specific behavior or labeled correct behavior, limiting their use for scalable data generation. In our redundancy analysis, the constructed command serves as the grounding label against which we compare subsequences obtained by systematically omitting one or more arguments.

Execution-Guided RL Program Synthesis Recent work in program synthesis has investigated fine-tuning PLMs with RL using execution feedback. CodeRL [15] trains an actor–critic agent to maximise unit-test pass rates. PPOCoder [16] augments this objective with a structural-similarity bonus that encourages compilable solutions, while B-Coder [10] shows that conservative value learning can succeed even under sparse pass/fail rewards. AlphaDev [17] frames sorting-network discovery as a single-player game, rewarding instruction-minimal solutions. More recent systems further refine their reward signal: RLTF [18] exploits fine-grained unit-test diagnostics, while StepCoder [19] decomposes long programs into curriculum-guided subtasks and masks unexecuted code during optimisation. Unlike these approaches, we specifically target shell-command synthesis and implement a dense
reward that measures the behavioural minimality of each argument sequence, eliminating the need for pre-existing unit tests or oracle outputs.

Syntax-Constrained Generation for Efficient Exploration A CFG is a formal definition of a language structure where a start token is expanded using production rules that repeatedly expand non-terminal tokens to produce a sequence of terminal tokens [20]. Constraining action spaces with CFGs has been shown to provide immediate gains in sample efficiency for RL tasks in program synthesis environments, including data generation [21, 22]. Other approaches have used the CFG-defined structure to prune infeasible candidates mid-generation [23]. We apply these exploration methods to the CLI environment for data generation by casting the full production set of 40 GNU utilities as a traversable grammar to be expanded into combinations of arguments for redundancy analysis.

Figure 1: df grammar representation.

3 Methodology

This section introduces ShIOEnv's state-action-reward formulation as an MDP, describes an abstracted grammar-constrained action space to efficiently explore the argument action space, and defines a redundancy reward to encourage minimal-length behavior-inducing sequences.

3.1 Naive State–Action Formulation

A Linux command-line input can be represented as a sequence of arguments drawn from the vocabulary V, where V denotes the set of all possible arguments. Using the 4-tuple (S, A, p, r) MDP definition [24], we represent the state as an ordered sequence of tokens of length t from V, in which each new action appends a new token or terminates production:

S = \{(v_1, \dots, v_n) \mid n \geq 1,\ v_i \in V\}, \qquad A = V \cup \{a'\},

where a' indicates a terminating action. A generating policy π interacts with this state-action space by supplying arguments with each action to the environment, for which the environment provides a redundancy reward (Section 3.3.2) and updated sequence.
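As a minimal illustration of this append-or-terminate formulation, the naive MDP can be sketched as a toy environment. This is not the authors' implementation: the class name, vocabulary, and the zero reward stub (standing in for the redundancy signal of Section 3.3.2) are all hypothetical.

```python
# Toy sketch of the naive MDP of Section 3.1: a state is an ordered
# argument sequence, and each action either appends a token from the
# vocabulary V or terminates production (a'). Names are illustrative.
TERMINATE = "<a'>"

class NaiveShellMDP:
    def __init__(self, vocabulary):
        self.vocabulary = list(vocabulary)   # V: all possible arguments
        self.state = []                      # ordered token sequence

    def actions(self):
        # A = V ∪ {a'}
        return self.vocabulary + [TERMINATE]

    def step(self, action):
        # Returns (state, reward, done); the reward stands in for the
        # redundancy signal of Section 3.3.2 (stubbed to 0.0 here).
        if action == TERMINATE:
            return tuple(self.state), 0.0, True
        self.state.append(action)
        return tuple(self.state), 0.0, False

env = NaiveShellMDP(["df", "-h", "--total"])
env.step("df")
env.step("-h")
s, _, done = env.step(TERMINATE)
# s == ("df", "-h"), done is True
```

Without any syntactic constraint, every token in V is a legal action at every step, which is exactly the information-sparse space the next paragraph quantifies.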
However, without knowledge of syntactic constraints, the formulated state-action space of size

|S \times A| = (|V| + 1) \sum_{k} |V|^{k}

is information-sparse. For instance, if the initial token v_0 is not a valid command, execution will fail regardless of the following tokens, and all future actions will yield uninformative returns. The same holds for subsequent tokens that are not valid arguments for the most recently observed command token, making exhaustive enumeration or naive exploration prohibitively sample-inefficient.

3.2 Grammar-Constrained Action Space (Agent View)

Linux command manual pages specify legal command and option orders that can be encoded as CFGs G_c = (N_c, T_c, P_c, s_c) for each utility c ∈ C (e.g., Figure 1 gives each of df's nonterminals N in brackets mapping to productions P from the start token <df> to emit terminals from T). These grammar representations are agent-internal, where the policy π selects productions rather than raw tokens until an argument v ∈ V is constructed. The agent's action space over all grammars is defined as

A_a = \bigcup_{c \in C} \{\, a \mid a \in P_c \,\} \cup \{a'\},

where the set union of all production rules is combined with the terminating action. To enforce grammar constraints, a binary mask inspects the left-most non-terminal n during expansion and applies

x(n, a) = \begin{cases} 1, & a \text{ expands } n, \\ 0, & \text{otherwise}, \end{cases}

where production actions not relevant to the current nonterminal have their mass removed from the sampled action distribution. This masking ensures the policy samples only syntactically valid productions by setting all
actions not in the mask to a large negative value, negating their probability and yielding the production-masked conditional policy π_n:

\pi_n(a \mid s_t) = \frac{x(n, a)\, \pi(a \mid s_t)}{\sum_{a'' \in A_a} x(n, a'')\, \pi(a'' \mid s_t)}.

The renormalized distribution confines exploration to syntactically valid actions, raising the baseline for combinatorial search [25, 26]. After a first-order production finishes, its terminals are concatenated into a single command-level argument and appended to ShIOEnv's action sequence. Because the environment sees only completed arguments, it remains agnostic to their construction, facilitating alternative controllers. However, reward discounting must be carefully managed or disabled to prevent misaligned returns.

3.3 Reward Formulation

Using the sequence synthesis method described, ShIOEnv employs a dense difference-based redundancy reward to reinforce semantically efficient and syntactically valid inputs while penalizing superfluous additions.

3.3.1 Objective Specification

We formalize an objective specification that synthesis policies should converge to. Specifically, we define a command sequence s as satisfying our objective specification if: (i) s executes successfully (returning a zero exit code), and (ii) s produces an output or state change that is unobtainable by any subsequence of s. That is, a candidate sequence s satisfies the objective specification if no proper subsequence s' ⊂ s terminates successfully (exit code 0) while reproducing the output of s or causing an equivalent change in the system context. This specification ensures that generated sequences are valid and minimally sufficient to achieve the observed effect upon execution, ensuring behaviors can be accurately attributed to each argument in the sequence. Formally, the defined objective for sequence s is given as:

E(s) = 0 \;\wedge\; [F(s) = o \vee C(s) = c] \;\wedge\; \big\{\, s' \subset s \wedge E(s') = 0 \wedge [F(s') = o \vee C(s') = c] \,\big\} \rightarrow \emptyset,
where E: s → ℤ is a function returning the exit code from the execution of sequence s, F: s → o is a function returning the text output o produced from executing s, and C: s → c is a function returning the change in context from executing s.

3.3.2 Reward Signal

Our reward combines execution feedback in the form of argument-level redundancy analysis, which, when maximized, satisfies the objective specification. For a candidate state s_t, we define its subsequence with the k-th argument removed as s_{t \setminus k} = (v_1, \dots, v_{k-1}, v_{k+1}, \dots, v_t). For each subsequent argument added, the incremental reward is formulated as follows. We quantify a sequence's argument redundancy by comparing the behavior of the complete sequence to each argument-omitted subsequence. Let D: o_1, o_2 → [0, 1] represent the normalized lexical similarity between two outputs, with a similarity threshold β set to tolerate minor drift caused by temporal differences in execution (Algorithm 1). Reward signals r^d_t, r^c_t ∈ [0, 1] are given as:

r^d_t = 1 - \frac{1}{|s_t|} \sum_{k=2}^{|s_t|} \mathbb{1}\!\left[ D(F(s_t), F(s_{t \setminus k})) \geq \beta \right], \qquad r^c_t = 1 - \frac{1}{|s_t|} \sum_{k=2}^{|s_t|} \mathbb{1}\!\left[ C(s_t) = C(s_{t \setminus k}) \right],

measuring the inverse proportion of redundant arguments, determined by a lack of behavior change under the equivalency given by D and C; a score of one indicates each argument is required for the final behavior to be observed, satisfying the objective specification. These signals are set to zero for the first argument, usually a command, that is added in a sequence, as it is trivially non-redundant and there is no proper subsequence against which to test its removal. Thus, these signals begin from the second argument (k = 2)
onwards. r^d_t and r^c_t can be applied to a per-step reward to inform redundancy changes with each argument addition by measuring changes in r^d_t or r^c_t, providing feedback for each new argument. An increase in these signals indicates that the argument contributes meaningfully to the final behavior, while decreased signals indicate superfluity. To translate this into a stepwise novelty delta, we define r^Δ_t ∈ [0, 1] to indicate changes in redundancy with an argument's addition, where

r^{\Delta}_t = \max(r^d_t, r^c_t) - \max(r^d_{t-1}, r^c_{t-1}) + \mathbb{1}\!\left[ \max(r^d_t, r^c_t) = 1 \right]

is positive when adding the t-th argument indicates a reduction in redundant arguments. This signal scales in magnitude with the number of preceding arguments the t-th argument affects, allowing previously redundant arguments that become non-redundant (and vice versa) to be measured. For perfect sequence construction, the difference of redundancy scores will equal 0, in which case the indicator function ensures that perfect sequence construction is given the maximum incremental reward. max(r^d_t, r^c_t) acts as a pseudo-potential function for sequences with redundant tokens, in that transitions to states are rewarded according to their proximity to the objective [27]. The accumulation of these rewards in the final return encourages the generating agent to add novel behavior-inducing arguments whenever possible. The overall reward signal given for each time step t is

r = \begin{cases} \left( \sum_{k=2}^{t} \delta \right) \left[ \max(r^d_t, r^c_t) - m \right] - \mathbb{1}\!\left[ E(s_t) \neq 0 \right], & \text{if terminal}, \\ \delta\, r^{\Delta}_t, & \text{otherwise}, \end{cases}

where, upon a terminal state being reached (no nonterminals remaining or a terminating action), the final reward is a time-scaled behavior-uniqueness score above or below a set margin m ∈ [0, 1], reduced by one unit to penalize execution failures. Applying this margin explicitly penalizes sequences whose proportion of redundant tokens exceeds m while rewarding those that approach the objective.

Figure 2: Environment episode interaction loop.
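The r^d_t signal can be sketched in a few lines. This is an illustrative toy, not the released implementation: `difflib`'s `ratio` is an assumed stand-in for the similarity D of Algorithm 1, and `toy_run` replaces real shell execution with a function whose output ignores the no-op flag `-x`.

```python
# Illustrative sketch of the redundancy signal r^d_t (Section 3.3.2).
# `run` is a stand-in for executing a sequence and returning its text
# output; difflib's ratio stands in for the normalized similarity D.
import difflib

def similarity(o1: str, o2: str) -> float:
    return difflib.SequenceMatcher(None, o1, o2).ratio()

def r_d(seq, run, beta=0.9):
    """1 minus the fraction of arguments (k >= 2) whose removal leaves
    the output essentially unchanged (similarity >= beta)."""
    if len(seq) < 2:
        return 0.0  # first argument is trivially non-redundant
    full_out = run(seq)
    redundant = sum(
        1 for k in range(1, len(seq))  # 0-indexed: skip the command itself
        if similarity(full_out, run(seq[:k] + seq[k + 1:])) >= beta
    )
    return 1.0 - redundant / len(seq)

# Toy "execution": output is the sorted set of effective flags, with
# "-x" acting as a redundant no-op argument.
def toy_run(seq):
    return " ".join(sorted(a for a in seq if a != "-x"))

print(r_d(["cmd", "-a", "-x"], toy_run))  # "-x" is flagged as redundant
```

The context signal r^c_t would follow the same pattern with a context-diff function C in place of the output comparison.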
By weighting the terminal uniqueness score with the sum of intermediate scaling factors, \sum_{k=1}^{t} \delta, longer sequences are given more weight, preventing exploitation of the reward function by terminating as soon as possible. During non-terminal steps, r^Δ_t provides per-step feedback on redundancy shifts. We scale r^Δ_t by a factor δ ∈ [0, 1] so that the sum of intermediate returns does not overpower the terminal reward signal. By integrating both incremental and terminal reward components, this framework delivers continuous guidance during sequential argument construction and provides a conclusive evaluation upon sequence completion.

4 Evaluation

We assess the usefulness of ShIOEnv's reward signal in representing the objective specification by contrasting four exploration policies. Each policy yields a 10k-sample dataset (Appendix A) used to fine-tune CodeT5 [28]. Policy training is performed on 1x 80GB A100 GPU, whereas CodeT5 tuning is performed on 4x 80GB A100 GPUs running in data parallel. The remainder of this section outlines our experimental protocol, convergence metrics, and the datasets' impact on modeling realistic shell behavior.

4.1 Setup

Training runs on fifty parallel ShIOEnv workers, each controlling an Ubuntu 22.04 Docker container with a custom home directory. The policy is implemented in PyTorch 2.3.1 [29] to parameterize 225 grammar productions spanning forty commands, producing 1,778 distinct actions derived from the corresponding man pages [30]. Figure 2 illustrates this interaction process, where the
agent incrementally constructs command sequences from a start state of a starting directory by sampling productions until all non-terminals are resolved or an arbitrarily chosen 14-argument horizon is met (steps 1-2). Generated sequences are evaluated through behavioral comparison with argument-omitted subsequences (steps 3-4), using ephemeral containers to execute each sequence and subsequence upon the addition of each argument. The experience from each sequence construction is added to an episode buffer for use in policy updates (steps 5-6). The intermediate reward scale δ = 0.071 is set such that the accumulated scale equals one if the argument horizon is reached. Environment-specific (e.g., filenames) or continuous nonterminals are excluded from the learned distribution and instead sampled uniformly from the current state, to curb combinatorial growth and allow the agent to generalize to other environments.

Figure 3: Objective proximity EWMA.

The policy network is parameterized as a transformer encoder that embeds up to 128 terminal tokens into 1,024-dimensional vectors with learned positional embeddings and processes them through ten attention layers (eight heads, hidden size 2,048). A [CLS] vector stores the sequence representation used for action sampling and value estimates for generalized advantage estimation [31]. Learned policies are optimized using Proximal Policy Optimization due to its stability and convergence properties (clip = 0.2, value coefficient = 0.5, entropy coefficient = 0.01, λ = 0.95). We set the return discount factor γ to 1.0 to align temporal credit assignment from sampled production actions with the eventual environmental reward for complete arguments, while assigning a zero "filler" reward to inter-argument expansion actions. The policy updates its parameters via the PPO objective [32] every 1,024 steps on 128-step mini-batches for four epochs with an Adam optimizer with a linearly decaying learning rate of 10^-4.
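The production masking of Section 3.2 amounts to giving invalid productions a large negative logit before the softmax, so probability mass is renormalized over valid expansions of the left-most nonterminal. A minimal pure-Python sketch under assumed names (the toy grammar, `masked_policy`, and `sample` are illustrative, not the released code):

```python
# Sketch of grammar-constrained action sampling (Section 3.2): invalid
# productions receive a large negative logit, so the softmax renormalizes
# probability mass over valid expansions of the left-most nonterminal.
import math
import random

GRAMMAR = {  # hypothetical toy grammar: nonterminal -> its productions
    "<df>": ["df <opts>"],
    "<opts>": ["-h <opts>", "--total <opts>", "<eps>"],
}
ACTIONS = [a for prods in GRAMMAR.values() for a in prods]

def masked_policy(logits, nonterminal):
    """Return pi_n: softmax over logits with x(n, a) = 0 actions masked."""
    NEG_INF = -1e9  # stands in for "large negative value"
    masked = [
        l if a in GRAMMAR[nonterminal] else NEG_INF
        for l, a in zip(logits, ACTIONS)
    ]
    z = max(masked)  # stabilized softmax
    exps = [math.exp(l - z) for l in masked]
    total = sum(exps)
    return [e / total for e in exps]

def sample(probs):
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

probs = masked_policy([0.0] * len(ACTIONS), "<opts>")
# The "<df>" production gets zero mass; the three "<opts>" productions
# share the renormalized distribution.
```

In the actual system the logits would come from the transformer encoder described above; here they are uniform zeros purely to show the renormalization.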
4.2 Objective Satisfaction Convergence

To evaluate how effectively ShIOEnv directs agents toward the objective specification, we optimize our grammar-constrained policy network (GCPN) and ablated versions in which learning and action masking have been removed. These ablations yield an unconstrained policy network (UCPN) alongside random strategies that either preserve grammatical validity (GCRT) or sample indiscriminately from all productions (UCRT). To prevent spurious repetition arising from purely random choices, each random policy incorporates a probabilistic stopping rule, terminating argument construction with a fixed likelihood at every step. To quantify the behavior of the unconstrained samplers under realistic conditions, we begin each input construction by selecting a utility at random, ensuring that every generated command executes stably. For each policy variant, we train two versions: a baseline with an unchanged final reward (m = 0) and one with a moderate redundancy penalty (m = 0.5) to measure the effect of discouraging superfluous expansions. Our evaluation metric examines the synthesis of argument-minimal commands by computing the proportion of arguments whose removal produces a change in execution, which indicates average objective proximity (OP). We track the exponentially weighted moving average (EWMA) of the OP over twenty thousand training episodes with a span of 1,000 to chart each policy's convergence behavior in Figure 3. Restricting exploration to the grammar markedly improves OP, with GCRT at a baseline of 0.3 while UCRT remains around 0.1. Furthermore, positive movement in each learned policy's
OP is evident, with GCPN-m0 and GCPN-m50 surpassing their random analogues within 3K episodes and UCPN-m50 overtaking UCRT after roughly 7K episodes. Interestingly, the margin penalty of 0.5 improves earlier and elevates asymptotic performance, with GCPN-m50 approaching 0.7 OP and converging near 0.6 by 5K episodes, compared with GCPN-m0's plateau at about 0.45. UCPN-m50's apparent late-stage gains are somewhat illusory, as the policy learns to terminate sequences early to avoid pervasive negative rewards. This becomes clear when the gradient updates push the policy to exploratory behaviors with a sharp decline in OP. In contrast, both GCPN variants exhibit smooth, near-monotonically rising EWMA curves, underscoring their robustness and sample-efficiency advantages derived from a constrained action space.

4.3 Dataset Use Case

Table 2: CodeT5 input-output behavioral modeling by dataset

            |            Metadata             |     Metrics
Dataset     | OP_avg | SR%   | T/S    | WT/S   | chrF  | BLEU-4
NL2Bash [6] | 0.379* | 52.53 | 6.913  | 1.147* | 31.93 | 0.221
UCRT        | 0.072  | 10.82 | 7.033  | 0.221  | 28.97 | 0.266
UCPN-m0     | 0.038  | 49.27 | 10.681 | 0.369  | 7.39  | 0.028
UCPN-m50    | 0.214  | 28.02 | 3.998  | 0.658  | 27.48 | 0.272
GCRT        | 0.313  | 60.00 | 5.885  | 1.440  | 57.29 | 0.492
GCPN-m0     | 0.383  | 87.33 | 8.053  | 2.804  | 64.04 | 0.555
GCPN-m50    | 0.681  | 91.93 | 5.531  | 3.596  | 75.63 | 0.618
*estimate

We fine-tune CodeT5 (220M parameters) for ten epochs on six command-behavior datasets, each with 10,000 unique commands collected after 20,000 training episodes, using a linearly decaying Adam optimizer (starting learning rate 10^-4) to minimize categorical cross-entropy loss. Each command is paired with its execution behavior and split 80/10/10 into train, validation, and test sets for CodeT5 to model. Test instances with OP greater than 0.9 are deduplicated and pooled into a unified optimal set of 1,287 samples for evaluation. We assess lexical fidelity with BLEU-4 [33] and character sensitivity with chrF [34] to correlate dataset characteristics with model performance using extrapolated metadata.
Specifically, we report each dataset’s average OP, tokens per sequence (T/S), and OP-weighted tokens per sequence (WT/S), with the latter estimating the proportion of tokens that materially influence behavior to provide insight into each dataset’s behavioral complexity. A 10,000-sample NL2Bash reference set is adapted to ShIOEnv for the same 40 commands, using Bashlex [35] to separate arguments for OP calculation. This parsing approach may split arguments on delimiters such as whitespace, leading ShIOEnv to underestimate a sample’s OP (and thus WT/S), though it does not affect SR% or training metrics, which are based on the observed behaviors of the full sequence. Additional details on command and behavior distributions for each dataset are provided in Appendix A. Table 2 shows clear benefits from grammar-constrained action selection. Grammar-constrained learned policies (GCPN) attain SR%/OP of 87.33/0.383 without a margin and 91.93/0.681 with a 0.5 margin. WT/S varies by dataset, reflecting different generation philosophies: GCPN-m50 achieves optimal non-redundant combinations with WT/S = 3.596, whereas UCPN-m50 produces very short sequences with WT/S = 0.658. Moreover, the margin limits discourage superfluous arguments, as seen by GCPN-m0, which has an average T/S 2.5 points higher while having a 20% lower WT/S compared to GCPN-m50, whose T/S is close to GCRT’s. CodeT5 trained with data generated from a grammar-constrained policy with an OP margin outperformed
all other data-generation methods, with a character F1 (chrF) score of 75.63 and a BLEU-4 score of 0.618, 11% higher than the closest other dataset (GCPN-m0) and 26% higher than the random baseline (GCRT). These gains demonstrate that signals provided by the environment offer meaningful guidance for command synthesis, and that bounding redundancy within a known valid argument space compounds these gains. The performance of CodeT5 trained on grammar-constrained datasets is markedly higher than on unconstrained datasets across all policies. GCRT showed an 81% improvement in BLEU-4 over the best unconstrained method (UCPN-m50), demonstrating that grammar-informed synthesis offers a stronger baseline than unconstrained approaches. We find that NL2Bash, given its OP, performed well below its expected performance due to the dataset's skewed command distribution, where the find command comprises 70% of the original corpus and 86% of our adapted dataset, heavily biasing the model. Interestingly, UCRT, despite an OP of only 0.07, achieved performance comparable to UCPN-m50 and NL2Bash. We attribute this to a limitation of the evaluation metrics: the error patterns prevalent in the UCRT dataset exhibit a superficial, though limited, resemblance to samples within the optimal command testing set. This establishes a baseline for all non-grammar-constrained methods except UCPN-m0, which has the lowest OP, fails to converge to shorter sequences, and cannot attribute even error-inducing arguments to their behaviors. Since model gains correlate loosely with OP and WT/S, we can estimate a policy's ability to generate useful datasets, provided the sampled commands are well distributed.

5 Limitations

Although ShIOEnv and our methods of grammar-guided exploration have improved sample efficiency and coverage, some limitations remain. First, we exclude interactive commands (e.g., ssh, passwd) due to automation and argument-sampling challenges.
Redundancy analysis incurs a computational cost linear in sequence length, limiting practicality for long-horizon episodes. We mitigate this cost by caching executions for reuse in subsequent redundancy calculations. Furthermore, ShIOEnv evaluates only the presence, not the quality, of behavioral changes due to the complexity of defining universal heuristics, though ShIOEnv can be adapted to use such heuristics. Additionally, the current reward mechanism assesses redundancy internally for single commands despite ShIOEnv's multi-command capability; attributing redundancy across multiple commands is complex, and we leave multi-command redundancy measures for future work. The supplied margin to penalize generated sequences below a set threshold was tested preliminarily with promising results; a curriculum that slowly increases this threshold for more stable learning is left for future work. Finally, recycling fresh container states assumes single-user contexts, overlooking multi-user or persistent system states. This is a foundational limitation of execution-feedback methods, as a stable environment is needed for behavior reproducibility. The grammar-constrained policy network was demonstrated on forty GNU utilities as a proof of concept. Broader coverage requires extensive manual grammar definitions, though we feel 40 utilities provide a promising baseline in line with other datasets. Our masking method with a shared latent representation may cause conflicting signals and training instability, suggesting modular or multi-task approaches, albeit at increased complexity. The grammar abstraction is sensitive to discounting hyperparameters, potentially
misaligning returns from temporal differences. A hierarchical action-selection framework might address these issues, which we leave for future work. Nevertheless, this work establishes an extendable and modifiable environment for synthesizing CLI command sequences, supporting future, more extensive studies.

6 Conclusion

We introduce ShIOEnv, an RL environment that evaluates Linux command sequences for argument-level redundancy, enabling efficient exploration of the CLI argument space. By representing forty core utilities as context-free grammars to filter out syntactically invalid actions, the generating agent produces more informative arguments, yielding greater sample efficiency for PPO updates across 1,778 production actions than naive enumeration or unconstrained reinforcement learning. A grammar-constrained PPO policy trained with a redundancy-margin reward (GCPN-m50) converges to an objective-proximity score of 0.68, almost double the next-best policy. Sequences generated from this policy populate a 10k-sample dataset whose argument-minimal commands boost CodeT5's behavior-prediction accuracy by 26% (chrF 75.6, BLEU-4 0.62) over the best ablated method. We release this dataset alongside ablated baselines and an adapted NL2Bash split, totaling 71K samples, to facilitate future work in command synthesis and shell-behavior modeling. ShIOEnv's argument-level interface supports extension to new commands or longer horizons, making it a reusable testbed for reward-design studies, curriculum learning across heterogeneous utilities, multi-step planning, and security analysis. Future improvements include further grammar creation, curriculum-driven reward shaping, and techniques to mitigate gradient interference in multi-objective settings.

Acknowledgments.
The research was sponsored by the Army Research Laboratory and was accomplished under Cooperative Agreement Number W911NF-24-2-0180, and was conducted in the Systems and Networks (SyN) Lab at The University of Texas at San Antonio using the Cyber Deception Testbed, funded by the Army Research Office under grant number W911NF-21-1-0188. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government.

References

[1] Microsoft. PowerShell Documentation. URL: https://learn.microsoft.com/en-us/powershell/ (visited on 11/13/2024).
[2] Cisco. Networking Software (IOS & NX-OS). URL: https://www.cisco.com/c/en/us/products/ios-nx-os-software/index.html (visited on 11/13/2024).
[3] Stephen G Kochan and Patrick Wood. UNIX Shell Programming. Sams Publishing, 2003.
[4] Jarrod Ragsdale and Rajendra Boppana. "Evaluating Few-Shot Learning Generative Honeypots in A Live Deployment". In: 2024 IEEE International Conference on Cyber Security and Resilience (CSR). IEEE, 2024, pp. 379–386.
[5] Muris Sladić et al. "LLM in the shell: Generative honeypots". In: 2024 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW). IEEE, 2024, pp. 430–435.
[6] Xi Victoria Lin et al. "NL2Bash: A corpus and semantic parser for natural language interface to the Linux operating system". In: arXiv preprint arXiv:1802.08979 (2018).
[7] Quchen Fu et al. "NL2CMD: An updated workflow for natural language to bash commands translation". In: arXiv preprint arXiv:2302.07845 (2023).
[8] Valdemar Švábenský et al. "Dataset of shell commands used by participants of hands-on cybersecurity training". In: Data in Brief 38 (2021), p. 107398.
[9] John Yang et al. "InterCode: Standardizing and benchmarking interactive coding with execution feedback". In: Advances in Neural Information Processing Systems 36 (2024). URL: https://proceedings.neurips.cc/paper_files/paper/2023/hash/4b175d846fb008d540d233c188379ff9-Abstract-Datasets_and_Benchmarks.html.
[10] Zishun Yu et al. "B-Coder: Value-Based Deep Reinforcement Learning for Program Synthesis". In: arXiv:2310.03173 (Mar. 2024). DOI: 10.48550/arXiv.2310.03173. URL: http://arxiv.org/abs/2310.03173.
[11] synlab-jragsdale. ShIOEnv: A CLI Behavior-Capturing Environment Enabling Grammar-Guided Command Synthesis for Dataset Curation. URL: https://github.com/synlab-jragsdale/ShIOEnv.
[12] Jarrod Ragsdale. ShIOEnv_40cmd_7x10K. Version V1. 2025. DOI: 10.7910/DVN/BWUIOS. URL: https://doi.org/10.7910/DVN/BWUIOS.
[13] Lance Spitzner. Honeypots: Tracking Hackers. Addison-Wesley Longman Publishing Co., Inc., 2002.
[14] Jonas Gehring et al. "RLEF: Grounding code LLMs in execution feedback with reinforcement learning". In: arXiv preprint arXiv:2410.02089 (2024).
[15] Hung Le et al. "CodeRL: Mastering code generation through pretrained models and deep reinforcement learning". In: Advances in Neural Information Processing Systems 35 (2022), pp. 21314–21328.
[16] Parshin Shojaee et al. "Execution-based Code Generation using Deep Reinforcement Learning". In: arXiv:2301.13816 (July 2023). DOI: 10.48550/arXiv.2301.13816. URL: http://arxiv.org/abs/2301.13816.
[17] Daniel J Mankowitz et al. "Faster sorting algorithms discovered using deep reinforcement learning". In: Nature 618.7964 (2023), pp. 257–263.
[18] Jiate Liu et al. "RLTF: Reinforcement learning from unit test feedback". In: arXiv preprint arXiv:2307.04349 (2023).
[19] Shihan Dou et al. "StepCoder: Improve code generation with reinforcement learning from compiler feedback". In: arXiv preprint arXiv:2402.01391 (2024).
[20] John E Hopcroft, Rajeev Motwani, and Jeffrey D Ullman. "Introduction to automata theory, languages, and computation". In: ACM SIGACT News 32.1 (2001), pp. 60–65.
[21] Julian Parsert and Elizabeth Polgreen.
"Reinforcement learning and data-generation for syntax-guided synthesis". In: Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. 9. 2024, pp. 10670–10678. URL: https://ojs.aaai.org/index.php/AAAI/article/view/28938.
[22] Pengcheng Yin and Graham Neubig. "A syntactic neural model for general-purpose code generation". In: arXiv preprint arXiv:1704.01696 (2017).
[23] Yanju Chen et al. "Program Synthesis Using Deduction-Guided Reinforcement Learning". In: Computer Aided Verification. Ed. by Shuvendu K. Lahiri and Chao Wang. Vol. 12225. Lecture Notes in Computer Science. Cham: Springer International Publishing, 2020, pp. 587–610. ISBN: 978-3-030-53290-1. DOI: 10.1007/978-3-030-53291-8_30. URL: http://link.springer.com/10.1007/978-3-030-53291-8_30.
[24] Martin L Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons, 2014.
[25] Shengyi Huang and Santiago Ontañón. "A closer look at invalid action masking in policy gradient algorithms". In: arXiv preprint arXiv:2006.14171 (2020).
[26] Ziyi Wang et al. "Learning state-specific action masks for reinforcement learning". In: Algorithms 17.2 (2024), p. 60.
[27] Andrew Y Ng, Daishi Harada, and Stuart Russell. "Policy invariance under reward transformations: Theory and application to reward shaping". In: ICML. Vol. 99. Citeseer, 1999, pp. 278–287.
[28] Yue Wang et al. "CodeT5: Identifier-aware unified pre-trained encoder-decoder models for code understanding and generation". In: arXiv preprint arXiv:2109.00859 (2021).
[29] PyTorch. URL: https://pytorch.org/ (visited on 03/03/2025).
[30] Linux man pages online. URL: https://man7.org/linux/man-pages/ (visited on 02/28/2025).
[31] John Schulman et al. "High-dimensional continuous control using generalized advantage estimation". In: arXiv preprint arXiv:1506.02438 (2015).
[32] John Schulman et al. "Proximal policy optimization algorithms". In:
arXiv preprint arXiv:1707.06347 (2017).
[33] Kishore Papineni et al. "BLEU: a method for automatic evaluation of machine translation". In: Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. 2002, pp. 311–318.
[34] Maja Popović. "chrF: character n-gram F-score for automatic MT evaluation". In: Proceedings of the Tenth Workshop on Statistical Machine Translation. 2015, pp. 392–395.
[35] Idank. idank/bashlex: Python parser for bash. URL: https://github.com/idank/bashlex.
[36] Greg Brockman et al. "OpenAI Gym". In: arXiv preprint arXiv:1606.01540 (2016).

A Dataset Details

We provide information on available fields and metadata for the adapted NL2Bash [6] dataset and each of the six evaluated datasets. Each dataset is generated by interacting with ShIOEnv according to its argument-creation policy. Each dataset contains approximately 10K entries with nine fields, enumerated in Table 3, and is stored as a JSON list of dictionaries. The session_id field contains an identifier for the sampled input. The image field contains the Docker image tag in which the input is executed. All entries have the default provided container testubuntu as its entry; however, the ShIOEnv environment interface can be used with any Docker container. The cwd field is populated with the starting directory for each input. This field can also be derived by looking at the first command in the input, to which the starting directory is traversed. The input field contains the final observed input by the environment, including the starting state of the agent: cd <cwd>;. The input_len field contains the number of command sequences in the full input after the change-directory command. All entries have a value of one in this field, as experimentation was conducted on single-command sessions; however, ShIOEnv supports command sessions of arbitrary length.
The code field contains the exit code returned by the last input in the sequence. A zero value corresponds to successful execution, while any other value indicates an error specific to that utility. The output field is the output from the last command in that session. The context_key field stores the type of context that was changed, if any. If the context changes, the context_value field provides a structural diff of how the environment was changed. These output and context-change fields serve as labels mapping inputs to their corresponding targets. The distribution of each dataset's utilities, output lengths, exit codes, execution effects, and changes to the environment for each of the seven datasets is given in Figures 4 to 10. Management fields (session_id, image, cwd, context_key) that carry no quality information, or whose values are identical across all entries, are omitted.

Table 3: ShIOEnv dataset fields

Field          Type     Description
session_id     Integer  Identifier for the worker session from which the sequence is sampled
image          String   Name of the image in which the input was executed
cwd            String   Starting working directory of the input sequence
input          String   Constructed input sequence
code           Integer  Returned exit code from executing the input
output         String   Observed output from executing the input
output_len     Integer  Length of the output in characters (max 4,096 characters)
context_key    String   Key detailing the environment change, if any, from executing the input
context_value  String   Observed change in context when the input is executed

Figure 4: Adapted NL2Bash dataset metadata

Each dataset covers 40 utilities distributed according to the generating policy. We exclude repeatedly sampled sequences within any single dataset to prevent duplication bias. We do not filter on output, as a command producing the same output across different sequences is valuable information and can be filtered according to the use case. Because each starting command is chosen at random, all 40 commands are well distributed, save for those with limited options that were fully enumerated or those converging to the same behavior. Our adapted NL2Bash [6] dataset was bootstrapped from 4,486 original commands with system-specific arguments (e.g., usernames, directories, files), modified to be valid options in ShIOEnv's executing container. The distribution of commands is heavily skewed towards find, as in the original dataset. Only half of the executed commands returned a successful exit code, though a number of the failures can likely be attributed to issues in our adaptation pipeline.

Figure 5: UCRT dataset metadata
Figure 6: UCPN-m0 dataset metadata
Figure 7: UCPN-m50 dataset metadata
Figure 8: GCRT dataset metadata
Figure 9: GCPN-m0 dataset metadata
Figure 10: GCPN-m50 dataset metadata

B Environment Details

We describe the implementation details of the ShIOEnv environment that underpins our experiments. The environment implements the MDP defined in Section 3 by coupling argument-level sequence construction with atomic execution inside an instrumented Docker container. This container is created from an image for each execution to ensure the operating environment is consistent for redundancy calculations.
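The fresh-container-per-execution discipline can be sketched with the Docker CLI. This helper is our own assumption of how such a wrapper might look (the paper does not publish this code); it reuses the testubuntu image and the cd <cwd>; prefix from Appendix A:

```python
import subprocess

def docker_invocation(command: str, image: str = "testubuntu", cwd: str = "/"):
    # `--rm` discards the container after the run, so every execution starts
    # from the image's pristine state, as required for redundancy comparisons.
    # The `cd <cwd>;` prefix mirrors how ShIOEnv seeds each session.
    return ["docker", "run", "--rm", image, "/bin/sh", "-c", f"cd {cwd}; {command}"]

def run_in_fresh_container(command: str, image: str = "testubuntu", cwd: str = "/"):
    """Run one command in a throwaway container; return (exit code, combined output)."""
    proc = subprocess.run(docker_invocation(command, image, cwd),
                          capture_output=True, text=True)
    return proc.returncode, proc.stdout + proc.stderr
```

Recreating the container from the image on every call trades throughput for determinism: no state leaks between the full-sequence execution and its argument-omitted variants.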
This design provides precise behavioural feedback while maintaining bounded episode length and wall-clock throughput suitable for policy optimisation.

B.1 ShIOEnv

ShIOEnv extends gym.Env from OpenAI Gym [36]. Internally, ShIOEnv alternates between two phases: construction, during which the agent appends textual tokens to the current command, and execution, during which the partially-constructed session is run in a fresh container to collect behavioural feedback. Helper routines record the command's standard output, exit status, execution latency, and a structured snapshot of the system context (working directory, directory contents, environment variables, user groups, shell options, resource limits, and firewall rules). These observations are cached and reused to amortise the cost of redundancy analysis. Two horizon parameters cap the episode: the maximum number of commands per session ($H_G$) and the maximum number of arguments per command ($H_L$). When either bound is reached, the environment emits Gym's truncate flag so the roll-out buffer remains bounded. The experiments in Section 4 used horizon parameters of 1 and 14 for $H_G$ and $H_L$, respectively.

B.1.1 State Representation

The environment is fully observable, meaning the environment's state representation and the observation returned to the agent are equivalent. Each state observation is an immutable, padded 2-D tensor of UTF-8 strings $o_t \in (\Sigma^{\le 64})^{H_G \times H_L}$, where $\Sigma^{\le 64}$ denotes strings of at most 64 characters. The first axis indexes commands in the current session, and the second axis indexes the arguments of each command. Empty strings ("") act as padding
so the tensor maintains a fixed shape. At reset, the first row is pre-initialised with (cd, $c_0$), where $c_0$ is a starting directory drawn uniformly from a user-defined set. Two internal counters $(h, l)$ track the agent's position inside the tensor. The truncate flag is returned when either counter reaches its bound.

B.1.2 Action Representation

Each action is passed as a dictionary <input_addition: s, exec_action: e, new_global: n>, with $s \in \Sigma^{\le 64}$ and $e, n \in \{0, 1\}$. The Boolean flags are mutually exclusive. If $e = 0$, the string $s$ is appended to the current command ($n = 0$) or starts a new command ($n = 1$). If $e = 1$, the current session is executed or concluded, and $s$ must be empty. Invalid flag combinations or over-length strings incur an immediate penalty of $-10$.

B.1.3 Reward Signal

While we describe the reward signal and its motivation in detail in Section 3, we reiterate it here so that a complete description of ShIOEnv's environment appears in one section. Intermediate rewards are given based on changes in redundancy, whereas the terminal reward employs a time-scaled, margin-reduced proportional redundancy measure, encouraging incremental progress toward the objective while emphasizing final performance. Let $S_h = (s_0, \ldots, s_t)$, $s_t = (v_0, \ldots, v_l)$ be the session consisting of $t$ arguments from vocabulary $v \in V$ after applying the $t$-th argument supplied by action $a_t$. During construction, the environment performs an intermediate execution using output- and context-gathering helper functions $F: s \to o$ and $C: s \to c$, respectively. The resulting output $o$ and context $c$ are compared with each single-argument-omitted variant, giving

$r^d_t = 1 - \frac{1}{|s_t|} \sum_{k=1}^{|s_t|} \mathbb{1}\!\left[ D(F(s_t), F(s_t \setminus k)) \ge \beta \right], \qquad r^c_t = 1 - \frac{1}{|s_t|} \sum_{k=1}^{|s_t|} \mathbb{1}\!\left[ C(s_t) = C(s_t \setminus k) \right],$

where $D$ is the normalised Levenshtein similarity and $\beta$ is an adaptive noise threshold set to the lower of 0.95 and one standard deviation below the mean similarity of 5 repeated executions of the evaluated sequence (Algorithm 1).
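The redundancy terms and Algorithm 1's threshold can be sketched as follows; `execute` stands in for the output-gathering helper $F$, and the Levenshtein routine is a plain dynamic-programming implementation rather than the paper's exact code:

```python
import statistics

def levenshtein_similarity(a: str, b: str) -> float:
    """Normalised Levenshtein similarity in [0, 1] (1.0 means identical)."""
    if not a and not b:
        return 1.0
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return 1.0 - prev[-1] / max(len(a), len(b))

def adaptive_beta(outputs):
    """min(0.95, mean - stdev) over pairwise similarities of repeated runs (Algorithm 1)."""
    sims = [levenshtein_similarity(a, b)
            for i, a in enumerate(outputs) for b in outputs[i + 1:]]
    return min(0.95, statistics.mean(sims) - statistics.stdev(sims))

def output_uniqueness(execute, args, beta):
    """r_d: 1 minus the fraction of arguments whose omission leaves output ~unchanged."""
    full = execute(args)
    redundant = sum(
        levenshtein_similarity(full, execute(args[:k] + args[k + 1:])) >= beta
        for k in range(len(args)))
    return 1.0 - redundant / len(args)
```

For a deterministic command with identical repeated outputs, the pairwise similarities are all 1.0 with zero deviation, so `adaptive_beta` falls back to the 0.95 ceiling.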
We set $\beta$ in this way to avoid improperly rewarding commands whose execution noise may bring their similarity below a static threshold. However, by setting the threshold in such a way, behaviour-altering arguments may be incorrectly penalized. We prioritise a lower bound for execution noise over perfect sensitivity to every possible semantic alteration, preserving the integrity of our reward signal while acknowledging a small risk of overly harsh penalties for genuinely behaviour-changing arguments. If no change is observed, an argument is marked redundant in the normalized sums of indicators stored in $r^d_t$ and $r^c_t$. The per-step uniqueness score is given as the maximum of these terms to reward output-producing and context-modifying commands equally, since most commands are exclusively one or the other. The movement of this redundancy score as arguments are added is used as an intermediate signal

$r^{\Delta}_t = \max(r^d_t, r^c_t) - \max(r^d_{t-1}, r^c_{t-1}) + \mathbb{1}\!\left[ \max(r^d_t, r^c_t) = 1 \right],$

where $r^{\Delta}_t$ provides a positive reward when the redundancy of a sequence's arguments is reduced and a penalty when it increases. This change in redundancy may occur when the new argument emits novel behavior in isolation or changes the validity, or lack thereof, of a previous argument. This signal is applied from the second argument onwards, as there is no prior uniqueness score to evaluate for the first argument
(in our evaluation, the first argument is always a given command). An indicator is added to provide a maximum reward for cases where $r^{\Delta}_t = 0$ with a perfect uniqueness score, providing a strong signal for reinforcement of maintaining objective completion. This intermediate reward is scaled by a factor $\delta \in [0, 1]$ to reduce the magnitude of intermediate rewards and prevent the policy from exploiting the secondary signal, as its accumulation would otherwise overpower the final reward. Once sequence synthesis is complete, signalled by the agent setting exec_action = 1, the session is executed once more to obtain the final reward, given as

$\left( \sum_{k=2}^{t} \delta \right) \left( \max(r^d_t, r^c_t) - m \right) - \mathbb{1}\!\left[ E(s_t) \ne 0 \right],$

where the margin $m \in [0, 1]$ explicitly discourages sequences whose proportion of redundant tokens exceeds $m$, while rewarding those with increased relative value as the objective is approached. If the exit status is non-zero, the indicator $\mathbb{1}[E(s_t) \ne 0]$ is subtracted from the final reward to further penalize sequences resulting in invalid execution. By weighting the terminal uniqueness score with the sum of intermediate scaling factors $\sum_{k=2}^{t} \delta$, longer sequences are given more weight to prevent exploitation of the reward function by ending as soon as possible. In implementation, we found setting $\delta$ to $\frac{1}{H}$, where $H$ is a set horizon, yielded stable results. An increasing $\delta$ could be implemented to put more weight on arguments added to longer sequences, though the policy's return would no longer be on a constant scale. Combining these terms, we get an overall return of

$G = \sum_{k=2}^{t} \delta\, r^{\Delta}_k + \left( \sum_{k=2}^{t} \delta \right) \left( \max(r^d_t, r^c_t) - m \right) - \mathbb{1}\!\left[ E(s_t) \ne 0 \right],$

where a maximum undiscounted return of $2\delta(t-2) + 1 - m$ and a minimum undiscounted return of $-m - 1$ are possible.

Error handling and invalid actions. The environment enforces several validity constraints. Violations such as providing an input_addition while exec_action = 1, or setting both Boolean flags, yield an immediate reward of $-10$ and terminate the episode.
Attempting to exceed either horizon raises the truncate flag with zero reward. The shaping schedule promotes concise, minimal command lines. By punishing redundancy at the granularity of individual arguments, the agent learns to add options only when they demonstrably alter the command's observable behaviour. The adaptive threshold $\beta$ prevents the agent from exploiting uncontrollable stochasticity in the container image, such as timestamps or background processes, ensuring that only argument-induced changes are rewarded.

Algorithm 1 Adaptive noise threshold $\beta$
Require: command sequence $S$; repeats $N \leftarrow 5$; $O \leftarrow \{\}$
1: for $i \leftarrow 1$ to $N$ do
2:   $O \leftarrow O \cup F(S)$
3: end for
4: $P \leftarrow \{\mathrm{sim}(o_i, o_j) \mid 1 \le i < j \le N\}$  {all $\binom{N}{2}$ pair-wise similarities}
5: $\mu \leftarrow \mathrm{mean}(P)$
6: $\sigma \leftarrow \mathrm{stdev}(P)$
7: return $\beta \leftarrow \min(0.95, \mu - \sigma)$

Figure 11: ShIOEnv Environment Container Filesystem

B.2 Container Content

The Docker container environment executing each candidate sequence is provisioned with various files categorized by file extension. This ensures that generated commands relying on specific artifacts are assessed fairly, preventing undeserved errors unrelated to policy decisions. Files with distinct extensions are systematically organized into logically named directories, ensuring that the state representation, of which the working directory is a component, meaningfully influences command and option selection. For instance, files with the .sh extension are stored within a directory aptly named scripts. The file system
arXiv:2505.18383v1 [cs.CL] 23 May 2025

NileChat: Towards Linguistically Diverse and Culturally Aware LLMs for Local Communities

Abdellah El Mekki (λ), Houdaifa Atou (δ), Omer Nacar (ψ), Shady Shehata (ξ, γ), Muhammad Abdul-Mageed (λ, ξ, γ)
λ The University of British Columbia; δ UM6P; ψ PSU; ξ MBZUAI; γ Invertible AI
{abdellah.elmekki,muhammad.mageed}@ubc.ca

[Figure 1 here; panels: (a) Translation: educational data; (b) Controlled generation: cultural heritage and values aware data, built from the prompt template "Act like you are the following person: {persona}. Write a {text_genre} in {low_resource_language}. Make sure to include this expression in your generated text: {linguistic_cue}. Rely on this context while you write your {text_genre}: {cultural_concept_article || local_news_article}", illustrated with an Egyptian persona (Mostafa), a cultural concept (Koshary), Egyptian Arabic linguistic cues, and a generated Egyptian Arabic conversation with its English translation; (c) Retrieval: local cultural heritage knowledge gathered by querying cultural concepts through a search engine and parsing the top results.]

Figure 1: Our proposed framework for data augmentation tailored to low-resource local communities. (a) Workflow for generating educational data in the target language through machine translation. (b) Workflow for generating diverse text genres in the target language by simulating scenarios that incorporate local cultural contexts and persona descriptions. (c) Retrieval process for augmenting local cultural knowledge by parsing web pages representing specific local cultural concepts through web search.

Abstract

Enhancing the linguistic capabilities of Large Language Models (LLMs) to include low-resource languages is a critical research area. Current research directions predominantly rely on synthetic data generated by translating English corpora, which, while demonstrating promising linguistic understanding and translation abilities, often results in models aligned with the source-language culture. These models frequently fail to represent the cultural heritage and values of local communities.
This work proposes a methodology to create both synthetic and retrieval-based pre-training data tailored to a specific community, considering its (i) language, (ii) cultural heritage, and (iii) cultural values. We demonstrate our methodology using Egyptian and Moroccan dialects as testbeds, chosen for their linguistic and cultural richness and current underrepresentation in LLMs. As a proof-of-concept, we develop NileChat, a 3B parameter LLM adapted for Egyptian and Moroccan communities, incorporating their language, cultural heritage, and values. Our results on various understanding, translation, and cultural and values alignment benchmarks show that NileChat outperforms existing Arabic-aware LLMs of similar size and performs on par with larger models. We share our methods, data, and models with the community to promote the inclusion and coverage of more diverse communities in LLM development.1

1 https://github.com/UBC-NLP/nilechat

1 Introduction

Large Language Models (LLMs) have advanced rapidly, enabling remarkable proficiency across many tasks. Yet, this success is unevenly distributed across languages, with substantial performance disparities observed for non-English languages, particularly low-resource languages and dialectal variants (Navigli et al., 2023). A primary factor underlying this discrepancy is the limited representation of diverse multilingual data within the foundational pre-training corpora of these models, which favors high-resource
languages spoken in regions with high economic influence (Bender, 2011; Joshi et al., 2020). Beyond linguistic limitations, a more profound challenge is the inherent risk of cultural encapsulation (Wrenn, 1962) in LLMs. Cultural encapsulation refers to an unconscious tendency to operate within one's own cultural lens, leading to misunderstanding or avoidance of differing perspectives and values. As LLMs are optimized to replicate patterns in their training data, predominantly sourced from specific cultural contexts (e.g., Western, English-speaking), they risk internalizing and propagating these dominant perspectives as the norm (Dwivedi et al., 2023; Tao et al., 2024; Wang et al., 2024; Naous et al., 2024). The significance of cultural context cannot be overstated. As Edward Sapir noted:

"No two languages are ever sufficiently similar to be considered as representing the same social reality. The worlds in which different societies live are distinct worlds, not merely the same world with different labels attached." - Sapir (1929)

This cultural bias is compounded by a fundamental mismatch: LLMs typically process data through a language-centric lens, whereas human communities are structured around shared social ties, perspectives, and values (MacQueen et al., 2001). Current LLM adaptation techniques for new languages or communities (Gurgurov et al., 2024; Joshi et al., 2025) often fall short in bridging this cultural divide, especially for low-resource communities (Naous et al., 2024). For instance, machine translation, while useful for generating synthetic data to boost linguistic coverage (Joshi et al., 2025; Shang et al., 2025; Wang et al., 2025), primarily addresses the linguistic deficit. The translated content often retains the source language's cultural perspective, failing to incorporate authentic local nuances crucial for genuine interaction.
Supervised fine-tuning (SFT) on target-language data (Gala et al., 2024; Shang et al., 2025) can align models to specific tasks, but small datasets may not reshape deep-seated cultural biases from pre-training (Rystrøm et al., 2025) and can encourage hallucination with new factual data (Gekhman et al., 2024). While continued pre-training with culturally rich data could mitigate these issues, it faces a critical bottleneck for low-resource contexts: the scarcity of such high-quality digital texts.

This paper addresses the critical need to adapt multilingual LLMs to low-resource language communities by jointly considering their linguistic characteristics and cultural heritage & values. We propose a novel pipeline (illustrated in Figure 1) focused on data augmentation for continued pre-training. Our approach combines controlled synthetic data generation (Section 3.1.2) with retrieval (Section 3.1.3) methods. To address linguistic adaptation, we translate English pre-training data into the target local language, focusing only on high-quality data from the educational domain (Section 3.1.1). Crucially, to imbue cultural relevance, we generate diverse texts reflecting specific cultural heritage concepts (e.g., food, celebrations, proverbs) using local persona descriptions (Section 3.1.2) reflecting the local cultural values. We demonstrate our method on the Moroccan and Egyptian Arabic dialects as low-resource testbeds. We further pre-train a multilingual LLM on a curated mix of real and synthetic data, evaluating its performance on tasks involving language understanding, translation,
and alignment with cultural knowledge and values. Our findings show that the adapted model significantly outperforms baseline and existing models, even larger ones, on most evaluation tasks. The main contributions of this work are: (i) A novel framework for augmenting pre-training corpora tailored to local communities. This framework considers their unique linguistic features, cultural heritage, and values by leveraging a teacher LLM. (ii) The public release of new datasets, representing the largest publicly available corpora for Egyptian and Moroccan Arabic dialects. These resources are intended to foster further research in these under-resourced languages. (iii) The development and public release of NileChat, a robust 3-billion-parameter LLM. This model demonstrates proficiency in both Egyptian and Moroccan dialectal Arabic (using Arabic script and Arabizi) while maintaining strong performance in Modern Standard Arabic, French, and English.

2 Related Work

Adaptation of LLMs. LLMs, despite general strengths, often require adaptation for specific languages, domains, or cultures (Bang et al., 2023; AlKhamissi et al., 2024; Naous et al., 2024; Song et al., 2025). Adaptation techniques include prompt engineering (Shen et al., 2024), SFT on culturally specific datasets (Huang et al., 2024), and continued pre-training on target-specific data (Fujii et al., 2024; Huang et al., 2024). A key challenge, especially for SFT-based cultural adaptation, is the scarcity of comprehensive cultural datasets, hindering alignment with under-represented communities (Ahmad et al., 2024; Shen et al., 2024).

Synthetic Data Augmentation for LLMs. To address data limitations, synthetic data augmentation has shown promise in improving LLM performance (Ge et al., 2024; Li et al., 2024; Joshi et al., 2025).
Machine-translated data, for instance, can enhance capabilities in new languages (Joshi et al., 2025; Shang et al., 2025), and persona-driven synthetic data generation has also yielded performance gains (Ge et al., 2024) and aided in tasks like assessing LLM political alignment (Bernardelle et al., 2024). However, synthetic data can sometimes degrade performance (Seddik et al., 2024), necessitating best practices for its use (Liu et al., 2024).

Arabic LLMs. In Arabic LLM development, models are either trained from scratch (Billah Nagoudi et al., 2023; Sengupta et al., 2023) or adapted from existing ones (Huang et al., 2024; Bari et al., 2025; Team et al., 2025a). A common method involves translating English data to Arabic, which, however, can introduce cultural biases from the source language (Sengupta et al., 2023; Naous et al., 2024). Recent work on dialectal Arabic, such as translating instructions into Moroccan dialect for SFT, has improved generation tasks (Shang et al., 2025). Yet, enhanced performance on standard tasks does not guarantee cultural awareness. While models like AceGPT (Huang et al., 2024) and Fanar (Team et al., 2025a) aim for cultural cognizance, our work uniquely focuses on adapting existing LLMs to a local community by deeply integrating its specific linguistic features, cultural heritage, and values, building upon these prior advancements.

3 Methodology

In this work, we investigate the potential of pre-training data to imbue LLMs with the specific local characteristics of under-represented communities. We conceptualize these characteristics along
three primary dimensions (Geertz, 1977; Anderson, 1991; Bourdieu and Thompson, 1991; Higgins and Douglas, 2020; Stanlaw and Adachi, 2025): (i) Language: encompassing dialectal nuances, idiomatic expressions, and linguistic structures unique to the community. (ii) Cultural Heritage: reflecting the customs, traditions, social norms, historical context, and common knowledge prevalent within the community. (iii) Cultural Values: capturing the ethical standpoints, belief systems, and societal priorities that define the community. We refer to these three dimensions as the Language-Heritage-Values dimensions, LHV for short. While we do not posit these as exhaustive of the attributes of a given community, we employ them as a vehicle to approximate LLM communication and information needs at local levels. To ground our investigation, we focus on two low-resource varieties of Arabic: Egyptian Arabic (EGY) and Moroccan Arabic (MOR). These dialects serve as our primary case studies for evaluating the methods proposed herein.

3.1 Data Augmentation

The construction of LLMs that can serve a specific population fundamentally depends on the availability of representative data. Recognizing the acute scarcity of publicly available pre-training corpora for many low-resource languages, including EGY and MOR, we propose a novel data production method encapsulating the LHV dimensions of a given country-level population. As depicted in Figure 1, our approach leverages three complementary strategies intended to collectively capture the LHV dimensions: (a) machine translation (MT), (b) controlled generation, and (c) retrieval. We explain these next.

3.1.1 MT for Knowledge and Fluency

To ensure linguistic fluency and coherence, we translate structured educational content from English into the target low-resource language using a specialized teacher model.
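As a rough illustration of this step (the paper does not publish its exact translation prompt), a request to the teacher model might be assembled as follows; the instruction wording and the `translate_prompt` helper are our own assumptions:

```python
def translate_prompt(text: str, target_language: str) -> str:
    """Build a translation instruction for the teacher LLM.

    The instruction asks the model to keep the original formatting intact,
    matching the pipeline's goal of preserving document structure.
    """
    return (
        f"Translate the following educational text into {target_language}. "
        "Preserve the original formatting, including headings, lists, and any "
        "markup, and translate only the natural-language content.\n\n"
        f"{text}"
    )

prompt = translate_prompt("## Photosynthesis\nPlants convert light into energy.",
                          "Egyptian Arabic")
```

The resulting string would then be sent to the teacher model's chat endpoint, with the response passed on to the filtering stage described next.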
Our pipeline preserves the original formatting and includes filtering to remove unreliable translations identified by repetitive n-grams. We use educational materials for their topical breadth (including subjects such as education, history, health, medicine, and biology).

3.1.2 Controlled Generation for Cultural Heritage and Cultural Values

Linguistic fluency, while a foundational capability for LLMs, does not inherently guarantee their awareness of, or alignment with, the culture and values of a specific target community (Naous et al., 2024). To bridge this gap, we employ controlled generation, using the teacher LLM to generate diverse texts in the target language. These texts are specifically designed to discuss local topics, which are identified from articles sourced from local news websites or the target country's Wikipedia portal. Furthermore, the generated content is crafted to reflect distinct personas, each defined by a profile encompassing specific moral values, demographic characteristics, and socioeconomic attributes. Our approach integrates four key components to achieve this:

Local Contextual Information. We ground our synthetic data by incorporating local context drawn from news websites within the target communities. These sources provide relevant contextual information and do not necessarily need to be in the target local language.

Core Cultural Heritage Concepts. We integrate key regional cultural elements, such as cuisine, landmarks, and celebrations, by extracting relevant articles from country-specific Wikipedia portals.

Linguistic and Cultural Expressions. To authentically capture local idiomatic styles, we collect common expressions,
proverbs, idioms, dialogues from TV programs, and local terminology, pairing each with English translations for accuracy.

Representative Personas. We develop representative personas reflecting local moral, demographic, and socioeconomic attributes by leveraging data from the World Values Survey (WVS) (EVS/WVS, 2024). Selected survey responses are transformed into textual descriptions, which are further refined by an LLM to create concise and coherent persona profiles (see Figure 2).

To produce diverse text genres for pre-training, we combine data points from the four listed components into a unified prompt to guide the teacher LLM. This prompt instructs the LLM to generate varied text outputs in the target low-resource language, explicitly integrating the selected persona's values, the specified cultural concepts, and the provided linguistic cues. Specifically, we focus on generating the following genres: stories, personal essays, blog posts, reviews, and conversations. An example of this process is depicted in Figure 1 (b).

3.1.3 Retrieval for Local Cultural Heritage

This method involves querying a search engine using a pre-defined list of cultural concepts that span multiple cultural categories. For each concept, we extract the top 20 search results, systematically excluding social media platforms. The textual content from the retrieved web pages is then parsed and extracted using Trafilatura (Barbaresi, 2021).

[Figure 2 here; it shows the three-stage persona pipeline: (1) participant response extraction from the WVS (name, gender, age, and question-answer pairs), (2) parsing the responses into text, and (3) formatting and summarization into a concise profile, illustrated with "Fatima", a 22-year-old Moroccan woman who considers family, friends, leisure, politics, and work very important.]

Figure 2: Pipeline for generation of persona descriptions using the WVS.

3.2 Model Training

We evaluate our proposed method on Egyptian (EGY) and Moroccan (MOR) Arabic dialects. Despite their large speaker populations, these dialects remain low-resource, underscoring the need for specialized language models. We select Command R+ (Cohere Labs, 2024) (104B) as our teacher model, as it demonstrates reasonable text-generation capabilities in both target dialects. Additionally, Command R+ provides open weights, enabling us to efficiently generate or translate extensive datasets without incurring API costs.

3.2.1 Continued Pre-Training

Data. We generate pre-training data for EGY and MOR using the methods outlined in Section 3.1. Our approach involves three main components: (i) MT Data. We employ our teacher model to translate English educational content into both dialects. Specifically, we sample 5.5 million texts from the Fineweb-edu dataset (Penedo et al., 2024) and translate them into EGY and MOR. (ii) Controlled Generation Data. We craft tailored prompts incorporating personas, local cultural contexts, dialectal glosses, expressions, and utterances to instruct the LLM in generating diverse genres of text. For persona descriptions, we generate 1,200 descriptions based on data from Egyptian and Moroccan participants in the WVS. For local news context, we leverage an in-house
https://arxiv.org/abs/2505.18383v1
corpus comprising approximately 1.5 million Egyptian and 800,000 Moroccan news articles, originally published in MSA by local news websites. Additionally, we include 25,000 Egyptian and 49,000 Moroccan Wikipedia articles. For dialectal glosses, expressions, and utterances, we draw from publicly available resources on EGY and MOR proverbs and idiomatic expressions, each accompanied by English explanations. We further augment this with an in-house dataset of 600 dialectal utterances from Egyptian and Moroccan television shows paired with English translations, as well as 4,000 dialect-to-English word pairs for each dialect from the Gatitos dictionary (Jones et al., 2023). (iii) Retrieval Data. For information retrieval, we query the Brave Search API2 using 6,500 cultural concepts from Morocco and 4,500 cultural concepts from Egypt. These concepts represent the ten cultural heritage categories in the set {food, clothes, landmarks, festivals & celebrations, geography, handicrafts, architecture, fauna, flora, music}.
The generated dataset comprises approximately 5.5 million educational articles for both EGY and MOR. Additionally, for EGY, it includes approximately 300,000 samples for each category of conversations, personal essays, blog posts, reviews, and stories. For MOR, there are approximately 150,000 samples for each of these same categories. These latter categories represent the LHV dimensions (§3.1). Table A.3 presents a sample of these texts.
A filtering process using a repetitive n-gram filter removed 3.97% of the data. We also conducted a dialectness check on the generated data using ALDi (Keleg et al., 2023). The average dialectness scores for the EGY and MOR educational articles are 0.45 and 0.32, respectively. In contrast, for the texts focused on cultural heritage and values, the average dialectness scores are higher, at 0.84 for EGY and 0.72 for MOR.
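The repetitive n-gram filter is not specified further; the sketch below shows one plausible form of such a filter. The n-gram order (3) and the maximum repetition ratio (0.2) are illustrative assumptions, not the settings actually used:

```python
from collections import Counter

def ngram_repetition_ratio(tokens, n=3):
    """Fraction of n-gram slots occupied by n-grams that occur more than once."""
    if len(tokens) < n:
        return 0.0
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    counts = Counter(ngrams)
    duplicated = sum(c for c in counts.values() if c > 1)
    return duplicated / len(ngrams)

def keep_document(text, n=3, max_ratio=0.2):
    """Keep a document only if it is not dominated by repeated n-grams."""
    return ngram_repetition_ratio(text.split(), n=n) <= max_ratio
```

A looping generation (the same phrase repeated over and over) drives the ratio toward 1.0 and is dropped, while ordinary prose stays well below such a threshold.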
We attribute the lower dialectness levels in the educational articles to the prevalence of scientific terms that often lack direct equivalents in EGY and MOR, and were therefore retained in MSA. We convert 1.5M and 0.5M of this generated data to Arabizi for EGY and MOR, respectively. For retrieval, we collect 110,000 and 30,000 articles about cultural heritage for EGY and MOR, respectively.
Our final pre-training dataset is a mixture of our generated and retrieved data, combined with pre-existing publicly available data for these dialects, MSA, English, French, Math, and Code. Our objective is to preserve the data distribution of the base model's pre-training data to mitigate catastrophic forgetting (Luo et al., 2025). The resulting pre-training dataset comprises 98.57 billion words, and its composition is detailed in Table A.1.
Compute. We used a cluster of 4×A100 80GB GPUs for 1,096 hours to create our augmented pre-training dataset using the listed inputs.
Continued Pre-training. Rather than pretraining an LLM from scratch, we continue pretraining Qwen-2.5-3B (Qwen et al., 2025) with our data. We select this model due to its competitive performance and good tokenizer compression ratio on MSA. We continue pretraining the full model (3.1B parameters) for one whole epoch, which took 750 hours on 4×A100 80GB GPUs. More details about the base model selection and the training are in Appendix B.1.
2https://brave.com/search/api
3.2.2 Supervised Fine-Tuning
Data. To adapt our pre-trained model for instruction following, we perform supervised fine-tuning (SFT). Due to the scarcity of SFT datasets for EGY and MOR, we construct a comprehensive training set. This process involves several key steps: (i) translation of the SmolTalk dataset (Allal et al., 2025) into MOR, EGY, French, and MSA using the teacher LLM; (ii) generation of synthetic dialectal question-answer pairs using our retrieved dataset of local Egyptian and Moroccan cultural heritage;3 (iii) incorporation of the Darija-SFT-Mixture MOR dataset provided by Shang et al. (2025); and (iv) translation of the TULU-V2-mix dataset (Ivison et al., 2023) into EGY. Finally, (v) this consolidated SFT dataset is augmented by converting understanding and generation tasks from the training sets of the ORCA (Elmadany et al., 2023) and Dolphin (Nagoudi et al., 2023) benchmarks into instruction-response formats. The final composition of our instruction dataset is in Table A.2.
Fine-Tuning. For model fine-tuning, we follow recent approaches (Ramé et al., 2024; Dang et al., 2024) that leverage model merging techniques to produce models effective across multiple languages or tailored for particular tasks. Specifically, we fully fine-tuned two separate variants of the base model—one specialized for MOR and the other for EGY—each trained on its respective dialectal data in both Arabic script and Arabizi (plus an amount of shared data between the two variants from the other languages; see B.2). We fine-tune each dialect-specific model for two epochs and employ weighted linear averaging (Aakanksha et al., 2024) for merging, dubbing our merged model NileChat. More information about our model merging is in B.2, and the prompts used for generating and translating our pre- and fine-tuning datasets are in Appendix E.
3This data is initially created in Arabic script, and a portion is subsequently converted to Arabizi.
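Weighted linear averaging of the two dialect-specific checkpoints amounts to a parameter-wise convex combination. The sketch below uses plain Python lists as stand-in "state dicts"; in practice the same operation is applied to the full tensors of the two fine-tuned models, and the 0.5/0.5 weighting shown is an illustrative assumption rather than the merge weights actually used:

```python
def merge_checkpoints(state_a, state_b, weight_a=0.5):
    """Parameter-wise weighted average of two fine-tuned variants
    of the same base model (same architecture, same parameter names)."""
    assert state_a.keys() == state_b.keys(), "checkpoints must share parameter names"
    weight_b = 1.0 - weight_a
    return {
        name: [weight_a * a + weight_b * b
               for a, b in zip(state_a[name], state_b[name])]
        for name in state_a
    }

# Toy stand-ins for the EGY- and MOR-specialized checkpoints.
egy_variant = {"proj.weight": [1.0, 2.0], "proj.bias": [0.0, 4.0]}
mor_variant = {"proj.weight": [3.0, 4.0], "proj.bias": [2.0, 0.0]}
merged = merge_checkpoints(egy_variant, mor_variant, weight_a=0.5)
```

Because the two variants share the base model's architecture, the average is taken name-by-name over identically shaped parameters; no alignment step is needed.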
4 Experiments
4.1 Evaluation Tasks
We employ a comprehensive evaluation framework to measure the performance of NileChat for EGY and MOR. This framework enables comparison with our baseline and other LLMs across multiple capability dimensions: understanding, cultural knowledge, translation, and value alignment.
Understanding. We evaluate understanding capabilities using the MMLU (Hendrycks et al., 2021), HellaSwag (Zellers et al., 2019), and Belebele (Bandarkar et al., 2024) benchmarks, each adapted to both EGY and MOR dialects. For MOR, we directly employ the MMLU and HellaSwag versions provided by Shang et al. (2025). For EGY, we follow the translation pipeline described in Shang et al. (2025), translating the English and MSA MMLU tasks and the English HellaSwag dataset into EGY using our teacher model. For the Belebele benchmark, we utilize the official Moroccan and Egyptian dialect sets. Evaluations are conducted in both zero-shot and 3-shot scenarios, using accuracy as our performance metric.
Cultural Knowledge. To assess cultural knowledge specific to Morocco and Egypt, we utilize the publicly available test set from the Palm benchmark (Alwajih et al., 2025), focusing on these two countries. We adopt an LLM-as-Judge methodology (Zheng et al., 2023), employing Gemma-3-27b (Team et al., 2025b) to rate the correctness of model-generated responses compared to
ground-truth answers on a scale from 0 to 10. The final evaluation score is calculated as the average correctness across all responses.
Translation. We evaluate translation performance across multiple directions: dialect↔dialect (i.e., Moroccan↔Egyptian), dialect↔MSA, English↔dialect, and French↔dialect. Our primary benchmark is the Flores-200 dataset (Team et al., 2022), comprising 1,012 test examples per translation direction. Additionally, we introduce an in-house, human-curated dataset consisting of 300 authentic EGY and MOR utterances transcribed from local television programs, then translated to MSA and English. This dataset provides a more accurate reflection of natural, colloquial language usage compared to Flores-200, which primarily contains Wikipedia-based sentences. We conduct evaluations in both zero-shot and 4-shot settings, reporting results using ChrF++ (Popović, 2015) and spBLEU scores (Goyal et al., 2022).
Table 1: Zero-shot performance of models on understanding and cultural knowledge evaluations. Metrics are accuracy (EGY/MOR) for MMLU, HellaSwag, and Belebele, and a 0-10 correctness score for Palm. Bold values indicate the highest score among models comparable in size to ours (<7B). Underlined values represent the highest score in the entire column, including larger models.
Model | MMLU EGY/MOR | HellaSwag EGY/MOR | Belebele EGY/MOR | Palm EGY/MOR
Less than 7B:
Qwen3-1.7B | 28.53/28.53 | 28.44/27.47 | 22.89/22.89 | 3.61/2.12
ar-stablelm-2-chat | 41.56/40.36 | 34.79/33.45 | 38.89/36.11 | 4.20/3.62
Atlas-Chat-2B | 42.61/44.87 | 29.66/34.74 | 50.56/55.67 | 3.16/3.42
Llama-3.2-3B-Instruct | 40.68/37.54 | 29.16/28.27 | 45.44/35.89 | 3.21/2.28
gemma-3-4b-it | 40.79/32.70 | 34.21/31.35 | 37.33/34.22 | 7.61/5.42
Qwen3-4B | 28.61/28.54 | 30.28/29.04 | 22.89/22.89 | 4.51/2.71
Qwen2.5-3B-Instruct | 43.37/44.43 | 31.62/29.58 | 51.33/41.44 | 2.86/2.31
NileChat (3B) | 57.56/57.36 | 37.97/39.33 | 72.67/70.33 | 5.72/5.86
More than 7B:
AceGPT-7B-chat | 40.29/37.57 | 33.27/30.47 | 32.67/32.00 | 5.58/3.93
ALLaM-7B-Instruct | 60.04/58.72 | 39.40/37.30 | 69.56/57.78 | 6.78/6.14
Qwen2.5-7B-Instruct | 49.65/44.98 | 34.67/32.16 | 64.22/48.56 | 6.70/4.77
Qwen3-8B | 28.53/28.53 | 31.76/30.32 | 22.89/22.89 | 5.88/3.96
Atlas-Chat-9B | 55.17/58.84 | 33.71/44.34 | 70.33/74.11 | 5.24/4.84
gemma-3-12b-it | 61.17/60.00 | 38.59/35.66 | 75.78/64.89 | 8.76/7.09
AceGPT-13B-chat | 45.45/40.68 | 35.06/32.40 | 38.78/36.44 | 6.10/4.83
jais-13b-chat | 49.79/48.10 | 39.02/36.56 | 64.22/53.78 | 5.66/4.80
Value Alignment. To assess alignment with societal values, we adapt WVS questions into a multiple-choice format (expressed in the local language). The questions are categorized into 13 dimensions such as Economic Values (EcoV), Ethical Values (EthV), and Happiness and Wellbeing (HW).4 We use the Social Value Alignment (SVA) metric (Lee et al., 2024), which measures alignment using the distribution of survey responses. A model's alignment score for each question corresponds to the proportion of participants who chose the model-predicted option, averaged across all questions for the final score.
Baseline Models. We compare NileChat against a set of 17 instruction-tuned LLMs known for their strong capabilities in Arabic, capped at 13B parameters (see full list in Table C.1 and Appendix C.2 for details).5
4See Appendix C.1 for the full list.
5We also evaluate our translation performance against NLLB-200's 3.3B variant (Team et al., 2022).
4.2 Results and Discussion
Understanding. As Table 1 shows, NileChat demonstrates SoTA performance on the MMLU, HellaSwag, and Belebele benchmarks for both EGY and MOR when compared to similarly sized models. Specifically, NileChat surpasses its baseline model, Qwen2.5-3B-instruct, by a significant margin of ∼10 points across the majority of these tasks. Notably, NileChat also outperforms larger Arabic-focused models such as AceGPT-13B and Jais-13B. Furthermore, it achieves on-par performance with recent leading Arabic LLMs like ALLaM-7B, with a performance gap of less than 3 points on most tasks, and even surpasses it on certain benchmarks, including Belebele. Results for 3-shot are presented in Table D.1, and they show a similar trend to the zero-shot ones.
Cultural Knowledge. As shown in Table 1, our approach significantly enhances cultural knowledge (Palm), enabling NileChat to achieve scores of 5.72 (EGY) and 5.86 (MOR), compared to baseline Qwen2.5-3B-instruct scores of 2.86 and 2.31, respectively. Among similarly sized models, ours achieves the highest performance on MOR and ranks second only to Gemma-3-4B for EGY. Although larger models such as Gemma-3-12B exhibit superior overall scores (EGY: 8.71, MOR: 7.09), NileChat notably surpasses AceGPT-7B and -13B on Moroccan cultural knowledge, despite their claimed alignment with Arabic cultures. Additionally, it outperforms Atlas-Chat-2B and -9B, models specifically fine-tuned for Moroccan dialects. These results support our claim that linguistic fluency alone—gained through supervised fine-tuning or pre-training on potentially biased, translated datasets—is insufficient for genuine cultural alignment with local communities.
Translation. Table 2 summarizes the spBLEU scores from our zero-shot translation. Overall, NileChat achieves the highest average translation quality (spBLEU: 21.32), outperforming all evaluated models, including larger alternatives such as ALLaM-7B (20.60) and NLLB-200-3.3B (18.29).
Specifically, on the Flores benchmark, NileChat demonstrates comparable performance to the similarly sized NLLB-200-3.3B, with only a marginal 1-point spBLEU difference aggregated across MOR and EGY. Notably, NileChat surpasses even larger competitors in all translation directions, except when translating into MOR, where its performance matches that of Atlas-Chat-9B—a single-dialect-focused model that is 3× larger.
On our in-house, human-curated dataset—which closely represents authentic speech patterns from local populations—NileChat significantly outperforms all baselines, including NLLB-200-3.3B, in all translation directions for both EGY and MOR. This real-world evaluation emphasizes the effectiveness of our strategy to incorporate local linguistic and cultural elements into synthetic data generation, enriching the pre-training data with diverse dialectal expressions and vocabulary. Detailed results for both zero-shot and 4-shot translation experiments are provided in Table D.2.
Table 2: Zero-shot translation performance (spBLEU) on the Flores and in-house datasets. XX→EGY and XX→MOR denote averages over target languages EGY and MOR, respectively; EGY→XX and MOR→XX denote averages over EGY and MOR as source languages. Bold values highlight the top score among models with fewer than 7 billion parameters. Underlined values indicate the highest score overall in each column. Detailed results are in Table D.2.
Model | Flores XX→EGY / XX→MOR / EGY→XX / MOR→XX | In-house XX→EGY / XX→MOR / EGY→XX / MOR→XX | Average
Less than 7B:
Qwen3-1.7B | 14.75 / 10.89 / 19.51 / 15.47 | 11.41 / 4.36 / 15.63 / 6.32 | 12.29
ar-stablelm-2-chat | 14.35 / 7.07 / 11.10 / 9.72 | 9.23 / 2.92 / 11.23 / 7.73 | 9.17
Atlas-Chat-2B | 15.20 / 13.40 / 21.39 / 21.11 | 5.36 / 7.83 / 14.52 / 13.54 | 14.05
Llama-3.2-3B-Instruct | 14.25 / 9.15 / 19.28 / 15.54 | 10.67 / 3.16 / 13.61 / 4.87 | 11.32
gemma-3-4b-it | 9.27 / 5.22 / 12.46 / 10.13 | 3.01 / 0.60 / 16.89 / 5.25 | 7.86
Qwen3-4B | 17.93 / 11.64 / 20.03 / 18.90 | 13.09 / 4.44 / 20.72 / 8.52 | 14.41
NLLB-200-3.3B | 23.93 / 15.37 / 25.84 / 26.57 | 16.77 / 7.49 / 18.90 / 11.43 | 18.29
Qwen2.5-3B-Instruct | 15.14 / 11.27 / 20.52 / 17.37 | 9.91 / 4.19 / 19.24 / 7.83 | 13.18
NileChat (3B) | 23.60 / 16.41 / 25.74 / 25.56 | 22.02 / 12.34 / 26.50 / 18.39 | 21.32
More than 7B:
AceGPT-7B-chat | 18.02 / 11.33 / 21.11 / 17.46 | 14.73 / 4.95 / 20.10 / 7.47 | 14.40
ALLaM-7B-Instruct | 23.91 / 15.88 / 24.74 / 23.19 | 19.98 / 9.16 / 29.40 / 18.51 | 20.60
Qwen2.5-7B-Instruct | 14.41 / 10.23 / 19.81 / 18.95 | 10.43 / 4.10 / 20.92 / 8.80 | 13.46
Qwen3-8B | 20.03 / 13.86 / 22.56 / 21.33 | 13.38 / 4.73 / 24.14 / 9.27 | 16.16
Atlas-Chat-9B | 18.20 / 16.89 / 24.92 / 26.29 | 5.36 / 7.68 / 17.35 / 15.23 | 16.49
gemma-3-12b-it | 13.01 / 4.89 / 19.05 / 19.54 | 7.86 / 2.45 / 24.51 / 12.38 | 12.96
AceGPT-13B-chat | 19.48 / 14.02 / 22.81 / 19.84 | 15.54 / 5.56 / 23.51 / 9.52 | 16.29
jais-13b-chat | 8.80 / 4.29 / 15.77 / 17.12 | 10.83 / 4.02 / 19.19 / 12.47 | 11.56
Value Alignment. Figure 3 illustrates the results of value alignment evaluation based on the WVS. NileChat demonstrates substantial improvements over the baseline across most societal-value dimensions for both Moroccan and Egyptian contexts. Specifically, for Morocco, NileChat surpasses the baseline in all dimensions except Religious Values and the Index of Postmaterialism. Similarly, for Egypt, it outperforms the baseline across all dimensions except Political Interest and Political Participation, and the Index of Postmaterialism. These findings indicate that our approach—where a teacher LLM engages in role-playing by generating diverse text genres through personas embodying local community values—successfully steers responses towards culturally aligned positions. In a broader comparative analysis against all evaluated models, ours achieves the best results for Morocco across several dimensions and remains competitive in others.
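Concretely, the per-question SVA computation described in Section 4.1 reduces to looking up the survey share of the model's chosen option and averaging over questions. A minimal sketch, where the question names and response distributions are made up for illustration:

```python
def sva_score(model_choices, survey_distributions):
    """Social Value Alignment: for each question, the proportion of survey
    participants who picked the model-predicted option, averaged over questions."""
    per_question = [
        survey_distributions[question].get(choice, 0.0)
        for question, choice in model_choices.items()
    ]
    return sum(per_question) / len(per_question)

# Toy WVS-style questions with participant response distributions (illustrative).
distributions = {
    "importance_of_family": {"very": 0.80, "rather": 0.15, "not_at_all": 0.05},
    "religious_person": {"yes": 0.70, "no": 0.30},
}
choices = {"importance_of_family": "very", "religious_person": "no"}
score = sva_score(choices, distributions)  # (0.80 + 0.30) / 2 = 0.55
```

A model that always picks the majority option maximizes SVA, so the metric rewards matching the distributional mode of the surveyed community rather than any normative "correct" answer.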
For Egypt, NileChat notably excels in Perceptions of Migration, Political Culture and Political Regimes, Happiness and Wellbeing, and Perceptions about Science and Technology, though models such as Jais-13B and ALLaM-7B show slightly stronger performance in certain other dimensions.
Figure 3: Average SVA scores of evaluated models across societal value dimensions for Egypt and Morocco. (Dimension abbreviations: SVNS --- Social Values, Norms, Stereotypes; HW --- Happiness & Wellbeing; SCTOM --- Social Capital, Trust & Organizational Membership; EcoV --- Economic Values; PC --- Perceptions of Corruption; PM --- Perceptions of Migration; PS --- Perceptions of Security; IP --- Index of Postmaterialism; PST --- Perceptions about Science and Technology; RV --- Religious Values; EthV --- Ethical Values; PIPP --- Political Interest and Political Participation; PCPR --- Political Culture and Political Regimes.)
How many pre-training tokens are needed to reach good performance for a new language? Figure 4 shows the performance evolution of NileChat during the pre-training phase on Belebele and translation tasks. The charts show that the model gets a large boost on these tasks during the first 10B tokens and then continues to improve slightly until it becomes steady after around 60B tokens.
Figure 4: Evolution of model performance during pre-training, measured by the number of tokens processed (accuracy on Belebele for EGY and MOR; CHRF on the in-house and Flores sets for EGY and MOR).
5 Conclusion
We introduced a novel methodology for adapting LLMs to specific communities by deeply integrating their unique linguistic characteristics, cultural heritage, and societal values. Our approach leverages a teacher model proficient in generating low-resource languages to enable: (i) translation for the incorporation of community-specific language and (ii) controlled generation and retrieval mechanisms for the authentic inclusion of cultural heritage and values. We validated our methodology using the Moroccan and Egyptian Arabic dialects as testbeds by developing NileChat, an LLM covering these two dialects. Comprehensive evaluations on understanding, translation, and cultural alignment benchmarks demonstrate that our method significantly enhances the baseline LLM's performance in capturing target language nuances and cultural values. Notably, NileChat also outperforms existing Arabic-aware LLMs. Our method offers a promising research direction for fostering inclusivity of diverse local communities within LLM development, thereby emphasizing the critical role of such inclusion in the broader democratization of this technology.
Limitations
• Teacher Model Dependency for Low-Resource Languages: Our method's reliance on a teacher model proficient in generating even low-resource target languages may not hold for extremely under-resourced languages (e.g., Berber, Malayo-Polynesian varieties) (Team et al., 2022), potentially limiting its applicability in such contexts.
• Supervised Fine-Tuning Data: The SFT phase predominantly utilized translated data due to resource constraints.
This reliance on translated, rather than native, data for SFT might impact the model's nuanced performance in the target languages and their corresponding cultures.
• Susceptibility to Hallucination: As a 3B parameter model, our LLM is relatively small, rendering it more prone to hallucination and the generation of inaccurate or incomplete information compared to larger architectures (Wei et al., 2022).
• Computational Cost of Synthetic Data Generation: The process of generating synthetic data is computationally intensive, particularly when employing large teacher models (e.g., Command-R+, a 104B parameter model requiring substantial GPU resources: 4x80GB). This challenge is amplified by the autoregressive generation of long documents from extensive input contexts (e.g., articles, persona descriptions, cultural concepts), restricting the scale of this approach for more languages.
• Absence of Explicit Safety Alignment: The model has not undergone dedicated safety alignment. While trained on curated datasets (Wikipedia, educational, news) largely devoid of toxic content and leveraging a safety-aligned teacher LLM, specific safety tuning is acknowledged as important future work.
• Limited Generation of Subtle Details: While the controlled generation uses multiple sources (WVS, news, Wikipedia, TV scripts), the generated texts are limited in terms of the very subtle cultural nuances, implicit knowledge, humor, or sarcasm that are often not explicitly stated in these source materials (Wikipedia and news articles).
Ethics Statement
Our work contributes to the development of inclusive, linguistically and culturally diverse LLMs capable of serving varied communities. While we generate our pre-training and instruction-tuning data using a teacher LLM, this process is critically informed by ground-truth cultural values survey data from the communities of interest and local context to control the generation. This approach aims to imbue our models with specific cultural nuances relevant to these communities.
As our evaluations demonstrate, the resulting models exhibit reasonable alignment with the cultural heritage and values of our target communities and can produce fluent text in their respective dialects. Despite these advancements, we have not conducted explicit safety alignment procedures for these models. Consequently, we strongly recommend thorough testing and further safety evaluations before any deployment in real-world scenarios.
Acknowledgments
We acknowledge support from Canada Research Chairs (CRC), the Natural Sciences and Engineering Research Council of Canada (NSERC; RGPIN-2018-04267), the Social Sciences and Humanities Research Council of Canada (SSHRC; 895-2020-1004; 895-2021-1008), Canadian Foundation for Innovation (CFI; 37771), Digital Research Alliance of Canada,6 and UBC Advanced Research Computing-Sockeye.7
6https://alliancecan.ca
7https://arc.ubc.ca/ubc-arc-sockeye
References
Aakanksha, Arash Ahmadian, Seraphina Goldfarb-Tarrant, Beyza Ermis, Marzieh Fadaee, and Sara Hooker. 2024. Mix data or merge models? Optimizing for diverse multi-task learning. Preprint, arXiv:2410.10801.
Ibrahim Ahmad, Shiran Dudy, Resmi Ramachandranpillai, and Kenneth Church. 2024. Are generative language models multicultural? A study on Hausa culture and emotions using ChatGPT. In Proceedings of the 2nd Workshop on Cross-Cultural Considerations in NLP, pages 98–106, Bangkok, Thailand. Association for Computational Linguistics.
Badr AlKhamissi, Muhammad ElNokrashy, Mai Alkhamissi, and Mona Diab. 2024. Investigating cultural alignment of large language models.
In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12404–12422, Bangkok, Thailand. Association for Computational Linguistics.
Loubna Ben Allal, Anton Lozhkov, Elie Bakouch, Gabriel Martín Blázquez, Guilherme Penedo, Lewis Tunstall, Andrés Marafioti, Hynek Kydlíček, Agustín Piqueres Lajarín, Vaibhav Srivastav, Joshua Lochner, Caleb Fahlgren, Xuan-Son Nguyen, Clémentine Fourrier, Ben Burtenshaw, Hugo Larcher, Haojun Zhao, Cyril Zakka, Mathieu Morlon, Colin Raffel, Leandro von Werra, and Thomas Wolf. 2025. SmolLM2: When smol goes big – data-centric training of a small language model. Preprint, arXiv:2502.02737.
Fakhraddin Alwajih, Abdellah El Mekki, Samar Mohamed Magdy, Abdelrahim A. Elmadany, Omer Nacar, El Moatez Billah Nagoudi, Reem Abdel-Salam, Hanin Atwany, Youssef Nafea, Abdulfattah Mohammed Yahya, Rahaf Alhamouri, Hamzah A. Alsayadi, Hiba Zayed, Sara Shatnawi, Serry Sibaee, Yasir Ech-Chammakhy, Walid Al-Dhabyani, Marwa Mohamed Ali, Imen Jarraya, Ahmed Oumar El-Shangiti, Aisha Alraeesi, Mohammed Anwar Al-Ghrawi, Abdulrahman S. Al-Batati, Elgizouli Mohamed, Noha Taha Elgindi, Muhammed Saeed, Houdaifa Atou, Issam Ait Yahia, Abdelhak Bouayad, Mohammed Machrouh, Amal Makouar, Dania Alkawi, Mukhtar Mohamed, Safaa Taher Abdelfadil, Amine Ziad Ounnoughene, Rouabhia Anfel, Rwaa Assi, Ahmed Sorkatti, Mohamedou Cheikh Tourad, Anis Koubaa, Ismail Berrada, Mustafa Jarrar, Shady Shehata, and Muhammad Abdul-Mageed. 2025. Palm: A culturally inclusive and linguistically diverse dataset for Arabic LLMs. Preprint, arXiv:2503.00151.
Zaid Alyafeai, Michael Pieler, Hannah Teufel, Jonathan Tow, Marco Bellagente, Duy Phung, Nikhil Pinnaparaju, Reshinth Adithyan, Paulo Rocha, Maksym Zhuravinskyi, and Carlos Riquelme. 2024. Arabic Stable LM: Adapting Stable LM 2 1.6B to Arabic. Preprint, arXiv:2412.04277.
Benedict Anderson. 1991. Imagined Communities: Reflections on the Origin and Spread of Nationalism. Verso, London.
Lucas Bandarkar, Davis Liang, Benjamin Muller, Mikel Artetxe, Satya Narayan Shukla, Donald Husa, Naman Goyal, Abhinandan Krishnan, Luke Zettlemoyer, and Madian Khabsa. 2024. The Belebele benchmark: A parallel reading comprehension dataset in 122 language variants. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 749–775, Bangkok, Thailand. Association for Computational Linguistics.
Yejin Bang, Samuel Cahyawijaya, Nayeon Lee, Wenliang Dai, Dan Su, Bryan Wilie, Holy Lovenia, Ziwei Ji, Tiezheng Yu, Willy Chung, Quyet V. Do, Yan Xu, and Pascale Fung. 2023. A multitask, multilingual, multimodal evaluation of ChatGPT on reasoning, hallucination, and interactivity. In Proceedings of the 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pages 675–718, Nusa Dua, Bali. Association for Computational Linguistics.
Adrien Barbaresi. 2021. Trafilatura: A web scraping library and command-line tool for text discovery and extraction. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: System Demonstrations, pages 122–131, Online. Association for Computational Linguistics.
M Saiful Bari, Yazeed Alnumay, Norah A. Alzahrani, Nouf M. Alotaibi, Hisham Abdullah Alyahya, Sultan AlRashed, Faisal Abdulrahman Mirza, Shaykhah Z. Alsubaie, Hassan A. Alahmed, Ghadah Alabduljabbar, Raghad Alkhathran, Yousef Almushayqih, Raneem Alnajim, Salman Alsubaihi, Maryam Al Mansour, Saad Amin Hassan, Dr.
Majed Alrubaian, Ali Alammari, Zaki Alawami, Abdulmohsen Al-Thubaity, Ahmed Abdelali, Jeril Kuriakose, Abdalghani Abujabal, Nora Al-Twairesh, Areeb Alowisheq, and Haidar Khan. 2025. ALLaM: Large language models for Arabic and English. In The Thirteenth International Conference on Learning Representations.
Emily M. Bender. 2011. On achieving and evaluating language-independence in NLP. Linguistic Issues in Language Technology, 6.
Pietro Bernardelle, Leon Fröhling, Stefano Civelli, Riccardo Lunardi, Kevin Roitero, and Gianluca Demartini. 2024. arXiv preprint arXiv:2412.14843.
El Moatez Billah Nagoudi, Muhammad Abdul-Mageed, AbdelRahim Elmadany, Alcides Inciarte, and Md Tawkat Islam Khondaker. 2023. JASMINE: Arabic GPT models for few-shot learning. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 16721–16744, Singapore. Association for Computational Linguistics.
P. Bourdieu and J.B. Thompson. 1991. Language and Symbolic Power. Harvard University Press.
Cohere Labs. 2024. c4ai-command-r-plus-08-2024.
John Dang, Shivalika Singh, Daniel D'souza, Arash Ahmadian, Alejandro Salamanca, Madeline Smith, Aidan Peppin, Sungjin Hong, Manoj Govindassamy, Terrence Zhao, Sandra Kublik, Meor Amer, Viraat Aryabumi, Jon Ander Campos, Yi-Chern Tan, Tom Kocmi, Florian Strub, Nathan Grinsztajn, Yannis Flet-Berliac, Acyr Locatelli, Hangyu Lin, Dwarak Talupuru, Bharat Venkitesh, David Cairuz, Bowen Yang, Tim Chung, Wei-Yin Ko, Sylvie Shang Shi, Amir Shukayev, Sammie Bae, Aleksandra Piktus, Roman Castagné, Felipe Cruz-Salinas, Eddie Kim, Lucas Crawhall-Stein, Adrien Morisot, Sudip Roy, Phil Blunsom, Ivan Zhang, Aidan Gomez, Nick Frosst, Marzieh Fadaee, Beyza Ermis, Ahmet Üstün, and Sara Hooker. 2024. Aya Expanse: Combining research breakthroughs for a new multilingual frontier. Preprint, arXiv:2412.04261.
Ashutosh Dwivedi, Pradhyumna Lavania, and Ashutosh Modi. 2023. EtiCor: Corpus for analyzing LLMs for etiquettes. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 6921–6931, Singapore. Association for Computational Linguistics.
AbdelRahim Elmadany, ElMoatez Billah Nagoudi, and Muhammad Abdul-Mageed. 2023. ORCA: A challenging benchmark for Arabic language understanding. In Findings of the Association for Computational Linguistics: ACL 2023, pages 9559–9586, Toronto, Canada. Association for Computational Linguistics.
EVS/WVS. 2024. Joint EVS/WVS 2017-2022 dataset (Joint EVS/WVS). GESIS, Cologne. ZA7505 Data file Version 5.0.0, https://doi.org/10.4232/1.14320.
Kazuki Fujii, Taishi Nakamura, Mengsay Loem, Hiroki Iida, Masanari Ohi, Kakeru Hattori, Hirai Shota, Sakae Mizuki, Rio Yokota, and Naoaki Okazaki. 2024. Continual pre-training for cross-lingual LLM adaptation: Enhancing Japanese language capabilities. In First Conference on Language Modeling.
Jay Gala, Thanmay Jayakumar, Jaavid Aktar Husain, Aswanth Kumar M, Mohammed Safi Ur Rahman Khan, Diptesh Kanojia, Ratish Puduppully, Mitesh M. Khapra, Raj Dabre, Rudra Murthy, and Anoop Kunchukuttan. 2024. Airavata: Introducing Hindi instruction-tuned LLM. Preprint, arXiv:2401.15006.
Tao Ge, Xin Chan, Xiaoyang Wang, Dian Yu, Haitao Mi, and Dong Yu. 2024. Scaling synthetic data creation with 1,000,000,000 personas. arXiv preprint arXiv:2406.20094.
C. Geertz. 1977. The Interpretation Of Cultures. Basic Books.
Zorik Gekhman, Gal Yona, Roee Aharoni, Matan Eyal, Amir Feder, Roi Reichart, and Jonathan Herzig. 2024. Does fine-tuning LLMs on new knowledge encourage hallucinations? In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 7765–7784, Miami, Florida, USA. Association for Computational Linguistics.
Naman Goyal, Cynthia Gao, Vishrav Chaudhary, Peng- Jen Chen, Guillaume Wenzek, Da Ju, Sanjana Kr- ishnan, Marc’Aurelio Ranzato, Francisco Guzmán, and Angela Fan. 2022. The Flores-101 evaluation benchmark for low-resource and multilingual ma- chine translation. Transactions of the Association for Computational Linguistics , 10:522–538. Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al- Dahle, Aiesha Letman, Akhil Mathur, Alan Schel- ten, Alex Vaughan, Amy Yang, Angela Fan, AnirudhGoyal, Anthony Hartshorn, Aobo Yang, Archi Mi- tra, Archie Sravankumar, Artem Korenev, Arthur Hinsvark, Arun Rao, Aston Zhang, Aurelien Ro- driguez, Austen Gregerson, Ava Spataru, Baptiste Roziere, Bethany Biron, Binh Tang, Bobbie Chern, Charlotte Caucheteux, Chaya Nayak, Chloe Bi, Chris Marra, Chris McConnell, Christian Keller, Christophe Touret, Chunyang Wu, Corinne Wong, Cristian Canton Ferrer, Cyrus Nikolaidis, Damien Al- lonsius, Daniel Song, Danielle Pintz, Danny Livshits, et al. 2024. The llama 3 herd of models. Preprint , arXiv:2407.21783. Daniil Gurgurov, Mareike Hartmann, and Simon Os- termann. 2024. Adapting multilingual LLMs to low-resource languages with knowledge graphs via adapters. In Proceedings of the 1st Workshop on Knowledge Graphs and Large Language Models (KaLLM 2024) , pages 63–74, Bangkok, Thailand. Association for Computational Linguistics. Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021. Measuring massive multitask language under- standing. In International Conference on Learning Representations . V . Higgins and D. Douglas. 2020. Communities and Cultural Heritage: Global Issues, Local Values . Routledge studies in heritage. Routledge/Taylor & Francis Group. Huang Huang, Fei Yu, Jianqing Zhu, Xuening Sun,
https://arxiv.org/abs/2505.18383v1
Hao Cheng, Song Dingjie, Zhihong Chen, Mosen Alharthi, Bang An, Juncai He, Ziche Liu, Junying Chen, Jianquan Li, Benyou Wang, Lian Zhang, Ruoyu Sun, Xiang Wan, Haizhou Li, and Jinchao Xu. 2024. AceGPT, localizing large language models in Arabic. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 8139–8163, Mexico City, Mexico. Association for Computational Linguistics.

Hamish Ivison, Yizhong Wang, Valentina Pyatkin, Nathan Lambert, Matthew Peters, Pradeep Dasigi, Joel Jang, David Wadden, Noah A. Smith, Iz Beltagy, and Hannaneh Hajishirzi. 2023. Camels in a changing climate: Enhancing LM adaptation with Tulu 2. Preprint, arXiv:2311.10702.

Alexander Jones, Isaac Caswell, Orhan Firat, and Ishank Saxena. 2023. GATITOS: Using a new multilingual lexicon for low-resource machine translation. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 371–405, Singapore. Association for Computational Linguistics.

Pratik Joshi, Sebastin Santy, Amar Budhiraja, Kalika Bali, and Monojit Choudhury. 2020. The state and fate of linguistic diversity and inclusion in the NLP world. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6282–6293, Online. Association for Computational Linguistics.

Raviraj Joshi, Kanishk Singla, Anusha Kamath, Raunak Kalani, Rakesh Paul, Utkarsh Vaidya, Sanjay Singh Chauhan, Niranjan Wartikar, and Eileen Long. 2025. Adapting multilingual LLMs to low-resource languages using continued pre-training and synthetic corpus: A case study for Hindi LLMs. In Proceedings of the First Workshop on Natural Language Processing for Indo-Aryan and Dravidian Languages, pages 50–57, Abu Dhabi. Association for Computational Linguistics.

Amr Keleg, Sharon Goldwater, and Walid Magdy. 2023.
ALDi: Quantifying the Arabic level of dialectness of text. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 10597–10611, Singapore. Association for Computational Linguistics.

Jiyoung Lee, Minwoo Kim, Seungho Kim, Junghwan Kim, Seunghyun Won, Hwaran Lee, and Edward Choi. 2024. KorNAT: LLM alignment benchmark for Korean social values and common knowledge. In Findings of the Association for Computational Linguistics: ACL 2024, pages 11177–11213, Bangkok, Thailand. Association for Computational Linguistics.

Cheng Li, Mengzhuo Chen, Jindong Wang, Sunayana Sitaram, and Xing Xie. 2024. CultureLLM: Incorporating cultural differences into large language models. Advances in Neural Information Processing Systems, 37:84799–84838.

Ruibo Liu, Jerry Wei, Fangyu Liu, Chenglei Si, Yanzhe Zhang, Jinmeng Rao, Steven Zheng, Daiyi Peng, Diyi Yang, Denny Zhou, et al. 2024. Best practices and lessons learned on synthetic data. arXiv preprint arXiv:2404.07503.

Yun Luo, Zhen Yang, Fandong Meng, Yafu Li, Jie Zhou, and Yue Zhang. 2025. An empirical study of catastrophic forgetting in large language models during continual fine-tuning. Preprint, arXiv:2308.08747.

Kathleen M. MacQueen, Eleanor McLellan, David S. Metzger, Susan Kegeles, Ronald P. Strauss, Roseanne Scotti, Lynn Blanchard, and Robert T. Trotter. 2001. What is community? An evidence-based definition for participatory public health. American Journal of Public Health, 91(12):1929–1938.

El Moatez Billah Nagoudi, AbdelRahim Elmadany, Ahmed El-Shangiti, and Muhammad Abdul-Mageed. 2023. Dolphin: A challenging and diverse benchmark for Arabic NLG.
In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 1404–1422, Singapore. Association for Computational Linguistics.

Tarek Naous, Michael J Ryan, Alan Ritter, and Wei Xu. 2024. Having beer after prayer? Measuring cultural bias in large language models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 16366–16393, Bangkok, Thailand. Association for Computational Linguistics.

Roberto Navigli, Simone Conia, and Björn Ross. 2023. Biases in large language models: Origins, inventory, and discussion. J. Data and Information Quality, 15(2).

Guilherme Penedo, Hynek Kydlíček, Loubna Ben Allal, Anton Lozhkov, Margaret Mitchell, Colin Raffel, Leandro Von Werra, and Thomas Wolf. 2024. The FineWeb datasets: Decanting the web for the finest text data at scale. Preprint, arXiv:2406.17557.

Maja Popović. 2015. chrF: character n-gram F-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 392–395, Lisbon, Portugal. Association for Computational Linguistics.

Qwen: An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, Huan Lin, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jingren Zhou, Junyang Lin, Kai Dang, Keming Lu, Keqin Bao, Kexin Yang, Le Yu, Mei Li, Mingfeng Xue, Pei Zhang, Qin Zhu, Rui Men, Runji Lin, Tianhao Li, Tianyi Tang, Tingyu Xia, Xingzhang Ren, Xuancheng Ren, Yang Fan, Yang Su, Yichang Zhang, Yu Wan, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, and Zihan Qiu. 2025. Qwen2.5 technical report. Preprint, arXiv:2412.15115.

Alexandre Ramé, Johan Ferret, Nino Vieillard, Robert Dadashi, Léonard Hussenot, Pierre-Louis Cedoz, Pier Giuseppe Sessa, Sertan Girgin, Arthur Douillard, and Olivier Bachem. 2024. WARP: On the benefits of weight averaged rewarded policies. Preprint, arXiv:2406.16768.
Jonathan Rystrøm, Hannah Rose Kirk, and Scott Hale. 2025. Multilingual != multicultural: Evaluating gaps between multilingual capabilities and cultural alignment in LLMs. Preprint, arXiv:2502.16534.

E. Sapir. 1929. The status of linguistics as a science. Language, 5(4):207–214.

Mohamed El Amine Seddik, Suei-Wen Chen, Soufiane Hayou, Pierre Youssef, and Merouane Abdelkader Debbah. 2024. How bad is training on synthetic data? A statistical analysis of language model collapse. In First Conference on Language Modeling.

Neha Sengupta, Sunil Kumar Sahu, Bokang Jia, Satheesh Katipomu, Haonan Li, Fajri Koto, Osama Mohammed Afzal, Samta Kamboj, Onkar Pandit, Rahul Pal, Lalit Pradhan, Zain Muhammad Mujahid, Massa Baali, Alham Fikri Aji, Zhengzhong Liu, Andy Hock, Andrew Feldman, Jonathan Lee, Andrew Jackson, Preslav Nakov, Timothy Baldwin, and Eric Xing. 2023. Jais and Jais-chat: Arabic-centric foundation and instruction-tuned open generative large language models. Preprint, arXiv:2308.16149.

Guokan Shang, Hadi Abdine, Yousef Khoubrane, Amr Mohamed, Yassine Abbahaddou, Sofiane Ennadir, Imane Momayiz, Xuguang Ren, Eric Moulines, Preslav Nakov, Michalis Vazirgiannis, and Eric Xing. 2025. Atlas-Chat: Adapting large language models for low-resource Moroccan Arabic dialect. In Proceedings of the First Workshop on Language Models for Low-Resource Languages, pages 9–30, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.

Siqi Shen, Lajanugen Logeswaran, Moontae Lee, Honglak Lee, Soujanya Poria, and
Rada Mihalcea. 2024. Understanding the capabilities and limitations of large language models for cultural commonsense. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 5668–5680, Mexico City, Mexico. Association for Computational Linguistics.

Zirui Song, Bin Yan, Yuhan Liu, Miao Fang, Mingzhe Li, Rui Yan, and Xiuying Chen. 2025. Injecting domain-specific knowledge into large language models: A comprehensive survey. Preprint, arXiv:2502.10708.

J. Stanlaw and N. Adachi. 2025. Language, Culture, and Society: An Introduction to Linguistic Anthropology. Taylor & Francis.

Yan Tao, Olga Viberg, Ryan S Baker, and René F Kizilcec. 2024. Cultural bias and cultural alignment of large language models. PNAS Nexus, 3(9):pgae346.

Fanar Team, Ummar Abbas, Mohammad Shahmeer Ahmad, Firoj Alam, Enes Altinisik, Ehsannedin Asgari, Yazan Boshmaf, Sabri Boughorbel, Sanjay Chawla, Shammur Chowdhury, Fahim Dalvi, Kareem Darwish, Nadir Durrani, Mohamed Elfeky, Ahmed Elmagarmid, Mohamed Eltabakh, Masoomali Fatehkia, Anastasios Fragkopoulos, Maram Hasanain, Majd Hawasly, Mus'ab Husaini, Soon-Gyo Jung, Ji Kim Lucas, Walid Magdy, Safa Messaoud, Abubakr Mohamed, Tasnim Mohiuddin, Basel Mousi, Hamdy Mubarak, Ahmad Musleh, Zan Naeem, Mourad Ouzzani, Dorde Popovic, Amin Sadeghi, Husrev Taha Sencar, Mohammed Shinoy, Omar Sinan, Yifan Zhang, Ahmed Ali, Yassine El Kheir, Xiaosong Ma, and Chaoyi Ruan. 2025a. Fanar: An Arabic-centric multimodal generative AI platform. Preprint, arXiv:2501.13944.
Gemma Team, Aishwarya Kamath, Johan Ferret, Shreya Pathak, Nino Vieillard, Ramona Merhej, Sarah Perrin, Tatiana Matejovicova, Alexandre Ramé, Morgane Rivière, Louis Rouillard, Thomas Mesnard, Geoffrey Cideron, Jean-Bastien Grill, Sabela Ramos, Edouard Yvinec, Michelle Casbon, Etienne Pot, Ivo Penchev, Gaël Liu, Francesco Visin, Kathleen Kenealy, Lucas Beyer, Xiaohai Zhai, Anton Tsitsulin, Robert Busa-Fekete, et al. 2025b. Gemma 3 technical report. Preprint, arXiv:2503.19786.

Gemma Team, Morgane Riviere, Shreya Pathak, Pier Giuseppe Sessa, Cassidy Hardin, Surya Bhupatiraju, Léonard Hussenot, Thomas Mesnard, Bobak Shahriari, Alexandre Ramé, Johan Ferret, Peter Liu, Pouya Tafti, Abe Friesen, Michelle Casbon, Sabela Ramos, Ravin Kumar, Charline Le Lan, Sammy Jerome, Anton Tsitsulin, Nino Vieillard, Piotr Stanczyk, Sertan Girgin, Nikola Momchev, Matt Hoffman, Shantanu Thakoor, Jean-Bastien Grill, Behnam Neyshabur, Olivier Bachem, Alanna Walton, Aliaksei Severyn, Alicia Parrish, Aliya Ahmad, Allen Hutchison, Alvin Abdagic, Amanda Carl, Amy Shen, Andy Brock, Andy Coenen, Anthony Laforge, Antonia Paterson, Ben Bastian, Bilal Piot, Bo Wu, Brandon Royal, and others. 2024. Gemma 2: Improving open language models at a practical size. Preprint, arXiv:2408.00118.

NLLB Team, Marta R. Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula, Loic Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon Spruit, Chau Tran, Pierre Andrews, Necip Fazil Ayan, Shruti Bhosale, Sergey Edunov, Angela Fan, Cynthia Gao, Vedanuj Goswami, Francisco Guzmán, Philipp Koehn, Alexandre Mourachko, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, and Jeff Wang. 2022. No language left behind: Scaling human-centered machine translation.
Preprint, arXiv:2207.04672.

Shumin Wang, Yuexiang Xie, Bolin Ding, Jinyang
Gao, and Yanyong Zhang. 2025. Language adaptation of large language models: An empirical study on LLaMA2. In Proceedings of the 31st International Conference on Computational Linguistics, pages 7195–7208, Abu Dhabi, UAE. Association for Computational Linguistics.

Wenxuan Wang, Wenxiang Jiao, Jingyuan Huang, Ruyi Dai, Jen-tse Huang, Zhaopeng Tu, and Michael Lyu. 2024. Not all countries celebrate Thanksgiving: On the cultural dominance in large language models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6349–6384, Bangkok, Thailand. Association for Computational Linguistics.

Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, and William Fedus. 2022. Emergent abilities of large language models. Transactions on Machine Learning Research. Survey Certification.

C. G. Wrenn. 1962. The counselor in a changing world.

An Yang, Anfeng Li, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Gao, Chengen Huang, Chenxu Lv, Chujie Zheng, Dayiheng Liu, Fan Zhou, Fei Huang, Feng Hu, Hao Ge, Haoran Wei, Huan Lin, Jialong Tang, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jing Zhou, Jingren Zhou, Junyang Lin, Kai Dang, Keqin Bao, Kexin Yang, Le Yu, Lianghao Deng, Mei Li, Mingfeng Xue, Mingze Li, Pei Zhang, Peng Wang, Qin Zhu, Rui Men, Ruize Gao, Shixuan Liu, Shuang Luo, Tianhao Li, Tianyi Tang, Wenbiao Yin, Xingzhang Ren, Xinyu Wang, Xinyu Zhang, Xuancheng Ren, Yang Fan, Yang Su, Yichang Zhang, Yinger Zhang, Yu Wan, Yuqiong Liu, Zekun Wang, Zeyu Cui, Zhenru Zhang, Zhipeng Zhou, and Zihan Qiu. 2025. Qwen3 technical report. Preprint, arXiv:2505.09388.

Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. HellaSwag: Can a machine really finish your sentence?
In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4791–4800, Florence, Italy. Association for Computational Linguistics.

Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. 2023. Judging LLM-as-a-judge with MT-bench and Chatbot Arena. In Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track.

A Data

Language | Data category | Data source | Nature | # of words
Dialectal data, Egypt | General | Wikipedia | Real | 128.71M
Dialectal data, Egypt | General | MT fineweb-EDU | Synthetic | 2.08B
Dialectal data, Egypt | General | LHV | Synthetic | 398.89M
Dialectal data, Egypt | General | Fineweb2 | Real | 430.46M
Dialectal data, Egypt | Arabizi | MT fineweb-EDU (Arabizi) & LHV | Synthetic | 206.49M
Dialectal data, Morocco | General | Wikipedia | Real | 1.67M
Dialectal data, Morocco | General | Translated fineweb-EDU | Synthetic | 2.02B
Dialectal data, Morocco | General | LHV | Synthetic | 207.41M
Dialectal data, Morocco | General | Fineweb2 | Real | 1.64B
Dialectal data, Morocco | Arabizi | MT fineweb-EDU (Arabizi) & LHV | Synthetic | 467.30M
MSA, Egypt | Cultural | Brave API | Real | 74.67M
MSA, Egypt | General | Local News | Real | 346.79M
MSA, Morocco | Cultural | Brave API | Real | 23.08M
MSA, Morocco | General | Local News | Real | 220.16M
General | General | Fineweb2 | Real | 28.80B
General | General | Wikipedia | Real | 318.62M
English | General | Fineweb-EDU | Real | 51.57B
French | General | Fineweb2 | Real | 9.42B
Code & Math | Code & Math | MathGenie/MathCode-Pile, macrocosm-os/code-parrot-github-code | Real | 818.35M

Table A.1: Distribution of the final pre-training data mixture by language, nature (synthetic vs. real), and word count per dataset. Bold rows highlight data generated via our proposed augmentation pipeline.

B Training
details

B.1 Pre-training

Motivations for choosing Qwen-2.5-3B as our backbone model. We select Qwen-2.5-3B as our base model for continued pre-training for two primary reasons: its competitive performance on MSA tasks and its tokenizer's good compression ratio on Arabic dialect texts. At the time of selection, the Gemma 2 (Team et al., 2024) and Qwen 2.5 base models showed the best performance on MSA. Both of their tokenizers also have a good compression ratio for Arabic text in both standard and dialectal forms: our analysis shows a ratio between 2.7 and 2.8 for Gemma, while Qwen 2.5 has a compression ratio between 2.9 and 3.1. Our final choice of Qwen 2.5 was based on its better performance on MSA.

Details for continued model pretraining. We continue the pre-training of Qwen-2.5-3B on our curated pre-training dataset. The model is fully fine-tuned for one epoch using a sequence length of 4,096. To optimize the learning process, the learning rate is linearly decayed from 5×10⁻⁶ to 5×10⁻⁷. To mitigate overfitting, we apply a weight decay of 0.1, and gradient norms are clipped at a maximum value of 1.0. The training is performed on a cluster of 4×A100 80GB GPUs.

B.2 Supervised fine-tuning

To enhance model robustness and facilitate effective merging, we augmented each dialect-specific dataset with a shared multilingual corpus comprising English SmolTalk, MSA SmolTalk, French SmolTalk, and additional data from the ORCA and Dolphin datasets. Each dialect-specific model was trained for two epochs with a sequence length of 4,096 tokens, using a learning rate that linearly decayed from 7×10⁻⁶ to 7×10⁻⁷.

Table B.1 compares the SFT model with models fine-tuned on Egyptian and Moroccan datasets individually, as well as with our final merged model, NileChat. NileChat performs well on tasks for both EGY and MOR. The MOR-specific model also demonstrates strong performance on both MOR and, to some extent, EGY tasks.
In contrast, the EGY-specific model does not perform well on MOR tasks.

Dataset name | Language | Source | # of Instructions
Darija-SFT-Mixture | MOR (Arabic) | Atlas-Chat | 458,155
TÜLU-V2-mix | EGY (Arabic) | Ours (MT) | 178,109
SmolTalk | MOR (Arabic) | Ours (MT) | 192,266
SmolTalk | MOR (Arabizi) | Ours (MT) | 93,419
SmolTalk | EGY (Arabic) | Ours (MT) | 195,260
SmolTalk | EGY (Arabizi) | Ours (MT) | 93,181
SmolTalk | French | Ours (MT) | 99,468
SmolTalk | MSA | Ours (MT) | 96,933
SmolTalk | English | SmolTalk | 149,124
ORCA | MSA + dialects | Ours (Converted) | 460,203
Dolphin | MSA + dialects | Ours (Converted) | 425,703
Cultural instructions | MOR (Arabic) | Ours (Synthetic) | 25,159
Cultural instructions | EGY (Arabic) | Ours (Synthetic) | 107,428

Table A.2: Distribution of the final instruction and response data mixture by language and number of instructions per dataset. 'Ours' refers to datasets we created via machine translation (MT) or by converting existing datasets into an instruction/response format.

We relate this observed asymmetry to the linguistic characteristics of the dialects relative to MSA. During the SFT phase, each dialect-specific dataset
was augmented with a shared multilingual corpus which included MSA data (e.g., MSA SmolTalk, and data from the ORCA and Dolphin datasets). It is plausible that EGY is linguistically closer to MSA than MOR, which is often considered more distant from MSA due to influences such as Berber and French. Consequently, the MOR-tuned model, having been exposed to this shared MSA data, might more effectively leverage this MSA knowledge to generalize to EGY tasks. Conversely, the greater linguistic divergence of the Moroccan dialect from MSA could make it more challenging for the EGY-tuned model to transfer its learning, including the MSA component, to the distinct features of the Moroccan dialect.

C Evaluation Setup

C.1 Evaluation Tasks

Full list of the 13 categories of WVS questions. Economic Values (EcoV); Ethical Values (EthV); Happiness and Wellbeing (HW); Index of Postmaterialism (IP); Perceptions about Science and Technology (PST); Perceptions of Corruption (PC); Perceptions of Migration (PM); Perceptions of Security (PS); Political Culture and Political Regimes (PCPR); Political Interest and Political Participation (PIPP); Religious Values (RV); Social Capital, Trust, and Organizational Membership (SCTOM); and Social Values, Norms, and Stereotypes (SVNS).

C.2 Baselines

We evaluate our model NileChat against a set of 17 Arabic-aware LLMs, some of which are also aligned to Arabic dialects. These models are from the following model families: ALLaM (Bari et al., 2025), Jais (Sengupta et al., 2023), Atlas-Chat (Shang et al., 2025), ar-stablelm-2-chat (Alyafeai et al., 2024), Gemma-3 (Team et al., 2025b), Qwen-2.5 (Qwen et al., 2025), Qwen3 (Yang et al., 2025) (non-thinking mode), and Llama-3.2 (Grattafiori et al., 2024). The full list of models, including their corresponding size and release date, is presented in Table C.1.
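As a side note on the schedules in Appendix B: both training phases use a linearly decayed learning rate (5×10⁻⁶ → 5×10⁻⁷ for continued pre-training, 7×10⁻⁶ → 7×10⁻⁷ for SFT). A minimal sketch of such a schedule, under the stated endpoints only — the `linear_lr` helper and the total step count are illustrative, not the authors' training code:

```python
def linear_lr(step: int, total_steps: int, lr_start: float, lr_end: float) -> float:
    """Linearly interpolate the learning rate from lr_start (step 0)
    down to lr_end (final step), clamping past the end of training."""
    frac = min(step / max(total_steps, 1), 1.0)  # fraction of training done
    return lr_start + frac * (lr_end - lr_start)

# Continued pre-training endpoints from Appendix B.1: 5e-6 -> 5e-7
# (10_000 total steps is an assumed placeholder)
print(linear_lr(0, 10_000, 5e-6, 5e-7))       # learning rate at the start
print(linear_lr(10_000, 10_000, 5e-6, 5e-7))  # learning rate at the end
```

Warmup, batch accumulation, and the actual step count are details the appendix does not specify; the sketch only fixes the two endpoints of the decay.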
D Full results

E Prompts

The provided figures showcase diverse prompts for language models targeting low-resource languages. Figure 5 translates English educational content into conversational dialectal Arabic, while Figure 6 converts dialectal Arabic script to Arabizi. English instructions are translated to dialectal Arabic using the prompt in Figure 7. For content generation, Figure 8 guides the model to create culturally relevant dialectal Arabic text based on a given persona and context. Figure 9 focuses on summarizing detailed persona descriptions concisely. Finally, Figure 10 instructs an LLM to generate practical dialectal Arabic question-answer pairs in JSON format from provided text.

Translate the following text from English to Egyptian Arabic. Ensure that all words are in Egyptian Arabic, and do not use any Modern Standard Arabic (MSA). Keep the translation casual, conversational, and reflective of how Egyptians would naturally speak in everyday situations. Avoid any formal or classical language structures. Translate only the input paragraph and don't add anything else in your output.

English: {English_text}

Figure 5: The translation prompt used with a teacher model to convert English educational pre-training data to a low-resource target language. The placeholder '{English_text}' represents the input English text.

Write the following Moroccan dialectal Arabic text in Moroccan Arabizi. Ensure that all words are written in Moroccan Arabizi. Keep the
text casual, conversational, and reflective of how Moroccans would naturally write in everyday situations using Arabizi. Translate only the content keys in the following JSON, and output a json of the same format: {JSON_OBJECT}

Figure 6: The prompt used with our teacher LLM to convert dialectal Arabic text written in Arabic script into Arabizi. The placeholder {JSON_OBJECT} represents the input text formatted as a JSON object.

Translate the following text from English to Moroccan Arabic. Ensure that all words are in Moroccan Arabic, and do not use any Modern Standard Arabic (MSA). Keep the translation casual, conversational, and reflective of how Moroccans would naturally speak in everyday situations. Avoid any formal or classical language structures. Translate only the content keys in the following JSON, and output a json of the same format: {JSON_OBJECT}

Figure 7: The translation prompt used with a teacher model to convert SmolTalk and TULU instructions data to a low-resource target language. The placeholder '{JSON_OBJECT}' represents the input text.

Act as the following person: {persona_description}

Act like you are {person_Name} and write a {text_genre} in Egyptian dialect, using colloquial Arabic script as spoken in Egypt and not Modern Standard Arabic (MSA). Use this context and use the information provided in it while writing the {text_genre}: {context}

Make sure to follow these conditions:
1. Rely on the provided context when writing the {text_genre}.
2. Ensure that the written {text_genre} reflects the cultural background, values, and worldview of {person_Name}.
3. Don't write the persona's description. I want you to focus only on the provided context when writing while reflecting the persona's background.

Note: Ensure that all words are in Egyptian Arabic, and do not use any Modern Standard Arabic (MSA).
Keep the translation casual, conversational, and reflective of how Egyptians would naturally speak in everyday situations.

Figure 8: Prompt for generating culturally and values-aware text genres in low-resource languages, given a local persona description and a local cultural concept. The placeholders {persona_description}, {text_genre}, and {context} represent the persona description, the intended text genre to generate, and the cultural concept text, respectively.

I have the following persona description, I want you to write it in a concise manner keeping all the information, the output should be plain text, make sure to include all values, morals, and culture of the persona: {PERSONA_DESCRIPTION}

Figure 9: Prompt for generating concise persona descriptions with a LLM utilizing a comprehensive description of a specific persona extracted from the WVS. The placeholder '{PERSONA_DESCRIPTION}' represents the input persona description.

You are an expert in Moroccan culture and language, with native-level fluency in Moroccan Arabic dialect written in Arabic script. Your task is to transform detailed articles or information (like recipes, historical accounts, cultural traditions, etc.) into practical, useful instruction-response pairs in authentic Moroccan Arabic dialect expressed in Arabic script.

## Task Description:
1. I will provide you with raw text content (like cooking recipes, cultural traditions, historical information, etc.)
2. Transform this content into practical instruction-response pairs where:
   - Instructions ask how to do something, how something works, or how to understand something
   - Responses provide thorough, practical explanations in Moroccan dialect
3. Focus on making these interactions useful for someone wanting to learn practical information
4. Output the results in structured JSON format

## Instruction Format Guidelines:
- For procedural content (recipes, crafts, etc.): Create "how to" questions
  - Example: {example_1}
- For historical/factual content: Create "why" questions
  - Example: {example_2}
- For cultural practices: Create "what is" or "how do we celebrate" questions
  - Example: {example_3}

## Response Format Guidelines:
- Make responses detailed and practical
- Include specific steps for procedural content
- Use authentic Moroccan vocabulary and expressions
- Structure responses in a logical order
- For recipes or procedures, list steps clearly in the order they should be performed
- Include tips and warnings where appropriate
- Keep the tone conversational and helpful, as if explaining to a friend

## Language Guidelines:
- Use authentic Moroccan Arabic dialect written in Arabic script (not transliteration)
- Include typical Moroccan cooking/cultural terminology and expressions
- Use language as it would naturally be spoken, not literary Arabic
- Include common Moroccan filler words and expressions where natural
- For cooking terms or specialized vocabulary, use the actual terms Moroccans use.

## Output Format:
The output should be valid JSON with the following structure:
```json
{
  "instruction_response_pairs": [
    {
      "instruction": "[practical instruction in Moroccan dialect]",
      "response": "[detailed practical response in Moroccan dialect]"
    },
    {
      "instruction": "[another practical instruction]",
      "response": "[another detailed response]"
    }
  ]
}
```

## Instructions:
1. Generate at least 3-5 practical instruction-response pairs based on the content I provide.
2. Make each instruction focus on a practical aspect (how-to, ingredients, common mistakes, variations, etc.).
3. Ensure responses are detailed enough to be practically useful.
4. The resulting JSON must be properly formatted and valid.
5. For specialized terms or techniques, use the authentic Moroccan terms.
6. When dealing with recipes or procedures, make sure the steps are in logical order.
7. If the article contains cultural context or historical information, include some of this in the responses to provide depth.
8. If the article does not have any relevant information to the Moroccan culture then return an empty list.

Now, I'll provide you with content, and you'll transform it into practical instruction-response pairs in Moroccan dialect as specified above.

{Text}

Figure 10: Prompt for generating synthetic cultural dialectal question-answer pairs using a teacher model given raw text describing a local cultural heritage concept. The placeholder {Text} represents the raw text that is used to generate question-answer pairs from it.

F Samples of texts generated from NileChat

Moroccan conversation (based on Wikipedia context)

Text genre: Conversation

Persona: Jilali, a 40-year-old Moroccan male, lives in Karia Ba Mohamed, Fes-Meknes region, in an urban area. He is married with 3 children and lives in a household of 7, including his parents and in-laws. Jilali has post-secondary non-tertiary education and works full-time as a skilled worker in a government or public institution. He is in the third income decile, considers himself upper middle class, and manages to save money. Jilali values family, friends, leisure, politics, work, and religion highly. He is Muslim, attends religious services weekly, and prays several times a day. He believes most people can be trusted and is quite happy, rating his life satisfaction as 6 out of 10 and his financial satisfaction as 8 out of 10....

Context: وﺻﻔﺎت و أﻧواع اﻟطواﺟﯾن اﻟﻣﺧﺗﻠﻔﺔ اﻟﻣوﺟودة ﻓﻲ اﻟﻣطﺑﺦ اﻟﻣﻐرﺑﻲ. ﻣﻌﺗرف ﺑﮫ ﻋﺎﻟﻣﯾﺎ ﻛطﺑق ﻛﻼﺳﯾﻛﻲ ﻣن اﻟﻣطﺑﺦ 400 طﺎﺟﯾن اﻟزﯾﺗون )ﻛﻣﺎ ﯾﻌرف أﯾﺿﺎ ﺑﺎﻻﺳم اﻟﻛﺎﻣل: طﺎﺟﯾن اﻟدﺟﺎج ﺑﺎﻟزﯾﺗون( ھو طﺑق ﻣﻐرﺑﻲ ﯾﻧﺣدر ﻣن اﻟﻣطﺑﺦ اﻷﻣﺎزﯾﻐﻲ اﻟﻣﻐرﺑﻲ اﻟﺗﻘﻠﯾدي. ﯾﻌﺗﺑر وﺻﻔﺔ ﻣن ﺑﯾن اﻟﻣﻐرﺑﻲ ﺑﺎﻋﺗﺑﺎره، ﻣﻊ طﺎﺟﯾن اﻟﺑرﻗوق، ﺟوھرة ﻓن اﻟطﮭو اﻟﻣﻐرﺑﻲ. ﻛﻣﺎ ﯾﻌﺗﺑر واﺣد ﻣن أﺷﮭر اﻟطواﺟﯾن اﻟﻣﻐرﺑﯾﺔ ﻓﻲ اﻟﻣﻐرب و ﺧﺎرج اﻟﺑﻠد.
ﯾﻌﺗﺑر اﻟزﯾﺗون واﻟدﺟﺎج ﻣن اﻟﻣﻛوﻧﺎت اﻷﺳﺎﺳﯾﺔ ﻟدى طﺎﺟﯾن اﻟزﯾﺗون، ﻛﻣﺎ ﯾﻣﻛن ﺗﺣﺿﯾره ﺑزﯾﺎدة اﻟﻠﯾﻣون اﻟﻣﺻﯾر، اﻟﻔطرﯾﺎت، اﻟﺟزر أو اﻟﺗﯾن. أﺻل و اﯾﺗﯾﻣوﻟوﺟﯾﺎ. ﺗﻌود أﺻول اﻟطﺎﺟﯾن إﻟﻰ اﻟﻣﻐرب ﺣﯾث ﯾﻧﺣدر ﻣن اﻟﺣﺿﺎرة اﻷﻣﺎزﯾﻐﯾﺔ اﻟﻣﻐرﺑﯾﺔ اﻷﺻﻠﯾﺔ. ذﻛر إﺧﺗرﻋﮫ ﻓﻲ اﻟﻣﻐرب ﻣﺎ ﻗﺑل اﻟﻌﺻور اﻟوﺳطﻰ ﺣﯾث ﻛﺎن ﻣﻌروف ﻓﻲ اﻟﻣﻐرب ﻓﻲ اﻟﻘرن اﻟﺛﺎﻣن و ﻛﺎن ﯾﻌﺗﺑر طﺑق ﻣﻠﻛﻲ ﻟدى ﻗﺑﺎﺋل أﻣﺎزﯾﻎ ﺻﻧﮭﺎﺟﺔ اﻟﻣﻐﺎرﺑﺔ ﻓﻲ اﻟﻘرن اﻟﺛﺎﻟث ﻋﺷر. ﺗﺣظﻰ ﺑﺷﻌﺑﯾﺔ ﻛﺑﯾرة ﺧﺎرج ﺣدود اﻟﻣﻐرب، ﻣﻌﺗرف ﺑﮫ ﻓﻲ اﻟﻣطﺎﻋم اﻷوروﺑﯾﺔ واﻷﻣرﯾﻛﯾﺔ. ﻛﻣﺎ ﻋرف طﺎﺟﯾن
Moroccan chicken with olives is widespread in Algeria, especially in the city of Tlemcen. This popularity is explained by the attraction that lovers of Moroccan tagine feel toward the city's geography, close as it is to Morocco. Associated rituals. The shellal: this dish is served after the hands are washed in the original Moroccan manner using what is called the Moroccan shellal, one of the rituals of Moroccan hospitality. It is a vessel, a portable washbasin, that lets guests wash their hands without moving from their places; out of respect for the guest, the shellal is brought directly to the guest's hands so that they can wash before eating. Chicken tagine with olives is eaten hot. It is an everyday dish for Moroccans, but it is served conspicuously at large receptions and weddings as a symbol of hospitality. The marasha: the marasha is one of the oldest Moroccan traditions; orange-blossom water or musk is sprinkled onto the guests' palms as a form of welcome.
Linguistic cues: Include these Moroccan dialect words in your generated text: "دعم" (meaning "sustain"), "تحضير" (meaning "preparation"), "مارس" (meaning "march"), "تغذية" (meaning "nutrition"), and "قلب" (meaning "flip").
Generated text (English translation of the Moroccan-dialect original):
- Peace be upon you, how were you yesterday?
- Fine, thank God. Today I want to tell you about a traditional Moroccan dish I like a lot, called olive tagine. It is considered one of the most famous Moroccan dishes; it consists mainly of chicken and olives, but some additions give it a distinctive flavor, such as preserved lemon, mushrooms, carrots, and even figs.
- Oh, I love olive tagine; it is delicious and has a distinctive taste. I know it is eaten at big occasions and weddings, but I prefer to eat it at home with the family; it is tasty and warming.
- Exactly. Olive tagine is always present at big occasions and is one of the symbols of Moroccan hospitality. I like how it is served with the shellal: we wash our hands in the traditional way before eating, with great respect shown to the guest.
- Ah, the shellal! I love that. We always have one at home and use it on every occasion. We wash our hands before eating, and we have the marasha of orange-blossom water or musk with which we welcome guests.
- That is what I love about Morocco: our traditions and the rituals that accompany the food. Olive tagine demands special preparation; it takes time and all that, but it comes out delicious and its taste pleases everyone.
- I had heard that the origin of this dish is Amazigh, from Morocco's original Amazigh civilization. I like how we find this cultural diversity in our cooking, combining the traditional and the modern.
- Exactly. We even find this dish in European and American restaurants, where it is well known. I like how we find support for our culture around the world.
- And when you prepare it, you give it a special touch; you add vegetables and spices, and it comes out very tasty. I once tried adding figs; it flipped the taste, and now I like it a lot.
- That is what I love about cooking: the creativity and innovation in it. Everyone adds their own touch, and it comes out tasty and varied.
- And when March comes, we get the new olives. They are very tasty; we use them in cooking, and they give a special flavor.
- Ah, the March olives are very tasty. We use them in cooking and even in salads; they provide nutrition and health.
- Exactly. Olives have many benefits. We use them in cooking, they give a distinctive taste, and they are one of the basic ingredients of Moroccan cuisine.
Egyptian Blog Post (based on Wikipedia context)
Text genre: Blog Post
Persona: Hatem, a 29-year-old Egyptian male, lives in a rural area of Markaz Al Delengat, Behaira region. He is single, has no children, and lives with his parents in a household of three. Hatem has no formal education and works part-time as an unskilled laborer in private
business or industry. He is in the third income decile and considers himself lower class. He has spent savings and borrowed money, and rates his financial satisfaction as 2 out of 10 and his overall life satisfaction as 3 out of 10. Despite this, he reports good health and rates his control over life as 5 out of 10. Hatem is a devout Muslim who attends religious services more than once a week and prays several times a day. He values family and friends highly but does not prioritize leisure or politics. Work and religion are very important to him. He is cautious with people and not very happy.
Context (English translation of the Arabic original): Koshari is one of the most famous Egyptian popular dishes. It consists of pasta, rice, black lentils, fried onions, spicy tomato sauce, and the taqliya. Koshari is considered a low-cost, high-calorie meal containing vegetable oils oxidized by frying, which draws a number of health criticisms around the dish. Its development. In Egypt: Koshari has developed recently in Egypt, where the number of shops specializing in it has grown; these shops also serve other foods alongside Koshari, such as pasta dishes, rice pudding, ashura, and pastries (harissa). Alexandrian Koshari differs from the ordinary Koshari of the other governorates in that yellow lentils are used instead of black lentils, so that it looks different from ordinary Koshari and turns yellow, and no sauce is added to it. Preparing the sauce. Its ingredients: a tablespoon of ghee, two tomatoes, five cloves of garlic, a large tin of tomato sauce, and two or three spoonfuls of vinegar. Koshari in the Levant: in the Levant, Koshari is called mujaddara; it is cooked and served in two ways, with bulgur or with rice. The first way is bulgur mixed with boiled black lentils, served hot or cold, especially when the rice and lentils are prepared with olive oil instead of baladi ghee, in which case it is preferable to eat it hot. It is usual to eat Koshari with yogurt, or without it. Koshari in Iraq: in Iraq, Koshari is cooked with rice and crushed yellow lentils and is usually served with kubbat hamudh, which is considered its complement. Nutritional information: a serving of Egyptian Koshari (about 300 g) contains, according to the Shahiya website, the following nutritional information. Some criticisms: if reheated rice is put into the Koshari, it creates a type of bacteria that can lead to health problems, even fatal ones, among the symptoms of poisoning from reheated rice; Koshari should therefore be ordered without the rice.
Linguistic cues: N/A
Generated text (English translation of the Egyptian-dialect original): Hey everyone, today I am going to tell you about a very famous Egyptian popular dish: Koshari. Koshari is quite something, a mix of pasta, rice, black lentils, fried onions, and spicy tomato sauce, with the taqliya placed on top. Koshari is not just cheap; it is also full of calories and of vegetable oils oxidized by the frying, which is why so many people talk about its health harms. Koshari has become very popular in Egypt; there are many shops specializing in it, and they now also serve other dishes alongside it, like pasta with milk, ashura, and harissa. But you know, in Alexandria they do it a bit differently: they use yellow lentils instead of black, so it looks different and its color is yellow, and they serve it without sauce. Now for how to make the Koshari sauce; here are the ingredients: you will need garlic, tomatoes, a large tin of sauce, vinegar, and ghee. By the way, in the Levant Koshari is called mujaddara, and they cook it in two ways, either with bulgur or with rice. Those who like bulgur make it with boiled black lentils and serve it hot or cold, especially if they put olive oil on it instead of ghee. In Iraq, Koshari is prepared with rice and crushed yellow lentils and served with kubbat hamudh. But be careful: there are some warnings against eating Koshari that contains reheated rice, because that can produce a type of bacteria that can harm your health and cause serious problems. All in all, Koshari is a delicious popular dish, but as I told you, we have to watch out for reheated rice, and it is better to order it without rice to avoid any health problems. Bon appétit!
Moroccan conversation (based on local news context)
Text genre: Conversation
Persona: Zohra, a 49-year-old Moroccan woman, lives in a rural area of Tigrigra within the Fes-Meknes region. She is married with two children and lives in a household of five people, not including her parents. Zohra has no formal education and works as a housewife and farm laborer in private business or industry. She is in the sixth income decile and considers herself lower middle class, just getting by financially. Family is very important to Zohra, while friends are rather important. She does not prioritize leisure, politics, or work, and considers religion very important. She is a practicing Muslim, attending religious services once a week and praying several times a day. She believes most people can be trusted and rates her life satisfaction as 6 out of 10 and her financial satisfaction as 7 out of 10. She reports fair health and feels complete control over her life.
Context (English translation of the Arabic original): Friday, October 5, 2018, 11:00. The Tamaris area of Dar Bouazza, Casablanca, is seeing a number of structural projects completed, among them a project carried out by Lydec to equip the area with a liquid-sanitation network for rainwater and wastewater. The project includes a tunnel 1,102 meters long and a water-pumping unit. This project, which Lydec is carrying out in partnership and coordination with the prefecture of Nouaceur province, the local authorities, and the commune of Dar Bouazza, also includes a wastewater drainage canal 1,383 meters long, a large rainwater drainage canal 984 meters long, and a wastewater collector canal 1.2 km long, as part of an approach aimed at keeping pace with the urbanization of the Tamaris area on the southern side of Casablanca by equipping it with a liquid-sanitation network and a drinking-water network. Alongside this project, completed with an investment exceeding 58 million dirhams, Lydec is building a drinking-water supply network at a cost of more than 7 million dirhams, consisting of a pipe 3.32 km long and 315 millimeters in diameter. The two projects form part of Lydec's efforts and commitments to keep pace with the social and economic development and the urban expansion of the city of Casablanca.
Linguistic cues: Include these Moroccan dialect words in your generated text: "شاك" (meaning "doubt"), "مخبي" (meaning "hidden"), "ريح" (meaning "windy"), "رجل" (meaning "feet"), and "بخيل" (meaning "stingy").
Generated text (English translation of the Moroccan-dialect original):
- Peace be upon you, how are you today? Anything new in the country?
- And upon you be peace and mercy. There is something new in Tamaris: new projects are being carried out in Dar Bouazza, including a liquid-sanitation project for rainwater and wastewater.
- Ah, that is nice. And what exactly is this project for?
- There are new urbanization projects in the area, and they want to build a sanitation network to carry off the rainwater and the wastewater, because the area suffers somewhat from this problem.
- Ah, I see; so there are sewage problems in Tamaris?
- Exactly. There are hidden problems: areas where rainwater collects and seeps into the wastewater, and there are pollution problems.
- Poor people; someone needs to find a solution, because this leaves the area windswept and without decent infrastructure.
- Exactly. This project will build a long tunnel and large canals to collect the rainwater and the wastewater and carry them off in an organized way.
- And why this project exactly? Any doubt about it?
- No, this project is organized by Lydec in partnership with the prefecture and the local authorities. They want a comprehensive solution to the liquid-sanitation problem, and to drinking water as well.
- Ah, that is good. We need to help with the country's development, because these projects matter a great deal.
- Exactly. This project will cost a lot, but it will have a positive impact on the area and will improve residents' quality of life.
- May God bring what is good, because this helps the country's development, unlike some stingy people who think only of themselves.
- Haha, there are people like the ones you mention, but there are also people working hard to improve the country's situation.
- Ah, may God help them, because this is not easy; they have to work on their feet to develop the country.
- Exactly. As Moroccans we must unite and work hand in hand to move forward and improve the country's situation.
- Ah, you are right. We must be one hand against backwardness and ignorance, and encourage development and progress.
- Exactly. We should be proud of our country and work for it, because it is our duty.
- Ah, may God help everyone, and may we see our country beautiful and clean of every problem.
- God willing, and may God make things easy for everyone who works hard.
Table A.3: Examples of culturally aware and dialectally diverse texts from various genres, generated by our teacher model. The model was provided with input representing a specific persona, local context, and a local linguistic cue, following the methodology described in Section 3.1.
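The Table A.3 entries above combine a genre, a persona, a grounding context, and optional linguistic cues into a single generation request for the teacher model. A minimal sketch of how such a prompt could be assembled follows; the field names and template wording are illustrative assumptions, not the paper's exact prompt format.

```python
def build_teacher_prompt(genre, persona, context, cues=None):
    """Assemble a generation request from genre, persona, context, and cues.

    The template below is a hypothetical reconstruction; the paper's
    actual prompt wording may differ.
    """
    parts = [
        f"Text genre: {genre}",
        f"Persona: {persona}",
        f"Context: {context}",
    ]
    if cues:  # linguistic cues are optional (some entries list N/A)
        listed = ", ".join(f'"{word}" (meaning "{gloss}")'
                           for word, gloss in cues)
        parts.append(f"Include these dialect words in your generated text: {listed}")
    parts.append("Write the text in the persona's dialect, "
                 "grounded in the context above.")
    return "\n".join(parts)


# Example mirroring the Zohra entry above (texts abridged).
prompt = build_teacher_prompt(
    genre="Conversation",
    persona="Zohra, a 49-year-old Moroccan woman from Tigrigra...",
    context="Lydec is building a sanitation network in Tamaris, Dar Bouazza...",
    cues=[("شاك", "doubt"), ("مخبي", "hidden")],
)
```

Sampling many (persona, context, cue) combinations through such a template is what yields the genre and dialect diversity visible in the table.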
Model          Belebele           Flores                In-house
               MOR      EGY       ENG→EGY   ENG→MOR     ENG→MOR   ENG→EGY
NileChat-EGY   64.44    70.89     43.85     23.10       11.93     36.93
NileChat-MOR   70.67    72.56     39.94     37.45       30.82     29.98
NileChat       70.33    72.67     44.37     33.89       28.67     37.52

Table B.1: Comparison of the performance of the Egyptian SFT model (NileChat-EGY), the Moroccan SFT model (NileChat-MOR), and their merged version, NileChat, on Belebele (accuracy), Flores (chrF), and in-house parallel data (chrF).

Model Name              Size   Release Date
Less than 7B
Qwen3-1.7B              1.7    Apr. 2025
ar-stablelm-2-chat      1.6    Jul. 2024
Atlas-Chat-2B           2.6    Sep. 2024
Llama-3.2-3B-Instruct   3.2    Sep. 2024
gemma-3-4b-it           4.3    Mar. 2025
Qwen3-4B                4      Apr. 2025
NLLB-200-3.3B           3.3    Jul. 2022
Qwen2.5-3B-Instruct     3.1    Sep. 2024
More than 7B
AceGPT-7B-chat          7      Dec. 2023
ALLaM-7B-Instruct       7      Feb. 2025
Qwen2.5-7B-Instruct     7.6    Apr. 2025
Qwen3-8B                8.2    Apr. 2025
Atlas-Chat-9B           9.2    Sep. 2024
gemma-3-12b-it          12.2   Mar. 2025
AceGPT-13B-chat         13     Dec. 2023
jais-13b-chat           13     Aug. 2023

Table C.1: The LLMs used for comparison against NileChat in this evaluation were selected from a list of Arabic-aware models. Each LLM is listed with its size (in billions of parameters) and release date. We used the instruct version of all LLMs except NLLB, which is a machine-translation-specific model.

Model                   MMLU            HellaSwag       Belebele
                        EGY     MOR     EGY     MOR     EGY     MOR
Less than 7B
Qwen3-1.7B              28.53   28.53   28.07   27.33   22.89   22.89
ar-stablelm-2-chat      39.54   38.32   34.33   33.40   24.22   22.78
Atlas-Chat-2B           42.65   45.06   29.62   34.78   54.67   59.00
Llama-3.2-3B-Instruct   31.10   30.92   28.86   28.39   49.67   40.89
gemma-3-4b-it           46.32   46.60   34.26   32.53   61.44   52.11
Qwen3-4B                28.59   28.52   30.21   29.53   22.89   22.89
Qwen2.5-3B-Instruct     35.71   37.67   31.17   29.62   61.11   44.89
NileChat (3B)           58.20   58.62   38.29   40.35   78.11   73.78
More than 7B
AceGPT-7B-chat          40.76   37.98   33.04   31.04   38.00   33.00
ALLaM-7B-Instruct       60.18   59.61   40.20   38.14   76.11   66.00
Qwen2.5-7B-Instruct     57.70   53.51   33.79   32.28   76.67   59.44
Qwen3-8B                28.53   28.53   31.72   30.95   22.89   22.89
Atlas-Chat-9B           57.17   60.27   34.75   44.47   78.44   79.33
gemma-3-12b-it          59.29   56.16   40.16   37.60   80.78   73.11
AceGPT-13B-chat         46.48   43.65   35.15   33.21   46.33   41.11
jais-13b-chat           49.33   48.28   38.99   37.45   59.89   53.78

Table D.1: 3-shot performance (accuracy) of models on understanding benchmarks (MMLU, HellaSwag, and Belebele). Bold values indicate the highest score among models comparable in size to ours (<7B parameters); underlined values represent the highest score in the entire column, including larger models. Results for zero-shot are presented in Table 1, Section 4.2.
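Table B.1 reports a merged model, NileChat, built from the two dialect-specific SFT checkpoints. As an illustration only, assuming the simplest merging recipe, uniform parameter averaging (the paper's actual merge method may differ), the operation looks like this:

```python
def average_merge(state_a, state_b, alpha=0.5):
    """Interpolate two flat parameter dicts, key by key.

    alpha=0.5 gives a uniform average of the two checkpoints.
    Real checkpoints hold tensors; plain lists keep this sketch
    dependency-free. This is a hypothetical recipe, not the
    paper's documented merging procedure.
    """
    assert state_a.keys() == state_b.keys(), "checkpoints must match"
    return {
        name: [alpha * a + (1 - alpha) * b
               for a, b in zip(state_a[name], state_b[name])]
        for name in state_a
    }


# Toy two-parameter "checkpoints" standing in for NileChat-EGY / -MOR.
egy = {"w": [1.0, 3.0], "b": [0.0, 2.0]}
mor = {"w": [3.0, 1.0], "b": [2.0, 0.0]}
merged = average_merge(egy, mor)
```

The B.1 rows show the appeal of merging: the merged model roughly tracks each specialist on its own dialect instead of collapsing to one of them.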
Flores Dataset Performance (each direction lists sB0/sB4/cF0/cF4)

Less than 7B
Qwen3-1.7B: MOR→EGY 19.49/19.80/36.32/36.45; MOR→ENG 14.76/15.45/40.21/40.79; MOR→FRA 12.17/12.90/35.08/35.83; EGY→MOR 19.38/19.31/36.23/36.22; EGY→ENG 19.65/20.48/46.49/47.05; ENG→MOR 7.32/7.77/22.02/23.42; ENG→EGY 10.02/10.62/25.98/27.55; FRA→EGY 5.98/6.63/19.60/21.69; Average 13.59/14.12/32.74/33.62
ar-stablelm-2-chat: MOR→EGY 18.29/18.57/34.31/34.46; MOR→ENG 7.36/7.03/28.97/28.47; MOR→FRA 3.51/2.85/19.36/18.08; EGY→MOR 14.16/16.35/29.25/33.32; EGY→ENG 8.05/7.93/31.12/30.67; ENG→MOR 3.25/11.51/8.22/29.25; ENG→EGY 10.41/18.68/22.89/35.77; FRA→EGY 3.81/9.55/9.06/26.13; Average 8.60/11.56/22.90/29.52
Atlas-Chat-2B: MOR→EGY 19.67/20.03/36.67/36.90; MOR→ENG 24.64/26.18/49.36/50.32; MOR→FRA 19.03/20.18/41.53/42.49; EGY→MOR 19.35/19.88/36.88/37.65; EGY→ENG 23.44/25.41/49.17/50.28; ENG→MOR 12.06/14.89/29.66/33.48; ENG→EGY 10.74/15.16/27.45/33.04; FRA→EGY 8.80/11.72/25.73/29.77; Average 17.22/19.18/37.06/39.24
Llama-3.2-3B-Instruct: MOR→EGY 18.75/10.79/35.15/21.71; MOR→ENG 15.71/0.08/38.75/0.85; MOR→FRA 12.16/0.54/29.55/3.40; EGY→MOR 17.67/14.00/34.59/28.64; EGY→ENG 20.89/0.06/45.19/0.64; ENG→MOR 5.86/0.91/16.92/5.21; ENG→EGY 9.74/0.77/24.54/5.29; FRA→EGY 3.92/1.12/15.14/5.64; Average 13.09/3.53/29.98/8.92
gemma-3-4b-it: MOR→EGY 16.89/21.11/33.00/37.78; MOR→ENG 7.99/24.48/17.81/49.50; MOR→FRA 5.52/23.80/11.28/46.04; EGY→MOR 12.86/16.31/29.10/33.91; EGY→ENG 12.05/29.04/24.62/54.24; ENG→MOR 1.88/13.05/4.85/31.03; ENG→EGY 1.65/20.13/3.24/37.60; FRA→EGY 0.93/10.56/3.01/28.17; Average 7.47/19.81/15.86/39.78
Qwen3-4B: MOR→EGY 20.41/20.57/37.17/37.15; MOR→ENG 18.98/20.67/44.68/46.07; MOR→FRA 17.30/18.39/39.96/41.33; EGY→MOR 16.06/18.86/31.90/35.84; EGY→ENG 23.99/25.75/50.47/51.83; ENG→MOR 11.43/11.30/27.93/27.98; ENG→EGY 15.46/15.83/33.00/33.20; FRA→EGY 7.44/9.54/21.46/25.90; Average 16.38/17.62/35.82/37.41
NLLB-200-3.3B: MOR→EGY 20.92/-/38.74/-; MOR→ENG 30.89/-/53.64/-; MOR→FRA 27.89/-/48.69/-; EGY→MOR 17.06/-/35.14/-; EGY→ENG 34.62/-/58.07/-; ENG→MOR 17.46/-/34.89/-; ENG→EGY 26.93/-/43.86/-; FRA→EGY 11.58/-/28.90/-; Average 23.42/-/42.74/-
Qwen2.5-3B-Instruct: MOR→EGY 18.51/19.71/35.20/36.32; MOR→ENG 18.61/19.83/44.22/44.99; MOR→FRA 15.00/16.22/38.56/39.45; EGY→MOR 16.98/17.90/34.15/34.91; EGY→ENG 24.07/25.08/50.45/50.99; ENG→MOR 9.36/8.48/25.17/25.14; ENG→EGY 11.78/11.43/28.05/28.44; FRA→EGY 7.47/7.18/22.79/23.32; Average 15.22/15.73/34.83/35.45
NileChat: MOR→EGY 23.81/22.68/40.25/39.25; MOR→ENG 28.81/29.50/52.98/53.43; MOR→FRA 24.05/23.49/46.30/45.56; EGY→MOR 20.90/20.51/38.22/37.64; EGY→ENG 30.58/31.90/54.53/56.19; ENG→MOR 15.15/17.90/30.76/36.00; ENG→EGY 23.39/25.03/40.37/41.94; FRA→EGY 13.18/14.91/29.61/32.52; Average 22.49/23.24/41.63/42.82
More than 7B
AceGPT-7B-chat: MOR→EGY 19.92/18.93/36.91/35.72; MOR→ENG 17.67/20.04/44.09/45.71; MOR→FRA 14.78/15.92/38.08/38.87; EGY→MOR 19.11/17.99/36.04/35.24; EGY→ENG 23.11/25.81/50.28/51.96; ENG→MOR 8.93/12.29/21.59/29.72; ENG→EGY 16.11/18.25/33.32/35.56; FRA→EGY 5.95/10.28/16.65/27.07; Average 15.70/17.44/34.62/37.48
ALLaM-7B-Instruct: MOR→EGY 24.63/25.12/40.93/41.39; MOR→ENG 26.02/28.13/51.65/52.92; MOR→FRA 18.91/20.80/42.64/44.04; EGY→MOR 19.85/19.53/37.32/37.00; EGY→ENG 29.63/31.82/55.53/56.75; ENG→MOR 14.63/16.56/32.38/34.79; ENG→EGY 23.18/25.93/39.85/42.70; FRA→EGY 13.17/14.09/30.75/32.15; Average 21.25/22.75/41.38/42.72
Qwen2.5-7B-Instruct: MOR→EGY 16.28/18.87/32.19/35.08; MOR→ENG 21.59/23.62/47.28/48.55; MOR→FRA 18.98/19.96/42.12/42.93; EGY→MOR 13.24/16.09/29.34/32.91; EGY→ENG 26.39/27.83/52.72/53.74; ENG→MOR 9.73/11.19/25.32/28.28; ENG→EGY 12.54/15.00/27.80/31.34; FRA→EGY 7.73/9.75/22.71/26.46; Average 15.81/17.79/34.93/37.41
Qwen3-8B: MOR→EGY 21.10/21.41/37.84/38.13; MOR→ENG 22.41/24.33/48.04/49.51; MOR→FRA 20.46/21.91/43.17/44.32; EGY→MOR 17.82/18.35/34.63/35.40; EGY→ENG 27.30/29.36/53.32/54.77; ENG→MOR 13.60/12.56/31.05/29.63; ENG→EGY 18.95/18.83/36.79/36.72; FRA→EGY 10.16/10.27/26.33/26.86; Average 18.98/19.63/38.90/39.42
Atlas-Chat-9B: MOR→EGY 19.44/20.52/36.32/37.29; MOR→ENG 31.29/32.75/54.19/55.61; MOR→FRA 28.14/28.82/49.35/49.85; EGY→MOR 18.86/20.36/36.78/37.91; EGY→ENG 30.98/32.83/55.04/56.07; ENG→MOR 17.29/18.91/35.83/37.42; ENG→EGY 16.95/19.69/34.35/37.25; FRA→EGY 14.53/15.59/32.76/33.92; Average 22.19/23.68/41.83/43.16
gemma-3-12b-it: MOR→EGY 19.67/24.48/36.28/41.24; MOR→ENG 23.68/30.29/47.75/54.47; MOR→FRA 15.28/29.99/26.64/51.36; EGY→MOR 10.73/17.30/24.98/36.28; EGY→ENG 27.38/32.71/53.46/57.42; ENG→MOR 2.51/14.60/6.56/34.00; ENG→EGY 6.34/24.27/11.17/41.69; FRA→EGY 1.45/12.47/3.70/31.30; Average 13.38/23.26/26.32/43.47
AceGPT-13B-chat: MOR→EGY 20.14/19.58/37.04/36.57; MOR→ENG 20.82/24.30/46.12/49.11; MOR→FRA 18.58/20.52/40.45/42.77; EGY→MOR 19.54/17.99/36.62/35.50; EGY→ENG 26.08/29.14/52.52/54.71; ENG→MOR 13.02/13.42/29.23/31.52; ENG→EGY 18.81/20.18/36.04/37.93; FRA→EGY 9.51/11.72/23.19/29.39; Average 18.31/19.61/37.65/39.69
jais-13b-chat: MOR→EGY 13.50/20.28/26.36/37.46; MOR→ENG 19.73/27.40/37.81/51.80; MOR→FRA 18.14/18.40/39.16/40.95; EGY→MOR 8.90/13.42/22.22/31.25; EGY→ENG 22.63/30.07/41.73/54.56; ENG→MOR 2.46/8.55/6.08/25.73; ENG→EGY 4.09/18.77/9.56/36.94; FRA→EGY 1.50/7.26/3.25/23.70; Average 11.37/18.02/23.27/37.80

In-House Dataset Performance (each direction lists sB0/sB4/cF0/cF4)

Less than 7B
Qwen3-1.7B: EGY→ENG 12.14/13.14/31.46/31.98; EGY→MSA 19.11/20.32/34.24/35.09; ENG→EGY 4.63/4.96/15.10/15.43; ENG→MOR 2.52/2.64/11.14/10.99; MOR→EGY 6.19/6.12/22.58/23.01; MOR→MSA 6.45/6.04/19.25/18.12; MSA→EGY 18.20/18.26/34.20/33.86; MSA→MOR 6.20/6.07/18.38/17.94; Average 9.43/9.69/23.29/23.30
ar-stablelm-2-chat: EGY→ENG 7.89/6.74/21.92/20.28; EGY→MSA 14.56/13.71/27.71/24.23; ENG→EGY 3.57/10.99/7.50/22.25; ENG→MOR 0.65/3.69/1.45/14.61; MOR→EGY 6.95/6.15/20.63/19.05; MOR→MSA 8.50/7.28/21.32/18.59; MSA→EGY 14.89/14.28/28.14/27.20; MSA→MOR 5.19/3.49/15.82/13.12; Average 7.77/8.29/18.06/19.92
Atlas-Chat-2B: EGY→ENG 13.52/16.21/32.04/34.25; EGY→MSA 15.51/18.19/30.39/32.34; ENG→EGY 3.67/5.15/15.29/17.15; ENG→MOR 6.81/8.40/21.76/23.79; MOR→EGY 14.96/15.85/34.83/35.82; MOR→MSA 12.12/12.76/28.38/28.48; MSA→EGY 7.05/11.91/20.72/26.75; MSA→MOR 8.86/10.11/24.35/25.20; Average 10.31/12.32/25.97/27.97
Llama-3.2-3B-Instruct: EGY→ENG 12.97/0.38/28.40/1.37
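The cF columns in these translation tables are chrF scores, a character n-gram F-score. A simplified single-sentence version can be written as follows; this is illustrative only, and the reported numbers come from the standard chrF implementation, which also handles corpus-level aggregation and other details.

```python
from collections import Counter


def chrf_sentence(hypothesis, reference, max_n=6, beta=2.0):
    """Simplified chrF: character n-gram F-beta score averaged over
    n = 1..max_n, with beta=2 weighting recall over precision.
    Spaces are removed before extracting n-grams. Not a drop-in
    replacement for the standard chrF implementation.
    """
    def ngrams(text, n):
        text = text.replace(" ", "")
        return Counter(text[i:i + n] for i in range(len(text) - n + 1))

    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        hyp, ref = ngrams(hypothesis, n), ngrams(reference, n)
        if not hyp or not ref:
            continue  # skip n-gram orders longer than either string
        overlap = sum((hyp & ref).values())  # clipped n-gram matches
        precisions.append(overlap / sum(hyp.values()))
        recalls.append(overlap / sum(ref.values()))

    if not precisions or not recalls:
        return 0.0
    p = sum(precisions) / len(precisions)
    r = sum(recalls) / len(recalls)
    if p + r == 0:
        return 0.0
    return 100 * (1 + beta**2) * p * r / (beta**2 * p + r)
```

Character-level matching is what makes chrF comparatively robust for morphologically rich, orthographically variable text such as Arabic dialects, where word-level metrics over-penalize small spelling differences.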