Mitigating Catastrophic Forgetting in Target Language Adaptation of LLMs via Source-Shielded Updates
Paper: https://arxiv.org/abs/2512.04844
This model is built on top of OLMo 2 1124 7B Instruct, adapted for Nepali on 200M target-language tokens sampled from MADLAD-400. Adaptation uses HFT (Half Fine-Tuning), a state-of-the-art static selective parameter update method that updates exactly 50% of parameters under a fine-grained, per-layer freezing strategy: (1) in each self-attention block, two of the four matrices ($W_Q, W_K, W_V, W_O$) are frozen at random; (2) in the feed-forward blocks, two of the three matrices ($W_{up}, W_{down}, W_{gate}$) are frozen in a random half of the layers and one matrix in the remaining half. HFT is described in https://aclanthology.org/2025.acl-long.626/.
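The freezing scheme above can be sketched in a few lines of PyTorch. This is a minimal illustration rather than the authors' training code: the module names (`self_attn.q_proj`, `mlp.up_proj`, etc.) assume OLMo 2's Hugging Face implementation, and the random draws here are one assumed reading of "random" in the description.

```python
import random
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-1124-7B-Instruct")

ATTN_MATS = ["q_proj", "k_proj", "v_proj", "o_proj"]  # W_Q, W_K, W_V, W_O
MLP_MATS = ["up_proj", "down_proj", "gate_proj"]      # W_up, W_down, W_gate

layers = model.model.layers
# A random half of the layers freeze two MLP matrices; the rest freeze one.
two_mlp_frozen = set(random.sample(range(len(layers)), len(layers) // 2))

for i, layer in enumerate(layers):
    # Self-attention: freeze a random two of the four projection matrices.
    for name in random.sample(ATTN_MATS, 2):
        for p in getattr(layer.self_attn, name).parameters():
            p.requires_grad = False
    # Feed-forward: freeze two matrices in half the layers, one in the rest.
    for name in random.sample(MLP_MATS, 2 if i in two_mlp_frozen else 1):
        for p in getattr(layer.mlp, name).parameters():
            p.requires_grad = False
```

With 2 of 4 attention matrices and, on average, 1.5 of 3 feed-forward matrices frozen per layer, roughly half of the transformer-block weights remain trainable, matching the 50% update budget.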
Use the code below to get started with the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ssu-project/OLMo-2-1124-7B-Instruct-ne-hft"
# Load the adapted model and its tokenizer from the Hugging Face Hub.
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
```
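A minimal generation example, assuming the model inherits the chat template of its instruct base; the Nepali prompt ("What is the capital of Nepal?") and decoding settings are illustrative only:

```python
import torch

messages = [{"role": "user", "content": "नेपालको राजधानी कुन हो?"}]
# Build the prompt with the chat template and generate a short reply.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
with torch.no_grad():
    outputs = model.generate(inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```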
If you use this model, please cite:

```bibtex
@misc{yamaguchi2025mitigatingcatastrophicforgettingtarget,
  title={Mitigating Catastrophic Forgetting in Target Language Adaptation of LLMs via Source-Shielded Updates},
  author={Atsuki Yamaguchi and Terufumi Morishita and Aline Villavicencio and Nikolaos Aletras},
  year={2025},
  eprint={2512.04844},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2512.04844},
}
```
Base model: allenai/OLMo-2-1124-7B