Mitigating Catastrophic Forgetting in Target Language Adaptation of LLMs via Source-Shielded Updates
Paper: arXiv:2512.04844
This model is built on top of OLMo 2 1124 13B Instruct, adapted for Igbo using 200M target-language tokens sampled from MADLAD-400. Adaptation uses the SSU-Wanda approach, i.e., selecting the parameters to update column by column based on aggregated Wanda scores.
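For intuition, the snippet below sketches the column-selection step. It assumes the standard Wanda score |W_ij| * ||X_j||_2 computed from source-language calibration activations, sums the scores per column, and leaves the lowest-scoring columns (those least important to the source model) trainable while shielding the rest; the function name, update_ratio, and the selection direction are illustrative assumptions, not taken verbatim from the paper.

import torch

def select_update_columns(weight: torch.Tensor,
                          act_norms: torch.Tensor,
                          update_ratio: float = 0.1) -> torch.Tensor:
    """Pick the columns of a linear layer that are allowed to update.

    weight:    (out_features, in_features) weight matrix
    act_norms: (in_features,) L2 norms of the layer's inputs over a
               source-language calibration set
    """
    wanda = weight.abs() * act_norms.unsqueeze(0)  # per-weight Wanda score
    col_scores = wanda.sum(dim=0)                  # aggregate per column
    k = max(1, int(update_ratio * weight.shape[1]))
    # Unfreeze the k columns least important to the source model.
    idx = torch.topk(col_scores, k, largest=False).indices
    mask = torch.zeros(weight.shape[1], dtype=torch.bool)
    mask[idx] = True
    return mask

# During adaptation, shielded columns can be kept frozen by zeroing
# their gradients after each backward pass: weight.grad[:, ~mask] = 0.0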
Use the code below to get started with the model.
from transformers import AutoTokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "ssu-project/OLMo-2-1124-13B-Instruct-ig-ssu"
)
tokenizer = AutoTokenizer.from_pretrained(
    "ssu-project/OLMo-2-1124-13B-Instruct-ig-ssu"
)
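As a quick sanity check, here is a minimal generation example; the Igbo prompt and decoding settings are illustrative, not from the model card:

import torch

prompt = "Kedu ka ị mere?"  # example Igbo prompt ("How are you?")
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Since the base model is instruction-tuned, wrapping the prompt with tokenizer.apply_chat_template may give better results than the plain-text call above.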
@misc{yamaguchi2025mitigatingcatastrophicforgettingtarget,
      title={Mitigating Catastrophic Forgetting in Target Language Adaptation of LLMs via Source-Shielded Updates},
      author={Atsuki Yamaguchi and Terufumi Morishita and Aline Villavicencio and Nikolaos Aletras},
      year={2025},
      eprint={2512.04844},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2512.04844},
}
Base model: allenai/OLMo-2-1124-13B-Instruct