---
base_model:
- allenai/Olmo-3-1025-7B
- allenai/Olmo-3-7B-RL-Zero-Math
- allenai/Olmo-3-7B-RL-Zero-Code
- allenai/Olmo-3-7B-Think-SFT
library_name: transformers
tags:
- mergekit
- merge
---

# merged-model

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [Task Arithmetic](https://arxiv.org/abs/2212.04089) merge method, with [allenai/Olmo-3-1025-7B](https://huggingface.co/allenai/Olmo-3-1025-7B) as the base.

### Models Merged

The following models were included in the merge:

* [allenai/Olmo-3-7B-RL-Zero-Math](https://huggingface.co/allenai/Olmo-3-7B-RL-Zero-Math)
* [allenai/Olmo-3-7B-RL-Zero-Code](https://huggingface.co/allenai/Olmo-3-7B-RL-Zero-Code)
* [allenai/Olmo-3-7B-Think-SFT](https://huggingface.co/allenai/Olmo-3-7B-Think-SFT)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
# Task arithmetic merge: apply Math and Code task vectors to Think-SFT.
#
# Mathematical formulation:
#   output = Think-SFT + 0.5*(RL-Zero-Math - base) + 0.5*(RL-Zero-Code - base)
#
# This is achieved by treating Think-SFT as a model with weight=1.0:
#   output = base + 1.0*(Think-SFT - base) + 0.5*(Math - base) + 0.5*(Code - base)
#
# Usage:
#   modal run modal_merge.py --config examples/olmo-think-math-code.yaml --hf-repo pmahdavi/Olmo-3-7B-Think-Math-Code

merge_method: task_arithmetic
base_model: allenai/Olmo-3-1025-7B
models:
  - model: allenai/Olmo-3-7B-Think-SFT
    parameters:
      weight: 1.0
  - model: allenai/Olmo-3-7B-RL-Zero-Math
    parameters:
      weight: 0.5
  - model: allenai/Olmo-3-7B-RL-Zero-Code
    parameters:
      weight: 0.5
dtype: bfloat16
```
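To make the arithmetic concrete, here is a minimal sketch of what a task-arithmetic merge computes, using toy tensors rather than real checkpoints (mergekit's actual implementation handles sharded weights, dtype casting, and tokenizer alignment; the function and variable names below are illustrative, not mergekit's API):

```python
import torch

def task_arithmetic(base, tuned_models, weights):
    """Merge state dicts by adding weighted task vectors (tuned - base) to the base.

    base:         dict of parameter name -> tensor for the base model
    tuned_models: list of state dicts fine-tuned from that base
    weights:      per-model scaling factors for each task vector
    """
    merged = {}
    for name, base_w in base.items():
        # Sum of weighted deltas, each delta being (fine-tuned - base)
        delta = sum(w * (m[name] - base_w) for m, w in zip(tuned_models, weights))
        merged[name] = base_w + delta
    return merged

# Toy two-parameter "checkpoints" standing in for the real models
base      = {"w": torch.tensor([1.0, 1.0])}  # Olmo-3-1025-7B
think_sft = {"w": torch.tensor([2.0, 1.0])}  # weight 1.0
rl_math   = {"w": torch.tensor([1.0, 3.0])}  # weight 0.5
rl_code   = {"w": torch.tensor([1.0, 1.0])}  # weight 0.5

merged = task_arithmetic(base, [think_sft, rl_math, rl_code], [1.0, 0.5, 0.5])
# delta = 1.0*[1,0] + 0.5*[0,2] + 0.5*[0,0] = [1,1], so merged["w"] = [2.0, 2.0]
```

Because Think-SFT carries weight 1.0, its full task vector is kept, which is equivalent to the `output = Think-SFT + 0.5*(Math - base) + 0.5*(Code - base)` formulation in the config comments.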