Qwen3-0.6B-Math-Code-Fr

This model was obtained by merging SamsungSAILMontreal/Qwen3-0.6B-Math, SamsungSAILMontreal/Qwen3-0.6B-Code and SamsungSAILMontreal/Qwen3-0.6B-Fr. It is used in the experiments described in https://bknyaz.github.io/blog/2026/meta-merge/. A single A100 GPU was used for both merging and evaluation.
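
The merged model can be loaded like any other Qwen3 checkpoint. The snippet below is a minimal usage sketch with transformers, not part of the original experiments; the prompt and generation settings are arbitrary examples.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SamsungSAILMontreal/Qwen3-0.6B-Math-Code-Fr"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

# Build a chat-formatted prompt and generate an answer.
messages = [{"role": "user", "content": "Combien font 17 * 24 ?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))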

The following versions were used for merge/eval:

  • python >= 3.10
  • torch : 2.9.0+cu128
  • lm_eval : 0.4.9.1
  • vllm : 0.11.1
  • transformers : 4.57.6
  • datasets : 3.2.0
  • numpy : 2.2.6

Merging

Merging was done using parameter averaging implemented in merge_qwen.py.
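
merge_qwen.py is not reproduced here; the following is a minimal sketch of uniform parameter averaging over the three fine-tuned checkpoints. The output path is an assumption, and the dtype handling may differ from the actual script.

import torch
from transformers import AutoModelForCausalLM

model_ids = [
    "SamsungSAILMontreal/Qwen3-0.6B-Math",
    "SamsungSAILMontreal/Qwen3-0.6B-Code",
    "SamsungSAILMontreal/Qwen3-0.6B-Fr",
]

# Load all state dicts in float32 and average each parameter element-wise.
state_dicts = [AutoModelForCausalLM.from_pretrained(m, torch_dtype=torch.float32).state_dict() for m in model_ids]
merged = {k: torch.mean(torch.stack([sd[k] for sd in state_dicts]), dim=0) for k in state_dicts[0]}

# Write the averaged weights into one of the checkpoints and save in bfloat16.
out = AutoModelForCausalLM.from_pretrained(model_ids[0], torch_dtype=torch.float32)
out.load_state_dict(merged)
out.to(torch.bfloat16).save_pretrained("Qwen3-0.6B-Math-Code-Fr")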

Evaluation

Evaluation was done with lm_eval on the test splits of gsm8k, french_bench (average score), gsm8k-fr and humaneval (instruct):

python -m lm_eval --model vllm --model_args pretrained=${model},tensor_parallel_size=1,dtype=auto,gpu_memory_utilization=0.9,data_parallel_size=1 \
 --tasks gsm8k,french_bench,gsm8k-fr,humaneval_instruct --batch_size 1 --apply_chat_template=True --confirm_run_unsafe_code --trust_remote_code

To evaluate on gsm8k-fr, you can use our fork: https://github.com/bknyaz/lm-evaluation-harness/tree/main/lm_eval/tasks/gsm8k.
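
To reproduce the table below for all checkpoints, the same command can be run in a loop, for example as in the sketch here; the list of model ids, in particular the base Qwen/Qwen3-0.6B, is an assumption.

import subprocess

models = [
    "Qwen/Qwen3-0.6B",  # base model (assumed repo id)
    "SamsungSAILMontreal/Qwen3-0.6B-Math",
    "SamsungSAILMontreal/Qwen3-0.6B-Fr",
    "SamsungSAILMontreal/Qwen3-0.6B-Code",
    "SamsungSAILMontreal/Qwen3-0.6B-Math-Code-Fr",
]
for m in models:
    # Same lm_eval invocation as above, with the checkpoint substituted into model_args.
    subprocess.run([
        "python", "-m", "lm_eval", "--model", "vllm",
        "--model_args", f"pretrained={m},tensor_parallel_size=1,dtype=auto,gpu_memory_utilization=0.9,data_parallel_size=1",
        "--tasks", "gsm8k,french_bench,gsm8k-fr,humaneval_instruct",
        "--batch_size", "1", "--apply_chat_template=True",
        "--confirm_run_unsafe_code", "--trust_remote_code",
    ], check=True)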

Results

Model                     gsm8k   french_bench   gsm8k-fr   humaneval_instruct   avg
Qwen3-0.6B                21.0    24.4           19.6       38.4                 25.9
Qwen3-0.6B-Math           46.3    25.4           29.2       25.6                 31.6
Qwen3-0.6B-Fr             36.1    26.5           26.5       38.4                 31.9
Qwen3-0.6B-Code           36.8    23.8           24.9       46.3                 33.0
Qwen3-0.6B-Math-Code-Fr   48.7    26.5           32.9       42.7                 37.7

License

Please refer to the license of the base models SamsungSAILMontreal/Qwen3-0.6B-Math, SamsungSAILMontreal/Qwen3-0.6B-Code and SamsungSAILMontreal/Qwen3-0.6B-Fr.
