---
library_name: transformers
license: apache-2.0
datasets:
  - kurakurai/luth-sft
language:
  - fr
  - en
base_model:
  - Qwen/Qwen3-0.6B
pipeline_tag: text-generation
---


Luth-0.6B-Instruct

Luth-0.6B-Instruct is a French-specialized fine-tune of Qwen3-0.6B, trained on the Luth-SFT dataset. Fine-tuning substantially improves the model's French capabilities in instruction following, math, and general knowledge, while its English capabilities remain stable and even improve in some areas.

Our evaluation, training, and data scripts are available on GitHub, along with the blog post we wrote.

(Figure: Luth benchmark comparison graph)

Model Details

Luth was trained using full fine-tuning on the Luth-SFT dataset with Axolotl. The resulting model was then merged with the base Qwen3-0.6B model. This process successfully retained the model's English capabilities while improving its performance on nearly all selected benchmarks in both French and English.
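
The card does not document the exact merging procedure. Purely as an illustration, the sketch below shows one common approach, a linear interpolation of parameters between the fine-tuned checkpoint and the base model; the checkpoint path and mixing ratio are placeholder assumptions, not the values actually used.

```python
# Illustrative only: the actual merge method and ratio used for Luth are not
# documented here. This shows a plain linear interpolation of weights.
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-0.6B")
tuned = AutoModelForCausalLM.from_pretrained("path/to/luth-sft-checkpoint")  # hypothetical path

alpha = 0.5  # placeholder mixing ratio
base_sd = base.state_dict()
tuned_sd = tuned.state_dict()

merged_sd = {
    # Keep part of the base weights to help preserve English performance.
    name: (1.0 - alpha) * base_sd[name] + alpha * tuned_sd[name]
    for name in tuned_sd
}

tuned.load_state_dict(merged_sd)
tuned.save_pretrained("luth-0.6b-merged")  # hypothetical output directory
```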

Benchmark Results

We used LightEval for evaluation, with custom tasks for the French benchmarks. All models were evaluated with a temperature of 0.
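
The benchmark runs themselves go through LightEval. Purely as a reference point, a temperature of 0 corresponds to greedy decoding, which in plain transformers is requested with `do_sample=False`; the minimal sketch below uses a placeholder prompt and is not the evaluation setup itself.

```python
# Not the LightEval setup itself; just the equivalent greedy (temperature = 0)
# decoding mode when generating directly with transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("kurakurai/Luth-0.6B-Instruct")
model = AutoModelForCausalLM.from_pretrained("kurakurai/Luth-0.6B-Instruct")

inputs = tokenizer("Combien font 12 x 8 ?", return_tensors="pt")  # placeholder prompt
outputs = model.generate(**inputs, do_sample=False, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```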

French Benchmark Scores

| Model | IFEval French | GPQA-Diamond French | MMLU French | Math500 French | Arc-Challenge French | Hellaswag French |
|---|---|---|---|---|---|---|
| Luth-0.6B-Instruct | 48.24 | 34.52 | 40.12 | 44.00 | 33.88 | 45.58 |
| Llama-3.2-1B | 27.79 | 25.38 | 25.49 | 15.80 | 29.34 | 25.09 |
| Qwen3-0.6B | 44.86 | 26.90 | 27.13 | 29.20 | 31.57 | 25.10 |
| Qwen2.5-0.5B-Instruct | 22.00 | 25.89 | 35.04 | 12.00 | 28.23 | 51.45 |

English Benchmark Scores

| Model | IFEval English | GPQA-Diamond English | MMLU English | Math500 English | Arc-Challenge English | Hellaswag English |
|---|---|---|---|---|---|---|
| Luth-0.6B-Instruct | 53.73 | 25.76 | 48.12 | 48.80 | 36.09 | 47.03 |
| Llama-3.2-1B | 44.05 | 25.25 | 31.02 | 26.40 | 34.30 | 55.84 |
| Qwen3-0.6B | 57.18 | 29.29 | 36.79 | 43.40 | 33.70 | 42.92 |
| Qwen2.5-0.5B-Instruct | 29.70 | 29.29 | 43.80 | 32.00 | 32.17 | 49.56 |

Code Example

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("kurakurai/Luth-0.6B-Instruct")
model = AutoModelForCausalLM.from_pretrained("kurakurai/Luth-0.6B-Instruct")

# Build the chat prompt with the model's chat template
messages = [
    {"role": "user", "content": "Quelle est la capitale de la France?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

# Generate a reply and decode only the newly generated tokens
outputs = model.generate(**inputs, max_new_tokens=100)
print(
    tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
    )
)
```
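
If the tokenizer inherits the Qwen3 chat template (an assumption worth verifying in the tokenizer config), `apply_chat_template` should also accept the `enable_thinking` flag to toggle the reasoning block, for example:

```python
# Assumes the Qwen3 chat template is inherited and exposes `enable_thinking`;
# verify against the tokenizer's chat template before relying on this.
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
    enable_thinking=False,  # disable the <think>...</think> reasoning block
).to(model.device)
```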

Citation

```bibtex
@misc{luth2025kurakurai,
  title        = {Luth: Efficient French Specialization for Small Language Models and Cross-Lingual Transfer},
  author       = {Lasbordes, Maxence and Gad, Sinoué},
  year         = {2025},
  howpublished = {\url{https://arxiv.org/abs/2510.05846}},
  note         = {arXiv:2510.05846}
}
```