fine tuning instruct model

#13
by elenapop - opened

I'm fine-tuning LFM2.5-1.2B-Instruct with a strict output format requirement.
The model must output responses following a specific [[ ## field ## ]] structure with nested fields like [[ ## answer ## ]], [[ ## confidence ## ]], [[ ## references ## ]], etc.
After fine-tuning, the model completely ignores the format and outputs random JSON structures, hallucinated fields, and sometimes responds in English instead of French.

Dataset:
56,000 training examples
Format: prompt-completion pairs:
{
  "prompt": "SYSTEM + USER combined into one string, ending with <|im_start|>assistant\n",
  "completion": "ONLY the assistant response"
}
Training method: completion-only loss (TRL's completion_only_loss=True)
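To make the dataset shape concrete, here is a minimal sketch of one such row. The chat markers and field names are taken from the post; the helper function and example strings are mine, purely for illustration:

```python
# Hypothetical sketch of one prompt-completion row as described above.
# TRL's SFTTrainer treats a dataset with "prompt"/"completion" columns as
# prompt-completion pairs; with completion_only_loss=True the loss is
# computed only on the completion tokens.
def build_row(system: str, user: str, completion: str) -> dict:
    """Combine SYSTEM + USER into one prompt string ending with the
    assistant header; keep only the assistant response as the completion."""
    prompt = (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )
    return {"prompt": prompt, "completion": completion}

row = build_row(
    "Réponds en français au format [[ ## field ## ]].",
    "Quelle est la décision ?",
    "[[ ## answer ## ]]\nOui.\n\n[[ ## confidence ## ]]\n0.9",
)
assert row["prompt"].endswith("<|im_start|>assistant\n")
```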

Attempts made:
Original approach: Full system prompt (17,089 chars, ~4,405 tokens)
Result: Training very slow (150-170 sec/step)
Model output: Completely broken, random JSON, hallucinations

Condensed system prompt: Reduced to 1,762 chars (~440 tokens), a 90% reduction

Result: Training 2x faster (~70 sec/step)
Model output: Still broken, same issues

16k token filtering: Only trained on examples <16k tokens (21,409 examples, 38% of data)
Result: Training 4x faster overall (~38-45 sec/step)
Model output: Still broken, model doesn't follow format at all

Training config:
Base model: unsloth/LFM2.5-1.2B-Instruct
Max seq length: 32,768 (reduced to 16,384 for test)
LoRA: r=16, alpha=16
Batch size: 1, gradient accumulation: 16
Learning rate: 5e-5, 1 epoch
Completion-only loss: enabled
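For reference, a rough sketch of how this config might map onto TRL + PEFT. Argument names assume recent trl/peft releases (e.g. `max_length` rather than the older `max_seq_length`), so adjust to your installed versions:

```python
from peft import LoraConfig
from trl import SFTConfig

peft_config = LoraConfig(r=16, lora_alpha=16, task_type="CAUSAL_LM")

training_args = SFTConfig(
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    learning_rate=5e-5,
    num_train_epochs=1,
    max_length=16384,            # reduced from 32768 for the test run
    completion_only_loss=True,   # loss computed on completion tokens only
)
```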

Example broken outputs:
Outputs {"## today ##": "...", "## user_context ##": {}} instead of proper format
Hallucinates fields like "decision number", "date of decision"
Ignores required [[ ## field ## ]] markers entirely
Sometimes outputs in English instead of French

Question: Is LFM2.5-1.2B-Instruct known to struggle with complex structured outputs? And can you suggest a solution to this issue?

Liquid AI org

No, it should be easy to fine-tune. It looks like a training-related issue with your specific configuration, but it's hard to say more based on this alone.

I can recommend trying https://huggingface.co/LiquidAI/LFM2.5-1.2B-Base to double-check. If you still have broken outputs, it means there's an error with your setup.
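One way to make that double-check objective is to validate outputs programmatically. A small validator for the [[ ## field ## ]] structure might look like this (the regex and required-field list are assumptions based on the post, not an official spec):

```python
import re

# Field names taken from the post; adjust to the actual required set.
REQUIRED_FIELDS = ["answer", "confidence", "references"]

def follows_format(text: str, fields=REQUIRED_FIELDS) -> bool:
    """True if every required [[ ## field ## ]] marker appears, in order."""
    pos = 0
    for field in fields:
        match = re.search(rf"\[\[ ## {re.escape(field)} ## \]\]", text[pos:])
        if match is None:
            return False
        pos += match.end()
    return True

good = "[[ ## answer ## ]]\nOui.\n[[ ## confidence ## ]]\n0.9\n[[ ## references ## ]]\n-"
bad = '{"## today ##": "..."}'
```

Running this over a held-out sample after each test fine-tune (Base vs. Instruct) would quickly show whether the setup, rather than the base checkpoint, is at fault.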
