v8_stage3_xml-merged

Model Description

This model is Stage 3 of the Sequential Format Learning pipeline (v8 strategy) for structured data output.

Training Strategy

Based on Person U's approach that achieved 0.84 on the leaderboard:

  • Train one format at a time
  • Merge LoRA to base model after each stage
  • Use merged model as the base for the next stage
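The merge step above can be sketched with PEFT's merge_and_unload; this is a minimal sketch under assumptions, not the actual training code, and the adapter path is an illustrative placeholder:

```python
# Sketch of the per-stage merge step, assuming a PEFT-trained LoRA adapter.
# The adapter path below is a placeholder, not a real repo.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("kmd2525/v8_stage2_yaml-merged")
model = PeftModel.from_pretrained(base, "path/to/stage3_xml_lora")

# Fold the LoRA deltas into the base weights so the result is a plain
# checkpoint that can serve as the base model for the next stage.
merged = model.merge_and_unload()
merged.save_pretrained("v8_stage3_xml-merged")
```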

Stage 3 Focus: XML

  • Format: XML (500 samples)
  • Goal: 95%+ XML parse success rate, with & characters properly escaped as &amp;
  • Base Model: kmd2525/v8_stage2_yaml-merged (Stage 2 merged model)
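The parse-success criterion can be checked with the standard library alone; this is a sketch of one possible check (the function name is illustrative, not from the evaluation code). An output counts as a success only if ElementTree can parse it, which fails on an unescaped `&`:

```python
# Minimal parse-success check: valid XML must escape '&' as '&amp;'.
import xml.etree.ElementTree as ET
from xml.sax.saxutils import escape

def xml_parse_ok(text: str) -> bool:
    """Return True if `text` is well-formed XML."""
    try:
        ET.fromstring(text)
        return True
    except ET.ParseError:
        return False

raw = "Tom & Jerry"
bad = f"<title>{raw}</title>"           # unescaped '&' -> parse error
good = f"<title>{escape(raw)}</title>"  # '&' becomes '&amp;'

print(xml_parse_ok(bad), xml_parse_ok(good))  # -> False True
```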

Previous Stages

  • Stage 1: JSON/CSV (800 samples) β†’ JSON 100%, CSV 100%
  • Stage 2: YAML (500 samples) β†’ YAML 100%

Training Parameters

  • MAX_SEQ_LEN: 1024
  • EPOCHS: 2
  • Learning Rate: 3e-05
  • LoRA R: 64, Alpha: 128
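The LoRA hyperparameters above map onto a PEFT configuration roughly as follows; this is a sketch, and target_modules is an assumption rather than a value taken from the training script:

```python
from peft import LoraConfig

# r and lora_alpha come from the parameter list above;
# target_modules is an assumption, not from the training script.
lora_config = LoraConfig(
    r=64,
    lora_alpha=128,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
# Learning rate (3e-05), epochs (2), and max sequence length (1024)
# would be passed to the SFT trainer separately.
```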

Sequential Format Learning Pipeline

Stage 1: JSON/CSV (800) βœ…
    ↓
Stage 2: YAML (500) βœ…
    ↓
Stage 3: XML (500) ← This model
    ↓
Stage 4: Mixed/TOML (1000)
    ↓
Final Model β†’ LB 0.8+

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("kmd2525/v8_stage3_xml-merged")
tokenizer = AutoTokenizer.from_pretrained("kmd2525/v8_stage3_xml-merged")

Next Stage

Use this model as the base for Stage 4 (Mixed/TOML final tuning):

import os

os.environ["SFT_BASE_MODEL"] = "kmd2525/v8_stage3_xml-merged"