Phi-3 Mini SQL Generator - Merged Model

This is the merged (standalone) version of Shizu0n/phi3-mini-sql-generator.

The LoRA adapter weights have been merged directly into the base model, making this a standard AutoModelForCausalLM; no PEFT dependency is required for inference.

Why two versions?

Repo                                      Purpose
Shizu0n/phi3-mini-sql-generator           Original QLoRA adapter; documents the training pipeline
Shizu0n/phi3-mini-sql-generator-merged    Merged standalone model; used for deployment and inference

Evaluation - Base vs. Fine-tuned

Evaluated on 200 held-out examples from b-mc2/sql-create-context.

Model                            Exact Match
Phi-3-mini-4k-instruct (base)    2.0%
This model (fine-tuned)          73.5%
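Exact match means the generated query equals the reference query string for a held-out example. The card does not publish the evaluation script, so the normalization below (lowercasing, collapsing whitespace, dropping a trailing semicolon) is an assumption; a minimal sketch of such a scorer:

```python
def normalize_sql(sql: str) -> str:
    """Lowercase, drop a trailing semicolon, and collapse whitespace so
    trivial formatting differences do not count as mismatches (assumed rules)."""
    return " ".join(sql.strip().rstrip(";").split()).lower()

def exact_match_rate(predictions, references) -> float:
    """Fraction of predictions whose normalized text equals the reference."""
    hits = sum(
        normalize_sql(p) == normalize_sql(r)
        for p, r in zip(predictions, references)
    )
    return hits / len(references)

preds = ["SELECT AVG(salary) FROM employees GROUP BY department;",
         "SELECT name FROM employees"]
refs  = ["select avg(salary) from employees group by department",
         "SELECT salary FROM employees"]
print(exact_match_rate(preds, refs))  # 0.5
```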

Training Details

  • Dataset: b-mc2/sql-create-context (1,000 train / 200 validation examples)
  • Method: QLoRA (4-bit NF4, LoRA rank 16, alpha 32)
  • Hardware: NVIDIA T4 (Google Colab free tier)
  • Training time: ~21 min
  • Final train loss: 0.6526
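The quantization and adapter settings above correspond roughly to the following configuration objects. This is a sketch, not the actual training script: only 4-bit NF4, rank 16, and alpha 32 come from this card; the compute dtype and dropout are illustrative assumptions.

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# QLoRA: the frozen base model is loaded in 4-bit NF4.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,  # assumed compute dtype
)

# LoRA adapter: rank 16 and alpha 32 as stated above; dropout is assumed.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
```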

How to Use

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "Shizu0n/phi3-mini-sql-generator-merged"

# Load the merged model like any standard causal LM (no PEFT required).
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)
model.eval()

prompt = (
    "Given the following SQL table, write a SQL query.\n\n"
    "Table: employees (id, name, department, salary)\n\n"
    "Question: What is the average salary per department?\n\nSQL:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.inference_mode():
    # Greedy decoding keeps the SQL output deterministic.
    outputs = model.generate(**inputs, max_new_tokens=100, do_sample=False)
# Decode only the newly generated tokens, not the echoed prompt.
prompt_len = inputs["input_ids"].shape[-1]
print(tokenizer.decode(outputs[0][prompt_len:], skip_special_tokens=True))
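The raw completion may run past the query itself (for example, echoing another "Question:" block after the statement). A small post-processing helper, assuming the answer is the first statement up to a semicolon or blank line (`extract_sql` is a hypothetical helper, not part of the model's API):

```python
def extract_sql(completion: str) -> str:
    """Keep only the first SQL statement from a raw completion:
    cut at the first semicolon if present, else at the first blank line."""
    text = completion.strip()
    if ";" in text:
        return text[: text.index(";") + 1]
    return text.split("\n\n")[0].strip()

raw = "SELECT AVG(salary) FROM employees GROUP BY department;\n\nQuestion: ..."
print(extract_sql(raw))  # SELECT AVG(salary) FROM employees GROUP BY department;
```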

Limitations

  • Fine-tuned on 1,000 examples; best suited for simple to medium complexity SELECT queries
  • Not tested on dialect-specific SQL (PostgreSQL/MySQL-specific functions)
  • May struggle with multi-table JOINs and nested subqueries
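Given these limitations, it can be worth checking that a generated query at least parses against the target schema before running it on real data. A minimal sketch using Python's built-in sqlite3 (note that sqlite3 will not exercise PostgreSQL/MySQL-specific functions, matching the dialect caveat above):

```python
import sqlite3

def is_valid_sql(query: str, schema: str) -> bool:
    """Check that `query` parses and references known tables/columns by
    running EXPLAIN against an empty in-memory copy of the schema."""
    conn = sqlite3.connect(":memory:")
    try:
        conn.execute(schema)
        conn.execute("EXPLAIN " + query)
        return True
    except sqlite3.Error:
        return False
    finally:
        conn.close()

schema = "CREATE TABLE employees (id, name, department, salary)"
print(is_valid_sql("SELECT AVG(salary) FROM employees GROUP BY department", schema))  # True
print(is_valid_sql("SELECT AVG(salary FROM employees", schema))  # False
```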