---
base_model: microsoft/Phi-3-mini-4k-instruct
library_name: peft
license: apache-2.0
language:
  - en
tags:
  - sql
  - code-generation
  - text-to-sql
  - phi-3
  - lora
  - qlora
  - fine-tuned
  - peft
pipeline_tag: text-generation
---

# Phi-3 Mini SQL Generator (QLoRA Fine-tuned)

A fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) for natural language → SQL generation, trained with QLoRA on a single NVIDIA T4 GPU (Google Colab, ~20 minutes).

## Evaluation: Base vs. Fine-tuned

Both models were evaluated on 200 held-out examples from [b-mc2/sql-create-context](https://huggingface.co/datasets/b-mc2/sql-create-context), scored by exact match against the reference query.

| Model | Exact Match |
|---|---|
| Phi-3-mini-4k-instruct (base) | 2.0% |
| This adapter (fine-tuned) | **73.5%** |
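"Exact match" here means the generated query string equals the reference after light normalization. The exact normalization used for the numbers above is not documented, so the helper below is an illustrative sketch under assumed rules (whitespace collapse, case-folding, trailing-semicolon strip):

```python
# Illustrative exact-match scorer; the normalization rules are assumptions,
# not the card's documented procedure.
def normalize_sql(sql: str) -> str:
    # Collapse whitespace, lowercase, and drop a trailing semicolon.
    return " ".join(sql.strip().rstrip(";").split()).lower()

def exact_match(predictions: list[str], references: list[str]) -> float:
    hits = sum(normalize_sql(p) == normalize_sql(r)
               for p, r in zip(predictions, references))
    return hits / len(references)
```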

## Training Details

- Dataset: `b-mc2/sql-create-context` (1,000 train / 200 validation examples)
- Epochs: 3
- Effective batch size: 8
- Learning rate: 2e-4
- Hardware: NVIDIA T4 (Google Colab free tier)
- Training time: 21.2 min
- Final train loss: 0.6526
- Best checkpoint: step 250 (by eval loss; see the sketch below)
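The exact training script is not published. As a rough sketch, these hyperparameters map onto a Hugging Face `TrainingArguments` object as follows; only the epoch count, effective batch size, and learning rate come from the list above, and everything marked "assumption" is a guess:

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the run described above.
training_args = TrainingArguments(
    output_dir="phi3-mini-sql-generator",
    num_train_epochs=3,
    per_device_train_batch_size=2,     # assumption: 2 x 4 accumulation = effective 8
    gradient_accumulation_steps=4,
    learning_rate=2e-4,
    fp16=True,                         # the T4 does not support bf16
    eval_strategy="steps",             # assumption: periodic eval to pick a checkpoint
    eval_steps=50,                     # assumption
    save_strategy="steps",
    save_steps=50,
    load_best_model_at_end=True,       # best checkpoint selected by eval loss
    metric_for_best_model="eval_loss",
    logging_steps=10,
)
```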

## LoRA Configuration

| Parameter | Value |
|---|---|
| Rank (r) | 16 |
| Alpha | 32 |
| Dropout | 0.05 |
| Target modules | `qkv_proj`, `o_proj`, `gate_up_proj`, `down_proj` |
| Quantization | 4-bit NF4 (QLoRA) |
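For reference, the table above corresponds roughly to the following `peft` and `bitsandbytes` configuration. The compute dtype and double-quantization flag are assumptions (the card does not state them); the rest mirrors the table:

```python
import torch
from peft import LoraConfig
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization of the frozen base model (QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,  # assumption: fp16 compute on a T4
    bnb_4bit_use_double_quant=True,        # assumption: not stated in this card
)

# LoRA adapter applied to the attention and MLP projections listed above.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["qkv_proj", "o_proj", "gate_up_proj", "down_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)
```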

## How to Use

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer from the adapter repo and the base model in fp16.
tokenizer = AutoTokenizer.from_pretrained(
    "Shizu0n/phi3-mini-sql-generator", trust_remote_code=True
)
base_model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-4k-instruct",
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)

# Attach the LoRA adapter on top of the base weights.
model = PeftModel.from_pretrained(base_model, "Shizu0n/phi3-mini-sql-generator")
model.eval()

prompt = (
    "Given the following SQL table, write a SQL query.\n\n"
    "Table: employees (id, name, department, salary)\n\n"
    "Question: What is the average salary per department?\n\nSQL:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.inference_mode():
    outputs = model.generate(**inputs, max_new_tokens=100, do_sample=False)

# Decode only the newly generated tokens, not the echoed prompt.
prompt_len = inputs["input_ids"].shape[-1]
print(tokenizer.decode(outputs[0][prompt_len:], skip_special_tokens=True))
```
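If you prefer adapter-free inference, you can optionally fold the LoRA weights into the base model with PEFT's `merge_and_unload()`. This works in the setup above because the base was loaded in fp16 rather than 4-bit; the output path is just an example:

```python
# Optional: merge the adapter into the base weights and save a standalone model.
merged = model.merge_and_unload()
merged.save_pretrained("phi3-mini-sql-merged")    # hypothetical output path
tokenizer.save_pretrained("phi3-mini-sql-merged")
```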

## Limitations

- Fine-tuned on only 1,000 examples, so it is best suited to simple and medium-complexity SELECT queries
- Not evaluated on dialect-specific SQL (e.g. PostgreSQL- or MySQL-specific functions)
- May struggle with multi-table JOINs and deeply nested subqueries