Model Card for Qwen2.5-0.5B Text-to-SQL

Model Summary

This model converts natural language questions into SQL queries.
It is a fine-tuned version of Qwen2.5-0.5B, adapted to the Text-to-SQL task with LoRA (Low-Rank Adaptation).

The model is designed to be lightweight, efficient, and suitable for local experimentation and educational purposes.


Model Details

Model Description

  • Developed by: Melih Emin
  • Model type: Causal Language Model (Text-to-SQL)
  • Language(s): English
  • License: Apache 2.0
  • Finetuned from model: Qwen/Qwen2.5-0.5B
  • Fine-tuning method: LoRA (Low-Rank Adaptation)

This model was fine-tuned as part of a Generative Artificial Intelligence course assignment.
The primary goal was to explore parameter-efficient fine-tuning techniques on limited local hardware.
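
As a rough illustration of why LoRA suits limited hardware, the sketch below shows the low-rank update it trains next to a frozen weight matrix. The rank r = 8, alpha = 16, and the 896-dimensional hidden size are assumptions for illustration, not the actual training configuration:

```python
import numpy as np

# Illustrative only: the parameter saving behind LoRA, not the training code.
d, k, r = 896, 896, 8            # d, k: weight shape; r: LoRA rank (assumed)
alpha = 16                       # scaling hyperparameter (assumed)

W = np.zeros((d, k))             # stands in for a frozen pretrained weight
A = np.random.randn(r, k) * 0.01 # trainable low-rank factor
B = np.zeros((d, r))             # B starts at zero, so the adapter is a no-op initially

# Effective weight after adaptation: W + (alpha / r) * B A
W_adapted = W + (alpha / r) * (B @ A)

full_params = d * k              # parameters a full fine-tune would touch
lora_params = r * (d + k)        # parameters LoRA actually trains
print(lora_params / full_params) # a small fraction of the full matrix
```

Only A and B are updated during fine-tuning, which is what makes the method feasible on a single consumer GPU or CPU.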


Uses

Direct Use

  • Converting English questions into SQL queries
  • Educational demonstrations of Text-to-SQL systems
  • Local experimentation with small language models

Downstream Use

  • Can be integrated into database query assistants
  • Can serve as a baseline for more advanced Text-to-SQL systems
  • Can be further fine-tuned on schema-specific datasets
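
Both schema-specific fine-tuning and query assistants start from getting the schema into the model's input. A minimal sketch of such a prompt builder follows; the `### Schema:` section and the helper name are assumptions for illustration, not the format the model was trained on:

```python
def build_prompt(question: str, schema: dict[str, list[str]]) -> str:
    """Render table(column, ...) lines ahead of the question so the model sees the schema."""
    tables = "\n".join(f"{t}({', '.join(cols)})" for t, cols in schema.items())
    return f"### Schema:\n{tables}\n\n### Question:\n{question}\n\n### SQL:\n"

prompt = build_prompt(
    "How many heads of the departments are older than 56?",
    {"head": ["head_id", "age"]},
)
```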

Out-of-Scope Use

  • Production-grade database querying without validation
  • Complex multi-database or highly nested SQL queries
  • Security-critical or sensitive data environments

Bias, Risks, and Limitations

  • The model may generate syntactically valid but semantically incorrect SQL
  • It does not perform schema validation
  • Performance depends heavily on prompt structure
  • Trained on a limited dataset and may not generalize to unseen schemas

Recommendations

  • Always validate generated SQL before execution
  • Use schema-aware prompting for better results
  • Do not use directly in production without safeguards
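
The first recommendation can be automated cheaply: compile the generated query against an in-memory copy of the schema before it ever touches real data. The helper below is a sketch using Python's built-in sqlite3 module; the function name and example schema are illustrative:

```python
import sqlite3

def validate_sql(sql: str, schema_ddl: str) -> bool:
    """Check that a generated query compiles against the schema, without executing it."""
    conn = sqlite3.connect(":memory:")
    try:
        conn.executescript(schema_ddl)       # build the schema in memory
        conn.execute(f"EXPLAIN {sql}")       # compiles the statement; no rows are read
        return True
    except sqlite3.Error:
        return False
    finally:
        conn.close()
```

`EXPLAIN` asks SQLite to compile the statement into opcodes without running it, so syntax errors and references to missing tables or columns are caught safely.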

How to Get Started with the Model

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "melihemin/qwen2.5-0.5b-text2sql-full"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = """### Question:
How many heads of the departments are older than 56?

### SQL:
"""

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
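
Note that the decoded output includes the prompt followed by the completion. A small helper (illustrative, mirroring the `### SQL:` marker used in the prompt above) can strip the prompt off:

```python
def extract_sql(decoded: str, marker: str = "### SQL:") -> str:
    """Keep only the text the model generated after the SQL marker."""
    return decoded.split(marker, 1)[-1].strip()

sql = extract_sql(
    "### Question:\nHow many heads of the departments are older than 56?\n\n"
    "### SQL:\nSELECT COUNT(*) FROM head WHERE age > 56;"
)
```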