## Use from the Transformers library
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="adamabuhamdan/tinyllama-sql-lora")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```
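For the text-to-SQL use case, the `messages` passed to the pipeline should carry the table schema alongside the question. A minimal sketch of a helper that builds such a message list (the function name and prompt wording here are illustrative, not part of the released model):

```python
def build_sql_messages(schema: str, question: str) -> list[dict]:
    """Build a chat-style message list pairing a table schema with a question."""
    return [
        {
            # System turn: instruct the model to answer with SQL only.
            "role": "system",
            "content": (
                "You are a SQL assistant. Given a table schema and a "
                "question, reply with ONLY the SQL query, nothing else."
            ),
        },
        {
            # User turn: the schema followed by the natural-language question.
            "role": "user",
            "content": f"Schema:\n{schema}\n\nQuestion: {question}",
        },
    ]

messages = build_sql_messages(
    "CREATE TABLE employees (id INT, name TEXT, salary INT);",
    "Which employees earn more than 100000?",
)
```

The resulting `messages` can then be passed to `pipe(messages)` in place of the generic "Who are you?" example above.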
```python
# Load the model directly (uses the causal-LM class, since this is a
# text-generation model; requires `peft` so the adapter can be resolved)
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "adamabuhamdan/tinyllama-sql-lora", dtype="auto"
)
```
# 🤖 TinyLlama Text-to-SQL (LoRA Adapter)

This model is a LoRA adapter trained to adjust the behavior of TinyLlama-1.1B so that it specializes in converting natural-language questions into accurate SQL, based on the table schema it is given, with no extra chatter.

- Developed by: Adam Abu Hamdan
- Model type: PEFT (LoRA)
- Language: English (Text & SQL)
- Finetuned from model: TinyLlama/TinyLlama-1.1B-Chat-v1.0

## 💡 How to Get Started

You can run this model, combining the small (~20 MB) adapter with the base model, using the following Python code:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_MODEL = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
ADAPTER_MODEL = "adamabuhamdan/tinyllama-sql-lora"

# 1. Load the tokenizer and the base model
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base_model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL,
    torch_dtype=torch.float16,
    device_map="auto",
)

# 2. Attach the LoRA weights we trained
model = PeftModel.from_pretrained(base_model, ADAPTER_MODEL)
model.eval()

# 3. Try generating a SQL query
schema = "CREATE TABLE employees (id INT, name TEXT, department TEXT, salary INT);"
question = "List the names of employees in Engineering earning more than 100000."

prompt = f"<|system|>\nYou are a SQL assistant. Given a table schema and a question, reply with ONLY the SQL query, nothing else.</s>\n<|user|>\nSchema:\n{schema}\n\nQuestion: {question}</s>\n<|assistant|>\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=100, do_sample=False)

# Decode only the newly generated tokens (everything after the prompt)
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True).strip())
```
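Even with `do_sample=False`, small models occasionally append stray text after the query, so it can help to post-process the decoded output before using it. A small sketch (the `extract_sql` helper is illustrative, not part of the released code):

```python
def extract_sql(generated: str) -> str:
    """Keep only the first SQL statement from the model's decoded output.

    Truncates at the first semicolon (re-appending it) and strips surrounding
    whitespace; anything the model emits after the query is discarded.
    """
    text = generated.strip()
    if ";" in text:
        text = text.split(";", 1)[0] + ";"
    return text

# Example: trailing chatter after the statement is dropped
raw = "SELECT name FROM employees WHERE salary > 100000; Hope that helps!"
clean = extract_sql(raw)
print(clean)  # SELECT name FROM employees WHERE salary > 100000;
```

If the model returns multiple statements, only the first is kept; adjust the splitting logic if your schema requires multi-statement answers.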