🀏 smolified-tiny-text-to-sql

Intelligence, Distilled.

This is a Domain-Specific Language Model (DSLM) generated by the Smolify Foundry.

It has been synthetically distilled from SOTA reasoning engines into a high-efficiency architecture, optimized for deployment on edge hardware (CPU/NPU) or in low-VRAM environments.

πŸ“¦ Asset Details

  • Origin: Smolify Foundry (Job ID: f7ade9aa)
  • Architecture: gemma-3-270m
  • Training Method: Proprietary Neural Distillation
  • Optimization: 4-bit Quantized / FP16 Mixed
  • Dataset: Link to Dataset
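
The precisions listed above translate directly into a weight-memory budget. A back-of-envelope sketch (weights only, excluding KV cache and activations; the 270M parameter count comes from the architecture above):

```python
# Approximate weight storage for a 270M-parameter model at the precisions
# listed in the asset details (4-bit quantized vs. FP16).
PARAMS = 270_000_000

def weight_memory_mb(params: int, bits_per_param: float) -> float:
    """Weight storage in megabytes (1 MB = 1e6 bytes), weights only."""
    return params * bits_per_param / 8 / 1e6

print(f"FP16:  {weight_memory_mb(PARAMS, 16):.0f} MB")  # ~540 MB
print(f"4-bit: {weight_memory_mb(PARAMS, 4):.0f} MB")   # ~135 MB
```

At roughly 135 MB in 4-bit form, the weights fit comfortably in the RAM of typical edge devices.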

πŸš€ Usage (Inference)

This model is compatible with standard inference backends such as vLLM and Hugging Face Transformers.
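
The example below encodes the table schema in the system message. A small helper for building that string programmatically (the `schema_prompt` name and signature are our own illustration, not part of the model's API):

```python
# Hypothetical helper: render a table schema into the one-line system-prompt
# style used in the inference example below.
def schema_prompt(table: str, columns: list[str]) -> str:
    """E.g. schema_prompt("orders", ["id", "amount"]) -> "Table 'orders' (id, amount)."."""
    return f"Table '{table}' ({', '.join(columns)})."

system_msg = schema_prompt("orders", ["id", "customer_name", "amount", "status", "date"])
print(system_msg)  # Table 'orders' (id, customer_name, amount, status, date).
```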

```python
# Example: Running your Sovereign Model
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_id = "smolify/smolified-tiny-text-to-sql"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system", "content": "Table 'orders' (id, customer_name, amount, status, date)."},
    {"role": "user", "content": "Find the total sum of all amounts for orders placed by Alice."},
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
# The Gemma chat template prepends <bos>; strip it here so the tokenizer
# does not add a second one during encoding.
text = text.removeprefix("<bos>")

_ = model.generate(
    **tokenizer(text, return_tensors="pt").to(model.device),
    max_new_tokens=1000,
    temperature=1.0, top_p=0.95, top_k=64,
    streamer=TextStreamer(tokenizer, skip_prompt=True),
)
```
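
The raw generation carries whatever formatting the model learned during distillation. A minimal post-processing sketch for pulling the SQL out of a reply (the assumption that answers may arrive in a fenced ```sql block is ours, not documented behavior):

```python
import re

# A literal triple backtick, built in pieces so it doesn't clash with the
# markdown fence around this snippet.
FENCE = "`" * 3

def extract_sql(generated: str) -> str:
    """Extract a SQL statement from a model reply.

    Handles both a fenced ``sql`` code block and a bare statement; the exact
    output format depends on the distillation data, so treat this as a sketch.
    """
    match = re.search(FENCE + r"sql\s*(.*?)\s*" + FENCE, generated, flags=re.DOTALL)
    if match:
        return match.group(1).strip()
    return generated.strip()

reply = FENCE + "sql\nSELECT SUM(amount) FROM orders WHERE customer_name = 'Alice';\n" + FENCE
print(extract_sql(reply))  # SELECT SUM(amount) FROM orders WHERE customer_name = 'Alice';
```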

βš–οΈ License & Ownership

These model weights are a sovereign asset owned by smolify. Generated via Smolify.ai.
