# Gemma-3 Instruct Small (LoRA Merged)

## Model Summary
Gemma-3 Instruct Small is a lightweight instruction-following language model fine-tuned from Google’s Gemma-3-270M-IT using LoRA and later merged into the base model for efficient inference.
The model is optimized for:
- Instruction following
- Basic mathematical reasoning
- Short-form question answering
- Educational and experimental use
## Model Details

### Model Description
- Developed by: Boopathiraj
- Organization: Self (Independent)
- Model type: Causal Language Model (Instruction-tuned)
- Language(s): English
- License: Apache 2.0
- Finetuned from: `google/gemma-3-270m-it`
This model was trained using parameter-efficient fine-tuning (LoRA) and later merged into the base weights for standalone inference without PEFT dependencies.
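Merging a LoRA adapter is a plain weight update: the trained low-rank matrices are folded into the frozen base weights, after which inference needs no adapter code. A minimal numerical sketch of that arithmetic (toy shapes and NumPy stand in for the actual Gemma-3 layers):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions; the real model uses the Gemma-3 layer shapes.
d_out, d_in, r, alpha = 8, 8, 2, 16

W = rng.standard_normal((d_out, d_in))       # frozen base weight
A = rng.standard_normal((r, d_in)) * 0.01    # LoRA down-projection
B = rng.standard_normal((d_out, r)) * 0.01   # LoRA up-projection

# During LoRA training the effective weight is W + (alpha / r) * B @ A.
# "Merging" bakes that update into W so the adapter can be discarded.
W_merged = W + (alpha / r) * (B @ A)

# The merged weight reproduces the adapter path exactly.
x = rng.standard_normal(d_in)
assert np.allclose(W_merged @ x, W @ x + (alpha / r) * (B @ (A @ x)))
```

In practice this merge is what PEFT performs when an adapter is folded into the base checkpoint, which is why the published weights load as an ordinary `AutoModelForCausalLM`.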
### Model Sources
- Base Model: https://huggingface.co/google/gemma-3-270m-it
- Repository: https://huggingface.co/boopathiraj/gemma-3-instruct-small
## Uses

### Direct Use
This model can be used directly for:
- Instruction-based text generation
- Simple math word problems
- Educational demos
- Lightweight inference on limited hardware
Example use cases:
- Chatbots
- Teaching assistants
- Rapid prototyping
### Downstream Use
The model may be further fine-tuned for:
- Domain-specific Q&A
- Educational datasets
- Small-scale reasoning benchmarks
### Out-of-Scope Use
This model is not intended for:
- Medical, legal, or financial advice
- High-stakes decision making
- Safety-critical applications
- Long-context reasoning
## Bias, Risks, and Limitations
- Inherits biases from the base Gemma model and training data
- Limited reasoning depth due to small parameter count (270M)
- May produce incorrect or hallucinated answers
- Performance degrades on long or multi-step reasoning tasks
### Recommendations
Users should:
- Validate outputs before use
- Avoid high-risk domains
- Treat results as assistive, not authoritative
## How to Get Started

### Inference Example
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained(
    "boopathiraj/gemma-3-instruct-small",
    use_fast=False,
)

model = AutoModelForCausalLM.from_pretrained(
    "boopathiraj/gemma-3-instruct-small",
    device_map="auto",        # place weights on the available GPU/CPU automatically
    dtype=torch.float16,      # half precision to reduce memory use
)
model.eval()

prompt = "Solve the problem: What is 7 minus 3?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=32)

print(tokenizer.decode(output[0], skip_special_tokens=True))
```
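Because the base model is instruction-tuned, prompts generally work better when wrapped in Gemma's chat turn format. `tokenizer.apply_chat_template` with a standard `messages` list produces this markup automatically; the layout can also be built by hand. A sketch of the format (the turn markers are the standard Gemma ones; the helper function is illustrative, not part of this repository):

```python
def format_gemma_prompt(user_message: str) -> str:
    """Wrap a single user turn in Gemma's chat markup,
    leaving the prompt open for the model's reply."""
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = format_gemma_prompt("What is 7 minus 3?")
print(prompt)
```

Passing `[{"role": "user", "content": "What is 7 minus 3?"}]` to `tokenizer.apply_chat_template(..., add_generation_prompt=True)` should yield an equivalent prompt and is the more robust choice in application code.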