# NISHKA SFT

A supervised fine-tune for Policy Query Language (PQL) code generation, trained on 10,038 PQL examples.
## Model Details

- **Base Model:** microsoft/Phi-3-mini-4k-instruct
- **Architecture:** Phi-3 (3.8B parameters)
- **Training:** LoRA adapter merged into the base model
- **Format:** full model weights (no separate adapter needed)
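"LoRA adapter merged" means the low-rank update has been folded into the base weights, so no PEFT adapter loading is needed at inference time. A minimal numerical sketch of what the merge does (illustrative shapes and random values, not the model's actual weights):

```python
import numpy as np

# LoRA trains two low-rank matrices A (r x in) and B (out x r); merging folds
# the scaled update into the frozen base weight: W_merged = W + (alpha / r) * B @ A.
rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 8, 2, 16

W = rng.standard_normal((d_out, d_in))   # frozen base weight
A = rng.standard_normal((r, d_in))       # LoRA down-projection
B = rng.standard_normal((d_out, r))      # LoRA up-projection

W_merged = W + (alpha / r) * B @ A

# After merging, a plain forward pass through W_merged equals the
# adapter-augmented forward pass through W:
x = rng.standard_normal(d_in)
assert np.allclose(W_merged @ x, W @ x + (alpha / r) * B @ (A @ x))
```

In practice this is what PEFT's merge utilities do per layer, which is why the published checkpoint can be loaded as ordinary full weights.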
## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "openpql/nishka-sft",
    device_map="auto",
    torch_dtype="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("openpql/nishka-sft")

# Generate
inputs = tokenizer("Your prompt here", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Deployment

This model is ready for deployment with vLLM, TGI, or other inference servers.

```shell
# vLLM example
vllm serve openpql/nishka-sft --dtype float16
```
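Once served, vLLM exposes an OpenAI-compatible HTTP API (by default at `http://localhost:8000/v1`). A minimal client sketch, assuming default host/port and illustrative sampling parameters:

```python
import json
from urllib import request

def completion_payload(prompt: str, max_tokens: int = 256) -> dict:
    """Build a /v1/completions request body for the served model.

    completion_payload is an illustrative helper; the sampling values
    below are assumptions, not recommendations from this model card.
    """
    return {
        "model": "openpql/nishka-sft",
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": 0.0,  # deterministic decoding suits code generation
    }

payload = completion_payload("Write a PQL query that lists all active policies.")
req = request.Request(
    "http://localhost:8000/v1/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# response = request.urlopen(req)                        # requires a running server
# print(json.load(response)["choices"][0]["text"])
```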