⚙️🧠🔒 Controllable Reasoning Models - Checkpoints

This model is part of a collection of training datasets and LoRA checkpoints for the arXiv 2026 preprint *Controllable Reasoning Models are Private Thinkers*.
This model is a fine-tuned version of [microsoft/Phi-4-reasoning](https://huggingface.co/microsoft/Phi-4-reasoning). It was trained in 4-bit precision using bitsandbytes quantization.
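The card does not spell out the quantization settings, but a 4-bit bitsandbytes setup is typically configured through `BitsAndBytesConfig` in `transformers`. The sketch below shows one plausible way to load the base model in 4-bit; the NF4 quant type, bfloat16 compute dtype, and double quantization are common QLoRA-style defaults assumed here, not values taken from this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Assumed 4-bit configuration (QLoRA-style defaults), not confirmed by this card.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-4-reasoning")
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-4-reasoning",
    quantization_config=bnb_config,
    device_map="auto",
)
```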
You can run the model with the `transformers` pipeline:

```python
from transformers import pipeline

# The repo ID was left unfilled ("None") in the generated card;
# replace it with the actual checkpoint ID before running.
model_id = "None"

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model=model_id, device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
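Since the collection above describes LoRA checkpoints, the adapter can alternatively be applied on top of the base model with `peft`. A minimal sketch, assuming the checkpoint repo (shown here with the same unfilled placeholder) hosts a PEFT adapter:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: substitute the actual adapter repo ID from the collection.
adapter_id = "None"

base = AutoModelForCausalLM.from_pretrained("microsoft/Phi-4-reasoning", device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-4-reasoning")
```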
This model was trained with SFT (supervised fine-tuning).
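In cards generated this way, SFT usually refers to TRL's `SFTTrainer`. The sketch below shows a minimal run of that trainer; the dataset name, output directory, and all hyperparameters are illustrative assumptions, not details taken from this card.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Illustrative dataset; the actual training data is not named in this card.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="microsoft/Phi-4-reasoning",
    train_dataset=dataset,
    args=SFTConfig(output_dir="phi4-reasoning-sft"),
)
trainer.train()
```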