Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh
# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for EpistemeAI/ReasoningCore-1.0-3B-Instruct-r01-Reflect-Math to start chatting

Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex
# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for EpistemeAI/ReasoningCore-1.0-3B-Instruct-r01-Reflect-Math to start chatting

Using Hugging Face Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for EpistemeAI/ReasoningCore-1.0-3B-Instruct-r01-Reflect-Math to start chatting

Load model with FastModel
pip install unsloth
from unsloth import FastModel
model, tokenizer = FastModel.from_pretrained(
    model_name="EpistemeAI/ReasoningCore-1.0-3B-Instruct-r01-Reflect-Math",
    max_seq_length=2048,
)
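Continuing from the snippet above, generation follows the usual transformers pattern; a minimal sketch, where the prompt and decoding settings are illustrative:
# Minimal generation sketch; prompt and max_new_tokens are illustrative.
messages = [{"role": "user", "content": "Which is bigger? 9.11 or 9.9?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))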
This is the Reasoning Core 1.0 reasoning-and-reflection instruction-tuned generative model at the 3B size (text in/text out). It is the 1.0 follow-up to the original ReasoningCore-3B-Instruct-r01-Reflect-Math.

Model Architecture: Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF) with GRPO fine-tuning via Unsloth to align with human preferences for helpfulness and safety. The model was fine-tuned on the s1 dataset from simplescaling.
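For readers curious what the GRPO step looks like, below is a minimal, hypothetical sketch using TRL's GRPOTrainer; the reward function, dataset stand-in, and hyperparameters are illustrative assumptions, not this model's actual training recipe.
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

# Toy reward: favor completions that follow the <answer>...</answer> format.
def format_reward(completions, **kwargs):
    return [1.0 if "<answer>" in c and "</answer>" in c else 0.0 for c in completions]

# Illustrative stand-in; the actual run used the s1 dataset from simplescaling.
train_dataset = Dataset.from_dict({"prompt": ["Which is bigger? 9.11 or 9.9?"]})

trainer = GRPOTrainer(
    model="EpistemeAI/ReasoningCore-3B-Instruct-r01-Reflect-Math",
    reward_funcs=format_reward,
    args=GRPOConfig(output_dir="grpo-output"),
    train_dataset=train_dataset,
)
trainer.train()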
Use with transformers
Starting with transformers >= 4.43.0, you can run conversational inference using the Transformers pipeline abstraction or the Auto classes with the generate() function.
Make sure to update your transformers installation via pip install --upgrade transformers.
import torch
from transformers import pipeline
model_id = "EpistemeAI/ReasoningCore-3B-Instruct-r01-Reflect-Math"
pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
messages = [
    {
        "role": "system",
        "content": """You are a powerful assistant. Respond in the following format:
<reasoning>
...
</reasoning>
<reflecting>
...
</reflecting>
<answer>
...
</answer>""",
    },
    {"role": "user", "content": "Which is bigger? 9.11 or 9.9?"},
]
outputs = pipe(
    messages,
    max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
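Because the system prompt requests <reasoning>, <reflecting>, and <answer> blocks, the final answer can be pulled out with a small regular expression; a minimal sketch, assuming the model followed the requested format:
import re

reply = outputs[0]["generated_text"][-1]["content"]
# Extract the <answer> block; fall back to the full reply if the tags are missing.
match = re.search(r"<answer>(.*?)</answer>", reply, re.DOTALL)
print(match.group(1).strip() if match else reply)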
Using SuperTransformer
from SuperTransformer import SuperTransformers

# SuperTransformers takes: (1) a Hugging Face model id, (2) a system prompt,
# (3) the user text/prompt, and (4) the max number of tokens.
super_transformers = SuperTransformers(
    "EpistemeAI/ReasoningCore-3B-Instruct-r01-Reflect-Math",
    "You are a highly knowledgeable assistant with expertise in mathematics. "
    "<reasoning>...</reasoning><reflecting>...</reflecting><answer>...</answer>",
    "What is the area of a circle, radius=16, reason step by step",
    2026,
)
# 8-bit quantization
super_transformers.HuggingFaceTransformer8bit()
# or 4-bit quantization
super_transformers.HuggingFaceTransformer4bit()
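SuperTransformer aside, an equivalent quantized load is possible with plain transformers and bitsandbytes; a minimal 4-bit sketch, where the quantization settings are illustrative:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "EpistemeAI/ReasoningCore-3B-Instruct-r01-Reflect-Math"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumed compute dtype
)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)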
Thank you
Thank you to simplescaling.
Uploaded model
- Developed by: EpistemeAI
- License: apache-2.0
- Finetuned from model: EpistemeAI/ReasoningCore-3B-Instruct-r01-Reflect-Math

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
# Gated model: log in with a Hugging Face token that has gated-access permission
hf auth login
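For scripts and notebooks, the same authentication can be done programmatically with huggingface_hub (the token value below is a placeholder):
from huggingface_hub import login

# Use a token that has been granted access to the gated repository.
login(token="hf_...")  # placeholder token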