How to use with the Transformers library
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="RuleReasoner/RuleReasoner-4B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
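The pipeline forwards generation keyword arguments to model.generate(), and a reasoning-tuned model usually needs a larger token budget than the default. A minimal sketch, assuming recent Transformers behavior for chat-style inputs (the sampling values below are illustrative, not settings prescribed by the model authors):

# Pass generation arguments directly to the pipeline call
result = pipe(
    messages,
    max_new_tokens=1024,   # leave room for long reasoning traces
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
)
# With chat-style input, "generated_text" holds the full conversation;
# the last message is the model's reply
print(result[0]["generated_text"][-1]["content"])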
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("RuleReasoner/RuleReasoner-4B")
model = AutoModelForCausalLM.from_pretrained("RuleReasoner/RuleReasoner-4B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
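On a GPU you will typically want to load the weights in their distributed BF16 precision and place them automatically. A minimal sketch using standard from_pretrained arguments (device_map="auto" requires the accelerate package; these settings are an assumption, not a recommendation from the model authors):

import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "RuleReasoner/RuleReasoner-4B",
    torch_dtype=torch.bfloat16,  # checkpoint is stored in BF16
    device_map="auto",           # spread layers across available devices (needs accelerate)
)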
Quick Links

If you use the model in your research, please cite the original paper as below.

@article{liu2025rulereasoner,
      title={RuleReasoner: Reinforced Rule-based Reasoning via Domain-aware Dynamic Sampling}, 
      author={Yang Liu and Jiaqi Li and Zilong Zheng},
      year={2025},
      eprint={2506.08672},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2506.08672}, 
}

Code: https://github.com/bigai-nlco/RuleReasoner

Model size: 4B params · Tensor type: BF16 · Format: Safetensors