PyRe is Experimental

Please note that this model is a work-in-progress experiment in GRPO fine-tuning on Python code problems for reasoning. Its performance varies greatly depending on the task, prompt, and sampling parameters.

I recommend a very low temperature, such as 0.1. You may also see more consistent results by encouraging the use of <think> and <answer> tags in the system prompt.

Example System Prompt

Think through complex problems carefully before giving the user your final answer. Use <think> and </think> to encapsulate your thoughts.
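If the model follows the tag convention above, its output can be split into the reasoning trace and the final answer with a small amount of post-processing. The sketch below is a hypothetical helper (the function name `split_reasoning` is my own; only the <think>/</think> tags come from the card):

```python
import re

def split_reasoning(text: str) -> tuple[str, str]:
    """Separate <think>...</think> reasoning from the final answer.

    Returns (thoughts, answer). If no think block is present,
    thoughts is empty and the whole text is treated as the answer.
    """
    match = re.search(r"<think>(.*?)</think>", text, re.DOTALL)
    thoughts = match.group(1).strip() if match else ""
    # Everything outside the think block is the user-facing answer.
    answer = re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()
    return thoughts, answer

raw = "<think>2 + 2 is 4, so the list has 4 items.</think>\nThe list has 4 items."
thoughts, answer = split_reasoning(raw)
print(answer)  # -> The list has 4 items.
```

Because the model's tag use is not guaranteed (it is only encouraged via the system prompt), the helper degrades gracefully when the tags are missing.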

This GGUF is based on theprint/PyRe-3B-v2.

Uploaded model

  • Developed by: theprint
  • License: apache-2.0
  • Finetuned from model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.

GGUF

  • Model size: 3B params
  • Architecture: llama

Available quantizations: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit
