Llama-3.2-3B Fine-tuned for Reasoning

  • Developed by: raviadi123
  • License: apache-2.0
  • Fine-tuned from model: unsloth/Llama-3.2-3B

This model is a fine-tuned version of unsloth/Llama-3.2-3B, specifically trained on the OpenMathReasoning dataset to improve mathematical and logical reasoning.

This model was trained 2x faster with Unsloth and Hugging Face's TRL library.

Model Description

The model is designed to follow a "chain-of-thought" or "scratchpad" reasoning process. It first works through a problem, showing its steps, and then provides a final, clean solution. This is achieved by using a specific set of special tokens in the prompt and output.
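Once generation finishes, the scratchpad and the final answer can be separated by matching these special tokens. A minimal sketch (the helper name and the sample completion string are illustrative, not part of the model's tooling):

```python
import re

def split_reasoning(output: str):
    """Split a model completion into (working_out, solution).

    Assumes the completion follows the trained format:
        ...steps...<end_working_out><SOLUTION>...</SOLUTION>
    Since the prompt already ends with <start_working_out>, the
    completion itself begins directly with the reasoning steps.
    """
    working_match = re.search(r"(.*?)<end_working_out>", output, re.DOTALL)
    solution_match = re.search(r"<SOLUTION>(.*?)</SOLUTION>", output, re.DOTALL)
    working = working_match.group(1).strip() if working_match else output.strip()
    solution = solution_match.group(1).strip() if solution_match else None
    return working, solution

# Example with a hand-written completion in the expected format:
sample = "2 + 2 = 4, so the answer is 4.<end_working_out><SOLUTION>4</SOLUTION>"
steps, answer = split_reasoning(sample)
```

If the model stops before emitting the closing tags, the helper falls back to returning the whole completion as the working-out with no solution.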

How to Use

To properly trigger the model's reasoning capabilities, you must follow the prompt structure it was trained on.

Prompt Structure

The entire input prompt sent to the model should follow this exact format. You provide the system instruction, the user's question, and then the special token to cue the model to start its reasoning process.

You are given a problem.
Think about the problem and provide your working out.
Place it between <start_working_out> and <end_working_out>.
Then, provide your solution between <SOLUTION></SOLUTION>

{your_question_here}
<start_working_out>
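To make the template concrete, here is a small helper that assembles the full prompt string in the format above. This is a sketch: the function name is arbitrary, and the question is a placeholder.

```python
# System instruction copied verbatim from the trained prompt format.
SYSTEM_INSTRUCTION = (
    "You are given a problem.\n"
    "Think about the problem and provide your working out.\n"
    "Place it between <start_working_out> and <end_working_out>.\n"
    "Then, provide your solution between <SOLUTION></SOLUTION>"
)

def build_prompt(question: str) -> str:
    """Assemble the full prompt: system instruction, the user's
    question, then the <start_working_out> cue that triggers the
    model's reasoning."""
    return f"{SYSTEM_INSTRUCTION}\n\n{question}\n<start_working_out>"

prompt = build_prompt("What is 13 * 7?")
```

The resulting string can then be fed to whatever inference stack you use (for example `transformers` text generation, or a GGUF runtime such as llama.cpp). Stopping generation at `</SOLUTION>` avoids trailing text after the answer.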
Model Details

  • Format: GGUF
  • Model size: 3B params
  • Architecture: llama
  • Quantization: 4-bit