---
base_model:
- unsloth/Llama-3.2-1B-Instruct
tags:
- text-generation-inference
- transformers
- reasoning
- math
- thinking
- conversational
- llama
- meta
license: apache-2.0
language:
- en
datasets:
- unsloth/OpenMathReasoning-mini
library_name: transformers
---
|
|
# ReasoningLlama-Math-1B-IT |
|
|
## Model Description
|
|
This model is a fine-tuned version of [unsloth/Llama-3.2-1B-Instruct](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct), trained on the [unsloth/OpenMathReasoning-mini](https://huggingface.co/datasets/unsloth/OpenMathReasoning-mini) dataset. That dataset is a small subset of [nvidia/OpenMathReasoning](https://huggingface.co/datasets/nvidia/OpenMathReasoning), which was used to win the [AIMO](https://www.kaggle.com/competitions/ai-mathematical-olympiad-progress-prize-2/leaderboard) (AI Mathematical Olympiad) challenge.
|
|
- **Recommended inference settings:** `min_p = 0.1` and `temperature = 1.5`. Read this [Tweet](https://x.com/menhguin/status/1826132708508213629) to understand why these values are recommended together.
- **License:** apache-2.0
- **Fine-tuned from:** unsloth/Llama-3.2-1B-Instruct
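The recommended sampling settings above can be passed straight through the `transformers` text-generation pipeline. The sketch below is a minimal example; the `model_id` is a placeholder, since the exact Hugging Face repo id for this model is not stated here, and you should substitute the real one.

```python
from transformers import pipeline

# Placeholder: replace with the actual Hugging Face repo id for this model.
model_id = "your-username/ReasoningLlama-Math-1B-IT"

pipe = pipeline("text-generation", model=model_id)

messages = [
    {"role": "user", "content": "What is the sum of the first 10 positive integers?"},
]

# Recommended settings from this model card: min_p = 0.1, temperature = 1.5.
# min_p sampling keeps only tokens whose probability is at least min_p times
# the top token's probability, which pairs well with a high temperature.
out = pipe(
    messages,
    max_new_tokens=512,
    do_sample=True,
    temperature=1.5,
    min_p=0.1,
)
print(out[0]["generated_text"][-1]["content"])
```

The same `temperature` and `min_p` arguments also work with `model.generate()` directly if you prefer to manage tokenization yourself.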