---
base_model: unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- grpo
license: apache-2.0
language:
- en
datasets:
- theprint/PyRe
---
# PyRe is Experimental

Please note that this model is a work-in-progress experiment in GRPO fine-tuning on Python coding problems for reasoning. Its performance varies greatly depending on the task, prompt, and sampling parameters.

I recommend a very low temperature, such as 0.1. You may also see more consistent results by encouraging the use of `<think>` and `<answer>` tags in the system prompt.

### Example System Prompt
```
Think through complex problems carefully, before giving the user your final answer. Use <think> and </think> to encapsulate your thoughts.
```
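
Because the system prompt above asks the model to wrap its reasoning in `<think>` tags, you will usually want to separate that reasoning from the final answer before showing output to users. A minimal sketch of one way to do this (the helper name and the regex-based approach are my own illustration, not part of the model or its tooling):

```python
import re

def split_think(output: str) -> tuple[str, str]:
    """Split a model response into (reasoning, answer).

    Everything inside <think>...</think> is treated as reasoning;
    whatever text remains outside the tags is treated as the answer.
    """
    # Collect all <think> blocks (DOTALL lets reasoning span multiple lines).
    thoughts = re.findall(r"<think>(.*?)</think>", output, flags=re.DOTALL)
    # Remove the <think> blocks to leave only the final answer text.
    answer = re.sub(r"<think>.*?</think>", "", output, flags=re.DOTALL).strip()
    return "\n".join(t.strip() for t in thoughts), answer

# Example response in the format the system prompt encourages:
response = "<think>2 + 2 is 4.</think>The answer is 4."
reasoning, answer = split_think(response)
print(reasoning)  # 2 + 2 is 4.
print(answer)     # The answer is 4.
```

If the model omits the closing `</think>` tag (which can happen with an experimental fine-tune), the regex simply leaves the text in the answer portion, so you may want additional handling for truncated outputs.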

# Uploaded model

- **Developed by:** theprint
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit

|
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)