---
base_model: unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
datasets:
- facebook/natural_reasoning
---

# Coma 7B

Coma is based on Qwen 2.5 7B, fine-tuned with GRPO on Meta's Natural Reasoning dataset.

## GGUF Files

GGUF versions of this model at various quantization levels are available at [theprint/Coma-7B-GGUF](https://huggingface.co/theprint/Coma-7B-GGUF).

- **Developed by:** theprint
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit