---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- grpo
license: apache-2.0
language:
- en
datasets:
- facebook/natural_reasoning
---

# Coma 3B

Coma is based on Qwen 2.5 3B, GRPO fine-tuned on Meta's Natural Reasoning dataset.

## GGUF

Quantized versions are available in GGUF format at [theprint/Coma-3B-GGUF](https://huggingface.co/theprint/Coma-3B-GGUF).

## Testing

The following system prompt was used when testing the model:
```
Between the tags <think> and </think>: You will first think through your answer carefully step by step, including any potential risks or pitfalls, the context of the user's request, and how best to present this. These are notes for yourself, so be detailed and honest in your assessment.

When you are done with the analysis, you must use your own guidelines from the previous section to construct the final response. The user will only see this section.

In summary, respond using this pattern:
<think>
  First, ...
</think>
  [your response]
```

Testing was done at `temperature=1.0`, `top_k=45`, and `top_p=0.95`.
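The setup above can be sketched in Python using the Transformers chat-message format. This is a minimal, hedged example: the repo id `theprint/Coma-3B` is an assumption inferred from the GGUF link above (substitute the actual hosted id if it differs), and the user question is a placeholder.

```python
# Sketch: querying Coma 3B with the system prompt and sampling settings
# from this card. The repo id "theprint/Coma-3B" is an assumption
# inferred from the GGUF repo name; adjust it if the hosted id differs.

SYSTEM_PROMPT = """Between the tags <think> and </think>: You will first think through your answer carefully step by step, including any potential risks or pitfalls, the context of the user's request, and how best to present this. These are notes for yourself, so be detailed and honest in your assessment.

When you are done with the analysis, you must use your own guidelines from the previous section to construct the final response. The user will only see this section."""

# Chat messages in the standard Transformers/OpenAI-style format.
messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "user", "content": "Why does ice float on water?"},  # placeholder question
]

# Sampling settings used in testing, per this card.
sampling = {"temperature": 1.0, "top_k": 45, "top_p": 0.95, "do_sample": True}


def generate(chat, **kwargs):
    """Load the model and generate a reply (heavy: downloads ~3B weights)."""
    from transformers import pipeline  # imported lazily to keep the sketch light

    pipe = pipeline("text-generation", model="theprint/Coma-3B")
    return pipe(chat, max_new_tokens=512, **kwargs)


# Example call (not run here):
# reply = generate(messages, **sampling)
```

The `<think>` section of the reply is meant as scratch reasoning; only the text after `</think>` is the user-facing answer, so a front end would typically strip or hide everything between the tags.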

- **Developed by:** theprint
- **License:** apache-2.0
- **Fine-tuned from model:** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit

This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)