Coma 3B

Coma is based on Qwen 2.5 3B, fine-tuned with GRPO on Meta's NaturalReasoning dataset.

Testing

The following system prompt was used when testing the model:

Between the tags <think> and </think>: You will first think through your answer carefully, step by step, including any potential risks or pitfalls, the context of the user's request, and how best to present this. These are notes for yourself, so be detailed and honest in your assessment.

When you are done with the analysis, you must use your own guidelines from the previous section to construct the final response. The user will only see this section.

In summary, respond using this pattern:
<think>
  First, ...
</think>
  [your response]
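Since the user is meant to see only the text after the reasoning block, a downstream application can strip everything up to the closing </think> tag before display. A minimal sketch (the tag names come from the prompt above; the helper name is illustrative):

```python
import re

def strip_think(raw: str) -> str:
    """Remove the <think>...</think> reasoning block, keeping only
    the final response the user is meant to see."""
    # Drop everything up to and including the first closing </think> tag,
    # along with any whitespace that follows it.
    visible = re.sub(r"(?s)^.*?</think>\s*", "", raw, count=1)
    return visible.strip()

sample = "<think>\n  First, check the units...\n</think>\n  The answer is 42."
print(strip_think(sample))  # -> The answer is 42.
```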

Testing was done at temperature=1.0, top_k=45, and top_p=0.95.
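These sampling settings can be reproduced locally with llama.cpp. A sketch, assuming you have built llama-cli and downloaded one of the GGUF quantizations (the filename below is an assumption; substitute the quant you actually downloaded):

```shell
# Sampling settings matching the ones used in testing.
# The model filename is an assumption -- use your downloaded quant.
./llama-cli -m Coma-3B.Q4_K_M.gguf \
  --temp 1.0 --top-k 45 --top-p 0.95 \
  -p "Your question here"
```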

  • Developed by: theprint
  • License: apache-2.0
  • Format: GGUF
  • Model size: 3B params
  • Architecture: qwen2

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, and 16-bit.


Model tree for theprint/Coma-3B-GGUF

  • Base model: theprint/Coma-3B (this model is one of 3 quantized versions)

Dataset used to train theprint/Coma-3B-GGUF: Meta's NaturalReasoning