---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
tags:
  - text-generation-inference
  - transformers
  - unsloth
  - qwen2
  - grpo
license: apache-2.0
language:
  - en
datasets:
  - facebook/natural_reasoning
---

# Coma 3B

Coma is based on Qwen 2.5 3B, fine-tuned with GRPO on Meta's Natural Reasoning dataset.

## GGUF

Quantized versions are available in GGUF format at theprint/Coma-3B-GGUF.

## Testing

The following system prompt was used when testing the model:

```
Between the tags <think> and </think>: You will first think through your answer carefully step by step, including any potential risks or pitfalls, the context of the user's request, and how best to present this. These are notes for yourself, so be detailed and honest in your assessment.

When you are done with the analysis, you must use your own guidelines from the previous section to construct the final response. The user will only see this section.

In summary, respond using this pattern:
<think>
  First, ...
</think>
  [your response]
```

Testing was done with temperature=1.0, top_k=45, and top_p=0.95.
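Since the prompt pattern above means the user should only see the text after the closing `</think>` tag, downstream code has to strip the reasoning block from the raw generation. A minimal sketch (the function name and regex are illustrative, not part of any tooling shipped with this model):

```python
import re

def split_reasoning(text: str) -> tuple[str, str]:
    """Split raw model output into (thinking, response) following the
    <think>...</think> pattern used in the system prompt above."""
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if match is None:
        # The model skipped the tags; treat everything as the response.
        return "", text.strip()
    thinking = match.group(1).strip()
    response = text[match.end():].strip()
    return thinking, response

# Example with a hypothetical generation:
output = "<think>\n  First, weigh the risks...\n</think>\n  The short answer is yes."
thinking, response = split_reasoning(output)
print(response)  # only this part would be shown to the user
```

The `re.DOTALL` flag lets the reasoning block span multiple lines, and the fallback branch keeps the function safe when the model omits the tags entirely.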

  • Developed by: theprint
  • License: apache-2.0
  • Finetuned from model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit

This qwen2 model was trained 2x faster with Unsloth and Hugging Face's TRL library.