---
base_model:
- google/gemma-2-9b-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- trl
- sft
license: gemma
language:
- ar
- zh
- cs
- da
- nl
- en
- fi
- fr
- de
- he
- hu
- it
- ja
- ko
- 'no'
- pl
- pt
- ru
- es
- sv
- th
- tr
- uk
datasets:
- Pinkstack/Thinking-multilingual-big-10k-sft
---

SFT only: trained on our own multilingual dataset.

# Uploaded model

- **Developed by:** Pinkstack
- **License:** gemma
- **Finetuned from model:** Pinkstack/Superthoughts-9B-sft

This gemma2 model was trained with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
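Since this is a gemma2 chat model, prompts should follow the Gemma-2 chat turn format. The sketch below builds a single-turn prompt by hand, assuming the standard Gemma-2 markers (`<start_of_turn>`, `<end_of_turn>`); in practice, prefer `tokenizer.apply_chat_template` from `transformers`, which applies the template shipped with the tokenizer.

```python
# Minimal sketch of the standard Gemma-2 chat prompt format (an assumption
# based on the base model family, not something this card specifies).
# For real use, load the tokenizer and call tokenizer.apply_chat_template.

def build_gemma2_prompt(user_message: str) -> str:
    """Wrap a single user turn in Gemma-2 chat-template markers."""
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

if __name__ == "__main__":
    print(build_gemma2_prompt("Explain supervised fine-tuning in one sentence."))
```

The trailing `<start_of_turn>model\n` leaves the prompt open for the model to generate its reply.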