base_model:
- unsloth/gemma-3-4b-it
pipeline_tag: image-text-to-text
---

## Model Overview

**altaidevorg/merged_prometheus_gemma3** is a fine-tuned version of the `unsloth/gemma-3-4b-it` base model, trained on the **Feedback-Collection** dataset from the *Prometheus-Eval* project.

**Fine-tuning framework:** Unsloth with optimized LoRA adapters.
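
Since the model is trained on Feedback-Collection-style grading data, a judge prompt can be assembled roughly as follows. This is a minimal sketch: the `build_eval_prompt` helper and the exact section wording are assumptions for illustration, not the canonical Prometheus-Eval template — check the Prometheus-Eval repository for the authoritative format.

```python
# Sketch: assembling a Prometheus-style absolute-grading prompt.
# The template wording below is an illustrative assumption, not the
# canonical Feedback-Collection format.

def build_eval_prompt(instruction, response, reference, rubric):
    """Assemble a single evaluation prompt for the judge model."""
    return (
        "###Task Description:\n"
        "Evaluate the response against the rubric, then give feedback "
        "and an integer score from 1 to 5.\n\n"
        f"###Instruction:\n{instruction}\n\n"
        f"###Response to evaluate:\n{response}\n\n"
        f"###Reference answer:\n{reference}\n\n"
        f"###Score rubric:\n{rubric}\n"
    )

prompt = build_eval_prompt(
    "Summarize the article in one sentence.",
    "The article says markets rose.",
    "Markets rose on strong earnings reports.",
    "Is the summary faithful and concise?",
)
```

The assembled string is then passed to the model like any other chat turn; only the rubric and reference answer distinguish it from ordinary instruction-following.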

---

## 🧮 Performance Benchmark

| Model | Benchmark | Pearson r | Spearman ρ |
|:------|:----------|:---------:|:----------:|
| 🟩 **merged_prometheus_gemma3** | Feedback Bench | **0.9198** | **0.9210** |
| 🟨 **Prometheus 2 (8×7B)** *(Kim et al., 2024)* | Feedback Bench / Preference Bench | ≈ 0.898 / – | ≈ 0.90 / – |

**Highlights:**

- Scores higher on *Feedback Bench* than Prometheus 2 (≈ +0.02 Pearson r).
- Uses a **4B-parameter** model, making it significantly lighter than the 8×7B Prometheus 2.
- Demonstrates strong **semantic consistency and evaluative precision**.
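
The Pearson r and Spearman ρ columns are the standard correlation statistics between the judge model's predicted scores and the reference (gold) scores. A minimal sketch of how such numbers are computed with `scipy.stats` (the score lists here are made-up toy data, not Feedback Bench outputs):

```python
# Sketch: correlating judge-model scores with gold ratings, as in the
# benchmark table above. The score lists are illustrative toy data.
from scipy.stats import pearsonr, spearmanr

reference_scores = [1, 2, 3, 4, 5, 3, 4]   # gold ratings
predicted_scores = [1, 2, 4, 4, 5, 3, 5]   # judge-model ratings

pearson_r, _ = pearsonr(reference_scores, predicted_scores)
spearman_rho, _ = spearmanr(reference_scores, predicted_scores)

print(f"Pearson r:  {pearson_r:.4f}")
print(f"Spearman ρ: {spearman_rho:.4f}")
```

Pearson measures linear agreement on the raw 1–5 scores, while Spearman measures agreement on their rank ordering; reporting both guards against a judge that ranks responses correctly but mis-calibrates the absolute scale.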

---

## 🧾 License

This model is released under the **Apache 2.0 License**. However, because it is derived from **Google’s Gemma 3**, your use of this model must also comply with the **[Gemma Terms of Use](https://ai.google.dev/gemma/terms)**.

By using this model, you agree to:

- Follow Google’s **Gemma Model Terms of Use**, including restrictions on misuse and redistribution.
- Attribute Google as the original provider of the Gemma 3 base model.

For full details, see: [https://ai.google.dev/gemma/terms](https://ai.google.dev/gemma/terms)

---