Update README.md
README.md CHANGED
@@ -90,17 +90,17 @@ We recommend fine-tuning LFM2.5 for your specific use case to achieve the best r
 
 ## 📊 Performance
 
-| Model | [JMMLU](https://arxiv.org/pdf/2402.14531) | [M-IFEval (ja)](https://arxiv.org/pdf/2502.04688) |
-|-------|------|----------|
-| **LFM2.5-1.2B-JP** | 49.8 | 60.6 |
-| **[LFM2.5-1.2B-Instruct](https://huggingface.co/LiquidAI/LFM2.5-1.2B-Instruct)** | 47.5 | 41.8 |
-| Qwen3-1.7B (Instruct mode) | 47.1 | 40.3 |
-| Llama 3.2 1B Instruct | 33.8 | 24.1 |
-| TinySwallow-1.5B-Instruct | 48.0 | 36.5 |
-| Gemma-2-Llama-Swallow-2b-it-v0.1 | 48.1 | 33.4 |
-| Gemma-3-1b-it | 34.5 | 26.3 |
-| Granite-4.0-h-1b | 42.2 | 39.3 |
-| Sarashina2.2-1b-instruct-v0.1 | 40.2 | 21.9 |
+| Model | [JMMLU](https://arxiv.org/pdf/2402.14531) | [M-IFEval (ja)](https://arxiv.org/pdf/2502.04688) | [GSM8K (ja)](https://huggingface.co/datasets/SakanaAI/gsm8k-ja-test_250-1319) |
+|-------|------|----------|----------|
+| **LFM2.5-1.2B-JP** | 49.8 | 60.6 | TBD |
+| **[LFM2.5-1.2B-Instruct](https://huggingface.co/LiquidAI/LFM2.5-1.2B-Instruct)** | 47.5 | 41.8 | 46.8 |
+| Qwen3-1.7B (Instruct mode) | 47.1 | 40.3 | 46.0 |
+| Llama 3.2 1B Instruct | 33.8 | 24.1 | 25.2 |
+| TinySwallow-1.5B-Instruct | 48.0 | 36.5 | 47.2 |
+| Gemma-2-Llama-Swallow-2b-it-v0.1 | 48.1 | 33.4 | 34.4 |
+| Gemma-3-1b-it | 34.5 | 26.3 | 33.6 |
+| Granite-4.0-h-1b | 42.2 | 39.3 | 42.8 |
+| Sarashina2.2-1b-instruct-v0.1 | 40.2 | 21.9 | 44.4 |
 
 **Evaluation Notes**
 