This model was converted to GGUF format from [`aquigpt/open0-2-lite`](https://huggingface.co/aquigpt/open0-2-lite) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/aquigpt/open0-2-lite) for more details on the model.
## Performance Benchmarks

Aqui-open0-2 Lite demonstrates strong performance across multiple challenging benchmarks, outperforming other models in its size class on most of them:

| Benchmark | Aqui-open0-2 Lite (1.72B) | Gemma 3 (1B) | Qwen3 (2.03B) | Llama 3.2 (1.24B) | LFM2 (1.17B) |
|-----------|---------------------------|--------------|---------------|-------------------|--------------|
| **MMLU (General Knowledge)** | **67.5%** | 40.1% | _59.1%_ | 46.6% | 55.2% |
| **GPQA (Science)** | **31.8%** | 19.2% | 27.7% | 19.6% | _31.5%_ |
| **IFEval (Instruction Following)** | _73.4%_ | 62.9% | 68.4% | 52.4% | **74.5%** |
| **GSM8K (Grade School Math)** | **63.2%** | _59.6%_ | 51.4% | 35.7% | 58.3% |
| **MGSM (Multilingual)** | **70.2%** | 43.6% | _66.6%_ | 29.1% | 55.0% |
| **Average Performance** | **61.2%** | 45.1% | 54.6% | 36.7% | _54.9%_ |

*Bold: Best performance, Italics: Second best*

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
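A minimal sketch of install and inference, assuming the Homebrew `llama.cpp` formula; the repo ID and GGUF filename below are placeholders — substitute the actual values from this repository's file list:

```shell
# Install llama.cpp; the formula provides the llama-cli and llama-server binaries
brew install llama.cpp

# Run inference straight from the Hugging Face Hub.
# <username>/<this-repo> and the --hf-file name are illustrative placeholders:
# use this repo's ID and the GGUF file matching your chosen quantization.
llama-cli --hf-repo <username>/<this-repo> \
  --hf-file open0-2-lite-q4_k_m.gguf \
  -p "The meaning to life and the universe is"
```

The same `--hf-repo`/`--hf-file` flags work with `llama-server` to expose an OpenAI-compatible HTTP endpoint instead of a one-shot CLI completion.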