Tags: Text Generation, Transformers, Safetensors, GGUF, gemma3_text, turkish, türkiye, english, ai, lamapi, gemma3, next, next-x1, efficient, open-source, 1b, huggingface, large-language-model, llm, causal, transformer, artificial-intelligence, machine-learning, ai-research, natural-language-processing, nlp, finetuned, lightweight, creative, summarization, question-answering, chat-model, generative-ai, optimized-model, unsloth, trl, sft, chemistry, biology, finance, legal, music, art, code, climate, medical, agent, text-generation-inference, conversational
Update README.md
README.md CHANGED

@@ -173,7 +173,7 @@ Ideal for **developers, students, and organizations** needing **fast, reliable,
 </thead>
 <tbody>
 <tr class="next">
-<td data-label="Model">Next 4B preview
+<td data-label="Model">Next 4B preview</td>
 <td data-label="MMLU (5-shot) %">84.6</td>
 <td data-label="MMLU-Pro %">66.9</td>
 <td data-label="GSM8K %">82.7</td>
@@ -205,7 +205,7 @@ Ideal for **developers, students, and organizations** needing **fast, reliable,
 
 ---
 
-# Also, our Next
+# Also, our Next 14b model is leading to state-of-the-art models in some of the Benchmarks.
 <table>
 <thead>
 <tr>
@@ -218,32 +218,32 @@ Ideal for **developers, students, and organizations** needing **fast, reliable,
 </thead>
 <tbody>
 <tr class="next">
-<td
-<td
-<td
-<td
-<td
+<td><strong>Next 14B (Thinking)</strong></td>
+<td><strong>94.6</strong></td>
+<td><strong>93.2</strong></td>
+<td><strong>98.8</strong></td>
+<td>92.7</td>
 </tr>
-<tr
-<td
-<td
-<td
-<td
-<td
+<tr>
+<td>Next 12B</td>
+<td>92.7</td>
+<td>84.4</td>
+<td>95.3</td>
+<td>87.2</td>
 </tr>
 <tr>
-<td
-<td
-<td
-<td
-<td
+<td>GPT-5</td>
+<td>92.5</td>
+<td>87.0</td>
+<td>98.4</td>
+<td><strong>96.0</strong></td>
 </tr>
 <tr>
-<td
-<td
-<td
-<td
-<td
+<td>Claude Opus 4.1 (Thinking)</td>
+<td>~92.0</td>
+<td>87.8</td>
+<td>84.7</td>
+<td>95.4</td>
 </tr>
 </tbody>
 </table>