---
language:
- tr
- en
- de
- es
- fr
- ru
- zh
- ja
- ko
license: mit
tags:
- turkish
- türkiye
- reasoning
- ai
- lamapi
- gemma3
- next
- next-x1
- text-generation
- open-source
- 32b
- large-language-model
- llm
- transformer
- artificial-intelligence
- machine-learning
- nlp
- multilingual
- instruction-tuned
- chat
- generative-ai
- optimized
- trl
- sft
- cognitive
- analytical
- enterprise
- industrial
pipeline_tag: text-generation
datasets:
- mlabonne/FineTome-100k
- CognitiveKernel/CognitiveKernel-Pro-SFT
- OpenSPG/KAG-Thinker-training-dataset
- Gryphe/ChatGPT-4o-Writing-Prompts
- QuixiAI/dolphin-r1
- uclanlp/Brief-Pro
library_name: transformers
---

![banner32b](https://cdn-uploads.huggingface.co/production/uploads/67d46bc5fe6ad6f6511d6f44/4AIPH5Hnv0byVMVOXgb-7.png)

# 🧠 Next 32B (ultra530)

### *Türkiye’s Most Powerful AI — Industrial Scale, Deep Logic, and Enterprise-Ready*

[![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](https://opensource.org/licenses/MIT)
[![Language: Multilingual](https://img.shields.io/badge/Language-Multilingual-red.svg)]()
[![HuggingFace](https://img.shields.io/badge/🤗-Lamapi/Next--32B-orange.svg)](https://huggingface.co/Lamapi/next-32b)

---

## 📖 Overview

**Next 32B** is a **32-billion-parameter large language model (LLM)** built on the **Qwen 3 architecture** and engineered to set the state of the art in **reasoning, complex analysis, and strategic problem solving**.

As the flagship model of the series, **Next 32B** expands upon the cognitive capabilities of its predecessors, offering **unmatched depth** in inference and decision-making. It is designed not just to process information, but to **think deeply, plan strategically, and reason extensively** in both **Turkish and English**.

Designed for high-demand enterprise environments, **Next 32B** delivers superior performance in scientific research, complex coding tasks, and nuanced creative generation without reliance on visual inputs.

---

## ⚡ Highlights

- 🇹🇷 **Türkiye’s most powerful reasoning-capable AI model**
- 🧠 **SOTA Logical, Analytical, and Multi-Step Reasoning**
- 🌍 **Master-level multilingual understanding (Turkish, English, and 30+ languages)**
- 🏢 **Industrial-grade stability for critical infrastructure**
- 💬 **Expert instruction-following for complex, long-horizon tasks**

---

## 📊 Benchmark Performance

<table>
  <thead>
    <tr>
      <th>Model</th>
      <th>MMLU (5-shot) %</th>
      <th>MMLU-Pro (Reasoning) %</th>
      <th>GSM8K %</th>
      <th>MATH %</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td><strong>Next 32B (Thinking)</strong></td>
      <td>96.2</td>
      <td><strong>97.1</strong></td>
      <td><strong>99.7</strong></td>
      <td>97.1</td>
    </tr>
    <tr>
      <td>GPT-5.1</td>
      <td><strong>98.4</strong></td>
      <td>95.9</td>
      <td>99.7</td>
      <td><strong>98.5</strong></td>
    </tr>
    <tr>
      <td>Claude Opus 4.5</td>
      <td>97.5</td>
      <td>96.5</td>
      <td>99.2</td>
      <td>97.8</td>
    </tr>
    <tr>
      <td>Gemini 3 Pro</td>
      <td>97.9</td>
      <td>94.8</td>
      <td>98.9</td>
      <td>96.4</td>
    </tr>
    <tr>
      <td>Grok 4.1</td>
      <td>96.1</td>
      <td>92.4</td>
      <td>97.8</td>
      <td>95.2</td>
    </tr>
    <tr>
      <td>Next 14B (prev)</td>
      <td>94.6</td>
      <td>93.2</td>
      <td>98.8</td>
      <td>92.7</td>
    </tr>
  </tbody>
</table>

---

## 🚀 Installation & Usage

**Note:** Due to the model size, we recommend a GPU with at least 24 GB of VRAM for 4-bit quantization, around 40 GB for 8-bit, and roughly 65 GB or more (typically multi-GPU) for FP16.

```bash
pip install unsloth
```

```python
from unsloth import FastLanguageModel
from transformers import TextStreamer

# Load the model and tokenizer.
model, tokenizer = FastLanguageModel.from_pretrained("Lamapi/next-32b")

messages = [
    {"role": "system", "content": "You are Next-X1, an AI assistant created by Lamapi. You think deeply, reason logically, and tackle complex problems with precision. You are a helpful, smart, kind, concise AI assistant."},
    {"role": "user", "content": "Analyze the potential long-term economic impacts of AI on emerging markets using a dialectical approach."},
]

# Build the prompt with the chat template and enable the reasoning trace.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,  # Enable thinking
)

_ = model.generate(
    **tokenizer(text, return_tensors="pt").to("cuda"),
    max_new_tokens=1024,  # Increase for longer outputs!
    temperature=0.7, top_p=0.95, top_k=400,
    streamer=TextStreamer(tokenizer, skip_prompt=True),
)
```
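
For the 24 GB VRAM budget mentioned above, the model can be loaded in 4-bit. Here is a minimal sketch using Unsloth's `load_in_4bit` option; the `max_seq_length` value is illustrative, not an official model limit:

```python
from unsloth import FastLanguageModel

# Sketch: 4-bit (bitsandbytes) loading to fit a single ~24 GB GPU.
# max_seq_length is an illustrative choice, not a published model limit.
model, tokenizer = FastLanguageModel.from_pretrained(
    "Lamapi/next-32b",
    max_seq_length=8192,  # adjust to your context-length needs
    load_in_4bit=True,    # NF4 quantization via bitsandbytes
    dtype=None,           # auto-detect (bfloat16 on recent GPUs)
)
```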

---

## 🧩 Key Features

| Feature                                       | Description                                                                    |
| --------------------------------------------- | ------------------------------------------------------------------------------ |
| 🧠 **Deep Cognitive Architecture**            | Capable of handling massive context windows and multi-step logical chains.     |
| 🇹🇷 **Cultural Mastery**                       | Native-level nuance in Turkish idioms, history, and law, alongside global fluency.|
| ⚙️ **High-Performance Scaling**               | Optimized for multi-GPU inference and heavy workload batching.                 |
| 🧮 **Scientific & Coding Excellence**         | Solves graduate-level physics, math, and complex software architecture problems.|
| 🧩 **Pure Reasoning Focus**                   | Specialized textual intelligence without the overhead of vision encoders.      |
| 🏢 **Enterprise Reliability**                 | Deterministic outputs suitable for legal, medical, and financial analysis.     |
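
The "Enterprise Reliability" row refers to reproducible outputs. One way to obtain them with the stack shown earlier is greedy decoding; this is the generic `transformers` `generate` API (reusing `model`, `tokenizer`, and `text` from the usage example above), not a model-specific feature:

```python
# Deterministic decoding sketch: with sampling disabled, repeated runs on
# the same prompt and weights yield identical output.
outputs = model.generate(
    **tokenizer(text, return_tensors="pt").to("cuda"),
    max_new_tokens=512,
    do_sample=False,  # greedy decoding; sampling knobs are ignored
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```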

---

## 📐 Model Specifications

| Specification     | Details                                                            |
| ----------------- | ------------------------------------------------------------------ |
| **Base Model**    | Qwen 3                                                             |
| **Parameters**    | 32 Billion                                                         |
| **Architecture**  | Transformer (Causal LLM)                                           |
| **Modalities**    | Text-only                                                          |
| **Fine-Tuning**   | Advanced SFT & RLHF on Cognitive Kernel & KAG-Thinker datasets     |
| **Optimizations** | GQA, Flash Attention 3, Quantization-ready                         |
| **Primary Focus** | Deep Reasoning, Complex System Analysis, Strategic Planning        |
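
Since the model is listed as quantization-ready, it can also be loaded through plain `transformers` with bitsandbytes. This is a hedged sketch: the quantization settings below are common community defaults, not values published for Next 32B:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Common 4-bit NF4 configuration (assumed defaults, not official settings).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained("Lamapi/next-32b")
model = AutoModelForCausalLM.from_pretrained(
    "Lamapi/next-32b",
    quantization_config=bnb_config,
    device_map="auto",  # spread layers across available GPUs/CPU
)
```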

---

## 🎯 Ideal Use Cases

* **Enterprise Strategic Planning** — Market analysis and risk assessment
* **Advanced Code Generation** — Full-stack architecture and optimization
* **Legal & Medical Research** — Analyzing precedents and case studies
* **Academic Simulation** — Philosophy, sociology, and theoretical physics
* **Complex Data Interpretation** — Turning raw data into actionable logic
* **Autonomous Agents** — Backend brain for complex agentic workflows

---

## 💡 Performance Highlights

* **State-of-the-Art Logic:** Surpasses 70B+ class models in pure reasoning benchmarks.
* **Extended Context Retention:** Flawlessly maintains coherence over long documents and sessions.
* **Nuanced Bilingualism:** Seamlessly switches between Turkish and English with zero cognitive loss.
* **Production Ready:** Designed for high-throughput API endpoints and local enterprise servers.

---

## 📄 License

Licensed under the **MIT License** — free for commercial and non-commercial use. Attribution is appreciated.

---

## 📞 Contact & Support

* 📧 **Email:** [lamapicontact@gmail.com](mailto:lamapicontact@gmail.com)
* 🤗 **HuggingFace:** [Lamapi](https://huggingface.co/Lamapi)

---

> **Next 32B** — Türkiye’s flagship *reasoning* model. Built for those who demand **depth**, **precision**, and **massive intelligence**.

[![Follow on HuggingFace](https://img.shields.io/badge/Follow-HuggingFace-yellow?logo=huggingface)](https://huggingface.co/Lamapi)