Update README.md

README.md — CHANGED

@@ -19,9 +19,9 @@ base_model:
library_name: transformers
---

-# Model Card for Zero-Mistral

-Zero-Mistral is an improved TEXT-only version of [mistralai/Mistral-Small-3.1-24B-Instruct-2503](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Instruct-2503),
primarily adapted for Russian and English. The original Mistral model contains vision features, which were removed from this model.
Training involved an SFT stage, primarily on the [Big Russian Dataset](https://huggingface.co/datasets/ZeroAgency/ru-big-russian-dataset)
and a proprietary dataset from [Shkolkovo.online](https://shkolkovo.online/?utm_source=hf).

@@ -142,9 +142,11 @@ system prompt:
system_prompt = """You are Mistral Small 3, a Large Language Model (LLM) created by Mistral AI, a French startup headquartered in Paris.
Your knowledge base was last updated on 2023-10-01. The current date is 2025-01-30.
When you're not sure about some information, you say that you don't have the information and don't make up anything.
-If the user's question is not clear, ambiguous, or does not provide enough context for you to accurately answer the question, you do not try to answer it right away and you rather ask the user to clarify their request (e.g. \"What are some good restaurants around me?\" => \"Where are you?\" or \"When is the next flight to Tokyo\" => \"Where do you travel from?\")
```

**_Installation_**

Make sure you install [`vLLM >= 0.8.4`](https://github.com/vllm-project/vllm/releases/tag/v0.8.4):

@@ -168,7 +170,7 @@ We recommend that you use ZeroAgency/Zero-Mistral-24B in a server/client setting
1. Spin up a server:

```
-vllm
```

**Note:** Running Zero-Mistral-24B on GPU requires ~55 GB of GPU RAM in bf16 or fp16.

@@ -189,11 +191,15 @@ model = "ZeroAgency/Zero-Mistral-24B"
messages = [
    {
        "role": "system",
-        "content": "
    },
-    {
        "role": "user",
-        "content": "
    },
]

@@ -202,144 +208,10 @@ data = {"model": model, "messages": messages}
response = requests.post(url, headers=headers, data=json.dumps(data))
print(response.json()["choices"][0]["message"]["content"])

-
-#
-# 1. À plus tard
-# 2. À plus
-# 3. Salut
-# 4. À toute
-# 5. Bisous
-#
-# ```
-#  /\_/\
-# ( o.o )
-#  > ^ <
-# ```
-```
-
-### Function calling
-
-Zero-Mistral-24B is excellent at function / tool calling tasks via vLLM. *E.g.:*
-
-<details>
-<summary>Example</summary>
-
-```py
-import requests
-import json
-from huggingface_hub import hf_hub_download
-from datetime import datetime, timedelta
-
-url = "http://<your-url>:8000/v1/chat/completions"
-headers = {"Content-Type": "application/json", "Authorization": "Bearer token"}
-
-model = "ZeroAgency/Zero-Mistral-24B"
-
-
-def load_system_prompt(repo_id: str, filename: str) -> str:
-    file_path = hf_hub_download(repo_id=repo_id, filename=filename)
-    with open(file_path, "r") as file:
-        system_prompt = file.read()
-    today = datetime.today().strftime("%Y-%m-%d")
-    yesterday = (datetime.today() - timedelta(days=1)).strftime("%Y-%m-%d")
-    model_name = repo_id.split("/")[-1]
-    return system_prompt.format(name=model_name, today=today, yesterday=yesterday)
-
-
-SYSTEM_PROMPT = load_system_prompt(model, "SYSTEM_PROMPT.txt")
-
-
-tools = [
-    {
-        "type": "function",
-        "function": {
-            "name": "get_current_weather",
-            "description": "Get the current weather in a given location",
-            "parameters": {
-                "type": "object",
-                "properties": {
-                    "city": {
-                        "type": "string",
-                        "description": "The city to find the weather for, e.g. 'San Francisco'",
-                    },
-                    "state": {
-                        "type": "string",
-                        "description": "The state abbreviation, e.g. 'CA' for California",
-                    },
-                    "unit": {
-                        "type": "string",
-                        "description": "The unit for temperature",
-                        "enum": ["celsius", "fahrenheit"],
-                    },
-                },
-                "required": ["city", "state", "unit"],
-            },
-        },
-    },
-    {
-        "type": "function",
-        "function": {
-            "name": "rewrite",
-            "description": "Rewrite a given text for improved clarity",
-            "parameters": {
-                "type": "object",
-                "properties": {
-                    "text": {
-                        "type": "string",
-                        "description": "The input text to rewrite",
-                    }
-                },
-            },
-        },
-    },
-]
-
-messages = [
-    {"role": "system", "content": SYSTEM_PROMPT},
-    {
-        "role": "user",
-        "content": "Could you please make the below article more concise?\n\nOpenAI is an artificial intelligence research laboratory consisting of the non-profit OpenAI Incorporated and its for-profit subsidiary corporation OpenAI Limited Partnership.",
-    },
-    {
-        "role": "assistant",
-        "content": "",
-        "tool_calls": [
-            {
-                "id": "bbc5b7ede",
-                "type": "function",
-                "function": {
-                    "name": "rewrite",
-                    "arguments": '{"text": "OpenAI is an artificial intelligence research laboratory consisting of the non-profit OpenAI Incorporated and its for-profit subsidiary corporation OpenAI Limited Partnership."}',
-                },
-            }
-        ],
-    },
-    {
-        "role": "tool",
-        "content": '{"action":"rewrite","outcome":"OpenAI is a FOR-profit company."}',
-        "tool_call_id": "bbc5b7ede",
-        "name": "rewrite",
-    },
-    {
-        "role": "assistant",
-        "content": "---\n\nOpenAI is a FOR-profit company.",
-    },
-    {
-        "role": "user",
-        "content": "Can you tell me what the temperature will be in Dallas, in Fahrenheit?",
-    },
-]
-
-data = {"model": model, "messages": messages, "tools": tools}
-
-response = requests.post(url, headers=headers, data=json.dumps(data))
-import ipdb; ipdb.set_trace()
-print(response.json()["choices"][0]["message"]["tool_calls"])
-# [{'id': '8PdihwL6d', 'type': 'function', 'function': {'name': 'get_current_weather', 'arguments': '{"city": "Dallas", "state": "TX", "unit": "fahrenheit"}'}}]
```

-</details>
-
#### Offline

```py
@@ -347,9 +219,17 @@ from vllm import LLM
from vllm.sampling_params import SamplingParams
from datetime import datetime, timedelta

-SYSTEM_PROMPT = "You are a conversational agent that always answers straight to the point, always end your accurate response with an ASCII drawing of a cat."

-

messages = [
    {

@@ -362,26 +242,13 @@ messages = [
    },
]

-# note that running this model on GPU requires over 60 GB of GPU RAM
-llm = LLM(model="ZeroAgency/Zero-Mistral-24B", tokenizer_mode="mistral", tensor_parallel_size=8)

-sampling_params = SamplingParams(max_tokens=512, temperature=0.
outputs = llm.chat(messages, sampling_params=sampling_params)

print(outputs[0].outputs[0].text)
-
-#
-# 1. À plus tard
-# 2. À plus
-# 3. Salut
-# 4. À toute
-# 5. Bisous
-#
-# ```
-#  /\_/\
-# ( o.o )
-#  > ^ <
-# ```
```

### Transformers

@@ -396,7 +263,10 @@ messages = [
    {"role": "user", "content": "Give me 5 non-formal ways to say 'See you later' in French."},
]
chatbot = pipeline("text-generation", model="ZeroAgency/Zero-Mistral-24B", max_new_tokens=256, torch_dtype=torch.bfloat16)
-chatbot(messages)
```

### llama-server

README.md — updated version:

library_name: transformers
---

+# Model Card for Zero-Mistral-24B

+**Zero-Mistral-24B** is an improved TEXT-only version of [mistralai/Mistral-Small-3.1-24B-Instruct-2503](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Instruct-2503),
primarily adapted for Russian and English. The original Mistral model contains vision features, which were removed from this model.
Training involved an SFT stage, primarily on the [Big Russian Dataset](https://huggingface.co/datasets/ZeroAgency/ru-big-russian-dataset)
and a proprietary dataset from [Shkolkovo.online](https://shkolkovo.online/?utm_source=hf).

system_prompt = """You are Mistral Small 3, a Large Language Model (LLM) created by Mistral AI, a French startup headquartered in Paris.
Your knowledge base was last updated on 2023-10-01. The current date is 2025-01-30.
When you're not sure about some information, you say that you don't have the information and don't make up anything.
+If the user's question is not clear, ambiguous, or does not provide enough context for you to accurately answer the question, you do not try to answer it right away and you rather ask the user to clarify their request (e.g. \"What are some good restaurants around me?\" => \"Where are you?\" or \"When is the next flight to Tokyo\" => \"Where do you travel from?\")
```

+**Note 3**: flash_attn or flashinfer-python is preferred for better performance.
+
**_Installation_**

Make sure you install [`vLLM >= 0.8.4`](https://github.com/vllm-project/vllm/releases/tag/v0.8.4):

1. Spin up a server:

```
+vllm serve ZeroAgency/Zero-Mistral-24B --enable-prefix-caching --dtype bfloat16 --max-model-len 32768 --tool-call-parser mistral --enable-auto-tool-choice
```

**Note:** Running Zero-Mistral-24B on GPU requires ~55 GB of GPU RAM in bf16 or fp16.

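The ~55 GB figure lines up with a back-of-envelope estimate: 24B parameters at 2 bytes each in bf16/fp16 is 48 GB for the weights alone, with KV cache and runtime overhead accounting for the rest. A quick sanity check (illustrative arithmetic only, not part of the card):

```python
# Rough GPU-memory estimate for a 24B-parameter model served in bf16/fp16.
params = 24e9        # parameter count
bytes_per_param = 2  # bf16 and fp16 both use 2 bytes per parameter

weights_gb = params * bytes_per_param / 1e9
print(f"weights alone: {weights_gb:.0f} GB")  # → weights alone: 48 GB

# KV cache and runtime overhead add several more GB, landing near ~55 GB.
```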
messages = [
    {
        "role": "system",
+        "content": """Ты виртуальный ассистент. Ты отвечаешь на вопросы людей, помогаешь им и поддерживаешь. Ты создан, чтобы быть полезным, безобидным и честным. Ты отвечаешь на том языке, на котором был задан вопрос или попросил пользователь. Реши задачу по инструкции ниже. Не извиняйся, не строй диалог.
+
+Answer in the following format:
+<think>Reasoning: ...</think>
+..."""
    },
+    {  # Task from https://3.shkolkovo.online/catalog/2552/93150
        "role": "user",
+        "content": """Первый рабочий за час делает на 9 деталей больше, чем второй, и выполняет заказ, состоящий из 216 деталей, на 4 часа быстрее, чем второй рабочий, выполняющий такой же заказ. Сколько деталей в час делает первый рабочий?"""
    },
]

response = requests.post(url, headers=headers, data=json.dumps(data))
print(response.json()["choices"][0]["message"]["content"])

+#<think> Пусть x — количество деталей, которые делает второй рабочий за час. Тогда первый рабочий делает x + 9 деталей за час. Составим таблицу: Первый рабочий Второй рабочий Количество деталей в час x + 9 x Количество часов 216 : (x + 9) 216 : x Разность количества часов 4 216 : (x + 9) − 216 : x = 4 216x − 216(x + 9) = 4x(x + 9) 216x − 216x − 1944 = 4x^2 + 36x 1944 = 4x^2 + 36x 4x^2 + 36x − 1944 = 0 D = 36^2 + 4 · 4 · 1944 = 1296 + 31104 = 32400 = 180^2 x1 = −36 + 180 : 8 = 144 : 8 = 18 x2 = −36 − 180 : 8 < 0 — не подходит по смыслу задачи. Тогда первый рабочий делает 18 + 9 = 27 деталей в час. </think>
+#27
```
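The worked answer above can be verified with a few lines of Python (a standalone check of the arithmetic, not part of the original card):

```python
# Verify: the first worker makes 9 more parts per hour than the second and
# finishes a 216-part order 4 hours faster. Find the first worker's rate.
solutions = [
    x + 9                            # first worker's parts/hour
    for x in range(1, 217)           # x = second worker's parts/hour
    if 216 / x - 216 / (x + 9) == 4  # total-time difference must be 4 hours
]
print(solutions)  # → [27]
```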

#### Offline

```py
from vllm import LLM
from vllm.sampling_params import SamplingParams
from datetime import datetime, timedelta

+# note that running this model on GPU requires over 60 GB of GPU RAM
+llm = LLM(model="ZeroAgency/Zero-Mistral-24B", tokenizer_mode="mistral", tensor_parallel_size=8)
+
+SYSTEM_PROMPT = """Ты виртуальный ассистент. Ты отвечаешь на вопросы людей, помогаешь им и поддерживаешь. Ты создан, чтобы быть полезным, безобидным и честным. Ты отвечаешь на том языке, на котором был задан вопрос или попросил пользователь.
+
+Answer in the following format:
+<think>Reasoning: ...</think>
+..."""
+
+user_prompt = """Что больше 9.9 или 9.11?"""

messages = [
    {
    },
]

+sampling_params = SamplingParams(max_tokens=512, temperature=0.0, top_p=1, top_k=-1)
outputs = llm.chat(messages, sampling_params=sampling_params)

print(outputs[0].outputs[0].text)
+#<think> Задача: Сравните 9.9 и 9.11 для определения того, какой из них больше Подход: Десятичное сравнение с выравниванием десятичных точек Сложность: Низкий к среднему Я должен тщательно выровнять десятичные точки и сравнить цифры по месту. 1. Выровнять десятичные точки: 9.90 9.11 2. Сравните целые числа: оба имеют 9, поэтому они равны 3. Сравните десятые места: 9.90 имеет 9, 9.11 имеет 1 9 > 1, поэтому 9.90 больше 4. Сравните сотые места: 9.90 имеет 0, 9.11 имеет 1 0 < 1, но это не имеет значения, поскольку десятое место уже определило большее число<reflection>Я правильно выровнял десятичные точки и сравнил цифры по месту. Я заметил, что десятое место (9 против 1) определило, что 9.9 больше, чем 9.11. Сотые места не были необходимы для этого сравнения.</reflection> <self_improvement>В будущих сравнениях я буду уделять первоочередное внимание самым левым цифрам, где есть разница, чтобы оптимизировать процесс сравнения.</self_improvement> </think> 9.9 больше, чем 9.11. Когда вы сравниваете десятичные числа, вы начинаете с целых чисел, затем переходите к десятым местам, сотым местам и так далее. В этом случае 9.9 имеет 9 в десятом месте, в то время как 9.11 имеет 1 в десятом месте. Поскольку 9 > 1, 9.9 больше, чем 9.11.
```
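The expected answer is easy to confirm: as decimal numbers, 9.9 = 9.90 > 9.11, since the tenths digit decides the comparison:

```python
# Sanity check: compare 9.9 and 9.11 as decimal numbers.
print(9.9 > 9.11)  # → True: 9.90 has 9 in the tenths place vs 1 in 9.11
```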

### Transformers

    {"role": "user", "content": "Give me 5 non-formal ways to say 'See you later' in French."},
]
chatbot = pipeline("text-generation", model="ZeroAgency/Zero-Mistral-24B", max_new_tokens=256, torch_dtype=torch.bfloat16)
+response = chatbot(messages, temperature=0.1)
+print(response[0]['generated_text'][1]['content'])
+# 9.9 больше, чем 9.11.
+
```

### llama-server