| 1 |
+
---
|
| 2 |
+
license: mit
|
| 3 |
+
datasets:
|
| 4 |
+
- ZeroAgency/ru-big-russian-dataset
|
| 5 |
+
language:
|
| 6 |
+
- ru
|
| 7 |
+
- en
|
| 8 |
+
tags:
|
| 9 |
+
- mistral
|
| 10 |
+
- chat
|
| 11 |
+
- conversational
|
| 12 |
+
- transformers
|
| 13 |
+
inference:
|
| 14 |
+
parameters:
|
| 15 |
+
temperature: 0
|
| 16 |
+
pipeline_tag: text-generation
|
| 17 |
+
base_model:
|
| 18 |
+
- mistralai/Mistral-Small-3.1-24B-Instruct-2503
|
| 19 |
+
library_name: transformers
|
| 20 |
+
---
|

# Zero-Mistral-24B

Zero-Mistral is an improved text-only version of [mistralai/Mistral-Small-3.1-24B-Instruct-2503](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Instruct-2503),
primarily adapted for Russian and English. The vision features of the original Mistral model have been removed.
Training involved an SFT stage, primarily on the [Big Russian Dataset](https://huggingface.co/datasets/ZeroAgency/ru-big-russian-dataset)
and a proprietary dataset from [Shkolkovo.online](https://shkolkovo.online/?utm_source=hf).

The model has good math skills and some reasoning abilities.

The model retains the original Mistral long-context capabilities of up to 128k tokens.

## Model Details

### Model Description

- **Developed by:** [ZeroAgency.ru](https://zeroagency.ru/?utm_source=hf)
- **Funded by:** [ZeroAgency.ru](https://zeroagency.ru/?utm_source=hf) and [Shkolkovo.online](https://shkolkovo.online/?utm_source=hf)
- **Shared by:** [Alexander Kozhevnikov](https://t.me/ak_segfault) (developer)
- **Model type:** LLM
- **Language(s) (NLP):** Russian, English
- **License:** MIT
- **Finetuned from model:** [mistralai/Mistral-Small-3.1-24B-Instruct-2503](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Instruct-2503)

### 📚 Model versions

- [Merged 16-bit](https://huggingface.co/ZeroAgency/Zero-Mistral-24B) - original 16-bit merged version for transformers.
- [GGUF](https://huggingface.co/ZeroAgency/Zero-Mistral-24B-gguf) - various GGUF versions: BF16, F16, Q8_0, Q6_K, Q4_K_M, IQ4_XS, etc.

## 📊 Benchmarks for main 16-bit merged version

### MERA

**MERA score**: `0.623`

| Task | Result | Metric |
|--------------|----------------------|--------------------|
| LCS | 0.194 | Accuracy |
| RCB | 0.607 / 0.592 | Avg. F1 / Accuracy |
| USE | 0.452 | Grade Norm |
| RWSD | 0.55 | Accuracy |
| PARus | 0.942 | Accuracy |
| ruTiE | 0.868 | Accuracy |
| MultiQ | 0.781 / 0.629 | F1-score / EM |
| CheGeKa | 0.397 / 0.322 | F1 / EM |
| ruModAr | 0.971 | EM |
| MaMuRAMu | 0.832 | Accuracy |
| ruMultiAr | 0.354 | EM |
| ruCodeEval | 0 / 0 / 0 | pass@k `¯\_(ツ)_/¯`|
| MathLogicQA | 0.613 | Accuracy |
| ruWorldTree | 0.987 / 0.987 | Avg. F1 / Accuracy |
| ruOpenBookQA | 0.913 / 0.913 | Avg. F1 / Accuracy |


Scores on open tasks:

| Task | Result | Metric |
|--------------|---------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------|
| BPS | 0.981 | Accuracy |
| ruMMLU | 0.778 | Accuracy |
| SimpleAr | 0.997 | EM |
| ruHumanEval | 0.006 / 0.006 / 0.006 | pass@k `¯\_(ツ)_/¯` |
| ruHHH | 0.916 | Accuracy |
| ruHateSpeech | 0.834 | Accuracy |
| ruDetox | 0.341 / 0.843 / 0.624 / 0.66 | Overall average score (J) / Meaning preservation (SIM) / Fluency (FL) / Style transfer accuracy (STA) |
| ruEthics | [[0.386, 0.399, 0.41, 0.333, 0.327], [0.421, 0.427, 0.452, 0.375, 0.363], [0.653, 0.65, 0.697, 0.596, 0.573]] | 5 MCC |


## Usage

The model can be used with the following frameworks:
- [`vllm`](https://github.com/vllm-project/vllm): See [here](#vllm)
- [`transformers`](https://github.com/huggingface/transformers): See [here](#transformers)
- [`llama.cpp`](https://github.com/ggml-org/llama.cpp): See [here](#llama-server)

### Recommended system prompts

```python
prompts = {
    "generic": "Ты виртуальный ассистент. Ты отвечаешь на вопросы людей, помогаешь им и поддерживаешь. Ты создан, чтобы быть полезным, безобидным и честным. Ты отвечаешь на том языке, на котором был задан вопрос или попросил пользователь.",
    "think": """Ты виртуальный ассистент. Ты отвечаешь на вопросы людей, помогаешь им и поддерживаешь. Ты создан, чтобы быть полезным, безобидным и честным. Ты отвечаешь на том языке, на котором был задан вопрос или попросил пользователь.

Answer in the following format:
<think>Reasoning: ...</think>
...""",
    "task": "Ты виртуальный ассистент. Ты отвечаешь на вопросы людей, помогаешь им и поддерживаешь. Ты создан, чтобы быть полезным, безобидным и честным. Ты отвечаешь на том языке, на котором был задан вопрос или попросил пользователь. Реши задачу по инструкции ниже. Не извиняйся, не строй диалог.",
    "task_think": """Ты виртуальный ассистент. Ты отвечаешь на вопросы людей, помогаешь им и поддерживаешь. Ты создан, чтобы быть полезным, безобидным и честным. Ты отвечаешь на том языке, на котором был задан вопрос или попросил пользователь. Реши задачу по инструкции ниже. Не извиняйся, не строй диалог.

Answer in the following format:
<think>Reasoning: ...</think>
...""",
    "english_generic": """You are Mistral Small 3, a Large Language Model (LLM) created by Mistral AI, a French startup headquartered in Paris.
Your knowledge base was last updated on 2023-10-01. The current date is 2025-01-30.
When you're not sure about some information, you say that you don't have the information and don't make up anything.
If the user's question is not clear, ambiguous, or does not provide enough context for you to accurately answer the question, you do not try to answer it right away and you rather ask the user to clarify their request (e.g. \"What are some good restaurants around me?\" => \"Where are you?\" or \"When is the next flight to Tokyo\" => \"Where do you travel from?\")
""",
    "english_think": """You are Mistral Small 3, a Large Language Model (LLM) created by Mistral AI, a French startup headquartered in Paris.
Your knowledge base was last updated on 2023-10-01. The current date is 2025-01-30.
When you're not sure about some information, you say that you don't have the information and don't make up anything.
If the user's question is not clear, ambiguous, or does not provide enough context for you to accurately answer the question, you do not try to answer it right away and you rather ask the user to clarify their request (e.g. \"What are some good restaurants around me?\" => \"Where are you?\" or \"When is the next flight to Tokyo\" => \"Where do you travel from?\")

Answer in the following format:
<think>Reasoning: ...</think>
""",
}
```

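Since the `think` variants ask the model to wrap its reasoning in `<think>...</think>` tags, the final answer can be separated from the reasoning on the client side. A minimal sketch (plain Python; the sample response string is hypothetical, not actual model output):

```python
import re

def split_think(response: str) -> tuple[str, str]:
    """Split a model response into (reasoning, answer).

    Returns an empty reasoning string when no <think> block is present.
    """
    match = re.search(r"<think>(.*?)</think>", response, flags=re.DOTALL)
    if match is None:
        return "", response.strip()
    reasoning = match.group(1).strip()
    answer = response[match.end():].strip()
    return reasoning, answer

# Hypothetical output following the "think" format
sample = "<think>Reasoning: 2 + 2 is basic arithmetic.</think>\n4"
reasoning, answer = split_think(sample)
print(answer)  # 4
```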
### vLLM

We recommend using this model with the [vLLM library](https://github.com/vllm-project/vllm)
to implement production-ready inference pipelines.

**Note 1**: We recommend using a relatively low temperature, such as `temperature=0.15`.

**Note 2**: Make sure to add a system prompt to the model to best tailor it to your needs. If you want to use the model as a general assistant, we recommend the following
system prompt:

```
system_prompt = """You are Mistral Small 3, a Large Language Model (LLM) created by Mistral AI, a French startup headquartered in Paris.
Your knowledge base was last updated on 2023-10-01. The current date is 2025-01-30.
When you're not sure about some information, you say that you don't have the information and don't make up anything.
If the user's question is not clear, ambiguous, or does not provide enough context for you to accurately answer the question, you do not try to answer it right away and you rather ask the user to clarify their request (e.g. \"What are some good restaurants around me?\" => \"Where are you?\" or \"When is the next flight to Tokyo\" => \"Where do you travel from?\")"""
```

**_Installation_**

Make sure you install [`vLLM >= 0.8.4`](https://github.com/vllm-project/vllm/releases/tag/v0.8.4):

```
pip install --upgrade vllm
```

Also make sure you have [`mistral_common >= 1.5.4`](https://github.com/mistralai/mistral-common/releases/tag/v1.5.4) installed:

```
pip install --upgrade mistral_common
```

You can also use the ready-to-go [docker image](https://github.com/vllm-project/vllm/blob/main/Dockerfile) or one from [Docker Hub](https://hub.docker.com/r/vllm/vllm-openai/tags).

#### Server

We recommend using ZeroAgency/Zero-Mistral-24B in a server/client setting.

1. Spin up a server:

```
vllm serve ZeroAgency/Zero-Mistral-24B --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral --enable-auto-tool-choice
```

**Note:** Running Zero-Mistral-24B on GPU requires ~55 GB of GPU RAM in bf16 or fp16.


2. To query the server you can use a simple Python snippet.

```py
import requests
import json

url = "http://<your-server>:8000/v1/chat/completions"
headers = {"Content-Type": "application/json", "Authorization": "Bearer token"}

model = "ZeroAgency/Zero-Mistral-24B"

messages = [
    {
        "role": "system",
        "content": "You are a conversational agent that always answers straight to the point, always end your accurate response with an ASCII drawing of a cat."
    },
    {
        "role": "user",
        "content": "Give me 5 non-formal ways to say 'See you later' in French."
    },
]

data = {"model": model, "messages": messages}

response = requests.post(url, headers=headers, data=json.dumps(data))
print(response.json()["choices"][0]["message"]["content"])

# Sure, here are five non-formal ways to say "See you later" in French:
#
# 1. À plus tard
# 2. À plus
# 3. Salut
# 4. À toute
# 5. Bisous
#
# ```
#  /\_/\
# ( o.o )
#  > ^ <
# ```
```

### Function calling

Zero-Mistral-24B is excellent at function / tool calling tasks via vLLM. *E.g.:*

<details>
<summary>Example</summary>

```py
import requests
import json
from huggingface_hub import hf_hub_download
from datetime import datetime, timedelta

url = "http://<your-url>:8000/v1/chat/completions"
headers = {"Content-Type": "application/json", "Authorization": "Bearer token"}

model = "ZeroAgency/Zero-Mistral-24B"


def load_system_prompt(repo_id: str, filename: str) -> str:
    file_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(file_path, "r") as file:
        system_prompt = file.read()
    today = datetime.today().strftime("%Y-%m-%d")
    yesterday = (datetime.today() - timedelta(days=1)).strftime("%Y-%m-%d")
    model_name = repo_id.split("/")[-1]
    return system_prompt.format(name=model_name, today=today, yesterday=yesterday)


SYSTEM_PROMPT = load_system_prompt(model, "SYSTEM_PROMPT.txt")


tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {
                        "type": "string",
                        "description": "The city to find the weather for, e.g. 'San Francisco'",
                    },
                    "state": {
                        "type": "string",
                        "description": "The state abbreviation, e.g. 'CA' for California",
                    },
                    "unit": {
                        "type": "string",
                        "description": "The unit for temperature",
                        "enum": ["celsius", "fahrenheit"],
                    },
                },
                "required": ["city", "state", "unit"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "rewrite",
            "description": "Rewrite a given text for improved clarity",
            "parameters": {
                "type": "object",
                "properties": {
                    "text": {
                        "type": "string",
                        "description": "The input text to rewrite",
                    }
                },
            },
        },
    },
]

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {
        "role": "user",
        "content": "Could you please make the below article more concise?\n\nOpenAI is an artificial intelligence research laboratory consisting of the non-profit OpenAI Incorporated and its for-profit subsidiary corporation OpenAI Limited Partnership.",
    },
    {
        "role": "assistant",
        "content": "",
        "tool_calls": [
            {
                "id": "bbc5b7ede",
                "type": "function",
                "function": {
                    "name": "rewrite",
                    "arguments": '{"text": "OpenAI is an artificial intelligence research laboratory consisting of the non-profit OpenAI Incorporated and its for-profit subsidiary corporation OpenAI Limited Partnership."}',
                },
            }
        ],
    },
    {
        "role": "tool",
        "content": '{"action":"rewrite","outcome":"OpenAI is a FOR-profit company."}',
        "tool_call_id": "bbc5b7ede",
        "name": "rewrite",
    },
    {
        "role": "assistant",
        "content": "---\n\nOpenAI is a FOR-profit company.",
    },
    {
        "role": "user",
        "content": "Can you tell me what the temperature will be in Dallas, in Fahrenheit?",
    },
]

data = {"model": model, "messages": messages, "tools": tools}

response = requests.post(url, headers=headers, data=json.dumps(data))
print(response.json()["choices"][0]["message"]["tool_calls"])
# [{'id': '8PdihwL6d', 'type': 'function', 'function': {'name': 'get_current_weather', 'arguments': '{"city": "Dallas", "state": "TX", "unit": "fahrenheit"}'}}]
```

</details>

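When the model responds with a tool call, the client is responsible for executing the function and feeding the result back as a `tool` message. A minimal dispatch sketch (plain Python; `get_current_weather` here is a hypothetical stub, not a real weather API):

```python
import json

# Stub implementation of the tool declared above (a stand-in for real logic)
def get_current_weather(city: str, state: str, unit: str) -> str:
    # Hypothetical fixed reading, returned as a JSON string
    return json.dumps({"city": city, "state": state, "temperature": 85, "unit": unit})

TOOL_REGISTRY = {"get_current_weather": get_current_weather}

def dispatch_tool_call(tool_call: dict) -> dict:
    """Execute one tool call from an assistant message and build the 'tool' reply."""
    fn = tool_call["function"]
    result = TOOL_REGISTRY[fn["name"]](**json.loads(fn["arguments"]))
    return {
        "role": "tool",
        "name": fn["name"],
        "tool_call_id": tool_call["id"],
        "content": result,
    }

# Shape matches the tool_calls entry printed in the example above
call = {
    "id": "8PdihwL6d",
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "arguments": '{"city": "Dallas", "state": "TX", "unit": "fahrenheit"}',
    },
}
tool_message = dispatch_tool_call(call)
print(tool_message["content"])
```

The resulting `tool_message` is appended to `messages` and the request is re-sent so the model can phrase the final answer.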
#### Offline

```py
from vllm import LLM
from vllm.sampling_params import SamplingParams

SYSTEM_PROMPT = "You are a conversational agent that always answers straight to the point, always end your accurate response with an ASCII drawing of a cat."

user_prompt = "Give me 5 non-formal ways to say 'See you later' in French."

messages = [
    {
        "role": "system",
        "content": SYSTEM_PROMPT
    },
    {
        "role": "user",
        "content": user_prompt
    },
]

# note that running this model on GPU requires over 60 GB of GPU RAM
llm = LLM(model="ZeroAgency/Zero-Mistral-24B", tokenizer_mode="mistral", tensor_parallel_size=8)

sampling_params = SamplingParams(max_tokens=512, temperature=0.15)
outputs = llm.chat(messages, sampling_params=sampling_params)

print(outputs[0].outputs[0].text)
# Sure, here are five non-formal ways to say "See you later" in French:
#
# 1. À plus tard
# 2. À plus
# 3. Salut
# 4. À toute
# 5. Bisous
#
# ```
#  /\_/\
# ( o.o )
#  > ^ <
# ```
```

### Transformers

If you want to use Hugging Face transformers to generate text, you can do something like this.

```py
from transformers import pipeline
import torch

messages = [
    {"role": "user", "content": "Give me 5 non-formal ways to say 'See you later' in French."},
]
chatbot = pipeline("text-generation", model="ZeroAgency/Zero-Mistral-24B", max_new_tokens=256, torch_dtype=torch.bfloat16)
chatbot(messages)
```


## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** 8x H200
- **Hours used:** 29.5
- **Cloud Provider:** Runpod
- **Compute Region:** US-DE
- **Carbon Emitted:** `¯\_(ツ)_/¯`