---
language:
- ru
license: other
license_name: mrl
license_link: https://mistral.ai/licenses/MRL-0.1.md
extra_gated_description: >-
  If you want to learn more about how we process your personal data, please read
  our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
library_name: transformers
---

# Model Card for Mistral-Small-Instruct-2409

Mistral-Small-Instruct-2409 is an instruction fine-tuned model with the following characteristics:

- 22B parameters
- Vocabulary size of 32768
- Supports function calling
- 32k sequence length

## Usage Examples

### vLLM (recommended)

We recommend using this model with the [vLLM library](https://github.com/vllm-project/vllm)
to implement production-ready inference pipelines.

**_Installation_**

Make sure you install `vLLM >= v0.6.1.post1`:

```
pip install --upgrade vllm
```

Also make sure you have `mistral_common >= 1.4.1` installed:

```
pip install --upgrade mistral_common
```

You can also make use of a ready-to-go [docker image](https://hub.docker.com/layers/vllm/vllm-openai/latest/images/sha256-de9032a92ffea7b5c007dad80b38fd44aac11eddc31c435f8e52f3b7404bbf39?context=explore).
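
For the Docker route, a minimal sketch following vLLM's documented usage (the Hugging Face token, cache mount, and image tag are illustrative and should be adapted to your setup):

```
docker run --gpus all \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HUGGING_FACE_HUB_TOKEN=<your-token>" \
    -p 8000:8000 --ipc=host \
    vllm/vllm-openai:latest \
    --model mistralai/Mistral-Small-Instruct-2409 \
    --tokenizer_mode mistral --config_format mistral --load_format mistral
```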

**_Offline_**

```py
from vllm import LLM
from vllm.sampling_params import SamplingParams

model_name = "mistralai/Mistral-Small-Instruct-2409"

sampling_params = SamplingParams(max_tokens=8192)

# Note: running Mistral-Small on a single GPU requires at least 44 GB of GPU RAM.
# To divide the GPU requirement over multiple devices, add e.g. `tensor_parallel_size=2`.
llm = LLM(model=model_name, tokenizer_mode="mistral", config_format="mistral", load_format="mistral")

prompt = "How often does the letter r occur in Mistral?"

messages = [
    {
        "role": "user",
        "content": prompt
    },
]

outputs = llm.chat(messages, sampling_params=sampling_params)

print(outputs[0].outputs[0].text)
```
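
A note on the flags: `tokenizer_mode="mistral"` makes vLLM tokenize with `mistral_common` rather than a Hugging Face tokenizer, while `config_format="mistral"` and `load_format="mistral"` load the model from the consolidated `params.json` + `consolidated.safetensors` format shipped in this repository.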

**_Server_**

You can also use Mistral Small in a server/client setting.

1. Spin up a server:

```
vllm serve mistralai/Mistral-Small-Instruct-2409 --tokenizer_mode mistral --config_format mistral --load_format mistral
```

**Note:** Running Mistral-Small on a single GPU requires at least 44 GB of GPU RAM.

If you want to divide the GPU requirement over multiple devices, add e.g. `--tensor-parallel-size 2`, as shown below.
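
For instance, a two-GPU launch might look like this (a sketch; `--tensor-parallel-size` is vLLM's flag for splitting the model across devices):

```
vllm serve mistralai/Mistral-Small-Instruct-2409 \
    --tokenizer_mode mistral --config_format mistral --load_format mistral \
    --tensor-parallel-size 2
```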

2. And ping the client:

```
curl --location 'http://<your-node-url>:8000/v1/chat/completions' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer token' \
--data '{
    "model": "mistralai/Mistral-Small-Instruct-2409",
    "messages": [
      {
        "role": "user",
        "content": "How often does the letter r occur in Mistral?"
      }
    ]
}'
```
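
Since the vLLM server exposes an OpenAI-compatible API, you can also query it with the official `openai` Python client. A minimal sketch, assuming the server above is reachable at `http://localhost:8000/v1` (the API key is a placeholder; vLLM ignores it unless configured otherwise):

```py
from openai import OpenAI

# Point the client at the local vLLM server (OpenAI-compatible endpoint).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="token")

response = client.chat.completions.create(
    model="mistralai/Mistral-Small-Instruct-2409",
    messages=[
        {"role": "user", "content": "How often does the letter r occur in Mistral?"}
    ],
)

print(response.choices[0].message.content)
```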

### Mistral-inference

We recommend using [mistral-inference](https://github.com/mistralai/mistral-inference) to quickly try out / "vibe-check" the model.

**_Install_**

Make sure to have `mistral_inference >= 1.4.1` installed:

```
pip install mistral_inference --upgrade
```

**_Download_**

```py
from huggingface_hub import snapshot_download
from pathlib import Path

# Download the weights, config and tokenizer into ~/mistral_models/22B-Instruct-Small
mistral_models_path = Path.home().joinpath('mistral_models', '22B-Instruct-Small')
mistral_models_path.mkdir(parents=True, exist_ok=True)

snapshot_download(repo_id="mistralai/Mistral-Small-Instruct-2409", allow_patterns=["params.json", "consolidated.safetensors", "tokenizer.model.v3"], local_dir=mistral_models_path)
```
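
Here `allow_patterns` restricts the download to the three files `mistral_inference` needs: the consolidated weights, the `params.json` config, and the v3 tokenizer.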

### Chat

After installing `mistral_inference`, a `mistral-chat` CLI command should be available in your environment. You can chat with the model using:

```
mistral-chat $HOME/mistral_models/22B-Instruct-Small --instruct --max_tokens 256
```

### Instruct following

```py
from pathlib import Path

from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate

from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest

# Path from the Download step above
mistral_models_path = Path.home().joinpath('mistral_models', '22B-Instruct-Small')

tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tokenizer.model.v3")
model = Transformer.from_folder(mistral_models_path)

completion_request = ChatCompletionRequest(messages=[UserMessage(content="How often does the letter r occur in Mistral?")])

tokens = tokenizer.encode_chat_completion(completion_request).tokens

out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0])

print(result)
```

### Function calling

```py
from pathlib import Path

from mistral_common.protocol.instruct.tool_calls import Function, Tool
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate

from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest

# Path from the Download step above
mistral_models_path = Path.home().joinpath('mistral_models', '22B-Instruct-Small')

tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tokenizer.model.v3")
model = Transformer.from_folder(mistral_models_path)

completion_request = ChatCompletionRequest(
    tools=[
        Tool(
            function=Function(
                name="get_current_weather",
                description="Get the current weather",
                parameters={
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "The city and state, e.g. San Francisco, CA",
                        },
                        "format": {
                            "type": "string",
                            "enum": ["celsius", "fahrenheit"],
                            "description": "The temperature unit to use. Infer this from the user's location.",
                        },
                    },
                    "required": ["location", "format"],
                },
            )
        )
    ],
    messages=[
        UserMessage(content="What's the weather like today in Paris?"),
    ],
)

tokens = tokenizer.encode_chat_completion(completion_request).tokens

out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0])

print(result)
```
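
For a tool-enabled request like this, `result` is the raw decoded text of the model's tool call, typically containing a JSON list of calls. A rough post-processing sketch follows; the exact output format (including whether a `[TOOL_CALLS]` marker survives decoding) depends on the tokenizer version, so inspect `result` before relying on it:

```py
import json

# Illustrative only: strip a possible tool-call marker, then parse the JSON
# list of calls. Adjust to whatever `result` actually contains in your run.
payload = result.replace("[TOOL_CALLS]", "").strip()
for call in json.loads(payload):
    print(call["name"], call["arguments"])
```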

### Usage in Hugging Face Transformers

You can also use the Hugging Face `transformers` library to run inference using various chat templates, or to fine-tune the model.
Example for inference:

```python
from transformers import LlamaTokenizerFast, MistralForCausalLM
import torch

device = "cuda"
tokenizer = LlamaTokenizerFast.from_pretrained('mistralai/Mistral-Small-Instruct-2409')
tokenizer.pad_token = tokenizer.eos_token

model = MistralForCausalLM.from_pretrained('mistralai/Mistral-Small-Instruct-2409', torch_dtype=torch.bfloat16)
model = model.to(device)

prompt = "How often does the letter r occur in Mistral?"

messages = [
    {"role": "user", "content": prompt},
]

model_input = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt").to(device)
gen = model.generate(model_input, max_new_tokens=150)
dec = tokenizer.batch_decode(gen)
print(dec)
```

And you should obtain:
```text
<s>
[INST]
How often does the letter r occur in Mistral?
[/INST]
To determine how often the letter "r" occurs in the word "Mistral,"
we can simply count the instances of "r" in the word.
The word "Mistral" is broken down as follows:
- M
- i
- s
- t
- r
- a
- l
Counting the "r"s, we find that there is only one "r" in "Mistral."
Therefore, the letter "r" occurs once in the word "Mistral."
</s>
```
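
As a variant, the generic `Auto*` classes also work and let `transformers` resolve the right implementations from the model config. A minimal sketch, assuming `accelerate` is installed so that `device_map="auto"` can spread the 22B weights across available devices:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Auto classes resolve to the Mistral implementations from the model config.
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-Small-Instruct-2409")
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-Small-Instruct-2409",
    torch_dtype=torch.bfloat16,
    device_map="auto",  # shards/offloads the weights across available devices
)

messages = [{"role": "user", "content": "How often does the letter r occur in Mistral?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=150)[0]))
```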

## The Mistral AI Team

Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Alok Kothari, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Augustin Garreau, Austin Birky, Bam4d, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Carole Rambaud, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Diogo Costa, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gaspard Blanchet, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Henri Roussez, Hichem Sattouf, Ian Mack, Jean-Malo Delignon, Jessica Chudnovsky, Justus Murke, Kartik Khandelwal, Lawrence Stewart, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Marjorie Janiewicz, Mickaël Seznec, Nicolas Schuhl, Niklas Muhs, Olivier de Garrigues, Patrick von Platen, Paul Jacob, Pauline Buche, Pavan Kumar Reddy, Perry Savas, Pierre Stock, Romain Sauvestre, Sagar Vaze, Sandeep Subramanian, Saurabh Garg, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibault Schueller, Thibaut Lavril, Thomas Wang, Théophile Gervet, Timothée Lacroix, Valera Nemychnikova, Wendy Shang, William El Sayed, William Marshall