---
license: apache-2.0
base_model: mistralai/Mistral-6A-v1.6
tags:
  - mistral
  - mistral-6a
  - mistral-instruct
  - instruct
  - hf-inference-api
  - text-generation
  - transformer
inference: true
model_type: mistral
extra_gated_prompt: >
  If you want to learn more about how we process your personal data, please read our
  <a href="https://mistral.ai/terms/">Privacy Policy</a>.
---

# Model Card for Mistral-6A-v1.6

Mistral-6A-v1.6 is an instruct fine-tuned large language model, optimized for real-world use in production environments. It supports:

- 🤖 HF Inference API
- 🧠 Function calling
- 🔡 Tokenizer v3 with an extended vocabulary of 32,768 tokens

## Installation

We recommend using [mistral-inference](https://github.com/mistralai/mistral-inference):

```bash
pip install mistral_inference
```

### Download Weights

```python
from huggingface_hub import snapshot_download
from pathlib import Path

mistral_models_path = Path.home() / "mistral_models" / "6A-v1.6"
mistral_models_path.mkdir(parents=True, exist_ok=True)

snapshot_download(
    repo_id="mistralai/Mistral-6A-v1.6",
    allow_patterns=["params.json", "consolidated.safetensors", "tokenizer.model.v3"],
    local_dir=mistral_models_path
)
```

### Chat CLI

Once installed, start chatting instantly:

```bash
mistral-chat $HOME/mistral_models/6A-v1.6 --instruct --max_tokens 256
```

### Python Instruct Mode

```python
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest

tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tokenizer.model.v3")
model = Transformer.from_folder(mistral_models_path)

request = ChatCompletionRequest(messages=[UserMessage(content="Explain prompt-gramming.")])
tokens = tokenizer.encode_chat_completion(request).tokens

out_tokens, _ = generate(
    [tokens],
    model,
    max_tokens=64,
    temperature=0.0,
    eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id,
)
print(tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0]))
```

## Use with `transformers`

To generate completions with the Hugging Face `transformers` library:

```python
from transformers import pipeline

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Tell me a story about a robot dog."}
]

chatbot = pipeline("text-generation", model="mistralai/Mistral-6A-v1.6")
chatbot(messages)
```

### Advanced Function Calling (with `transformers` v4.42.0+)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "mistralai/Mistral-6A-v1.6"
tokenizer = AutoTokenizer.from_pretrained(model_id)

def get_current_weather(location: str, format: str):
    """
    Example tool: Get the current weather.

    Args:
        location (str): e.g. "San Francisco, CA"
        format (str): temperature format, "celsius" or "fahrenheit"
    """
    pass

conversation = [{"role": "user", "content": "What's the weather like in Tokyo?"}]
tools = [get_current_weather]

inputs = tokenizer.apply_chat_template(
    conversation,
    tools=tools,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt"
)

model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

inputs = inputs.to(model.device)
outputs = model.generate(**inputs, max_new_tokens=1000)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

> 🔔 **Note:** Full tool call support requires generating `tool_call` IDs and appending tool results to the conversation history. See the Transformers function calling guide for details.
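The round-trip described in the note can be sketched without running the model: after the model emits a tool call, record the call and its result as two extra messages before generating again. This is a minimal sketch, not from the model card; `make_tool_call_id` and `append_tool_result` are hypothetical helpers, and the 9-character alphanumeric ID format is what Mistral-style chat templates in `transformers` expect.

```python
import json
import random
import string

def make_tool_call_id() -> str:
    # Mistral-style chat templates expect exactly 9 alphanumeric characters.
    return "".join(random.choices(string.ascii_letters + string.digits, k=9))

def append_tool_result(conversation, tool_name, arguments, result):
    """Append an assistant tool call and the matching tool result to the history."""
    call_id = make_tool_call_id()
    # The assistant message records which tool was called and with what arguments.
    conversation.append({
        "role": "assistant",
        "tool_calls": [{
            "id": call_id,
            "type": "function",
            "function": {"name": tool_name, "arguments": arguments},
        }],
    })
    # The tool message carries the result, linked back via tool_call_id.
    conversation.append({
        "role": "tool",
        "tool_call_id": call_id,
        "name": tool_name,
        "content": json.dumps(result),
    })
    return conversation

conversation = [{"role": "user", "content": "What's the weather like in Tokyo?"}]
append_tool_result(
    conversation,
    "get_current_weather",
    {"location": "Tokyo, Japan", "format": "celsius"},
    {"temperature": 22},
)
# `conversation` can now be passed back through tokenizer.apply_chat_template(...)
# so the model can produce a final answer grounded in the tool result.
```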

## Limitations
This model is not equipped with moderation or safety filters. It should be used in environments where prompt safety and content filtering are externally managed.
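Since moderation is left to the deployment, one option is to gate the model's output behind an external check. This is a minimal sketch under stated assumptions: `is_flagged` is a hypothetical placeholder for whatever moderation service or classifier you actually use, and the banned-term list is illustrative only.

```python
def is_flagged(text: str) -> bool:
    # Placeholder moderation check: in practice, call an external
    # moderation API or classifier here instead of matching terms.
    banned = {"example-banned-term"}
    lowered = text.lower()
    return any(term in lowered for term in banned)

def moderated_reply(generate_fn, prompt: str) -> str:
    """Run generation, then withhold the reply if moderation flags it."""
    reply = generate_fn(prompt)
    return "[response withheld by moderation]" if is_flagged(reply) else reply

# Usage with a stubbed generator standing in for the model:
print(moderated_reply(lambda p: "A friendly answer.", "hello"))
```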

## Authors
Developed by the Mistral AI team:
Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Jean-Malo Delignon, Jia Li, Justus Murke, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Nicolas Schuhl, Patrick von Platen, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibaut Lavril, Timothée Lacroix, Théophile Gervet, Thomas Wang, Valera Nemychnikova, William El Sayed, William Marshall
