Pure Quantized Versions of the Official Mistral 7B v0.3
These are quantized versions of the official Mistral 7B v0.3 model, created locally with the official llama.cpp toolchain.
They are completely unmodified: no edits, just direct quantizations of the original weights. That's why they're called pure GGUFs.
Note: This is an older model and may not produce accurate responses! It is kept here as an archive of the original model, for anyone to use. It still holds up impressively well, especially considering when it was open-sourced.
The model is named Mistral-7B-Instruct-Latest-Pure-GGUF because v0.3 is the last version released of this Mistral model, and I find that putting v0.3 in the name looks ugly.
Why do this?
Many quantized models you find online include minor tweaks that the user may not want. Most of these tweaks, the daily user will never be bothered by or even notice. But I find that knowing a GGUF is based purely on the official weights is better. I'd rather use my own quants that I know are pure.
TL;DR
Straight from the official weights, purely quantized, pure GGUF.
What quantization should I pick?
The best quantization depends entirely on your hardware.
If you have more than 12GB of VRAM:
- Use anything at or above q5_k_m for the closest-to-original quality. q8_0 is near-lossless if you can fit it.
If you have 8–12GB of VRAM:
- Use q4_k_m; it is the best choice for most daily users.
At a context size of 32k, the q4_k_m quant of this model uses around 9GB of VRAM.
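If you load these quants through the llama-cpp-python bindings (just one option; llama.cpp's own CLI works fine too), a minimal sketch looks like this. The filename and settings are placeholders, assuming the q4_k_m file from this repo:

```python
from llama_cpp import Llama

# Load the Q4_K_M quant with a 32k context window; n_gpu_layers=-1
# offloads all layers to the GPU (drop it to run CPU-only).
llm = Llama(
    model_path="Mistral-7B-Instruct-v0.3-Q4_K_M.gguf",  # placeholder filename
    n_ctx=32768,
    n_gpu_layers=-1,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain Machine Learning to me in a nutshell."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```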
Quantization process
- Weights are pulled directly from the official mistralai/Mistral-7B-Instruct-v0.3 repository on HuggingFace.
- Ignored files: .git, .gitattributes, LICENSE, README. They're not needed.
- Converted to F16 GGUF using convert_hf_to_gguf.py from the official llama.cpp source.
- Quantized locally using llama-quantize.
- Uploaded directly to HuggingFace.
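For reference, a minimal sketch of that pipeline, assuming llama.cpp is cloned and built in the working directory (filenames and paths are placeholders, not the exact commands used):

```python
import subprocess
from huggingface_hub import snapshot_download

# Pull the official weights, skipping the files listed above
src = snapshot_download(
    repo_id="mistralai/Mistral-7B-Instruct-v0.3",
    ignore_patterns=[".git*", "LICENSE", "README*"],
    local_dir="Mistral-7B-Instruct-v0.3",
)

# Convert the HF weights to an F16 GGUF with llama.cpp's converter
f16 = "mistral-7b-instruct-v0.3-f16.gguf"
subprocess.run(
    ["python", "convert_hf_to_gguf.py", src, "--outtype", "f16", "--outfile", f16],
    check=True,
)

# Quantize the F16 GGUF down to, e.g., Q4_K_M
subprocess.run(
    ["./llama-quantize", f16, "mistral-7b-instruct-v0.3-q4_k_m.gguf", "Q4_K_M"],
    check=True,
)
```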
Official Original Model README
library_name: vllm
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.3
inference: false
extra_gated_description: If you want to learn more about how we process your personal data, please read our Privacy Policy.
tags:
- vllm
- mistral-common
Model Card for Mistral-7B-Instruct-v0.3
The Mistral-7B-Instruct-v0.3 Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-7B-v0.3.
Mistral-7B-v0.3 has the following changes compared to Mistral-7B-v0.2
- Extended vocabulary to 32768
- Supports v3 Tokenizer
- Supports function calling
Installation
It is recommended to use mistralai/Mistral-7B-Instruct-v0.3 with mistral-inference. For HF transformers code snippets, please keep scrolling.
pip install mistral_inference
Download
from huggingface_hub import snapshot_download
from pathlib import Path
mistral_models_path = Path.home().joinpath('mistral_models', '7B-Instruct-v0.3')
mistral_models_path.mkdir(parents=True, exist_ok=True)
snapshot_download(repo_id="mistralai/Mistral-7B-Instruct-v0.3", allow_patterns=["params.json", "consolidated.safetensors", "tokenizer.model.v3"], local_dir=mistral_models_path)
Chat
After installing mistral_inference, a mistral-chat CLI command should be available in your environment. You can chat with the model using
mistral-chat $HOME/mistral_models/7B-Instruct-v0.3 --instruct --max_tokens 256
Instruct following
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tokenizer.model.v3")
model = Transformer.from_folder(mistral_models_path)
completion_request = ChatCompletionRequest(messages=[UserMessage(content="Explain Machine Learning to me in a nutshell.")])
tokens = tokenizer.encode_chat_completion(completion_request).tokens
out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0])
print(result)
Function calling
from mistral_common.protocol.instruct.tool_calls import Function, Tool
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tokenizer.model.v3")
model = Transformer.from_folder(mistral_models_path)
completion_request = ChatCompletionRequest(
tools=[
Tool(
function=Function(
name="get_current_weather",
description="Get the current weather",
parameters={
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA",
},
"format": {
"type": "string",
"enum": ["celsius", "fahrenheit"],
"description": "The temperature unit to use. Infer this from the users location.",
},
},
"required": ["location", "format"],
},
)
)
],
messages=[
UserMessage(content="What's the weather like today in Paris?"),
],
)
tokens = tokenizer.encode_chat_completion(completion_request).tokens
out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0])
print(result)
Generate with transformers
If you want to use Hugging Face transformers to generate text, you can do something like this.
from transformers import pipeline
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
chatbot = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.3")
chatbot(messages)
Function calling with transformers
To use this example, you'll need transformers version 4.42.0 or higher. Please see the function calling guide in the transformers docs for more information.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_id = "mistralai/Mistral-7B-Instruct-v0.3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
def get_current_weather(location: str, format: str):
"""
Get the current weather
Args:
location: The city and state, e.g. San Francisco, CA
format: The temperature unit to use. Infer this from the users location. (choices: ["celsius", "fahrenheit"])
"""
pass
conversation = [{"role": "user", "content": "What's the weather like in Paris?"}]
tools = [get_current_weather]
# format and tokenize the tool use prompt
inputs = tokenizer.apply_chat_template(
conversation,
tools=tools,
add_generation_prompt=True,
return_dict=True,
return_tensors="pt",
)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")
inputs.to(model.device)
outputs = model.generate(**inputs, max_new_tokens=1000)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
Note that, for reasons of space, this example does not show a complete cycle of calling a tool and adding the tool call and tool results to the chat history so that the model can use them in its next generation. For a full tool calling example, please see the function calling guide, and note that Mistral does use tool call IDs, so these must be included in your tool calls and tool results. They should be exactly 9 alphanumeric characters.
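To make that cycle concrete, here is a minimal sketch (not part of the official README) continuing from the variables above; how the ID is generated here is purely illustrative, only the 9-alphanumeric-character format matters:

```python
import secrets

# Illustrative helper: Mistral tool call IDs must be exactly
# 9 alphanumeric characters.
tool_call_id = "".join(
    secrets.choice("abcdefghijklmnopqrstuvwxyz0123456789") for _ in range(9)
)

# Append the model's tool call to the history, with its ID...
conversation.append({
    "role": "assistant",
    "tool_calls": [{
        "type": "function",
        "id": tool_call_id,
        "function": {
            "name": "get_current_weather",
            "arguments": {"location": "Paris, FR", "format": "celsius"},
        },
    }],
})

# ...then the tool's result, referencing the same ID.
conversation.append({
    "role": "tool",
    "tool_call_id": tool_call_id,
    "name": "get_current_weather",
    "content": "22.0",
})

# Re-apply the chat template and generate again so the model can
# answer using the tool result.
inputs = tokenizer.apply_chat_template(
    conversation,
    tools=tools,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt",
)
inputs.to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```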
Limitations
The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance. It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Jean-Malo Delignon, Jia Li, Justus Murke, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Nicolas Schuhl, Patrick von Platen, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibaut Lavril, Timothée Lacroix, Théophile Gervet, Thomas Wang, Valera Nemychnikova, William El Sayed, William Marshall