AI-Engine/Mistral-Nemo-Instruct-2407-GGUF
Use from the llama-cpp-python library
# !pip install llama-cpp-python

from llama_cpp import Llama

llm = Llama.from_pretrained(
	repo_id="AI-Engine/Mistral-Nemo-Instruct-2407-GGUF",
	# Pick the quant you want; this glob is an example — check the repo's
	# file list for the exact .gguf filenames available.
	filename="*Q5_K_M.gguf",
)

response = llm.create_chat_completion(
	messages=[
		{"role": "user", "content": "Explain what a GGUF file is in one sentence."}
	]
)
print(response["choices"][0]["message"]["content"])

GGUF llama.cpp quantized version of mistralai/Mistral-Nemo-Instruct-2407.

Recommended Prompt Format (Mistral)

<s>[INST] Provide some context and/or instructions to the model. [/INST]
AI message goes here</s>
[INST] The user's message goes here [/INST]
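If you drive the model through raw text completion instead of the chat API, you apply this template yourself. A minimal sketch, assuming the llm object from the snippet above; the prompt contents are illustrative:

# Raw completion with the manually formatted Mistral prompt.
# Assumes `llm` was loaded as in the llama-cpp-python snippet above.
prompt = (
	"<s>[INST] You are a concise assistant. [/INST]"
	"Understood.</s>"
	"[INST] Summarize GGUF in one sentence. [/INST]"
)

output = llm(prompt, max_tokens=128, stop=["</s>", "[INST]"])
print(output["choices"][0]["text"])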

Quant version: llama.cpp release b3437, quantized with an importance matrix (imatrix).

Model size: 12B params
Architecture: llama
Available quantizations: 2-bit, 5-bit, 8-bit, 16-bit
