How to use with the llama-cpp-python library
# !pip install llama-cpp-python

from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="cognitivetech/Mistral-7b-Inst-0.2-Bulleted-Notes_GGUF",
    filename="*Q4_K_M.gguf",  # illustrative pattern; pick one of the GGUF quantizations in the repo
)

llm.create_chat_completion(
    messages=[
        # messages must be a list of chat messages, not a plain string; prompt is an example
        {"role": "user", "content": "Write comprehensive bulleted notes summarizing the text below.\n\n<text>"},
    ],
)
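
If a GGUF file has already been downloaded, it can also be loaded directly. The sketch below mirrors the Modelfile parameters further down (context length 8000, all layers offloaded to GPU, up to 4000 predicted tokens); the file path, quantization choice, and prompt are placeholders, not values from this repo.

from llama_cpp import Llama

# Load a local GGUF file directly (path is a placeholder; use the file you downloaded).
# n_ctx and n_gpu_layers mirror the Modelfile's num_ctx 8000 and num_gpu -1.
llm = Llama(
    model_path="./mistral-7b-inst-0.2-bulleted-notes.Q4_K_M.gguf",
    n_ctx=8000,
    n_gpu_layers=-1,  # offload all layers to the GPU when one is available
)

output = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Write comprehensive bulleted notes summarizing the text below.\n\n<text>"},
    ],
    max_tokens=4000,  # mirrors the Modelfile's num_predict 4000
)
print(output["choices"][0]["message"]["content"])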

Mistral 7b Instruct v0.2 - Bulleted Notes

https://github.com/cognitivetech/ollama-ebook-summary

Template: ChatML

Modelfile:

TEMPLATE """<|im_start|>system
<|im_start|>user
{{ .Prompt }} <|im_end|>
<|im_start|>assistant
{{ .Response }}<|im_end|>
"""
PARAMETER stop <|im_start|>
PARAMETER stop <|im_end|>
PARAMETER num_ctx 8000
PARAMETER num_gpu -1
PARAMETER num_predict 4000
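
The Modelfile above targets Ollama. Once it has been registered locally (it also needs a FROM line pointing at a downloaded GGUF file, then `ollama create <name> -f Modelfile`), the model can be called from Python. Below is a minimal sketch using the ollama Python client; the model name mistral-bulleted-notes and the prompt are examples, not names defined by this repo.

# pip install ollama
import ollama

# Assumes the model was registered locally from the Modelfile above, e.g.:
#   ollama create mistral-bulleted-notes -f Modelfile
response = ollama.chat(
    model="mistral-bulleted-notes",  # example name chosen at `ollama create` time
    messages=[
        {"role": "user", "content": "Write comprehensive bulleted notes summarizing the text below.\n\n<text>"},
    ],
)
print(response["message"]["content"])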

Uploaded model

  • Developed by: cognitivetech
  • License: apache-2.0
  • Finetuned from model: mistralai/Mistral-7B-Instruct-v0.2

This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.

Downloads last month: 841
Format: GGUF
Model size: 7B params
Architecture: llama

Available quantizations: 2-bit, 4-bit, 5-bit, 6-bit, 8-bit
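
The exact GGUF filenames for these quantizations are not listed here; one way to see them is to list the repository's files. A minimal sketch using the huggingface_hub client:

# pip install huggingface_hub
from huggingface_hub import list_repo_files

# Print the GGUF files available in the repo to pick a quantization
for name in list_repo_files("cognitivetech/Mistral-7b-Inst-0.2-Bulleted-Notes_GGUF"):
    if name.endswith(".gguf"):
        print(name)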

