Use with the llama-cpp-python library
# !pip install llama-cpp-python

from llama_cpp import Llama

llm = Llama.from_pretrained(
	repo_id="Kquant03/Michel-13B-GGUF",
	filename="",  # pick one of the GGUF quantization files from the repo
)
response = llm.create_chat_completion(
	messages=[
		{"role": "user", "content": "Write a short haiku about llamas."},
	]
)
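`create_chat_completion` returns an OpenAI-style chat-completion dict. A minimal sketch of pulling the assistant's reply out of it (the field layout below follows the OpenAI chat schema that llama-cpp-python mirrors; the sample dict is illustrative, not real model output):

```python
def extract_reply(response: dict) -> str:
    # The reply text lives in the first choice's "message" entry,
    # matching the OpenAI chat-completion schema.
    return response["choices"][0]["message"]["content"]

# Illustrative response shaped like llama-cpp-python's output:
sample = {
    "choices": [
        {"message": {"role": "assistant", "content": "Soft wool in the wind"}}
    ]
}
print(extract_reply(sample))  # → Soft wool in the wind
```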

Michel - 13B

Base model: NousHermes-Llama2-13B. An uncensored fine-tune focused on general tasks.

  • Uses Llama2 prompt template.
  • It has been fine-tuned with a newer dataset :)
  • Next one will be more interesting :}
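Since the card says the model uses the Llama2 prompt template, here is a minimal sketch of building that prompt by hand for the raw completion API (the system-prompt text is an example of mine, not from the card):

```python
def llama2_prompt(system: str, user: str) -> str:
    # Llama 2 chat format: the system prompt is wrapped in <<SYS>> tags
    # and the whole first turn in [INST] ... [/INST].
    return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

prompt = llama2_prompt("You are a helpful assistant.", "Hello!")
print(prompt)
```

The resulting string can be passed to `llm(prompt)` when you prefer raw completions over `create_chat_completion`, which applies the template for you.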

Thank you, h2m, for the compute.

Have Fun :)

Downloads last month: 53

Model details

  • Format: GGUF
  • Model size: 13B params
  • Architecture: llama

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
