Tags: GGUF, imatrix, conversational
Use from the llama-cpp-python library
# !pip install llama-cpp-python

from llama_cpp import Llama

llm = Llama.from_pretrained(
	repo_id="MarsupialAI/Lusca-33B_iMat_GGUF",
	filename="*Q4_K_M.gguf",  # example glob; substitute the quant file you want from this repo
)

response = llm.create_chat_completion(
	messages=[
		{"role": "user", "content": "Write a short introduction for this model."},
	]
)
print(response["choices"][0]["message"]["content"])

iMatrix GGUFs for https://huggingface.co/MarsupialAI/Lusca-33B

iMatrix generated using Kalomaze's groups_merged.txt calibration data
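
For context, importance matrices like this are typically produced with upstream llama.cpp's llama-imatrix tool against a calibration text and then passed to llama-quantize. A rough sketch of that workflow, not the author's exact procedure; file names here are hypothetical and exact binary names/flags depend on your llama.cpp build:

import subprocess

# Compute the importance matrix from a calibration text such as groups_merged.txt.
subprocess.run([
	"llama-imatrix",
	"-m", "Lusca-33B-f16.gguf",   # hypothetical full-precision GGUF of the base model
	"-f", "groups_merged.txt",    # calibration data
	"-o", "imatrix.dat",
], check=True)

# Quantize the model, weighting quantization error by the importance matrix.
subprocess.run([
	"llama-quantize",
	"--imatrix", "imatrix.dat",
	"Lusca-33B-f16.gguf",
	"Lusca-33B-Q4_K_M.gguf",      # hypothetical output quant
	"Q4_K_M",
], check=True)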

Format: GGUF
Model size: 33B params
Architecture: llama
