GGUF
conversational
How to use from the llama-cpp-python library
# !pip install llama-cpp-python

from llama_cpp import Llama

llm = Llama.from_pretrained(
	repo_id="FBTMAML/KAT-Dev-72B-Exp",
	filename="",  # fill in the GGUF filename for the quantization you want (4-bit, 8-bit, or 16-bit files are available in this repo)
)

llm.create_chat_completion(
	messages = [
		{"role": "user", "content": "What is the capital of France?"}
	]
)
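create_chat_completion returns an OpenAI-style completion dict. A minimal sketch of capturing and printing the generated text (the sampling values below are illustrative, not tuned for this model):

response = llm.create_chat_completion(
	messages=[{"role": "user", "content": "What is the capital of France?"}],
	max_tokens=256,   # cap on generated tokens; adjust as needed
	temperature=0.7,  # illustrative sampling temperature
)
print(response["choices"][0]["message"]["content"])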

After downloading, the split GGUF shards can be merged into a single file with llama-gguf-split:

.\llama.cpp\build\bin\Release\llama-gguf-split.exe --merge .\mymodels\model_name-00001-of-00004.gguf .\mymodels\model_name-merged.gguf
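The merged file can then be loaded straight from disk instead of via from_pretrained; a minimal sketch, assuming the merged path from the command above:

from llama_cpp import Llama

# Load the merged GGUF directly from the local path used in the merge example.
llm = Llama(
	model_path=r".\mymodels\model_name-merged.gguf",
	n_ctx=4096,  # illustrative context length; adjust to your hardware
)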

Base model: https://huggingface.co/Kwaipilot/KAT-Dev-72B-Exp

Model size: 73B params
Architecture: qwen2
Available quantizations: 4-bit, 8-bit, 16-bit
