Use this model with the llama-cpp-python library
# !pip install llama-cpp-python

from llama_cpp import Llama

# Download a GGUF file from the repo and load it. Set `filename` to one of
# the .gguf files actually present in the repo (or a glob pattern that
# matches exactly one of them); the available quant names vary.
llm = Llama.from_pretrained(
	repo_id="MarsupialAI/JerseyDevil-14b_iMatrix_GGUF",
	filename="*.gguf",  # narrow this to a single quant file
)

# Simple text completion from a prompt.
output = llm(
	"Once upon a time,",
	max_tokens=512,  # maximum number of tokens to generate
	echo=True,       # include the prompt in the returned text
)
print(output)
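
llama-cpp-python also exposes a chat-style API. A minimal sketch, assuming the same repo and a single matching quant file; the chat template applied comes from the GGUF metadata if present, otherwise llama-cpp-python falls back to a default:

from llama_cpp import Llama

llm = Llama.from_pretrained(
	repo_id="MarsupialAI/JerseyDevil-14b_iMatrix_GGUF",
	filename="*.gguf",  # narrow to one quant file, as above
)

# Chat-style completion; returns an OpenAI-style response dict.
response = llm.create_chat_completion(
	messages=[
		{"role": "system", "content": "You are a helpful assistant."},
		{"role": "user", "content": "Tell me a short story."},
	],
	max_tokens=256,
)
print(response["choices"][0]["message"]["content"])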


See the llama-cpp-python documentation for more information.

GGUF quantizations of Jersey Devil 14b: https://huggingface.co/MarsupialAI/JerseyDevil-14b

These iMatrix GGUFs were generated using Kalomaze's semi-random groups_merged.txt as the importance-matrix calibration data.
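
For reference, iMatrix quants of this kind are typically produced with llama.cpp's imatrix tooling. A rough sketch of that workflow; the binary names changed from `imatrix`/`quantize` to `llama-imatrix`/`llama-quantize` in newer llama.cpp builds, and the file names below are illustrative, not taken from this repo:

# Compute an importance matrix over the calibration text,
# then quantize the base F16 GGUF using it.
./llama-imatrix -m JerseyDevil-14b-f16.gguf -f groups_merged.txt -o imatrix.dat
./llama-quantize --imatrix imatrix.dat JerseyDevil-14b-f16.gguf JerseyDevil-14b-Q4_K_M.gguf Q4_K_M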
