GGUF
Use from the llama-cpp-python library
# !pip install llama-cpp-python

from llama_cpp import Llama

# Download the GGUF file from the Hugging Face Hub and load it
llm = Llama.from_pretrained(
	repo_id="Severian/Jamba-UltraInteract-Instruct-1B-gguf",
	filename="Jamba-1B.bf16.gguf",
)

# Run a plain text completion; echo=True includes the prompt in the output
output = llm(
	"Once upon a time,",
	max_tokens=512,
	echo=True
)
print(output)
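llama-cpp-python also exposes an OpenAI-style chat interface. A minimal sketch, reusing the same GGUF file and assuming the model's chat template is usable (the prompt is illustrative):

from llama_cpp import Llama

llm = Llama.from_pretrained(
	repo_id="Severian/Jamba-UltraInteract-Instruct-1B-gguf",
	filename="Jamba-1B.bf16.gguf",
)

# Chat-style completion; messages follow the OpenAI-compatible role/content schema
response = llm.create_chat_completion(
	messages=[
		{"role": "user", "content": "Write a short story opening."}
	],
	max_tokens=256,
)
print(response["choices"][0]["message"]["content"])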

A 1B version of Jamba trained on 10k examples from the UltraInteract dataset. The model is broken, though: it repeats itself constantly. It is kept up for posterity and/or experimentation purposes.
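Given the model's tendency to repeat itself, llama-cpp-python's sampling controls may be worth experimenting with. A minimal sketch reusing the llm object from above (the penalty and temperature values are illustrative, not tuned):

# repeat_penalty discourages recently generated tokens; values > 1.0 penalize repetition
output = llm(
	"Once upon a time,",
	max_tokens=512,
	repeat_penalty=1.3,  # illustrative value, not tuned for this model
	temperature=0.8,     # add some sampling variety
	echo=True,
)
print(output)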

Model size: 1B params
Architecture: jamba
Format: GGUF, 16-bit (bf16)
