How to use from the Transformers library
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="ai21labs/Jamba-tiny-random")
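As a quick sanity check, the pipeline can be called directly on a prompt (the prompt text and max_new_tokens value below are arbitrary; because the weights are random, the output will be gibberish and only verifies that the pipeline runs):

# Generate a few tokens to confirm the pipeline works end to end
output = pipe("Hello, my name is", max_new_tokens=20)
print(output[0]["generated_text"])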
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("ai21labs/Jamba-tiny-random")
model = AutoModelForCausalLM.from_pretrained("ai21labs/Jamba-tiny-random")
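A minimal generation sketch with the directly loaded model (the prompt and generation settings below are arbitrary):

# Tokenize a prompt and generate a short continuation
inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))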
This is a tiny, dummy version of Jamba, used for debugging and experimentation over the Jamba architecture.

It has 128M parameters (instead of 52B), is initialized with random weights, and did not undergo any training.
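
To confirm the size, one rough check (assuming the model loaded in the snippet above) is to sum the parameter counts:

# Should print roughly 128M parameters
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.0f}M parameters")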

The checkpoint is stored in Safetensors format with BF16 tensors (~0.1B parameters).