```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("ai21labs/Jamba-tiny-random")
model = AutoModelForCausalLM.from_pretrained("ai21labs/Jamba-tiny-random")
```
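With the tokenizer and model loaded as above, a short generation pass is a quick way to confirm the checkpoint wires up; this is a minimal sketch (the prompt and `max_new_tokens` value are arbitrary), and the output will be meaningless because the weights are random.

```python
# Sanity check: run a short generation pass using the tokenizer and model
# loaded above (output is gibberish since the weights are untrained).
inputs = tokenizer("Hello, Jamba!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```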
This is a tiny, dummy version of Jamba, used for debugging and experimentation with the Jamba architecture.
It has 128M parameters (instead of 52B), is initialized with random weights, and did not undergo any training.
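As a quick check of the size claim, the parameter count can be summed directly from the loaded model; a rough sketch (the exact total may differ slightly from the 128M figure):

```python
# Rough sketch: sum parameter counts to confirm this is the ~128M-parameter
# debugging variant rather than the full 52B Jamba model.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("ai21labs/Jamba-tiny-random")
total = sum(p.numel() for p in model.parameters())
print(f"{total / 1e6:.1f}M parameters")
```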
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="ai21labs/Jamba-tiny-random")
```
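The pipeline can then be called like any other `text-generation` pipeline; a brief sketch with an arbitrary prompt and generation length:

```python
# Generate a short continuation with the pipeline created above;
# the text is random-weight output, as expected for this dummy checkpoint.
result = pipe("Hello, Jamba!", max_new_tokens=20)
print(result[0]["generated_text"])
```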