Tags: Text Generation · Transformers · Safetensors · jamba · custom_code
How to use with the Transformers library
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Severian/Jamba-UltraInteract-Instruct-1B", trust_remote_code=True)
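
A quick smoke test of the pipeline; the prompt and token budget are illustrative, not from the card:

out = pipe("Explain model pruning in one sentence.", max_new_tokens=64)
print(out[0]["generated_text"])
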
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Severian/Jamba-UltraInteract-Instruct-1B", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("Severian/Jamba-UltraInteract-Instruct-1B", trust_remote_code=True)
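
Once loaded, the model and tokenizer can be used directly with generate(); the prompt and sampling settings here are illustrative assumptions, not values from the card:

import torch

inputs = tokenizer("What is instruction tuning?", return_tensors="pt")
with torch.no_grad():
    # Sampling settings are placeholders; tune them for your use case.
    output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))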

This Jamba model has been pruned down to 1B parameters and then instruction-tuned on the first 50k examples of the UltraInteract Pair dataset.
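
The training slice can be recreated with the datasets library; the dataset id below is an assumption inferred from the card's "UltraInteract Pair" reference:

from datasets import load_dataset

# Assumed dataset id; the card only names the dataset informally.
train_slice = load_dataset("openbmb/UltraInteract_pair", split="train[:50000]")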

Initial tests work, but output quality may be inconsistent. More information and examples will be posted later.

Training

  • 50k training examples
  • ≈6 hours on an A100 (a hypothetical reproduction sketch follows below)
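
A minimal reproduction sketch using the standard transformers Trainer; the hyperparameters and the pre-tokenized dataset (tokenized_slice) are assumptions, since the card does not publish a training script:

from transformers import Trainer, TrainingArguments

# Hypothetical settings; none of these values come from the model card.
args = TrainingArguments(
    output_dir="jamba-ultrainteract-1b",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
    learning_rate=2e-5,
    bf16=True,
    logging_steps=50,
)
# tokenized_slice: assumed pre-tokenized UltraInteract examples.
trainer = Trainer(model=model, args=args, train_dataset=tokenized_slice)
trainer.train()
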
Model details

  • Format: Safetensors
  • Model size: 1B params
  • Tensor type: F32