# HF Export: Banyan 5B Deep (T5 tokenizer)

## Contents
- model-00001-of-00001.safetensors, model.safetensors.index.json
- config.json (llama architecture, 48 layers, 2560 hidden, 24/8 heads)
- tokenizer.json, tokenizer_config.json, special_tokens_map.json, spiece.model (custom T5)
- generation_config.json
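The 24/8 head count in config.json indicates grouped-query attention: 24 query heads sharing 8 key/value heads. A quick sketch of the resulting KV-cache saving; head_dim=128, a 4096-token context, and fp16 storage are illustrative assumptions, not values taken from this export:

```python
# KV-cache size under grouped-query attention (GQA).
# From config.json: 48 layers, 24 query heads, 8 KV heads.
# head_dim=128, seq_len=4096, fp16 (2 bytes) are assumptions for illustration.
layers, q_heads, kv_heads, head_dim, seq_len, bytes_per = 48, 24, 8, 128, 4096, 2

def kv_cache_bytes(n_heads):
    # One K and one V tensor per layer: n_heads * head_dim values per token.
    return 2 * layers * n_heads * head_dim * seq_len * bytes_per

mha = kv_cache_bytes(q_heads)   # if every query head had its own KV head
gqa = kv_cache_bytes(kv_heads)  # the actual 8 KV heads
print(f"MHA: {mha / 2**20:.0f} MiB, GQA: {gqa / 2**20:.0f} MiB, ratio {mha // gqa}x")
```

With these assumptions the 8-head KV cache is a third the size of a full multi-head cache at the same context length.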
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

path = "outputs/checkpoint/step-78100-hf2"
tok = AutoTokenizer.from_pretrained(path)
model = AutoModelForCausalLM.from_pretrained(path, torch_dtype="auto", device_map="auto")

prompt = "Why is the sky blue?"
enc = tok(prompt, return_tensors="pt", add_special_tokens=False).to(model.device)
out = model.generate(**enc, max_new_tokens=64, do_sample=True, temperature=0.8)
print(tok.decode(out[0], skip_special_tokens=False))
```
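The temperature=0.8 in the generate call rescales the model's logits before sampling: values below 1 sharpen the next-token distribution, values above 1 flatten it. A minimal, framework-free sketch of that rescaling (the toy logits here are made up for illustration):

```python
import math

def temperature_softmax(logits, temperature=0.8):
    # Divide logits by T, then softmax; T < 1 sharpens, T > 1 flattens.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # toy values, not real model outputs
sharp = temperature_softmax(logits, temperature=0.8)
flat = temperature_softmax(logits, temperature=2.0)
print(sharp, flat)  # same argmax, but more mass on it at lower temperature
```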
## Notes
- The tokenizer is SentencePiece-based (T5). Do not add EOS at prompt time; pass `add_special_tokens=False` when tokenizing prompts for generation.
- The model config is tailored to vocab_size=32100 and rope_theta=500000.
- If you prefer multi-shard weights, provide a `model.safetensors.index.json` and re-save.
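rope_theta=500000 sets the base of the rotary-embedding frequency schedule; a larger base slows the lowest-frequency rotations, which is commonly used to support longer contexts. A sketch of the standard RoPE inverse-frequency computation (head_dim=128 is an assumption here; the real value comes from config.json):

```python
def rope_inv_freq(head_dim=128, theta=500000.0):
    # Standard RoPE: inv_freq[i] = theta^(-2i/d) for i = 0 .. d/2 - 1.
    # head_dim=128 is an assumed value; theta matches this export's config.
    return [theta ** (-2.0 * i / head_dim) for i in range(head_dim // 2)]

inv_freq = rope_inv_freq()
print(inv_freq[0], inv_freq[-1])  # fastest and slowest rotation rates
```

Each pair of dimensions rotates at one of these rates; position t rotates pair i by angle t * inv_freq[i].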