# BinaryLLM (HF export)

Tokenizer-free / base-N model export.

## Load

```python
from transformers import AutoModelForCausalLM

# trust_remote_code=True is required: the export ships custom
# tokenizer-free model code alongside the weights.
m = AutoModelForCausalLM.from_pretrained("./hf_binaryllm_repo", trust_remote_code=True)
```
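Because the export is tokenizer-free, inputs are integer sequences in some fixed radix rather than subword token IDs. The actual encoding is defined by the repo's custom code; as a hedged illustration of the base-65536 idea only (assumption: each pair of UTF-8 bytes is packed into one 16-bit symbol, which is *not* necessarily what this model does), a round-trip codec might look like:

```python
def encode_base65536(text: str) -> list[int]:
    """Pack UTF-8 bytes into 16-bit symbols (one base-65536 'digit' each).

    Illustrative only: the real encoding is supplied by the repo's
    trust_remote_code module, not by this sketch.
    """
    data = text.encode("utf-8")
    if len(data) % 2:
        data += b"\x00"  # pad to an even byte count
    return [int.from_bytes(data[i:i + 2], "big") for i in range(0, len(data), 2)]


def decode_base65536(ids: list[int]) -> str:
    """Inverse of encode_base65536: unpack 16-bit symbols back to text."""
    raw = b"".join(i.to_bytes(2, "big") for i in ids)
    return raw.rstrip(b"\x00").decode("utf-8")
```

Every symbol falls in `[0, 65535]`, so the model's vocabulary dimension is the radix itself rather than a learned subword inventory.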
Model size: 79.7M params · Tensor type: F32 (Safetensors)

Repository: PhysiQuanty/Patent-Test-Radix-65536-AutoTokenizer_FineTune