# NeuronSpark V3 — 1.1B SFT (step 2000)
Architecture: SNN (Spiking Neural Network) decoder with PonderNet-style adaptive routing over K spiking time steps (a minimal halting sketch follows the spec list below). Custom model, not a transformer variant.
- Hidden dim D = 1024 · K_max = 12 · 24 layers
- ~1.24B parameters (model.safetensors ≈ 2.47 GB bf16)
- Tokenizer vocab = 128387 (multilingual)
- SFT step 2000, base = pretrain step 108000
- Optimizer: AdamW + DeepSpeed ZeRO-2 (8 ranks)
- SFT data: sequence lengths uniformly bucketed in [1000, 2048] tokens; ~35% thinking / 65% non-thinking; multi-turn, ZH/EN balanced (OpenThoughts, QwQ, Congliu, smoltalk2, WildChat-1M, no_robots)
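For intuition, below is a minimal, hypothetical sketch of PonderNet-style adaptive halting over K_max = 12 time steps. The module names are illustrative and a GRU cell stands in for the spiking update; the actual routing is defined in modeling_neuronspark.py.

```python
import torch
import torch.nn as nn

class AdaptiveStepRouter(nn.Module):
    """Toy PonderNet-style halting over K_max time steps (illustrative only)."""
    def __init__(self, d_model: int = 1024, k_max: int = 12):
        super().__init__()
        self.k_max = k_max
        self.step_fn = nn.GRUCell(d_model, d_model)  # stand-in for one spiking time step
        self.halt_head = nn.Linear(d_model, 1)       # per-step halting logit

    def forward(self, x: torch.Tensor):
        h = torch.zeros_like(x)
        outputs, halt_probs = [], []
        remain = x.new_ones(x.size(0))               # probability mass not yet halted
        for k in range(self.k_max):
            h = self.step_fn(x, h)
            lam = torch.sigmoid(self.halt_head(h)).squeeze(-1)
            if k == self.k_max - 1:
                lam = torch.ones_like(lam)           # force halting at the final step
            halt_probs.append(remain * lam)          # P(halt exactly at step k)
            outputs.append(h)
            remain = remain * (1.0 - lam)
        w = torch.stack(halt_probs, dim=1)           # (batch, K), rows sum to 1
        y = torch.einsum("bk,bkd->bd", w, torch.stack(outputs, dim=1))
        return y, w

router = AdaptiveStepRouter()
y, w = router(torch.randn(2, 1024))
print(y.shape, w.sum(dim=1))  # torch.Size([2, 1024]); halting probs ≈ 1 per row
```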
## Load (inference)
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Brain2nd/NeuronSpark-V3-1.1B-SFT")
model = AutoModelForCausalLM.from_pretrained(
    "Brain2nd/NeuronSpark-V3-1.1B-SFT",
    torch_dtype=torch.bfloat16,  # weights are stored in bf16
    trust_remote_code=True,      # pulls in modeling_neuronspark.py
).cuda().eval()

msgs = [{"role": "user", "content": "What is the capital of France?"}]
text = tokenizer.apply_chat_template(
    msgs, tokenize=False, add_generation_prompt=True,
    enable_thinking=True,  # prompt the model to emit a thinking segment
)
ids = tokenizer(text, return_tensors="pt").input_ids.cuda()

# generate_cached is the custom generation loop shipped with the remote code
out = model.generate_cached(input_ids=ids, max_new_tokens=512,
                            temperature=0.7, top_p=0.9, top_k=50,
                            repetition_penalty=1.1)
print(tokenizer.decode(out[0, ids.shape[1]:], skip_special_tokens=False))
```
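Since the SFT mix contains both thinking and non-thinking turns, the same call should presumably also work in non-thinking mode, assuming the chat template exposes the usual switch:

```python
# assumed non-thinking variant; only the template flag changes
text = tokenizer.apply_chat_template(
    msgs, tokenize=False, add_generation_prompt=True,
    enable_thinking=False,
)
```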
## Resume SFT

The deepspeed/ directory contains the 8-rank ZeRO-2 sharded optimizer state. To resume training:
```bash
deepspeed --num_gpus=8 train_sft.py \
  --deepspeed_config ds_config.json \
  --resume ./ckpt_step2000
```
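For reference, a minimal sketch of what a ZeRO-2 ds_config.json could contain, written out from Python so the placeholder values stay visible; the batch sizes and optimizer hyperparameters here are illustrative, not the values used for this checkpoint:

```python
# Illustrative ZeRO-2 settings only; not the hyperparameters used to train
# this checkpoint. Adjust batch sizes / lr before resuming.
import json

ds_config = {
    "train_micro_batch_size_per_gpu": 4,   # placeholder
    "gradient_accumulation_steps": 8,      # placeholder
    "bf16": {"enabled": True},
    "zero_optimization": {
        "stage": 2,                        # ZeRO-2, matching the sharded state
        "overlap_comm": True,
        "contiguous_gradients": True,
    },
    "optimizer": {
        "type": "AdamW",
        "params": {"lr": 2e-5, "betas": [0.9, 0.95], "weight_decay": 0.1},
    },
}
with open("ds_config.json", "w") as f:
    json.dump(ds_config, f, indent=2)
```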
## Files
| File | Description |
|---|---|
| model.safetensors | HF-format bf16 weights |
| config.json / generation_config.json | model config |
| tokenizer.json / tokenizer_config.json | tokenizer (vocab=128387) |
| modeling_neuronspark.py / configuration_neuronspark.py | custom arch (requires trust_remote_code=True) |
| deepspeed/ | ZeRO-2 optimizer state (8 ranks) |
| training_state.pth | step / epoch / tokens_seen |
| zero_to_fp32.py | DeepSpeed helper for merging sharded state into fp32 weights (see below) |
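To consolidate the 8-rank shards into standalone fp32 weights, the bundled DeepSpeed helper can be used in the usual way; the paths below are illustrative, and depending on the DeepSpeed version the second argument is an output file or a directory:

```bash
# merge ZeRO-2 shards into a single fp32 state dict (paths are illustrative)
python zero_to_fp32.py ./deepspeed pytorch_model_fp32.bin
```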