# NeuronSpark V3 — 1.1B Pretrain (in-progress, step 74000)
Architecture: a spiking neural network (SNN) decoder with PonderNet-style adaptive routing over up to K time steps. This is a custom model, not a transformer variant (a minimal routing sketch follows the spec list below).
- Hidden dim D = 1024 · K_max = 12 · 24 layers
- ~1.92B parameters (model.safetensors ≈ 2.47 GB bf16)
- Tokenizer vocab = 128387 (multilingual)
- Pretrain step 74000 / 380206 (~19% complete)
- Tokens seen: ~5.6B
- Optimizer: AdamW + DeepSpeed ZeRO-2
This is an in-training snapshot, not a finished model.
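
For intuition, here is a minimal, hypothetical PyTorch sketch of what PonderNet-style adaptive halting over spiking time steps can look like. It is not the NeuronSpark implementation; the layer names, reset rule, and halting head are illustrative, and only D = 1024 and K_max = 12 are taken from the spec above.

```python
# Illustrative sketch only, NOT the actual NeuronSpark code.
import torch
import torch.nn as nn


class AdaptiveSpikingBlock(nn.Module):
    """Toy spiking block with PonderNet-style halting over at most k_max steps."""

    def __init__(self, d_model: int = 1024, k_max: int = 12, threshold: float = 1.0):
        super().__init__()
        self.k_max = k_max
        self.threshold = threshold
        self.fc = nn.Linear(d_model, d_model)   # synaptic projection (assumed)
        self.halt = nn.Linear(d_model, 1)        # per-step halting logit (assumed)

    def forward(self, x: torch.Tensor):
        # x: (batch, d_model). The membrane potential integrates input across
        # time steps; a hard threshold emits binary spikes (a real SNN would
        # use a surrogate gradient so the spike function is trainable).
        mem = torch.zeros_like(x)
        state = torch.zeros_like(x)
        out = torch.zeros_like(x)
        remain = torch.ones(x.size(0), 1, device=x.device)  # unhalted prob. mass
        halt_probs = []

        for k in range(self.k_max):
            mem = mem + self.fc(x + state)                 # integrate
            spikes = (mem >= self.threshold).float()       # fire
            mem = mem - spikes * self.threshold            # soft reset
            state = state + spikes                         # running spike state

            lam = torch.sigmoid(self.halt(state))          # P(halt now | still running)
            p_k = remain * lam if k < self.k_max - 1 else remain
            out = out + p_k * state                        # PonderNet-weighted output
            halt_probs.append(p_k)
            remain = remain * (1.0 - lam)

        return out, torch.cat(halt_probs, dim=1)           # (batch, d_model), (batch, K)


block = AdaptiveSpikingBlock()
y, p = block(torch.randn(4, 1024))
print(y.shape, p.sum(dim=1))  # halting probabilities sum to 1 per sample
```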
## Load (inference / fine-tune)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# trust_remote_code=True is required: the architecture is defined by the
# bundled modeling_neuronspark.py / configuration_neuronspark.py files.
model = AutoModelForCausalLM.from_pretrained(
    "Brain2nd/NeuronSpark-V3-1.1B-Pretrain",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(
    "Brain2nd/NeuronSpark-V3-1.1B-Pretrain"
)
```
## Resume pretraining
The deepspeed/ directory contains the 8-rank ZeRO-2 sharded optimizer state. Use it with the DeepSpeed launcher to continue training from this exact point:
```bash
git lfs install
git clone https://huggingface.co/Brain2nd/NeuronSpark-V3-1.1B-Pretrain ./ckpt

deepspeed --num_gpus=8 train_pretrain.py \
  --deepspeed_config ds_config.json \
  --resume ./ckpt
```
## Files
| File | Description |
|------|-------------|
| `model.safetensors` | HF-format bf16 weights |
| `config.json` / `generation_config.json` | model config |
| `tokenizer.json` / `tokenizer_config.json` | tokenizer |
| `modeling_neuronspark.py` / `configuration_neuronspark.py` / `__init__.py` | custom architecture (`trust_remote_code=True`) |
| `deepspeed/` | ZeRO-2 optimizer state (8 ranks, fp32 master + Adam moments) |
| `training_state.pth` | step / tokens_seen / epoch metadata |
| `zero_to_fp32.py` | DeepSpeed helper to consolidate sharded state |
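
To turn the sharded ZeRO-2 state into a single fp32 checkpoint, the bundled `zero_to_fp32.py` can be run in the usual DeepSpeed way. The paths below are assumptions about this repo's layout (treating `deepspeed/` as the checkpoint root), and the output format depends on the DeepSpeed version, so check `python zero_to_fp32.py --help` first.

```bash
cd ./ckpt
# Consolidate the 8-rank ZeRO-2 shards into a single fp32 state dict.
# Paths and output name are illustrative; verify arguments with --help.
python zero_to_fp32.py deepspeed/ consolidated_fp32.bin
```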