tessella-moe-bench ⚡

Synthetic 32L × 64E Qwen2-MoE model with random weights for Tessella benchmarking.

Same structure as Qwen1.5-MoE-A2.7B but with hidden_size=512 to fit in 4GB VRAM. Do not use for inference — weights are random, output is noise.

|             | Value |
|-------------|-------|
| Layers      | 32 |
| Experts     | 64 (2 active per token) |
| Hidden size | 512 |
| Params      | ~0.9B |
| Size (F16)  | ~1.6 GB |
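As a quick sanity check on the figures above: F16 stores two bytes per parameter, so the ~1.6 GB checkpoint corresponds to roughly 0.8B parameters (the 0.9B figure is rounded). A minimal sketch of that arithmetic:

```python
def f16_gb(n_params: float) -> float:
    """Decimal-GB size of a checkpoint stored at 2 bytes per parameter (F16)."""
    return n_params * 2 / 1e9

print(f16_gb(0.8e9))  # 1.6
```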
```toml
# worker.toml
default_model = "victorespada/tessella-moe-bench"
```