# Eve-2-MoE-NanoPII-272M - GGUF

GGUF quantizations of anthonym21/Eve-2-MoE-NanoPII-272M.

## Quantization Variants

| Quantization | Filename | Size |
|---|---|---|
| Q8_0 | Eve-2-MoE-NanoPII-272M-Q8_0.gguf | 290.9 MB |
| Q4_K_M | Eve-2-MoE-NanoPII-272M-Q4_K_M.gguf | 189.5 MB |
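As a quick sanity check on the file sizes above, the effective bits per weight can be computed from the file size and the 272M parameter count. The results come out above the nominal 8 and 4 bits because GGUF files also carry metadata, the tokenizer, and some tensors (e.g. embeddings) kept at higher precision; the helper below is an illustrative calculation, not part of any toolchain.

```python
# Effective bits per weight = file size in bits / parameter count.
# Sizes (MB) and the 272M parameter count are taken from the table above.
PARAMS = 272e6

def bits_per_weight(size_mb: float) -> float:
    return size_mb * 1e6 * 8 / PARAMS

print(f"Q8_0:   {bits_per_weight(290.9):.2f} bits/weight")  # ~8.56
print(f"Q4_K_M: {bits_per_weight(189.5):.2f} bits/weight")  # ~5.57
```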

## Usage with Ollama

```bash
ollama run anthonym21/eve-2-moe-nanopii-272m
```

## Usage with llama.cpp

```bash
llama-cli -m Eve-2-MoE-NanoPII-272M-Q4_K_M.gguf -p "Your prompt here"
```

## Architecture

- Type: DeepSeek-style Mixture of Experts (MoE)
- Parameters: 272M total
- Layers: 12
- Hidden dim: 512
- Experts: 8 routed (top-2) + 1 shared per layer
- Context: 2048 tokens
- Tokenizer: GPT-2
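The routing scheme in the list above (8 routed experts with top-2 selection plus one always-active shared expert per layer) can be sketched for a single token as follows. This is a minimal illustration using the stated dimensions; the toy single-matrix experts, the variable names, and the router initialization are assumptions for the sketch, not the model's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
HIDDEN, N_EXPERTS, TOP_K = 512, 8, 2  # dims from the architecture list above

# Toy experts: each reduced to one linear map for illustration
# (the real model uses full MLP experts).
routed_experts = [rng.standard_normal((HIDDEN, HIDDEN)) * 0.02 for _ in range(N_EXPERTS)]
shared_expert = rng.standard_normal((HIDDEN, HIDDEN)) * 0.02
router = rng.standard_normal((HIDDEN, N_EXPERTS)) * 0.02

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route one token vector x (shape [HIDDEN]) through top-2 routed experts
    plus the shared expert, mixing routed outputs by softmax gate weights."""
    logits = x @ router                         # one score per routed expert
    top = np.argsort(logits)[-TOP_K:]           # indices of the 2 best experts
    gate = np.exp(logits[top] - logits[top].max())
    gate /= gate.sum()                          # softmax over selected experts only
    routed = sum(g * (x @ routed_experts[i]) for g, i in zip(gate, top))
    return routed + x @ shared_expert           # shared expert always contributes

token = rng.standard_normal(HIDDEN)
print(moe_layer(token).shape)  # (512,)
```

Only 2 of the 8 routed experts run per token, which is why a 272M-parameter MoE has the per-token compute cost of a much smaller dense model.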

## Parent Model

This is a quantized version of anthonym21/Eve-2-MoE-NanoPII-272M.
