---
base_model: anthonym21/Eve-2-MoE-NanoSQL-272M
tags:
  - gguf
  - quantized
  - moe
  - eve-2
license: apache-2.0
---

# Eve-2-MoE-NanoSQL-272M - GGUF

GGUF quantizations of `anthonym21/Eve-2-MoE-NanoSQL-272M`.

## Quantization Variants

| Quantization | Filename | Size |
|---|---|---|
| Q8_0 | `Eve-2-MoE-NanoSQL-272M-Q8_0.gguf` | 290.9 MB |
| Q4_K_M | `Eve-2-MoE-NanoSQL-272M-Q4_K_M.gguf` | 189.5 MB |
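
As a rough sanity check (an illustrative calculation, not from the card: it treats MB as decimal megabytes and ignores the GGUF file's tokenizer/metadata overhead), the file sizes imply an effective bits-per-weight for each variant:

```python
PARAMS = 272e6  # total parameter count, from the model card

def bits_per_weight(file_size_mb: float) -> float:
    """Effective bits per parameter implied by a GGUF file size.

    Rough estimate: assumes decimal megabytes and ignores the small
    metadata/tokenizer overhead stored alongside the tensors.
    """
    return file_size_mb * 1e6 * 8 / PARAMS

q8_bpw = bits_per_weight(290.9)   # roughly 8.6 bpw for Q8_0
q4_bpw = bits_per_weight(189.5)   # roughly 5.6 bpw for Q4_K_M
```

Q8_0 lands just above its nominal 8 bits because each block of weights also stores a scale; Q4_K_M lands well above 4 bits because some tensors (e.g. embeddings) are typically kept at higher precision.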

## Usage with Ollama

```bash
ollama run anthonym21/eve-2-moe-nanosql-272m
```

## Usage with llama.cpp

```bash
llama-cli -m Eve-2-MoE-NanoSQL-272M-Q4_K_M.gguf -p "Your prompt here"
```

## Architecture

- **Type:** DeepSeek-style Mixture of Experts (MoE)
- **Parameters:** 272M total
- **Layers:** 12
- **Hidden dim:** 512
- **Experts:** 8 routed (top-2) + 1 shared per layer
- **Context:** 2048 tokens
- **Tokenizer:** GPT-2
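
The routing scheme listed above (each token goes to its top-2 of 8 routed experts, plus one always-active shared expert per layer) can be sketched in plain NumPy. This is an illustrative sketch of the general technique, not the model's actual implementation; all names and shapes here are assumptions:

```python
import numpy as np

def moe_layer(x, routed_experts, shared_expert, router_w, top_k=2):
    """Per-token top-k routing plus an always-active shared expert.

    x: (n_tokens, d) activations; routed_experts: list of callables;
    shared_expert: callable applied to every token; router_w: (d, n_experts).
    """
    logits = x @ router_w                            # (n_tokens, n_experts)
    probs = np.exp(logits - logits.max(-1, keepdims=True))
    probs /= probs.sum(-1, keepdims=True)            # softmax gate scores
    topk = np.argsort(probs, axis=-1)[:, -top_k:]    # top-k expert indices per token
    out = shared_expert(x)                           # shared expert sees every token
    for t in range(x.shape[0]):
        gates = probs[t, topk[t]] / probs[t, topk[t]].sum()  # renormalize over chosen experts
        for g, e in zip(gates, topk[t]):
            out[t] += g * routed_experts[e](x[t])
    return out

# Toy demo: 8 routed "experts" (simple scalings) + an identity shared expert.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 512))
router_w = rng.normal(size=(512, 8))
experts = [lambda v, s=i: v * (s + 1) for i in range(8)]
y = moe_layer(x, experts, lambda v: v.copy(), router_w)
```

Only the selected experts run per token, which is how a 272M-parameter MoE keeps its per-token compute closer to that of a much smaller dense model.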

## Parent Model

This is a quantized version of `anthonym21/Eve-2-MoE-NanoSQL-272M`.