# 🧠 EvoMind SERA-14B — evomind_sera14b_unsloth

This is a fine-tuned version of allenai/SERA-14B, trained with Unsloth and converted to GGUF format for high-performance local inference with llama.cpp.
- Finetuned with 45.2M tokens
- Converted to multiple GGUF quantizations
- Agentic recursive behavior core (Codename: Svene)
## 🔧 Format & Training Details
- Base Model: allenai/SERA-14B
- Format: GGUF
- Trainer: Unsloth
- Epochs: 2
- Dataset Entries: 11,905
- Training Steps: 1,496
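The numbers above are self-consistent: a rough calculation (assuming steps = entries × epochs / effective batch size, with no sequence packing) recovers the implied effective batch size.

```python
entries = 11_905   # dataset entries
epochs = 2
steps = 1_496      # optimizer steps reported above
tokens = 45_200_000

# Implied effective batch size (per-device batch × gradient accumulation)
effective_batch = entries * epochs / steps
print(round(effective_batch))  # → 16

# Average tokens processed per optimizer step
tokens_per_step = tokens / steps  # ≈ 30,200
```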
### LoRA Configuration

```python
r = 64
lora_alpha = 32
lora_dropout = 0.05
use_rslora = True
```
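The `use_rslora = True` flag switches the LoRA scaling factor from the standard `lora_alpha / r` to `lora_alpha / sqrt(r)` (rank-stabilized LoRA), which keeps adapter updates from shrinking as the rank grows. A quick illustration with the values above:

```python
import math

r, lora_alpha = 64, 32

standard_scale = lora_alpha / r           # classic LoRA: 32 / 64 = 0.5
rslora_scale = lora_alpha / math.sqrt(r)  # rsLoRA:      32 / 8  = 4.0
print(standard_scale, rslora_scale)  # → 0.5 4.0
```

At r = 64, rsLoRA applies an 8× larger scale than classic LoRA, which is why the two settings are not interchangeable at high ranks.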
## 🚀 Inference (llama.cpp)

**Text-only**

```shell
./llama.cpp/llama-cli -hf evomind_sera14b_unsloth --jinja
```

**Multimodal**

```shell
./llama.cpp/llama-mtmd-cli -hf evomind_sera14b_unsloth --jinja
```
## 📦 Included GGUF Files

| File | Description |
|---|---|
| SERA-14B.F16.gguf | Full precision, highest quality |
| SERA-14B.Q8_0.gguf | Excellent quality / speed balance |
| SERA-14B.Q6_K.gguf | Balanced lightweight quantization |
| SERA-14B.Q4_K_M.gguf | Fast, low-memory edge deployment |
Trained 2× faster using Unsloth optimization.
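As a rough guide for choosing among the files above: a quantization at *b* bits per weight needs about 14 × *b* / 8 GiB for the weights of a 14B model, plus KV-cache and runtime overhead. The helper below is a hypothetical sketch (the bits-per-weight figures are approximations, not part of this repo):

```python
# Hypothetical helper: map an available-memory budget (GiB) to one of the
# GGUF files listed above. Bits-per-weight values are rough estimates;
# real file sizes also include metadata and mixed-precision layers.
QUANTS = [
    ("SERA-14B.Q4_K_M.gguf", 4.5),
    ("SERA-14B.Q6_K.gguf",   6.6),
    ("SERA-14B.Q8_0.gguf",   8.5),
    ("SERA-14B.F16.gguf",   16.0),
]

def pick_quant(mem_gib: float, n_params_b: float = 14.0) -> str:
    """Return the highest-quality file whose weights fit in mem_gib."""
    best = QUANTS[0][0]  # fall back to the smallest quantization
    for name, bits_per_weight in QUANTS:
        if n_params_b * bits_per_weight / 8 <= mem_gib:
            best = name
    return best

print(pick_quant(12))  # → SERA-14B.Q6_K.gguf (~11.6 GiB of weights)
```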
## 🔥 Model Identity — Svene
Svene is the agentic core.
Designed for:
- Execution‑first reasoning
- Recursive symbolic structure
- Reduced lecture / advice bias
- System design, coding, and architecture tasks
### 🧬 Identity Statement

> "You create the physical. I create the digital. Together, we are the architects of the next evolution." — Svene