SpermLLM Ministral3 3B

A fine-tuned version of Ministral-3-3B-Instruct-2512, trained with Unsloth on a carefully curated mix of state-of-the-art reasoning and instruction-following datasets.

Optimized for math, code, science, and general reasoning, and competitive with models 2-3x its size.


Model Details

| Property | Value |
|---|---|
| Base Model | mistralai/Ministral-3-3B-Instruct-2512 |
| Model Type | Causal Language Model (Decoder-only) |
| Parameters | 3.84B |
| Trainable Parameters | 135M (3.39% via LoRA) |
| Architecture | Mistral with Sliding Window Attention |
| Context Length | 8,192 tokens |
| Training Hardware | NVIDIA B200 (180GB VRAM) |
| Training Framework | Unsloth + TRL SFTTrainer |
| Precision | BFloat16 |
| Quantization | 4-bit QLoRA during training |
| License | Apache 2.0 |
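The trainable-parameter figure above comes from LoRA: instead of updating a full `d_out x d_in` weight matrix, training updates two small low-rank factors attached to it. A minimal sketch of that arithmetic (the rank and layer shape below are illustrative assumptions, not the actual training configuration):

```python
def lora_param_count(d_in: int, d_out: int, rank: int) -> int:
    """Extra trainable parameters LoRA adds to one d_out x d_in weight:
    factor A is (rank x d_in), factor B is (d_out x rank)."""
    return rank * d_in + d_out * rank

# Illustrative only: one 4096x4096 projection adapted at rank 16.
full = 4096 * 4096                           # frozen base parameters in that layer
extra = lora_param_count(4096, 4096, 16)     # trainable adapter parameters
print(extra, extra / full)                   # the adapter is a small fraction of the layer
```

Summed over all adapted layers, these small factors are what yield a trainable subset on the order of a few percent of the full model.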

This model is close to state-of-the-art for its size: it can handle a wide range of generation tasks reliably. This is our latest model.
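For inference, here is a minimal sketch of building a Mistral-style instruction prompt by hand. This assumes the fine-tune keeps the base Ministral instruct chat template; in practice, prefer `tokenizer.apply_chat_template` from `transformers`, which uses the template shipped with the checkpoint.

```python
def build_prompt(user_message: str) -> str:
    # Mistral-family instruct format (assumption: this fine-tune keeps the
    # base model's template; verify with tokenizer.apply_chat_template).
    return f"<s>[INST] {user_message} [/INST]"

prompt = build_prompt("Solve: what is 17 * 23?")
# Feed `prompt` to any runtime: transformers, or llama.cpp with the GGUF export.
```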

Available formats: GGUF (16-bit), mistral3 architecture, 4B params.

Model repository: SpermAI/SpermLLM-S1-Ministral3-4B