# Mistral-Small-24B-Instruct-MNN

Pre-converted Mistral Small 24B Instruct in MNN format for on-device inference with TokForge.

Original model by [Mistral AI](https://huggingface.co/mistralai), converted to MNN Q4 for mobile deployment.

## Model Details

| Property | Value |
| --- | --- |
| Architecture | Mistral (sliding window attention, 40 layers) |
| Parameters | 24B (4-bit quantized) |
| Format | MNN (Alibaba Mobile Neural Network) |
| Quantization | W4A16 (4-bit weights, block size 128) |
| Vocab | 32,768 tokens |
| Source | mistralai/Mistral-Small-24B-Instruct-2501 |
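
For a sense of scale, here is a back-of-envelope estimate of the quantized weight size (simple arithmetic, not a measured figure; the per-block metadata layout is an assumption, since MNN's actual on-disk format may differ):

```python
# Back-of-envelope weight-size estimate for 4-bit weights with block size 128.
# Assumes one fp16 scale and one fp16 zero-point per 128-weight block (a common
# layout for block-wise quantization; MNN's real format may differ).
params = 24e9                   # 24B parameters
bits_per_weight = 4             # quant_bit 4
block = 128                     # quant_block 128
meta_bits_per_block = 16 + 16   # fp16 scale + fp16 zero-point

weight_bits = params * bits_per_weight
meta_bits = (params / block) * meta_bits_per_block
total_gib = (weight_bits + meta_bits) / 8 / 1024**3
print(f"~{total_gib:.1f} GiB of quantized weights")  # ~11.9 GiB
```

Roughly 12 GiB of weights, before activations, KV cache, and the OS itself, is why 24GB-class devices are the practical floor for this model.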

## Description

Mistral AI's Small 24B: a knowledge-dense 24B model that fits on a single high-end GPU. Excellent for complex reasoning, function calling, and multi-step tasks. Runs on flagship phones with 24GB RAM. The most capable model in this collection.

## Files

| File | Description |
| --- | --- |
| llm.mnn | Model computation graph |
| llm.mnn.weight | Quantized weight data (Q4, block=128) |
| llm_config.json | Model config with Jinja chat template |
| tokenizer.txt | Tokenizer vocabulary |
| config.json | MNN runtime config |
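
Outside of TokForge, the same bundle can be driven from MNN's Python bindings. The sketch below follows the `MNN.llm` interface shown in MNN's pymnn LLM examples; treat the exact module path and method names as assumptions that may vary between MNN releases.

```python
# Minimal sketch: load this bundle with MNN's Python LLM bindings (pymnn).
# Assumes an MNN build with LLM support; API names follow MNN's llm examples
# and may differ across versions.
import MNN.llm as llm

model = llm.create("./Mistral-Small-24B-Instruct-MNN/config.json")  # runtime config from this repo
model.load()                                                        # maps llm.mnn / llm.mnn.weight
print(model.response("Explain sliding window attention in one sentence."))
```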

## Usage with TokForge

This model is optimized for TokForge, a free Android app for private, on-device LLM inference.

1. Download TokForge from the Play Store
2. Open the app → Models → Download this model
3. Start chatting: runs 100% locally, no internet required

## Recommended Settings

| Setting | Value |
| --- | --- |
| Backend | OpenCL (Qualcomm) / Vulkan (MediaTek) / CPU (fallback) |
| Precision | Low |
| Threads | 4 |
| Thinking | Off (or On for thinking-capable models) |
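
These settings map onto fields in the bundle's config.json. Here is a sketch of writing them out in Python; the key names (`backend_type`, `thread_num`, `precision`) follow conventions seen in MNN LLM runtime configs, but check the config.json shipped in this repo for the authoritative schema before editing:

```python
# Illustrative helper: emit the recommended settings as an MNN runtime config.
# Key names are assumptions based on common MNN LLM configs; verify against
# the config.json in this repo.
import json

runtime_config = {
    "llm_model": "llm.mnn",          # computation graph
    "llm_weight": "llm.mnn.weight",  # quantized weight data
    "backend_type": "opencl",        # "opencl" (Qualcomm) / "vulkan" (MediaTek) / "cpu"
    "thread_num": 4,
    "precision": "low",
}

with open("config.json", "w") as f:
    json.dump(runtime_config, f, indent=2)
```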

## Performance

Actual speed varies by device, thermal state, and generation length. Typical ranges for this model size:

| Device | SoC | Backend | Speed |
| --- | --- | --- | --- |
| RedMagic 11 Pro (24GB) | SM8850 | OpenCL | ~5-6 tok/s |

Note: requires 24GB+ RAM. Best results on flagship phones with minimal background apps.
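
To translate that throughput into wall-clock feel, simple arithmetic on the figure above (not a benchmark):

```python
# Rough wall-clock estimate from decode throughput (arithmetic only).
tok_per_s = 5.5            # midpoint of the ~5-6 tok/s range above
for reply_tokens in (50, 200, 500):
    print(f"{reply_tokens:4d}-token reply ~ {reply_tokens / tok_per_s:5.1f} s")
# 50 ~ 9.1 s, 200 ~ 36.4 s, 500 ~ 90.9 s (excludes prompt prefill time)
```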

## Attribution

This is an MNN conversion of Mistral Small 24B Instruct by [Mistral AI](https://huggingface.co/mistralai). All credit for the model architecture, training, and fine-tuning goes to the original author(s). This conversion only changes the runtime format for mobile deployment.

## Limitations

  • Intended for TokForge / MNN on-device inference on Android
  • This is a runtime bundle, not a standard Transformers training checkpoint
  • Quantization (Q4) may slightly reduce quality compared to the full-precision original
  • Abliterated/uncensored models in this collection have had safety filters removed; use responsibly

## Export Details

Converted using MNN's llmexport pipeline:

```bash
python llmexport.py \
    --path mistralai/Mistral-Small-24B-Instruct-2501 \
    --export mnn \
    --quant_bit 4 \
    --quant_block 128
```
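
For intuition on what `--quant_bit 4 --quant_block 128` does, here is a generic block-wise 4-bit quantizer in NumPy. This illustrates the technique only; MNN's actual kernels and storage format differ:

```python
# Illustrative block-wise 4-bit asymmetric quantization (quant_bit=4, quant_block=128).
# Generic sketch of the technique, not MNN's implementation.
import numpy as np

def quantize_blockwise(w: np.ndarray, bits: int = 4, block: int = 128):
    """Quantize a 1-D weight array per block; returns codes, scales, offsets."""
    assert w.size % block == 0, "pad weights to a multiple of the block size"
    blocks = w.reshape(-1, block)
    w_min = blocks.min(axis=1, keepdims=True)
    w_max = blocks.max(axis=1, keepdims=True)
    scale = (w_max - w_min) / (2**bits - 1)
    scale = np.where(scale == 0, 1e-8, scale)  # guard against constant blocks
    codes = np.clip(np.round((blocks - w_min) / scale), 0, 2**bits - 1).astype(np.uint8)
    return codes, scale, w_min

def dequantize_blockwise(codes, scale, w_min):
    return (codes * scale + w_min).reshape(-1)

w = np.random.randn(1024).astype(np.float32)
codes, scale, offset = quantize_blockwise(w)
err = np.abs(w - dequantize_blockwise(codes, scale, offset)).max()
print(f"max abs reconstruction error: {err:.4f}")  # small but nonzero: the Q4 quality cost
```

Smaller blocks track local weight ranges more tightly (better quality, more scale/offset metadata); block=128 is a common middle ground.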