# AAIT-86M-GGUF
AAIT-86M-GGUF contains quantized GGUF exports for the published AAIT-86M model.
Canonical model repo: `augmem/AAIT-86M`
Files in this repo:

- `AAIT-86M_q8_0.gguf`
- `AAIT-86M_q5_1.gguf`
- `gguf_manifest.json`
These are custom triembed GGUF exports for a trimodal retrieval-plus-anchor model. They are useful for:

- compact storage
- transport
- custom runtime integration work

They are not generic llama.cpp text-model artifacts, so standard llama.cpp tooling will not run them as a text model.
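Even without a compatible runtime, the exports can still be inspected with any GGUF-aware reader, since they follow the standard GGUF container layout. The sketch below is a minimal, hedged example: it assumes a little-endian GGUF v2/v3-style header (4-byte magic, `uint32` version, `uint64` tensor count, `uint64` metadata KV count), and the filename is taken from the file list above.

```python
import struct

def read_gguf_header(path):
    """Read the fixed GGUF header fields: magic, version,
    tensor count, and metadata key/value count.

    Assumes a little-endian GGUF v2/v3 header layout; GGUF v1
    used 32-bit counts and is not handled here.
    """
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError(f"not a GGUF file: magic={magic!r}")
        (version,) = struct.unpack("<I", f.read(4))
        (n_tensors,) = struct.unpack("<Q", f.read(8))
        (n_kv,) = struct.unpack("<Q", f.read(8))
    return {
        "version": version,
        "tensor_count": n_tensors,
        "metadata_kv_count": n_kv,
    }

# Example (path assumed from the file list above):
# read_gguf_header("AAIT-86M_q8_0.gguf")
```

This only reads the fixed-size header; walking the metadata key/value section and tensor infos requires a full GGUF parser.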
For the full model package, loader, and combined safetensors artifact, use `augmem/AAIT-86M`.