AAIT-86M-GGUF

AAIT-86M-GGUF contains quantized GGUF exports for the published AAIT-86M model.

Canonical model repo:

  • augmem/AAIT-86M

Files in this repo:

  • AAIT-86M_q8_0.gguf
  • AAIT-86M_q5_1.gguf
  • gguf_manifest.json

These are custom GGUF exports (architecture tag `triembed`) for a trimodal retrieval-plus-anchor model.

They are useful for:

  • compact storage
  • transport
  • custom runtime integration work

They are not generic llama.cpp text-model artifacts; stock llama.cpp does not recognize the `triembed` architecture.
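For custom runtime integration, the first step is usually validating the fixed-size GGUF header. A minimal sketch (standard library only; the header layout — 4-byte magic `GGUF`, little-endian uint32 version, uint64 tensor count, uint64 metadata KV count — follows the public GGUF spec, and the filename below is taken from the file list above):

```python
import struct

def read_gguf_header(data: bytes) -> dict:
    """Parse the fixed GGUF header from the first 24 bytes of a file.

    Layout (little-endian): 4-byte magic b"GGUF", uint32 version,
    uint64 tensor count, uint64 metadata key/value count.
    """
    magic, version, n_tensors, n_kv = struct.unpack_from("<4sIQQ", data, 0)
    if magic != b"GGUF":
        raise ValueError("not a GGUF file")
    return {"version": version, "tensor_count": n_tensors, "kv_count": n_kv}

# Example usage against one of the exports in this repo:
# with open("AAIT-86M_q8_0.gguf", "rb") as f:
#     header = read_gguf_header(f.read(24))
```

This only checks the container header; interpreting the tensors themselves requires the `triembed`-specific metadata carried in the KV section.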

For the full model package, loader, and combined safetensors artifact, use:

  • augmem/AAIT-86M