bge-m3 GGUF

GGUF format of BAAI/bge-m3 for use with CrispEmbed.

BGE-M3 combines dense, sparse (lexical), and ColBERT-style multi-vector retrieval in a single model, covering 100+ languages with an 8192-token context window.
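
For reference, the upstream PyTorch model exposes all three retrieval modes through BAAI's FlagEmbedding package (the interface documented on the original model card); the GGUF files in this repo target the dense embedding path via CrispEmbed. A minimal sketch, assuming FlagEmbedding is installed:

# Dense, sparse, and ColBERT outputs from the original BAAI/bge-m3 model.
from FlagEmbedding import BGEM3FlagModel

model = BGEM3FlagModel("BAAI/bge-m3")
out = model.encode(
    ["Hello world"],
    return_dense=True,
    return_sparse=True,
    return_colbert_vecs=True,
)
print(out["dense_vecs"].shape)       # (1, 1024) dense embedding
print(out["lexical_weights"])        # per-token sparse weights
print(out["colbert_vecs"][0].shape)  # (num_tokens, 1024) multi-vector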

Files

File              Quantization  Size
bge-m3-q4_k.gguf  Q4_K          438 MB
bge-m3-q8_0.gguf  Q8_0          583 MB
bge-m3.gguf       F32           2175 MB

Quick Start

# Download
huggingface-cli download cstr/bge-m3-GGUF bge-m3-q4_k.gguf --local-dir .

# Run with CrispEmbed
./crispembed -m bge-m3-q4_k.gguf "Hello world"

# Or with auto-download
./crispembed -m bge-m3 "Hello world"

Model Details

Property             Value
Architecture         XLM-R
Parameters           568M
Embedding Dimension  1024
Layers               24
Pooling              mean
Tokenizer            SentencePiece
Base Model           BAAI/bge-m3
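
The "mean" pooling row means the final embedding averages all token hidden states (weighted by the attention mask) rather than taking a single [CLS] vector. An illustrative numpy sketch, not CrispEmbed's actual code, assuming hidden states of shape (num_tokens, 1024) and a 0/1 attention mask:

# Masked mean pooling over token embeddings, followed by L2 normalization.
import numpy as np

def mean_pool(hidden, mask):
    # hidden: (num_tokens, dim) token embeddings; mask: (num_tokens,) 0/1
    w = mask[:, None].astype(hidden.dtype)
    pooled = (hidden * w).sum(axis=0) / w.sum()
    return pooled / np.linalg.norm(pooled)  # unit length for cosine similarity

hidden = np.random.rand(12, 1024).astype(np.float32)
mask = np.ones(12, dtype=np.int32)
print(mean_pool(hidden, mask).shape)  # (1024,)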

Verification

Outputs verified against HuggingFace sentence-transformers: cosine similarity >= 0.999 between CrispEmbed embeddings and the reference embeddings on a set of test texts.
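
The check can be reproduced with a short script. A sketch, assuming crispembed prints the embedding as whitespace-separated floats on stdout (adjust the parsing to your build's actual output format):

# Compare a CrispEmbed embedding against the sentence-transformers reference.
import subprocess
import numpy as np
from sentence_transformers import SentenceTransformer

text = "Hello world"

# Reference embedding from HuggingFace sentence-transformers.
ref = SentenceTransformer("BAAI/bge-m3").encode([text], normalize_embeddings=True)[0]

# CrispEmbed embedding via the CLI (the output format is an assumption here).
out = subprocess.run(
    ["./crispembed", "-m", "bge-m3-q4_k.gguf", text],
    capture_output=True, text=True, check=True,
).stdout
emb = np.array([float(x) for x in out.split()])
emb = emb / np.linalg.norm(emb)

print("cosine similarity:", float(ref @ emb))  # expect >= 0.999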

Usage with CrispEmbed

CrispEmbed is a lightweight C/C++ text embedding inference engine using ggml. No Python runtime, no ONNX. Supports BERT, XLM-R, Qwen3, and Gemma3 architectures.

# Build CrispEmbed
git clone https://github.com/CrispStrobe/CrispEmbed
cd CrispEmbed
cmake -S . -B build && cmake --build build -j

# Encode
./build/crispembed -m bge-m3-q4_k.gguf "query text"

# Server mode
./build/crispembed-server -m bge-m3-q4_k.gguf --port 8080
curl -X POST http://localhost:8080/v1/embeddings \
    -H 'Content-Type: application/json' \
    -d '{"input": ["Hello world"], "model": "bge-m3"}'

Credits

  • Original model: BAAI/bge-m3
  • Inference engine: CrispEmbed (ggml-based)
  • Conversion: convert-bert-embed-to-gguf.py