✨ Overview

Boogr is derived from BAAI's bge-small-en-v1.5, part of the BGE (BAAI General Embedding) family.

The upstream model family is designed for dense retrieval and text embedding tasks such as:

  • semantic search
  • document retrieval
  • chunk similarity
  • passage ranking
  • clustering
  • sentence-level representation learning
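Most of the tasks above reduce to comparing embedding vectors by cosine similarity. The sketch below is model-agnostic and illustrative: the toy vectors stand in for real Boogr outputs, and the helper function is not part of Chonky's API.

```python
from math import sqrt

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for real embeddings produced by Boogr:
query = [0.1, 0.8, 0.3]
doc_a = [0.1, 0.7, 0.4]   # points in a similar direction -> high score
doc_b = [0.9, -0.2, 0.1]  # points in a different direction -> low score

print(cosine_similarity(query, doc_a) > cosine_similarity(query, doc_b))  # True
```

In a retrieval workflow, each chunk's embedding is scored against the query embedding this way (or with an equivalent vector-index lookup), and the top-scoring chunks are returned.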

Within Chonky, Boogr is the lightweight local English embedding option and is best suited for:

  • default local installations
  • offline embedding workflows
  • rapid experimentation
  • development and testing
  • vectorizing chunked corpora on lower-resource systems

🧠 Why Boogr Exists

Chonky supports both hosted and local embedding workflows. Boogr exists to give Chonky users a fully local, low-friction embedding path that avoids dependence on hosted provider APIs for common semantic-search tasks.

Boogr is especially useful when you want:

  • local-only embeddings
  • offline or restricted-network operation
  • lower memory use than larger embedding models
  • an English-first default embedder
  • a model that is straightforward to distribute with the application

πŸ”¬ Base Model Lineage

Boogr is derived from:

  • Upstream base model: BAAI/bge-small-en-v1.5
  • Model family: BGE / FlagEmbedding
  • Primary task family: feature extraction / text embeddings
  • Language focus: English
  • License: MIT

The v1.5 revision of the BGE family was introduced to improve retrieval behavior and address similarity-distribution issues observed in earlier releases.


πŸ—οΈ Position in the Chonky Local Stack

Boogr fills the lightweight-default slot in Chonky's lineup of local GGUF embedders:

  • Boogr β†’ lightweight local default
  • Nomnom β†’ retrieval-oriented Nomic-based local option
  • Bobo β†’ heavier, higher-quality local option

This makes Boogr the best starting point when you want a local model that is practical, fast enough, and easy to deploy broadly.
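The three-tier split above can be expressed as a simple registry. This is a hypothetical sketch of how an application might encode the roles documented here; the dictionary and helper are illustrative only, not Chonky's actual configuration API.

```python
# Hypothetical registry of Chonky's local GGUF embedders and their documented
# roles. The model names come from this README; the structure is illustrative.
LOCAL_EMBEDDERS = {
    "boogr": "lightweight local default",
    "nomnom": "retrieval-oriented Nomic-based local option",
    "bobo": "heavier, higher-quality local option",
}

def default_local_embedder() -> str:
    """Boogr is the suggested starting point for broad local deployment."""
    return "boogr"

print(default_local_embedder())  # boogr
```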


πŸ“¦ Expected Local File Path

Chonky expects Boogr at the following location:

models/
└── boogr/
    └── boogr-small-en-v1.5-q8_0.gguf
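A small sketch of resolving and checking that location before loading the model. `MODELS_ROOT` is assumed to be a project-relative directory; the helper is illustrative and not a function Chonky necessarily provides.

```python
from pathlib import Path

# Assumed project-relative models directory, matching the tree shown above.
MODELS_ROOT = Path("models")
BOOGR_PATH = MODELS_ROOT / "boogr" / "boogr-small-en-v1.5-q8_0.gguf"

def boogr_available(root: Path = MODELS_ROOT) -> bool:
    """Return True if the Boogr GGUF file is present where Chonky expects it."""
    return (root / "boogr" / "boogr-small-en-v1.5-q8_0.gguf").is_file()

print(BOOGR_PATH.as_posix())  # models/boogr/boogr-small-en-v1.5-q8_0.gguf
```

Checking for the file up front gives a clearer error message than letting the GGUF loader fail on a missing path.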

🧾 Model Details

  • Format: GGUF
  • Model size: 33.2M params
  • Architecture: bert
  • Quantization: 8-bit (q8_0)