Full ColBERT & H-Pool — Qwen2.5-VL-3B

This checkpoint supports two inference modes from the same weights: (1) Full ColBERT (uncompressed), which uses all token-level vectors for late interaction; and (2) H-Pool, a parameter-free compression that applies Ward hierarchical clustering at inference time to reduce document vectors to a fixed budget (e.g., 32). No extra parameters are introduced; you switch behavior by setting pooling="colbert" or pooling="hierarchical_clustering". Weights are initialized from Qwen2.5-VL-3B-Instruct and finetuned on MSR-VTT for text-to-video retrieval with bidirectional attention.


Method Overview

Full ColBERT keeps the full multi-vector representation (all token embeddings) and scores with ColBERT-style MaxSim. H-Pool compresses document tokens to a fixed number of vectors via Ward hierarchical clustering: cosine similarities are converted to distances, tokens are clustered, and embeddings are averaged within each cluster. Queries stay uncompressed. Both modes use the same checkpoint; only the pooling option changes.
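As a rough illustration (not the repository's implementation), the H-Pool step can be sketched with SciPy's Ward linkage. Since the token vectors are L2-normalized, Euclidean distance is monotonic in cosine distance, so `method="ward"` on the normalized vectors reproduces the cosine-based clustering; the `h_pool` helper name and the budget default are illustrative:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def h_pool(token_vecs: np.ndarray, budget: int = 32) -> np.ndarray:
    """Compress (seq_len, dim) token vectors to at most `budget` vectors.

    Sketch only: L2-normalize, Ward-cluster (Euclidean distance on unit
    vectors is monotonic in cosine distance), then mean-pool per cluster.
    """
    if len(token_vecs) <= budget:
        return token_vecs
    normed = token_vecs / np.linalg.norm(token_vecs, axis=1, keepdims=True)
    Z = linkage(normed, method="ward")
    labels = fcluster(Z, t=budget, criterion="maxclust")
    pooled = np.stack([normed[labels == c].mean(axis=0) for c in np.unique(labels)])
    # Re-normalize cluster means so downstream MaxSim stays cosine-based
    return pooled / np.linalg.norm(pooled, axis=1, keepdims=True)

vecs = np.random.default_rng(0).normal(size=(200, 64)).astype(np.float32)
pooled = h_pool(vecs, budget=32)
print(pooled.shape)
```

Queries would bypass this step entirely, keeping one vector per query token.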

Results on MSR-VTT

The Baseline (Ours, uncompressed) and H-Pool rows in the table below come from this checkpoint.

| Method | Tokens | R@1 | R@10 | nDCG@10 |
|---|---|---|---|---|
| OmniEmbed-7B | 1 | 51.5 | 83.2 | 67.1 |
| Video-ColBERT | 26 | 51.5 | 85.5 | 67.7 |
| Baseline (Ours, uncompressed) | 1318 | 55.7 | 88.3 | 71.9 |
| SeqResize | 32 | 53.3 | 86.9 | 69.9 |
| MemTok | 32 | 54.2 | 86.4 | 69.9 |
| H-Pool (this checkpoint) | 32 | 54.1 | 87.3 | 70.4 |
| AGC | 32 | 56.9 | 87.0 | 71.5 |

Model Details

| Property | Value |
|---|---|
| Initial weights | Qwen2.5-VL-3B-Instruct |
| Architecture | Qwen2.5-VL with bidirectional attention |
| Hidden dimension | 2048 |
| Pooling | colbert (full) or hierarchical_clustering (H-Pool) |
| Budget | H-Pool: 32 vectors per document |
| Scoring | ColBERT-style MaxSim (late interaction) |
| Normalization | L2-normalized embeddings |
| Query prefix | "Query: " |
| Passage prefix | "Passage: " |
| Precision | bfloat16 |
| Training video frames | 24 |
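For reference, ColBERT-style MaxSim scores a query against a document by taking, for each query token, the best cosine match among the document vectors, then summing over query tokens. A minimal NumPy sketch (illustrative; the checkpoint's own `compute_similarity` handles batching and masking):

```python
import numpy as np

def maxsim_score(query_vecs: np.ndarray, doc_vecs: np.ndarray) -> float:
    """Late-interaction MaxSim over L2-normalized vectors.

    query_vecs: (Q, d), doc_vecs: (D, d). For each query token take the
    maximum dot product against all doc vectors, then sum over query tokens.
    """
    sim = query_vecs @ doc_vecs.T        # (Q, D) cosine similarities
    return float(sim.max(axis=1).sum())  # max over doc tokens, sum over query

# Toy example: orthonormal tokens, each query token has an exact doc match
q = np.eye(3, 8, dtype=np.float32)  # 3 query tokens, dim 8
d = np.eye(5, 8, dtype=np.float32)  # 5 doc tokens
print(maxsim_score(q, d))  # 3.0
```

The same scoring applies whether the document side is the full token set (colbert) or the 32 pooled vectors (hierarchical_clustering).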

Usage

Use Full ColBERT (uncompressed) with pooling="colbert", or H-Pool with pooling="hierarchical_clustering" and num_repr_vectors=32. Same checkpoint; only the pooling argument changes.

```python
import torch
from transformers import AutoProcessor
from qwen_vl_utils import process_vision_info

from src.arguments import ModelArguments
from src.encoder.multivec_encoder import MultiVecEncoder
from src.models.qwen2_5_vl_embed.qwen2_5_vl_embed import Qwen2_5ForEmbedding

MODEL_ID = "hltcoe/ColBERT_qwen2.5-vl_msrvtt"
VIDEO_PATH = "PLACEHOLDER"

# Full (uncompressed) ColBERT:
# model_args = ModelArguments(model_name_or_path=MODEL_ID, pooling="colbert", normalize=True, attn_implementation="flash_attention_2")
# H-Pool (32 vectors):
model_args = ModelArguments(
    model_name_or_path=MODEL_ID,
    pooling="hierarchical_clustering",
    normalize=True,
    num_repr_vectors=32,
    attn_implementation="flash_attention_2",
)

processor = AutoProcessor.from_pretrained(MODEL_ID)
model = MultiVecEncoder.load(
    Qwen2_5ForEmbedding,
    model_args,
    attn_implementation=model_args.attn_implementation,
    dtype=torch.bfloat16,
)
model = model.to("cuda").eval()

# --- Encode a video document ---
passage_messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Passage: "},
            {"type": "video", "video": VIDEO_PATH, "nframes": 24, "max_pixels": 84672, "min_pixels": 75264},
        ],
    }
]
text = processor.apply_chat_template(passage_messages, tokenize=False, add_generation_prompt=False)
image_inputs, video_inputs = process_vision_info(passage_messages)
passage_inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs, padding=True, return_tensors="pt",
).to("cuda")

with torch.amp.autocast(device_type="cuda", dtype=torch.bfloat16):
    with torch.inference_mode():
        doc_embeddings, doc_mask = model.encode(passage_inputs, is_query=False)
        print(doc_embeddings.shape)
        # colbert: (1, seq_len, 2048); hierarchical_clustering: (1, 32, 2048)

# --- Encode a text query ---
query_messages = [{"role": "user", "content": [{"type": "text", "text": "Query: a person is cooking"}]}]
query_text = processor.apply_chat_template(query_messages, tokenize=False, add_generation_prompt=False)
query_inputs = processor(text=[query_text], padding=True, return_tensors="pt").to("cuda")

with torch.amp.autocast(device_type="cuda", dtype=torch.bfloat16):
    with torch.inference_mode():
        query_embeddings, query_mask = model.encode(query_inputs, is_query=True)

# --- ColBERT MaxSim scoring ---
score = model.compute_similarity(query_embeddings, doc_embeddings, query_mask, doc_mask)
print(f"Similarity score: {score.item():.4f}")
```

Command line usage

To run inference and evaluation from the command line, see the Quick Start section.

Citation

```bibtex
@misc{qin2026multivectorindexcompressionmodality,
      title={Multi-Vector Index Compression in Any Modality},
      author={Hanxiang Qin and Alexander Martin and Rohan Jha and Chunsheng Zuo and Reno Kriz and Benjamin Van Durme},
      year={2026},
      eprint={2602.21202},
      archivePrefix={arXiv},
      primaryClass={cs.IR},
      url={https://arxiv.org/abs/2602.21202},
}
```