# mlx-community/Qwen3-VL-Reranker-2B-bf16

The model mlx-community/Qwen3-VL-Reranker-2B-bf16 was converted to MLX format from Qwen/Qwen3-VL-Reranker-2B using mlx-lm version 0.1.0.
## Use with mlx

```bash
pip install mlx-embeddings
```
```python
from mlx_embeddings import load, generate
import mlx.core as mx

model, processor = load("mlx-community/Qwen3-VL-Reranker-2B-bf16")

# For image-text embeddings
images = [
    "./images/cats.jpg",  # cats
]
texts = ["a photo of cats", "a photo of a desktop setup", "a photo of a person"]

# Process all image-text pairs
outputs = generate(model, processor, texts, images=images)
logits_per_image = outputs.logits_per_image
probs = mx.sigmoid(logits_per_image)  # match probabilities for each image

for i, image in enumerate(images):
    print(f"Image {i+1}:")
    for j, text in enumerate(texts):
        print(f"  {probs[i][j]:.1%} match with '{text}'")
    print()
```
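Once you have per-image match probabilities, reranking the candidate texts reduces to a simple sort by score. A minimal sketch in plain Python, using hypothetical score values standing in for the model's output:

```python
# Candidate texts and hypothetical sigmoid scores for one image,
# standing in for a single row of `probs` from the snippet above.
texts = ["a photo of cats", "a photo of a desktop setup", "a photo of a person"]
scores = [0.93, 0.04, 0.02]  # assumed values for illustration

# Pair each text with its score and sort descending to get a ranking.
ranking = sorted(zip(texts, scores), key=lambda pair: pair[1], reverse=True)
for text, score in ranking:
    print(f"{score:.1%}  {text}")
```

The top entry of `ranking` is the text the reranker considers the best match for the image.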
Model size: 2B params · Tensor type: BF16
## Model tree for mlx-community/Qwen3-VL-Reranker-2B-bf16

Base model: Qwen/Qwen3-VL-2B-Instruct