mlx-community/Qwen3-Reranker-4B-mxfp8

The model mlx-community/Qwen3-Reranker-4B-mxfp8 was converted to MLX format from Qwen/Qwen3-Reranker-4B using mlx-embeddings version 0.0.3.

Use with mlx

pip install mlx-embeddings
from mlx_embeddings import load, generate
import mlx.core as mx

model, tokenizer = load("mlx-community/Qwen3-Reranker-4B-mxfp8")

# For text embeddings
output = generate(model, tokenizer, texts=["I like grapes", "I like fruits"])
embeddings = output.text_embeds  # Normalized embeddings

# Compute dot product between normalized embeddings
similarity_matrix = mx.matmul(embeddings, embeddings.T)

print("Similarity matrix between texts:")
print(similarity_matrix)
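
Because this checkpoint is a reranker, the same embeddings can be used to score candidate documents against a query. The sketch below continues from the snippet above; the query and document strings are illustrative placeholders, and ranking by dot-product similarity over these normalized embeddings is one simple approach, not the model's official reranking recipe.

# Hedged sketch: rerank candidate documents against a query by embedding
# them together and sorting by similarity. Strings here are placeholders.
query = "Which sentence is about fruit?"
documents = ["I like grapes", "The weather is nice today", "I like fruits"]

output = generate(model, tokenizer, texts=[query] + documents)
embeds = output.text_embeds

# Dot product equals cosine similarity because the embeddings are normalized.
scores = mx.matmul(embeds[:1], embeds[1:].T)[0]

# Rank documents from most to least similar to the query.
order = mx.argsort(-scores)
for i in order.tolist():
    print(f"{scores[i].item():.3f}  {documents[i]}")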

Model size: 1B params
Tensor types: U8, U32, BF16
Weights: Safetensors (MLX, quantized)

Model tree for mlx-community/Qwen3-Reranker-4B-mxfp8

Base model: Qwen/Qwen3-4B-Base → finetuned as Qwen/Qwen3-Reranker-4B → quantized to this model