Tags: Text Ranking · Transformers · Safetensors · MLX · qwen3_vl · image-text-to-text · multimodal rerank · text rerank · 4-bit precision
Instructions for using arthurcollet/Qwen3-VL-Reranker-2B-mlx-nvfp4 with libraries, inference providers, notebooks, and local apps.
- Libraries
- Transformers
How to use arthurcollet/Qwen3-VL-Reranker-2B-mlx-nvfp4 with Transformers:
```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("arthurcollet/Qwen3-VL-Reranker-2B-mlx-nvfp4")
model = AutoModelForImageTextToText.from_pretrained("arthurcollet/Qwen3-VL-Reranker-2B-mlx-nvfp4")
```
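The snippet above is the Hub's auto-generated loader. As a minimal sketch of what a generic image-text-to-text call then looks like with the Transformers API — assuming this checkpoint loads in Transformers at all (the repo is an MLX conversion, so the MLX path below may be the practical route), and with an illustrative image URL; the reranker-specific scoring prompt is described in the original model card:

```python
# Minimal sketch: generic image-text-to-text inference with Transformers.
# Assumes the checkpoint loads via AutoModelForImageTextToText; the image
# URL below is a hypothetical placeholder.
from transformers import AutoProcessor, AutoModelForImageTextToText

model_id = "arthurcollet/Qwen3-VL-Reranker-2B-mlx-nvfp4"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(model_id)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/doc_page.png"},  # hypothetical image
            {"type": "text", "text": "Describe this image."},
        ],
    }
]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
)
output_ids = model.generate(**inputs, max_new_tokens=64)
# Strip the prompt tokens before decoding the generated continuation.
generated = output_ids[:, inputs["input_ids"].shape[1]:]
print(processor.batch_decode(generated, skip_special_tokens=True)[0])
```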
- MLX

How to use arthurcollet/Qwen3-VL-Reranker-2B-mlx-nvfp4 with MLX:
```shell
# Download the model from the Hub
pip install "huggingface_hub[hf_xet]"
huggingface-cli download --local-dir Qwen3-VL-Reranker-2B-mlx-nvfp4 arthurcollet/Qwen3-VL-Reranker-2B-mlx-nvfp4
```
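The same download can be scripted with huggingface_hub's `snapshot_download`; the `local_dir` here mirrors the CLI flag above:

```python
# Programmatic equivalent of the CLI download above, using huggingface_hub.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="arthurcollet/Qwen3-VL-Reranker-2B-mlx-nvfp4",
    local_dir="Qwen3-VL-Reranker-2B-mlx-nvfp4",  # same target directory as the CLI example
)
```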
- Notebooks: Google Colab, Kaggle
- Local Apps: LM Studio
arthurcollet/Qwen3-VL-Reranker-2B-mlx-nvfp4
This model was converted to MLX format from Qwen/Qwen3-VL-Reranker-2B using mlx-vlm version 0.3.11.
Refer to the original model card for more details on the model.
Use with mlx
```shell
pip install -U mlx-vlm
python -m mlx_vlm.generate --model arthurcollet/Qwen3-VL-Reranker-2B-mlx-nvfp4 --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
```
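For use from Python rather than the CLI, mlx-vlm exposes `load` and `generate` helpers; a minimal sketch mirroring the command above (the image path and prompt are placeholders):

```python
# Minimal sketch using mlx-vlm's Python API (mirrors the CLI call above).
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

model_path = "arthurcollet/Qwen3-VL-Reranker-2B-mlx-nvfp4"
model, processor = load(model_path)
config = load_config(model_path)

images = ["path/to/image.png"]  # placeholder, like <path_to_image> in the CLI example
prompt = "Describe this image."

# Wrap the prompt in the model's chat template before generating.
formatted_prompt = apply_chat_template(processor, config, prompt, num_images=len(images))
output = generate(model, processor, formatted_prompt, images, verbose=False)
print(output)
```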
Downloads last month: 24
Model size: 0.8B params
Tensor types: U8 · U32 · BF16
Inference Providers

This model isn't deployed by any Inference Provider.
Model tree for arthurcollet/Qwen3-VL-Reranker-2B-mlx-nvfp4

Base model: Qwen/Qwen3-VL-2B-Instruct