---
base_model: Qwen/Qwen3-Reranker-8B
language:
- en
license: apache-2.0
pipeline_tag: text-classification
library_name: furiosa-llm
tags:
- furiosa-ai
- qwen
- qwen-3
- reranker
---
# Model Overview
- Model Architecture: Qwen3
- Input: Text (query-document pairs)
- Output: Relevance score
- Model Optimizations:
  - Maximum Context Length: 8K tokens
  - Maximum Sequence Length: 8,192 tokens
- Task Type: Reranking (relevance scoring)
- Intended Use Cases: Document reranking for retrieval-augmented generation (RAG) and search applications. Same as Qwen/Qwen3-Reranker-8B.
- Release Date: 01/20/2026
- Version: v2026.1
- License(s): Apache 2.0 License
- Supported Inference Engine(s): Furiosa LLM
- Supported Hardware Compatibility: FuriosaAI RNGD
- Preferred Operating System(s): Linux
## Description
This model is the pre-compiled version of Qwen/Qwen3-Reranker-8B, a reranker model that scores the relevance between a query and a document.
# Usage
To run this model with Furiosa-LLM, follow the example below after installing Furiosa-LLM and its prerequisites.
```python
from furiosa_llm import LLM

# Load the pre-compiled artifact and score query-document pairs.
llm = LLM.from_artifacts("furiosa-ai/Qwen3-Reranker-8B")
scores = llm.score([("query", "document1"), ("query", "document2")])
```
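In a RAG or search pipeline, the scores returned above are typically used to sort candidate documents and keep only the most relevant ones before generation. The sketch below shows that reranking step in plain Python; `dummy_score` is a hypothetical stand-in scorer for illustration only — in practice you would pass `llm.score` from the snippet above, which is assumed to return one float per (query, document) pair.

```python
def rerank(query, documents, score_fn, top_k=3):
    """Score each (query, document) pair and return the top_k documents.

    score_fn takes a list of (query, document) tuples and returns a
    list of floats, one relevance score per pair (as llm.score does).
    """
    scores = score_fn([(query, doc) for doc in documents])
    ranked = sorted(zip(documents, scores), key=lambda pair: pair[1], reverse=True)
    return ranked[:top_k]

# Hypothetical stand-in scorer: counts overlapping words between query
# and document. Replace with llm.score when running on RNGD hardware.
def dummy_score(pairs):
    return [float(len(set(q.split()) & set(d.split()))) for q, d in pairs]

top = rerank(
    "capital of France",
    ["Paris is the capital of France.", "Berlin is in Germany."],
    dummy_score,
    top_k=1,
)
```

Each entry in the result is a `(document, score)` pair, so the highest-scoring documents can be passed directly to the downstream generator.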