Instructions for using Qwen/Qwen3-Reranker-0.6B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers

How to use Qwen/Qwen3-Reranker-0.6B with Transformers:

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-Reranker-0.6B")
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-Reranker-0.6B")
```

- sentence-transformers
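The snippet above only loads the weights. Qwen3 rerankers expose a causal-LM head and score a (query, document) pair by asking the model a yes/no relevance question, then reading the logit of the "yes" token. The exact template is documented in the model card; the system text and chat markers below are assumptions written for illustration, not copied from this page, so the sketch only shows how pair formatting can be structured:

```python
# Sketch of yes/no prompt construction for a Qwen3-Reranker-style model.
# PREFIX/SUFFIX strings are assumptions for illustration; consult the
# model card for the exact template before relying on them.

PREFIX = (
    "<|im_start|>system\nJudge whether the Document meets the requirements "
    "based on the Query and the Instruct provided. Note that the answer can "
    'only be "yes" or "no".<|im_end|>\n<|im_start|>user\n'
)
SUFFIX = "<|im_end|>\n<|im_start|>assistant\n<think>\n\n</think>\n\n"


def format_pair(
    query: str,
    document: str,
    instruction: str = "Given a web search query, retrieve relevant passages that answer the query",
) -> str:
    """Wrap one (query, document) pair in the reranker's yes/no prompt."""
    body = f"<Instruct>: {instruction}\n<Query>: {query}\n<Document>: {document}"
    return PREFIX + body + SUFFIX


prompt = format_pair(
    "Which planet is known as the Red Planet?",
    "Mars, known for its reddish appearance, is often referred to as the Red Planet.",
)
```

The resulting string would be tokenized and passed through the model; the relevance score is then derived from the "yes"/"no" token probabilities at the final position.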
How to use Qwen/Qwen3-Reranker-0.6B with sentence-transformers:

```python
from sentence_transformers import CrossEncoder

model = CrossEncoder("Qwen/Qwen3-Reranker-0.6B")

query = "Which planet is known as the Red Planet?"
passages = [
    "Venus is often called Earth's twin because of its similar size and proximity.",
    "Mars, known for its reddish appearance, is often referred to as the Red Planet.",
    "Jupiter, the largest planet in our solar system, has a prominent red spot.",
    "Saturn, famous for its rings, is sometimes mistaken for the Red Planet.",
]

scores = model.predict([(query, passage) for passage in passages])
print(scores)
```

- Notebooks
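`model.predict` returns one relevance score per (query, passage) pair, in input order; to actually rerank, you sort the passages by score. A minimal helper in plain Python (independent of the model, with made-up scores standing in for real `predict` output):

```python
def rerank(passages, scores, top_k=None):
    """Return passages ordered by descending relevance score."""
    ranked = sorted(zip(passages, scores), key=lambda pair: pair[1], reverse=True)
    if top_k is not None:
        ranked = ranked[:top_k]
    return [passage for passage, _ in ranked]


# Hypothetical scores matching the passage order above (Venus, Mars, Jupiter, Saturn):
passages = ["Venus ...", "Mars ...", "Jupiter ...", "Saturn ..."]
scores = [0.02, 0.97, 0.11, 0.05]  # made-up values for illustration
print(rerank(passages, scores, top_k=2))  # ['Mars ...', 'Jupiter ...']
```

In practice you would pass the `scores` array from `model.predict` straight into a helper like this, keeping only the top few passages for a downstream reader or LLM.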
- Google Colab
- Kaggle
Community discussions:

- Add `id2label` and `label2id` configs (#23, opened about 2 months ago by kozistr)
- Working GGUF for llama.cpp (native Windows/Linux, no WSL needed) (#22, 10 comments, opened 2 months ago by Voodisss)
- Qwen/Qwen3-Reranker-0.6B compatibility question (#21, opened 3 months ago by dqdw)
- Performance optimization recommendations for Qwen3 Reranker 0.6B on A100/H100 GPUs (#20, 1 comment, opened 5 months ago by rajshah14)
- testing (#19, opened 7 months ago by weiseng188)
- Update README.md (#18, opened 8 months ago by aynot)
- Working 0.6B-Reranker Quants (#16, opened 9 months ago by JonathanMiddleton)
- Which inference service can run qwen3-reranker-0.6B now? (#15, 1 comment, opened 10 months ago by wangruiai2023)
- 06b_reranker (#14, opened 10 months ago by linlxiu)
- Why use a prefix and suffix as input? (#13, opened 11 months ago by FrankQ)
- Training supported (#12, opened 11 months ago by russwest404)
- Why is the 0.6B model not better than v2-m3 on my test data? (#11, 2 comments, opened 11 months ago by hookzeng)
- max_len_tokens (#10, opened 11 months ago by mansurealism)
- Are there plans to convert this into a CrossEncoder callable via SentenceTransformers? (#9, 2 comments, opened 11 months ago by oaksharks)
- ValueError: Cannot handle batch sizes > 1 if no padding token is defined. Raised when running `query_engine.query(query)` in LlamaIndex (#8, 1 comment, opened 11 months ago by theforgehermit)
- vLLM online serving (#7, 4 comments, opened 11 months ago by aperez900907)
- Are reranker0.6b and embedding0.6b the same model weights? (#6, 2 comments, opened 11 months ago by chaochaoli)
- Add pipeline tag, link to paper and project page (#4, opened 11 months ago by nielsr)
- Converting a reranker model to a single-label classification model (#3, 4 comments, opened 11 months ago by ccdv)
- TypeError: argument of type 'NoneType' is not iterable (#2, 1 comment, opened 11 months ago by galabala)
- vLLM OpenAI API (#1, 2 comments, opened 11 months ago by prudant)