How to use Qwen/Qwen3-Reranker-4B with Transformers:
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-Reranker-4B")
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-Reranker-4B")
```
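Note that loading the checkpoint this way only gives you a causal LM; it does not by itself produce relevance scores. The model card's recipe prompts the model to answer "yes" or "no" about query–document relevance and turns the logits of those two tokens into a score. A minimal sketch of just that scoring step, with hypothetical dummy logits standing in for real model output:

```python
import math

# Hypothetical last-position logits for the "yes" and "no" tokens.
# In real use these come from the 4B model's output; the values here
# are made up purely to illustrate the arithmetic.
yes_logit, no_logit = 2.0, -1.0

# Relevance score = softmax probability of "yes" over {"yes", "no"}.
score = math.exp(yes_logit) / (math.exp(yes_logit) + math.exp(no_logit))
print(round(score, 4))  # a value in (0, 1); higher means more relevant
```

The two-way softmax reduces to a sigmoid of the logit difference, so the score is monotone in how strongly the model prefers "yes" over "no".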
How to use Qwen/Qwen3-Reranker-4B with sentence-transformers:
```python
from sentence_transformers import CrossEncoder

model = CrossEncoder("Qwen/Qwen3-Reranker-4B")

query = "Which planet is known as the Red Planet?"
passages = [
    "Venus is often called Earth's twin because of its similar size and proximity.",
    "Mars, known for its reddish appearance, is often referred to as the Red Planet.",
    "Jupiter, the largest planet in our solar system, has a prominent red spot.",
    "Saturn, famous for its rings, is sometimes mistaken for the Red Planet.",
]

scores = model.predict([(query, passage) for passage in passages])
print(scores)
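`predict` returns one score per (query, passage) pair, so reranking is just sorting passages by that score. A minimal sketch, using made-up dummy scores in place of the model's real output so it runs without downloading the checkpoint:

```python
passages = [
    "Venus is often called Earth's twin because of its similar size and proximity.",
    "Mars, known for its reddish appearance, is often referred to as the Red Planet.",
    "Jupiter, the largest planet in our solar system, has a prominent red spot.",
    "Saturn, famous for its rings, is sometimes mistaken for the Red Planet.",
]

# Hypothetical scores standing in for model.predict(...); in real use
# these come from the CrossEncoder above.
scores = [0.12, 0.97, 0.35, 0.28]

# Sort passages by descending score: the top entry is the best match.
ranked = sorted(zip(scores, passages), reverse=True)
for score, passage in ranked:
    print(f"{score:.2f}  {passage}")
```

With real scores the ordering would come entirely from the model; the sorting step itself is unchanged.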
For example, this one:
whereas the bge series has no such problem.
Even with the instruction prompt added, it still doesn't work. Does anyone have ideas about this?
In our Chinese ranking scenario, even with the official instruction prompt added, the actual results are still worse than BAAI/bge-reranker-v2-m3.