How to use from the sentence-transformers library
```python
from sentence_transformers import CrossEncoder

model = CrossEncoder("webis/tiny-bert-ranker")

query = "Which planet is known as the Red Planet?"
passages = [
    "Venus is often called Earth's twin because of its similar size and proximity.",
    "Mars, known for its reddish appearance, is often referred to as the Red Planet.",
    "Jupiter, the largest planet in our solar system, has a prominent red spot.",
    "Saturn, famous for its rings, is sometimes mistaken for the Red Planet."
]

# Score each (query, passage) pair; higher scores mean higher estimated relevance.
scores = model.predict([(query, passage) for passage in passages])
print(scores)
```
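The model returns one raw relevance score per (query, passage) pair; to obtain an actual ranking, sort the passages by score in descending order. A minimal sketch using hypothetical score values in place of the model's output (the real numbers depend on the checkpoint):

```python
passages = [
    "Venus is often called Earth's twin because of its similar size and proximity.",
    "Mars, known for its reddish appearance, is often referred to as the Red Planet.",
    "Jupiter, the largest planet in our solar system, has a prominent red spot.",
    "Saturn, famous for its rings, is sometimes mistaken for the Red Planet.",
]
# Hypothetical scores standing in for model.predict(...) output.
scores = [0.12, 0.93, 0.30, 0.25]

# Passage indices ordered best-first.
ranking = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
for rank, idx in enumerate(ranking, start=1):
    print(f"{rank}. ({scores[idx]:.2f}) {passages[idx]}")
```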

tiny-bert-ranker model card

This model is a fine-tuned version of prajjwal1/bert-tiny as part of our submission to ReNeuIR 2024.

Model Details

Model Description

The model is based on the pre-trained prajjwal1/bert-tiny and fine-tuned on a 1 GB subset of the MS MARCO Train Triples Small dataset.

Tiny-bert-ranker is part of our investigation into the tradeoffs between efficiency and effectiveness in ranking models. This approach does not involve BM25 score injection or distillation.
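For context, each MS MARCO training triple pairs a query with one relevant and one non-relevant passage, and a cross-encoder is typically fine-tuned by splitting every triple into a positive and a negative (query, passage) example. A minimal sketch of that split; the triple text and the helper function are illustrative, not the team's actual pipeline:

```python
# A hypothetical MS MARCO-style triple: (query, positive passage, negative passage).
triple = (
    "which planet is the red planet",
    "Mars, known for its reddish appearance, is often referred to as the Red Planet.",
    "Venus is often called Earth's twin because of its similar size and proximity.",
)

def pairs_from_triple(triple):
    """Turn one triple into two labeled (query, passage, label) examples."""
    query, positive, negative = triple
    return [(query, positive, 1), (query, negative, 0)]

examples = pairs_from_triple(triple)
for query, passage, label in examples:
    print(label, passage)
```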

  • Developed by: Team FSU at ReNeuIR 2024
  • Model type: cross-encoder (sequence classification)
  • License: MIT
  • Finetuned from model: prajjwal1/bert-tiny
  • Model size: 4.39M params (Safetensors, F32)