Use from the sentence-transformers library
```python
from sentence_transformers import CrossEncoder

model = CrossEncoder("cross-encoder/quora-roberta-base")

query = "Which planet is known as the Red Planet?"
passages = [
    "Venus is often called Earth's twin because of its similar size and proximity.",
    "Mars, known for its reddish appearance, is often referred to as the Red Planet.",
    "Jupiter, the largest planet in our solar system, has a prominent red spot.",
    "Saturn, famous for its rings, is sometimes mistaken for the Red Planet.",
]

scores = model.predict([(query, passage) for passage in passages])
print(scores)
```

Cross-Encoder for Quora Duplicate Questions Detection

This model was trained using the SentenceTransformers CrossEncoder class.

Training Data

This model was trained on the Quora Duplicate Questions dataset. It predicts a score between 0 and 1 indicating how likely the two given questions are duplicates of each other.

Note: The model is not suitable for estimating general question similarity. For example, the two questions "How to learn Java" and "How to learn Python" will receive a rather low score, because they are related but not duplicates.

Usage and Performance

Pre-trained models can be used like this:

```python
from sentence_transformers import CrossEncoder

model = CrossEncoder("cross-encoder/quora-roberta-base")
scores = model.predict([("Question 1", "Question 2"), ("Question 3", "Question 4")])
```

You can also use this model without sentence_transformers, using only the Transformers AutoModel classes.
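A minimal sketch of that route, assuming the checkpoint loads as a sequence-classification model with a single logit per pair (a sigmoid is applied here to mirror the 0–1 scores described above):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("cross-encoder/quora-roberta-base")
tokenizer = AutoTokenizer.from_pretrained("cross-encoder/quora-roberta-base")

pairs = [("Question 1", "Question 2"), ("Question 3", "Question 4")]
# Tokenize the two questions of each pair together as one sequence.
features = tokenizer(
    [q1 for q1, _ in pairs],
    [q2 for _, q2 in pairs],
    padding=True,
    truncation=True,
    return_tensors="pt",
)

model.eval()
with torch.no_grad():
    # One logit per pair; sigmoid maps it to a 0-1 duplicate score.
    scores = torch.sigmoid(model(**features).logits.squeeze(-1))
print(scores)
```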


