
cross-encoder/stsb-roberta-base

Tags: Text Ranking · sentence-transformers · PyTorch · JAX · ONNX · Safetensors · OpenVINO · Transformers · English · roberta · text-classification · text-embeddings-inference

Instructions for using cross-encoder/stsb-roberta-base with libraries, inference providers, notebooks, and local apps.

  • Libraries
  • sentence-transformers

    How to use cross-encoder/stsb-roberta-base with sentence-transformers:

    from sentence_transformers import CrossEncoder
    
    # Load the cross-encoder; it scores a (query, passage) pair jointly
    # rather than embedding each text separately.
    model = CrossEncoder("cross-encoder/stsb-roberta-base")
    
    query = "Which planet is known as the Red Planet?"
    passages = [
    	"Venus is often called Earth's twin because of its similar size and proximity.",
    	"Mars, known for its reddish appearance, is often referred to as the Red Planet.",
    	"Jupiter, the largest planet in our solar system, has a prominent red spot.",
    	"Saturn, famous for its rings, is sometimes mistaken for the Red Planet."
    ]
    
    # Score each (query, passage) pair; higher scores mean higher similarity.
    scores = model.predict([(query, passage) for passage in passages])
    print(scores)
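    The scores above can also be used to rank the passages for the query. A minimal sketch, reusing the inputs from the snippet; the sorting step is an illustration, not part of the model card:

    ```python
    from sentence_transformers import CrossEncoder

    # Load the same cross-encoder as in the snippet above.
    model = CrossEncoder("cross-encoder/stsb-roberta-base")

    query = "Which planet is known as the Red Planet?"
    passages = [
        "Venus is often called Earth's twin because of its similar size and proximity.",
        "Mars, known for its reddish appearance, is often referred to as the Red Planet.",
        "Jupiter, the largest planet in our solar system, has a prominent red spot.",
        "Saturn, famous for its rings, is sometimes mistaken for the Red Planet.",
    ]

    # Score each pair, then sort passages by descending similarity score.
    scores = model.predict([(query, passage) for passage in passages])
    ranked = sorted(zip(scores.tolist(), passages), reverse=True)
    for score, passage in ranked:
        print(f"{score:.3f}  {passage}")
    ```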
  • Transformers

    How to use cross-encoder/stsb-roberta-base with Transformers:

    # Load model directly
    from transformers import AutoTokenizer, AutoModelForSequenceClassification
    
    tokenizer = AutoTokenizer.from_pretrained("cross-encoder/stsb-roberta-base")
    model = AutoModelForSequenceClassification.from_pretrained("cross-encoder/stsb-roberta-base")
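    With plain Transformers the forward pass is manual. A minimal sketch; the sentence pair is illustrative, and the final sigmoid mirrors what sentence-transformers applies by default for single-label cross-encoders (an assumption not stated on this page, so check your version):

    ```python
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    tokenizer = AutoTokenizer.from_pretrained("cross-encoder/stsb-roberta-base")
    model = AutoModelForSequenceClassification.from_pretrained("cross-encoder/stsb-roberta-base")
    model.eval()

    # Tokenize the two sentences as one pair; the cross-encoder reads both at once.
    inputs = tokenizer(
        "A man is eating food.",
        "A man is eating a piece of bread.",
        return_tensors="pt",
        truncation=True,
    )

    with torch.no_grad():
        logit = model(**inputs).logits[0, 0]  # single regression output

    # Map the raw logit into 0..1, matching CrossEncoder.predict's default
    # sigmoid for single-label models (assumption; verify for your version).
    score = torch.sigmoid(logit).item()
    print(score)
    ```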
  • Notebooks
  • Google Colab
  • Kaggle
  • 5 contributors
History: 9 commits
Latest commit: tomaarsen (HF Staff), "Push tokenizer again" (f201854, verified, about 1 year ago)
  • .gitattributes (445 Bytes): Adding `safetensors` variant of this model (#1), over 1 year ago
  • CECorrelationEvaluator_sts-dev_results.csv (310 Bytes): upload, over 5 years ago
  • README.md (1.14 kB): Update model metadata, about 1 year ago
  • config.json (608 Bytes): upload, over 5 years ago
  • flax_model.msgpack (499 MB): upload flax model, almost 5 years ago
  • merges.txt (456 kB): upload, over 5 years ago
  • model.safetensors (499 MB): Adding `safetensors` variant of this model (#1), over 1 year ago
  • pytorch_model.bin (499 MB): upload, over 5 years ago
  • special_tokens_map.json (1.01 kB): Push tokenizer again, about 1 year ago
  • tokenizer.json (3.56 MB): Push tokenizer again, about 1 year ago
  • tokenizer_config.json (1.34 kB): Push tokenizer again, about 1 year ago
  • vocab.json (798 kB): Push tokenizer again, about 1 year ago