
cross-encoder/stsb-roberta-large

Tags: Text Ranking · sentence-transformers · PyTorch · JAX · ONNX · Safetensors · OpenVINO · Transformers · English · roberta · text-classification · text-embeddings-inference

Instructions for using cross-encoder/stsb-roberta-large with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.

  • Libraries
  • sentence-transformers

    How to use cross-encoder/stsb-roberta-large with sentence-transformers:

    from sentence_transformers import CrossEncoder
    
    # Load the cross-encoder; it scores a (query, passage) pair directly.
    model = CrossEncoder("cross-encoder/stsb-roberta-large")
    
    query = "Which planet is known as the Red Planet?"
    passages = [
    	"Venus is often called Earth's twin because of its similar size and proximity.",
    	"Mars, known for its reddish appearance, is often referred to as the Red Planet.",
    	"Jupiter, the largest planet in our solar system, has a prominent red spot.",
    	"Saturn, famous for its rings, is sometimes mistaken for the Red Planet."
    ]
    
    # Score every (query, passage) pair in a single batch.
    scores = model.predict([(query, passage) for passage in passages])
    print(scores)
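
    The returned scores line up with passages by index, so re-ranking is just a sort. A minimal follow-up sketch (plain Python, illustrative only, not part of the original snippet):

    # Pair each passage with its score and print them best-first.
    ranked = sorted(zip(scores, passages), key=lambda pair: pair[0], reverse=True)
    for score, passage in ranked:
    	print(f"{score:.4f}\t{passage}")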
  • Transformers

    How to use cross-encoder/stsb-roberta-large with Transformers:

    # Load model directly
    from transformers import AutoTokenizer, AutoModelForSequenceClassification
    
    tokenizer = AutoTokenizer.from_pretrained("cross-encoder/stsb-roberta-large")
    model = AutoModelForSequenceClassification.from_pretrained("cross-encoder/stsb-roberta-large")
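
    Loading the model is only half the job; scoring a sentence pair then takes a forward pass. A minimal sketch, assuming PyTorch is installed and reusing the tokenizer and model loaded above (the sentence pair below is illustrative; this STS cross-encoder is a regression head that emits a similarity score per pair):

    import torch
    
    # Tokenize one (sentence_a, sentence_b) pair as a single cross-encoder input.
    features = tokenizer(
    	["Mars is known for its reddish appearance."],
    	["Mars is often referred to as the Red Planet."],
    	padding=True, truncation=True, return_tensors="pt",
    )
    
    # Run a forward pass without tracking gradients.
    model.eval()
    with torch.no_grad():
    	score = model(**features).logits  # similarity logit(s) for the input pair
    print(score)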
  • Notebooks
  • Google Colab
  • Kaggle
stsb-roberta-large
  • 6 contributors
History: 9 commits
Latest commit by tomaarsen (HF Staff): Push tokenizer again (a088788, verified, about 1 year ago)
  • .gitattributes · 445 Bytes · Adding `safetensors` variant of this model (#1) · over 1 year ago
  • CECorrelationEvaluator_sts-dev_results.csv · 223 Bytes · upload · over 5 years ago
  • README.md · 1.14 kB · Update model metadata · about 1 year ago
  • config.json · 629 Bytes · upload · over 5 years ago
  • flax_model.msgpack · 1.42 GB (xet) · upload flax model · almost 5 years ago
  • merges.txt · 456 kB · upload · over 5 years ago
  • model.safetensors · 1.42 GB (xet) · Adding `safetensors` variant of this model (#1) · over 1 year ago
  • pytorch_model.bin · 1.42 GB (xet) · upload · over 5 years ago
  • special_tokens_map.json · 1.01 kB · Push tokenizer again · about 1 year ago
  • tokenizer.json · 3.56 MB · Push tokenizer again · about 1 year ago
  • tokenizer_config.json · 1.34 kB · Push tokenizer again · about 1 year ago
  • vocab.json · 798 kB · Push tokenizer again · about 1 year ago