
cross-encoder-testing/reranker-bert-tiny-gooaq-bce-v6

Tags: Text Ranking · sentence-transformers · Safetensors · English · bert · cross-encoder · text-classification · Generated from Trainer · dataset_size:578402 · loss:BinaryCrossEntropyLoss · text-embeddings-inference

Instructions to use cross-encoder-testing/reranker-bert-tiny-gooaq-bce-v6 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.

  • Libraries
  • sentence-transformers

    How to use cross-encoder-testing/reranker-bert-tiny-gooaq-bce-v6 with sentence-transformers:

    from sentence_transformers import CrossEncoder

    # Load the reranker from the Hugging Face Hub
    model = CrossEncoder("cross-encoder-testing/reranker-bert-tiny-gooaq-bce-v6")

    query = "Which planet is known as the Red Planet?"
    passages = [
        "Venus is often called Earth's twin because of its similar size and proximity.",
        "Mars, known for its reddish appearance, is often referred to as the Red Planet.",
        "Jupiter, the largest planet in our solar system, has a prominent red spot.",
        "Saturn, famous for its rings, is sometimes mistaken for the Red Planet.",
    ]

    # Score each (query, passage) pair; higher scores indicate higher relevance
    scores = model.predict([(query, passage) for passage in passages])
    print(scores)
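    The scores returned by predict can be sorted to rerank the passages from most to least relevant. A minimal sketch, using hypothetical placeholder scores rather than actual model output:

    ```python
    passages = [
        "Venus is often called Earth's twin because of its similar size and proximity.",
        "Mars, known for its reddish appearance, is often referred to as the Red Planet.",
        "Jupiter, the largest planet in our solar system, has a prominent red spot.",
        "Saturn, famous for its rings, is sometimes mistaken for the Red Planet.",
    ]
    # Hypothetical scores for illustration only, not real model output
    scores = [0.02, 0.97, 0.11, 0.05]

    # Pair each passage with its score and sort descending by score
    ranked = sorted(zip(scores, passages), key=lambda pair: pair[0], reverse=True)
    for score, passage in ranked:
        print(f"{score:.2f}\t{passage}")
    ```

    With real scores from model.predict, the same sort yields the reranked passage order.
    
    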
  • Notebooks
  • Google Colab
  • Kaggle
reranker-bert-tiny-gooaq-bce-v6
18.5 MB
  • 1 contributor
History: 5 commits
tomaarsen (HF Staff): Update modules.json · bac0cf4 (verified) · 2 months ago
  • .gitattributes · 1.52 kB · initial commit · 5 months ago
  • README.md · 19.7 kB · Update README.md · 5 months ago
  • config.json · 706 Bytes · Uploading CrossEncoder model. · 5 months ago
  • config_sentence_transformers.json · 230 Bytes · Uploading CrossEncoder model. · 5 months ago
  • model.safetensors · 17.5 MB · Uploading CrossEncoder model. · 5 months ago
  • modules.json · 119 Bytes · Update modules.json · 2 months ago
  • sentence_bert_config.json · 234 Bytes · Uploading CrossEncoder model. · 5 months ago
  • special_tokens_map.json · 732 Bytes · Uploading CrossEncoder model. · 5 months ago
  • tokenizer.json · 712 kB · Uploading CrossEncoder model. · 5 months ago
  • tokenizer_config.json · 1.53 kB · Uploading CrossEncoder model. · 5 months ago
  • train_script.py · 7.16 kB · Create train_script.py · 5 months ago
  • vocab.txt · 232 kB · Uploading CrossEncoder model. · 5 months ago