matulichpt/radlit-crossencoder

Tags: Text Classification · sentence-transformers · Safetensors · English · bert · cross-encoder · reranker · retrieval · sentence-similarity · Eval Results (legacy) · text-embeddings-inference

Instructions for using matulichpt/radlit-crossencoder with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.

  • Libraries
  • sentence-transformers

    How to use matulichpt/radlit-crossencoder with sentence-transformers:

    from sentence_transformers import CrossEncoder

    # Load the fine-tuned cross-encoder reranker from the Hub.
    model = CrossEncoder("matulichpt/radlit-crossencoder")

    query = "Which planet is known as the Red Planet?"
    passages = [
        "Venus is often called Earth's twin because of its similar size and proximity.",
        "Mars, known for its reddish appearance, is often referred to as the Red Planet.",
        "Jupiter, the largest planet in our solar system, has a prominent red spot.",
        "Saturn, famous for its rings, is sometimes mistaken for the Red Planet.",
    ]

    # Score each (query, passage) pair jointly; higher scores indicate
    # greater relevance of the passage to the query.
    scores = model.predict([(query, passage) for passage in passages])
    print(scores)
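
    model.predict returns one relevance score per (query, passage) pair. To rerank, sort the passages by score; a minimal follow-up sketch:

    # Pair each passage with its score and sort best-first.
    ranked = sorted(zip(passages, scores), key=lambda pair: pair[1], reverse=True)
    for passage, score in ranked:
        print(f"{score:.4f}  {passage}")

    Recent sentence-transformers releases also ship a CrossEncoder.rank convenience method that performs this scoring and sorting in one call.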
  • Notebooks
  • Google Colab
  • Kaggle
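
The repository ships standard Safetensors weights and a BERT tokenizer, so the model can also be loaded with plain transformers in a notebook. A minimal sketch, assuming the checkpoint is a sequence-classification model with a single relevance logit, as is typical for cross-encoder rerankers (check config.json to confirm):

    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    model_id = "matulichpt/radlit-crossencoder"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForSequenceClassification.from_pretrained(model_id)
    model.eval()

    query = "Which planet is known as the Red Planet?"
    passage = "Mars, known for its reddish appearance, is often referred to as the Red Planet."

    # A cross-encoder reads the query and passage together in one forward pass,
    # so attention can flow across both texts when scoring relevance.
    inputs = tokenizer(query, passage, truncation=True, return_tensors="pt")
    with torch.no_grad():
        score = model(**inputs).logits.squeeze().item()
    print(score)

Note that CrossEncoder.predict may apply an activation (for example, a sigmoid on a single logit) on top of the raw model output, so scores from the two code paths can differ in scale while preserving the same ranking.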
radlit-crossencoder · 134 MB
  • 1 contributor
History: 8 commits
Latest commit: Update README.md by matulichpt (2326ab3, verified, 3 months ago)
  • .gitattributes (1.52 kB) · initial commit · 4 months ago
  • LICENSE (10.9 kB) · Upload folder using huggingface_hub · 4 months ago
  • README.md (11.7 kB) · Update README.md · 3 months ago
  • config.json (824 Bytes) · Upload folder using huggingface_hub · 4 months ago
  • model.safetensors (133 MB) · Initial model upload with benchmarks · 4 months ago
  • special_tokens_map.json (732 Bytes) · Initial model upload with benchmarks · 4 months ago
  • tokenizer.json (712 kB) · Initial model upload with benchmarks · 4 months ago
  • tokenizer_config.json (1.33 kB) · Initial model upload with benchmarks · 4 months ago
  • vocab.txt (232 kB) · Initial model upload with benchmarks · 4 months ago