
dleemiller/MiniLMX-sts-xs

  • Text Ranking
  • sentence-transformers
  • Safetensors
  • bert
  • cross-encoder
  • reranker
  • Generated from Trainer
  • dataset_size:5749
  • loss:BinaryCrossEntropyLoss
  • Eval Results (legacy)
  • text-embeddings-inference

Instructions for using dleemiller/MiniLMX-sts-xs with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.

  • Libraries
  • sentence-transformers

    How to use dleemiller/MiniLMX-sts-xs with sentence-transformers:

    from sentence_transformers import CrossEncoder
    
    model = CrossEncoder("dleemiller/MiniLMX-sts-xs")
    
    query = "Which planet is known as the Red Planet?"
    passages = [
    	"Venus is often called Earth's twin because of its similar size and proximity.",
    	"Mars, known for its reddish appearance, is often referred to as the Red Planet.",
    	"Jupiter, the largest planet in our solar system, has a prominent red spot.",
    	"Saturn, famous for its rings, is sometimes mistaken for the Red Planet."
    ]
    
    scores = model.predict([(query, passage) for passage in passages])
    print(scores)
  • Notebooks
  • Google Colab
  • Kaggle
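The scores returned by `model.predict` above are per-passage relevance estimates; to use the model as a reranker, the passages are sorted by score in descending order. A minimal sketch of that sorting step (pure Python, with placeholder scores standing in for real `model.predict` output, since exact values depend on the checkpoint):

```python
# Rank passages by cross-encoder relevance score, highest first.
query = "Which planet is known as the Red Planet?"
passages = [
    "Venus is often called Earth's twin because of its similar size and proximity.",
    "Mars, known for its reddish appearance, is often referred to as the Red Planet.",
    "Jupiter, the largest planet in our solar system, has a prominent red spot.",
    "Saturn, famous for its rings, is sometimes mistaken for the Red Planet.",
]
# Hypothetical scores; in practice these come from model.predict(...).
scores = [0.12, 0.97, 0.35, 0.48]

# Pair each passage with its score and sort descending by score.
ranked = sorted(zip(scores, passages), key=lambda pair: pair[0], reverse=True)
for score, passage in ranked:
    print(f"{score:.2f}  {passage}")
```

This pattern is the usual final step of a retrieve-then-rerank pipeline: a fast retriever produces candidate passages, and the cross-encoder reorders them for the query.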
MiniLMX-sts-xs
  • 1 contributor
History: 2 commits
dleemiller: Upload folder using huggingface_hub (89cf932, verified, 10 months ago)
  • eval
  • .gitattributes (1.52 kB, initial commit)
  • README.md (13 kB)
  • config.json (796 Bytes)
  • model.safetensors (90.9 MB)
  • special_tokens_map.json (695 Bytes)
  • tokenizer.json (712 kB)
  • tokenizer_config.json (1.41 kB)
  • vocab.txt (232 kB)