
Adarsh921/cross-encoder

Tags: Text Ranking · sentence-transformers · Safetensors · Transformers · English · bert · text-classification · text-embeddings-inference

Instructions for using Adarsh921/cross-encoder with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.

  • Libraries
  • sentence-transformers

    How to use Adarsh921/cross-encoder with sentence-transformers:

    from sentence_transformers import CrossEncoder
    
    # Load the cross-encoder checkpoint from the Hugging Face Hub
    model = CrossEncoder("Adarsh921/cross-encoder")
    
    query = "Which planet is known as the Red Planet?"
    passages = [
    	"Venus is often called Earth's twin because of its similar size and proximity.",
    	"Mars, known for its reddish appearance, is often referred to as the Red Planet.",
    	"Jupiter, the largest planet in our solar system, has a prominent red spot.",
    	"Saturn, famous for its rings, is sometimes mistaken for the Red Planet."
    ]
    
    # Score each (query, passage) pair; higher scores indicate higher relevance
    scores = model.predict([(query, passage) for passage in passages])
    print(scores)
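    A typical follow-up is to sort the passages by their scores to produce a ranking. A minimal sketch of that step; the score values below are illustrative placeholders standing in for the output of `model.predict(...)` above:

    ```python
    # Rank passages by cross-encoder relevance score, highest first.
    # These scores are illustrative placeholders; in practice they come
    # from model.predict(...) on the real (query, passage) pairs.
    scores = [0.12, 0.97, 0.34, 0.41]
    passages = [
        "Venus is often called Earth's twin because of its similar size and proximity.",
        "Mars, known for its reddish appearance, is often referred to as the Red Planet.",
        "Jupiter, the largest planet in our solar system, has a prominent red spot.",
        "Saturn, famous for its rings, is sometimes mistaken for the Red Planet.",
    ]

    # Pair each score with its passage and sort descending by score
    ranked = sorted(zip(scores, passages), reverse=True)
    for score, passage in ranked:
        print(f"{score:.2f}  {passage}")
    ```

    With the placeholder scores, the Mars passage ranks first, which is the expected answer to the query.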
  • Transformers

    How to use Adarsh921/cross-encoder with Transformers:

    # Load model directly
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification
    
    tokenizer = AutoTokenizer.from_pretrained("Adarsh921/cross-encoder")
    model = AutoModelForSequenceClassification.from_pretrained("Adarsh921/cross-encoder")
    
    # Score a (query, passage) pair: the tokenizer encodes both texts as one input
    inputs = tokenizer("Which planet is known as the Red Planet?",
                       "Mars is often referred to as the Red Planet.",
                       return_tensors="pt")
    with torch.no_grad():
        print(model(**inputs).logits)
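    If this checkpoint emits a single relevance logit per pair (common for BERT cross-encoders, though not confirmed by this page), a sigmoid maps it to a 0-1 score. A minimal sketch; the logit value used here is illustrative:

    ```python
    import math

    def logit_to_probability(logit: float) -> float:
        """Map a raw relevance logit to a 0-1 score via the sigmoid."""
        return 1.0 / (1.0 + math.exp(-logit))

    # Illustrative logit; in practice take model(**inputs).logits.item()
    print(round(logit_to_probability(2.0), 3))
    ```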
  • Notebooks
  • Google Colab
  • Kaggle
cross-encoder
91.8 MB
  • 1 contributor
History: 2 commits
Adarsh921
Upload CrossEncoder model
df5b8ad verified 6 months ago
  • .gitattributes
    1.52 kB
    initial commit 6 months ago
  • README.md
    3.67 kB
    Upload CrossEncoder model 6 months ago
  • config.json
    829 Bytes
    Upload CrossEncoder model 6 months ago
  • model.safetensors
    90.9 MB
    Upload CrossEncoder model 6 months ago
  • special_tokens_map.json
    695 Bytes
    Upload CrossEncoder model 6 months ago
  • tokenizer.json
    711 kB
    Upload CrossEncoder model 6 months ago
  • tokenizer_config.json
    1.27 kB
    Upload CrossEncoder model 6 months ago
  • vocab.txt
    232 kB
    Upload CrossEncoder model 6 months ago