ALJIACHI/Mizan-Rerank-v1

Text Ranking · sentence-transformers · Safetensors · Arabic · English · modernbert · text-embeddings-inference

Instructions for using ALJIACHI/Mizan-Rerank-v1 with libraries, inference providers, notebooks, and local apps.

  • Libraries
  • sentence-transformers

    How to use ALJIACHI/Mizan-Rerank-v1 with sentence-transformers:

    from sentence_transformers import CrossEncoder

    # Load the cross-encoder reranker from the Hugging Face Hub
    model = CrossEncoder("ALJIACHI/Mizan-Rerank-v1")

    query = "Which planet is known as the Red Planet?"
    passages = [
        "Venus is often called Earth's twin because of its similar size and proximity.",
        "Mars, known for its reddish appearance, is often referred to as the Red Planet.",
        "Jupiter, the largest planet in our solar system, has a prominent red spot.",
        "Saturn, famous for its rings, is sometimes mistaken for the Red Planet.",
    ]

    # Score each (query, passage) pair; higher scores indicate higher relevance
    scores = model.predict([(query, passage) for passage in passages])
    print(scores)
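The scores returned by `model.predict` follow the order of the input pairs; to actually rerank, sort the passages by score in descending order. A minimal sketch of that step, using placeholder scores in place of real model output so no model download is needed:

```python
# Rank passages by cross-encoder score (highest = most relevant).
# These scores are illustrative placeholders; in practice they come
# from model.predict(...) as in the snippet above.
scores = [0.12, 0.97, 0.31, 0.25]
passages = [
    "Venus is often called Earth's twin because of its similar size and proximity.",
    "Mars, known for its reddish appearance, is often referred to as the Red Planet.",
    "Jupiter, the largest planet in our solar system, has a prominent red spot.",
    "Saturn, famous for its rings, is sometimes mistaken for the Red Planet.",
]

# Pair each score with its passage and sort by score, best first
ranked = sorted(zip(scores, passages), key=lambda pair: pair[0], reverse=True)
for score, passage in ranked:
    print(f"{score:.2f}  {passage}")
```

With the real model, the Mars passage would be expected to receive the highest score for this query.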
  • Notebooks
  • Google Colab
  • Kaggle
Mizan-Rerank-v1 (603 MB)
  • 2 contributors: ALJIACHI, tomaarsen (HF Staff)
History: 13 commits
Update model metadata to set pipeline tag to the new `text-ranking` (#2)
bcf8a92 verified about 1 year ago
  • .gitattributes (1.65 kB) · Initial model upload, about 1 year ago
  • .gitignore (52 Bytes) · Initial model upload, about 1 year ago
  • README.md (7.34 kB) · Update model metadata to set pipeline tag to the new `text-ranking` (#2), about 1 year ago
  • config.json (1.58 kB) · Initial model upload, about 1 year ago
  • model.safetensors (598 MB) · Initial model upload, about 1 year ago
  • special_tokens_map.json (694 Bytes) · Initial model upload, about 1 year ago
  • tokenizer.json (4.88 MB) · Initial model upload, about 1 year ago
  • tokenizer_config.json (1.74 kB) · Initial model upload, about 1 year ago