Derify/ChemRanker-alpha-qed-sim

Text Ranking
sentence-transformers
Safetensors
modchembert
cross-encoder
reranker
cheminformatics
smiles
Generated from Trainer
dataset_size:3193917
loss:MultipleNegativesRankingLoss
custom_code
Eval Results (legacy)

Instructions for using Derify/ChemRanker-alpha-qed-sim with libraries, inference providers, notebooks, and local apps.

  • Libraries
  • sentence-transformers

    How to use Derify/ChemRanker-alpha-qed-sim with sentence-transformers:

    from sentence_transformers import CrossEncoder

    # trust_remote_code is required because the repository ships custom model
    # code (configuration_modchembert.py, modeling_modchembert.py).
    model = CrossEncoder("Derify/ChemRanker-alpha-qed-sim", trust_remote_code=True)

    # Placeholder example from the default template; for this cheminformatics
    # reranker the query and passages would normally be SMILES strings.
    query = "Which planet is known as the Red Planet?"
    passages = [
        "Venus is often called Earth's twin because of its similar size and proximity.",
        "Mars, known for its reddish appearance, is often referred to as the Red Planet.",
        "Jupiter, the largest planet in our solar system, has a prominent red spot.",
        "Saturn, famous for its rings, is sometimes mistaken for the Red Planet.",
    ]

    # Score each (query, passage) pair; higher scores indicate higher relevance.
    scores = model.predict([(query, passage) for passage in passages])
    print(scores)
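    Once `model.predict` returns per-pair scores, ranking the passages is a descending sort over those scores. A minimal sketch with illustrative placeholder scores (not real model output; recent sentence-transformers versions also provide a `CrossEncoder.rank` convenience method for this):

    ```python
    # Illustrative placeholder scores, one per passage, as model.predict
    # would return (higher = more relevant).
    scores = [0.12, 0.93, 0.41, 0.27]
    passages = [
        "Venus is often called Earth's twin because of its similar size and proximity.",
        "Mars, known for its reddish appearance, is often referred to as the Red Planet.",
        "Jupiter, the largest planet in our solar system, has a prominent red spot.",
        "Saturn, famous for its rings, is sometimes mistaken for the Red Planet.",
    ]

    # Sort passage indices by descending score to obtain the ranking.
    ranking = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    ranked_passages = [passages[i] for i in ranking]
    print(ranked_passages[0])  # highest-scoring passage under these placeholder scores
    ```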
ChemRanker-alpha-qed-sim
408 MB
  • 1 contributor
History: 5 commits
eacortes
Upload README.md
2ba34da verified 3 months ago
  • .gitattributes
    1.52 kB
    initial commit 5 months ago
  • CrossEncoderRerankingEvaluator_results_@10.csv
    562 Bytes
    Add README and evaluation results 3 months ago
  • README.md
    39.8 kB
    Upload README.md 3 months ago
  • config.json
    1.83 kB
    Upload 7 files 5 months ago
  • configuration_modchembert.py
    5.24 kB
    Upload 7 files 5 months ago
  • model.safetensors
    408 MB
    Upload 7 files 5 months ago
  • modeling_modchembert.py
    35.2 kB
    Upload 7 files 5 months ago
  • special_tokens_map.json
    694 Bytes
    Upload 7 files 5 months ago
  • tokenizer.json
    54.5 kB
    Upload 7 files 5 months ago
  • tokenizer_config.json
    1.4 kB
    Upload 2 files 3 months ago