
dleemiller/NeoCE-sts

Tags: Text Classification · sentence-transformers · Safetensors · English · neobert · cross-encoder · stsb · stsbenchmark-sts · custom_code · Eval Results (legacy)

Instructions for using dleemiller/NeoCE-sts with libraries, inference providers, notebooks, and local apps.

  • Libraries: sentence-transformers

    How to use dleemiller/NeoCE-sts with sentence-transformers:

    from sentence_transformers import CrossEncoder

    # trust_remote_code is required: the repo ships a custom model.py
    # implementing the NeoBERT architecture.
    model = CrossEncoder("dleemiller/NeoCE-sts", trust_remote_code=True)

    query = "Which planet is known as the Red Planet?"
    passages = [
        "Venus is often called Earth's twin because of its similar size and proximity.",
        "Mars, known for its reddish appearance, is often referred to as the Red Planet.",
        "Jupiter, the largest planet in our solar system, has a prominent red spot.",
        "Saturn, famous for its rings, is sometimes mistaken for the Red Planet.",
    ]

    # Score each (query, passage) pair; predict returns one score per pair.
    scores = model.predict([(query, passage) for passage in passages])
    print(scores)
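
    The scores can then be used to rank the passages. A minimal follow-up sketch (not from the model card), assuming scores is the array returned by model.predict above:

    import numpy as np

    # Sort passage indices by descending score and print a ranked list.
    for rank, idx in enumerate(np.argsort(scores)[::-1], start=1):
        print(f"{rank}. {scores[idx]:.4f}  {passages[idx]}")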
  • Notebooks: Google Colab, Kaggle
Files and versions (NeoCE-sts · 890 MB)
  • 1 contributor
  • History: 8 commits
  • Latest commit: Update README.md · e9abc08 (verified) · dleemiller · about 1 year ago
  • .gitattributes · 1.52 kB · initial commit · about 1 year ago
  • CECorrelationEvaluator_sts-test-eval_results.csv · 99 Bytes · Upload CECorrelationEvaluator_sts-test-eval_results.csv · about 1 year ago
  • CECorrelationEvaluator_sts-validation_results.csv · 757 Bytes · Upload 9 files · about 1 year ago
  • README.md · 4.71 kB · Update README.md · about 1 year ago
  • config.json · 2.84 kB · Upload 9 files · about 1 year ago
  • model.py · 14.9 kB · Upload 9 files · about 1 year ago
  • model.safetensors · 889 MB · Upload 9 files · about 1 year ago
  • rotary.py · 2.58 kB · Upload 9 files · about 1 year ago
  • special_tokens_map.json · 695 Bytes · Upload 9 files · about 1 year ago
  • tokenizer.json · 712 kB · Upload 9 files · about 1 year ago
  • tokenizer_config.json · 1.5 kB · Upload 9 files · about 1 year ago
  • vocab.txt · 232 kB · Upload 9 files · about 1 year ago