cross-encoder/ms-marco-MiniLM-L2-v2

Text Ranking · sentence-transformers · PyTorch · JAX · ONNX · Safetensors · OpenVINO · Transformers · English · bert · text-classification · text-embeddings-inference

The following instructions show how to use cross-encoder/ms-marco-MiniLM-L2-v2 with libraries, inference providers, notebooks, and local apps.

  • Libraries
  • sentence-transformers

    How to use cross-encoder/ms-marco-MiniLM-L2-v2 with sentence-transformers:

    from sentence_transformers import CrossEncoder

    # Load the cross-encoder; it scores (query, passage) pairs jointly
    model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L2-v2")

    query = "Which planet is known as the Red Planet?"
    passages = [
        "Venus is often called Earth's twin because of its similar size and proximity.",
        "Mars, known for its reddish appearance, is often referred to as the Red Planet.",
        "Jupiter, the largest planet in our solar system, has a prominent red spot.",
        "Saturn, famous for its rings, is sometimes mistaken for the Red Planet.",
    ]

    # One relevance score per (query, passage) pair
    scores = model.predict([(query, passage) for passage in passages])
    print(scores)
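    The scores returned by `predict` are one relevance score per passage, so ranking is just a descending sort. A minimal, stdlib-only sketch of that step, using made-up score values in place of real model output:

    ```python
    passages = [
        "Venus is often called Earth's twin because of its similar size and proximity.",
        "Mars, known for its reddish appearance, is often referred to as the Red Planet.",
        "Jupiter, the largest planet in our solar system, has a prominent red spot.",
        "Saturn, famous for its rings, is sometimes mistaken for the Red Planet.",
    ]
    scores = [-8.2, 9.1, -5.7, -3.4]  # stand-in values, not actual model output

    # Pair each passage with its score and sort descending, so the
    # most relevant passage comes first.
    ranked = sorted(zip(scores, passages), reverse=True)
    for score, passage in ranked:
        print(f"{score:>6.2f}  {passage}")
    ```

    With the stand-in scores above, the Mars passage is ranked first.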
  • Transformers

    How to use cross-encoder/ms-marco-MiniLM-L2-v2 with Transformers:

    # Load model directly
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    tokenizer = AutoTokenizer.from_pretrained("cross-encoder/ms-marco-MiniLM-L2-v2")
    model = AutoModelForSequenceClassification.from_pretrained("cross-encoder/ms-marco-MiniLM-L2-v2")

    # Score a (query, passage) pair; a higher logit means higher relevance
    features = tokenizer("Which planet is known as the Red Planet?",
                         "Mars is often referred to as the Red Planet.",
                         return_tensors="pt")
    with torch.no_grad():
        print(model(**features).logits)
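    This checkpoint is a sequence-classification head with a single output, so the model produces one raw logit per (query, passage) pair. If a probability-like value in [0, 1] is preferred, a sigmoid can be applied as a post-processing step; a stdlib-only sketch (the example logits are made up, not real model output):

    ```python
    import math

    def sigmoid(logit: float) -> float:
        """Map a raw cross-encoder logit to a (0, 1) relevance score."""
        return 1.0 / (1.0 + math.exp(-logit))

    # Hypothetical raw logits for a relevant and an irrelevant pair
    raw_logits = [9.1, -8.2]
    probs = [sigmoid(x) for x in raw_logits]
    print(probs)
    ```

    The relevant pair maps close to 1 and the irrelevant one close to 0; the relative ordering is unchanged, so sigmoid is only needed when calibrated-looking scores are required.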
  • Notebooks
  • Google Colab
  • Kaggle
ms-marco-MiniLM-L2-v2
188 MB
  • 6 contributors
History: 11 commits
tomaarsen HF Staff
Push tokenizer again
8135f0f verified about 1 year ago
  • .gitattributes
    790 Bytes
    Adding `safetensors` variant of this model (#2) over 1 year ago
  • README.md
    3.66 kB
    Update model metadata about 1 year ago
  • config.json
    794 Bytes
    upload about 5 years ago
  • flax_model.msgpack
    62.5 MB
    upload flax model almost 5 years ago
  • model.safetensors
    62.5 MB
    Adding `safetensors` variant of this model (#2) over 1 year ago
  • pytorch_model.bin
    62.5 MB
    upload about 5 years ago

    Detected Pickle imports (4): "torch.FloatStorage", "torch._utils._rebuild_tensor_v2", "torch.LongStorage", "collections.OrderedDict"
  • special_tokens_map.json
    132 Bytes
    Push tokenizer again about 1 year ago
  • tokenizer.json
    711 kB
    Push tokenizer again about 1 year ago
  • tokenizer_config.json
    1.33 kB
    Push tokenizer again about 1 year ago
  • vocab.txt
    232 kB
    upload about 5 years ago