How to use cross-encoder/ms-marco-electra-base with sentence-transformers:
from sentence_transformers import CrossEncoder

model = CrossEncoder("cross-encoder/ms-marco-electra-base")

query = "Which planet is known as the Red Planet?"
passages = [
    "Venus is often called Earth's twin because of its similar size and proximity.",
    "Mars, known for its reddish appearance, is often referred to as the Red Planet.",
    "Jupiter, the largest planet in our solar system, has a prominent red spot.",
    "Saturn, famous for its rings, is sometimes mistaken for the Red Planet."
]

# Score each (query, passage) pair; higher scores indicate higher relevance
scores = model.predict([(query, passage) for passage in passages])
print(scores)
How to use cross-encoder/ms-marco-electra-base with Transformers:
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("cross-encoder/ms-marco-electra-base")
model = AutoModelForSequenceClassification.from_pretrained("cross-encoder/ms-marco-electra-base")
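The snippet above only loads the tokenizer and model. A minimal inference sketch with the raw Transformers API (assuming PyTorch is installed; the model outputs a single relevance logit per query–passage pair) might look like this:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("cross-encoder/ms-marco-electra-base")
model = AutoModelForSequenceClassification.from_pretrained("cross-encoder/ms-marco-electra-base")
model.eval()

query = "Which planet is known as the Red Planet?"
passages = [
    "Venus is often called Earth's twin because of its similar size and proximity.",
    "Mars, known for its reddish appearance, is often referred to as the Red Planet.",
]

# Tokenize each (query, passage) pair; the cross-encoder reads both texts jointly
features = tokenizer(
    [query] * len(passages), passages,
    padding=True, truncation=True, return_tensors="pt",
)

with torch.no_grad():
    # One logit per pair; higher means more relevant
    scores = model(**features).logits.squeeze(-1)

print(scores)
```

Unlike the sentence-transformers `CrossEncoder.predict` call, this returns raw logits as a tensor, so any sorting or thresholding is left to you.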
What are the hyperparameters and other training settings for this electra-base model trained on MS MARCO? I'd like to see those details on this model page, similar to what is provided for sentence-transformers/msmarco-bert-base-dot-v5.