Feature Extraction
sentence-transformers
PyTorch
Safetensors
Transformers
English
mistral
mteb
Eval Results
text-embeddings-inference
Instructions to use intfloat/e5-mistral-7b-instruct with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
  - sentence-transformers
How to use intfloat/e5-mistral-7b-instruct with sentence-transformers:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("intfloat/e5-mistral-7b-instruct")

sentences = [
    "The weather is lovely today.",
    "It's so sunny outside!",
    "He drove to the stadium.",
]
embeddings = model.encode(sentences)

similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)  # [3, 3]
```
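Note that e5-mistral-7b-instruct is instruction-tuned: per the model card, each *query* should be prefixed with a one-sentence task description in the form `Instruct: {task}\nQuery: {query}`, while documents are encoded without any prefix. Below is a minimal retrieval sketch, assuming a sentence-transformers version recent enough to support the `prompt` argument to `encode`; the task wording and the example query/documents are illustrative:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("intfloat/e5-mistral-7b-instruct")

# Queries get an "Instruct: ...\nQuery: " prefix; documents do not.
task = "Given a web search query, retrieve relevant passages that answer the query"
queries = ["how much protein should a female eat"]
documents = [
    "As a general guideline, adult women need roughly 46 grams of protein per day.",
    "Definition of summit: the highest point of a hill or mountain.",
]

query_embeddings = model.encode(queries, prompt=f"Instruct: {task}\nQuery: ")
document_embeddings = model.encode(documents)

# Higher score = more relevant document for the query.
scores = model.similarity(query_embeddings, document_embeddings)
print(scores.shape)  # [1, 2]
```

With this setup the first (relevant) document should score noticeably higher than the second.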
  - Transformers

How to use intfloat/e5-mistral-7b-instruct with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("feature-extraction", model="intfloat/e5-mistral-7b-instruct")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("intfloat/e5-mistral-7b-instruct")
model = AutoModel.from_pretrained("intfloat/e5-mistral-7b-instruct")
```
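When loading the model directly, keep in mind that E5-mistral is a decoder-only model: the sentence embedding is the hidden state of each sequence's *last* token, not a mean over all tokens. The following is a minimal sketch of that pooling, modeled on the `last_token_pool` helper from the model card; the input texts, `max_length=512`, and the pad-token guard are illustrative assumptions:

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("intfloat/e5-mistral-7b-instruct")
model = AutoModel.from_pretrained("intfloat/e5-mistral-7b-instruct")

# Assumption: ensure a pad token exists so batched padding works.
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

def last_token_pool(last_hidden_states: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # With left padding, the last position is always a real token.
    left_padding = attention_mask[:, -1].sum() == attention_mask.shape[0]
    if left_padding:
        return last_hidden_states[:, -1]
    # Otherwise, index each sequence's last non-padding position.
    sequence_lengths = attention_mask.sum(dim=1) - 1
    batch_size = last_hidden_states.shape[0]
    return last_hidden_states[torch.arange(batch_size), sequence_lengths]

texts = ["The weather is lovely today.", "It's so sunny outside!"]
batch = tokenizer(texts, padding=True, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    outputs = model(**batch)

embeddings = last_token_pool(outputs.last_hidden_state, batch["attention_mask"])
embeddings = F.normalize(embeddings, p=2, dim=1)  # unit vectors, ready for cosine similarity
print(embeddings.shape)  # [2, 4096]
```

For retrieval, queries would still carry the `Instruct: ...\nQuery: ` prefix described above before tokenization.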
- Inference
- Notebooks
  - Google Colab
  - Kaggle
Can you train a multilingual e5 based on a multilingual LLM?
#9
by hantian - opened
https://huggingface.co/Qwen/Qwen-1_8B is a good choice: fewer parameters, and good performance on both English and Chinese.
As far as I know, Qwen only supports English and Chinese. A truly multilingual model that supports dozens of languages would be preferable.
Yes, you are right.