Instructions to use nvidia/NV-Embed-v1 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- sentence-transformers
How to use nvidia/NV-Embed-v1 with sentence-transformers:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("nvidia/NV-Embed-v1", trust_remote_code=True)

sentences = [
    "The weather is lovely today.",
    "It's so sunny outside!",
    "He drove to the stadium.",
]
embeddings = model.encode(sentences)
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)  # [3, 3]
```
- Notebooks
- Google Colab
- Kaggle
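The `model.similarity(embeddings, embeddings)` call above computes pairwise similarity between the three embeddings (cosine similarity by default in recent sentence-transformers releases). As a minimal sketch of what that step does, here is the same computation written out with NumPy, using random vectors as a stand-in for real model embeddings:

```python
import numpy as np

# Stand-in for model.encode(sentences): 3 embeddings of dimension 8.
# Real NV-Embed-v1 embeddings are much larger, but the math is identical.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(3, 8))

# Cosine similarity: normalize each row, then take pairwise dot products.
norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
normalized = embeddings / norms
similarities = normalized @ normalized.T

print(similarities.shape)  # (3, 3)
```

Each entry `similarities[i, j]` is the cosine similarity between sentence `i` and sentence `j`, so the diagonal is all ones and the matrix is symmetric.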
missing citation
Congrats on the release!
We have read your technical paper and found it truly informative! However, we noticed there may be some related works on bi-LLMs that are missing:
[1] Label Supervised LLaMA Finetuning: first Bi-LLM (removing causal mask of LLM) work for classification tasks.
[2] BeLLM: Backward Dependency Enhanced Large Language Model for Sentence Embeddings (NAACL24): it also uses Bi-LLM for sentence embeddings (earlier than LLM2Vec).
Thank you for the valuable feedback and suggestions. We will consider adding these related works to our manuscript in the future.
If any additional citations are required, I would prefer this one: https://huggingface.co/intfloat/e5-mistral-7b-instruct — the similarities are much more evident (to me) than with the works cited above.