Instructions for using nvidia/NV-Embed-v1 with the sentence-transformers library.
How to use nvidia/NV-Embed-v1 with sentence-transformers:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("nvidia/NV-Embed-v1", trust_remote_code=True)

sentences = [
    "The weather is lovely today.",
    "It's so sunny outside!",
    "He drove to the stadium.",
]
embeddings = model.encode(sentences)

similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)  # [3, 3]
```
Has the tokenizer of the base model (Mistral-7B-v0.1) been retrained?
#37
by LH0521 - opened
Hi,
I noticed that Mistral-7B-v0.1 was used as the base model. However, the original Mistral-7B-v0.1 uses BPE tokenization, while NV-Embed-v1 appears to use a word-by-word mapping method.
Did you retrain the tokenizer? If so, was it because the latent layer needs to integrate the words better?
Thanks!
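One way to investigate this yourself is to load both tokenizers with `transformers` and compare their outputs on the same text. This is only a sketch: it assumes you have access to both repos on the Hub (Mistral-7B-v0.1 is gated, so `huggingface-cli login` may be required), and the comparison helper is a hypothetical convenience function, not part of either library.

```python
def compare_tokenizations(tok_a, tok_b, text):
    """Return the token strings each tokenizer produces for `text`."""
    return tok_a.tokenize(text), tok_b.tokenize(text)


if __name__ == "__main__":
    # Imported here so the helper above stays dependency-free.
    from transformers import AutoTokenizer

    nv = AutoTokenizer.from_pretrained("nvidia/NV-Embed-v1", trust_remote_code=True)
    mistral = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

    a, b = compare_tokenizations(nv, mistral, "The weather is lovely today.")
    print("NV-Embed-v1:    ", a)
    print("Mistral-7B-v0.1:", b)

    # Identical vocab sizes and token lists on varied inputs would suggest
    # the base tokenizer was reused rather than retrained.
    print("same vocab size:", nv.vocab_size == mistral.vocab_size)
```

If the two tokenizers split words into the same subword pieces across a range of inputs, that is strong evidence the BPE tokenizer was carried over unchanged.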