Tags: Feature Extraction · Transformers · Safetensors · sentence-transformers · English · nvembed · mteb · custom_code · Eval Results (legacy)
Instructions to use nvidia/NV-Embed-v2 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use nvidia/NV-Embed-v2 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("feature-extraction", model="nvidia/NV-Embed-v2", trust_remote_code=True)

# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("nvidia/NV-Embed-v2", trust_remote_code=True, dtype="auto")
```

- sentence-transformers
How to use nvidia/NV-Embed-v2 with sentence-transformers:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("nvidia/NV-Embed-v2", trust_remote_code=True)

sentences = [
    "The weather is lovely today.",
    "It's so sunny outside!",
    "He drove to the stadium.",
]
embeddings = model.encode(sentences)

similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)  # [3, 3]
```

- Notebooks
- Google Colab
- Kaggle
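
The `model.similarity(embeddings, embeddings)` call above returns a pairwise similarity matrix. As a rough NumPy sketch of the same cosine-similarity computation (using a plain array in place of the library's tensor output; this is an illustration, not the sentence-transformers implementation):

```python
import numpy as np

def cosine_similarity_matrix(embeddings):
    """Pairwise cosine similarity between the row vectors of `embeddings`."""
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    normalized = embeddings / norms
    return normalized @ normalized.T

# Dummy 2-D embeddings standing in for model.encode(...) output
emb = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
sims = cosine_similarity_matrix(emb)
print(sims.shape)  # (3, 3)
```

Each diagonal entry is 1.0 (a vector compared with itself), matching the `[3, 3]` shape printed in the snippet above.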
Update metadata: Add library_name to Transformers

README.md CHANGED

```diff
@@ -2003,6 +2003,7 @@ model-index:
 language:
 - en
 license: cc-by-nc-4.0
+library_name: transformers
 ---
 ## Introduction
 We present NV-Embed-v2, a generalist embedding model that ranks No. 1 on the Massive Text Embedding Benchmark ([MTEB benchmark](https://huggingface.co/spaces/mteb/leaderboard)) (as of Aug 30, 2024) with a score of 72.31 across 56 text embedding tasks. It also holds the No. 1 in the retrieval sub-category (a score of 62.65 across 15 tasks) in the leaderboard, which is essential to the development of RAG technology.
```
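
The introduction highlights retrieval as the sub-category most relevant to RAG. The core retrieval step — ranking passages by similarity of their embeddings to a query embedding — can be sketched as follows. The vectors here are dummies for illustration; in practice they would come from encoding the query and passages with NV-Embed-v2, and `top_k` is a hypothetical helper, not part of any library:

```python
import numpy as np

def top_k(query_vec, passage_vecs, k=2):
    """Return the indices and cosine scores of the k passages nearest the query."""
    q = query_vec / np.linalg.norm(query_vec)
    p = passage_vecs / np.linalg.norm(passage_vecs, axis=1, keepdims=True)
    scores = p @ q                      # cosine similarity of each passage to the query
    order = np.argsort(-scores)[:k]     # highest-scoring passages first
    return [(int(i), float(scores[i])) for i in order]

query = np.array([0.9, 0.1, 0.0])
passages = np.array([
    [1.0, 0.0, 0.0],   # closely aligned with the query
    [0.0, 1.0, 0.0],
    [0.5, 0.5, 0.0],
])
print(top_k(query, passages))  # nearest passages first
```

In a RAG pipeline, the top-ranked passages would then be passed to a generator model as context.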