Feature Extraction
Transformers
Safetensors
sentence-transformers
minicpm
mteb
custom_code
Eval Results (legacy)
Instructions to use openbmb/MiniCPM-Embedding-Light with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use openbmb/MiniCPM-Embedding-Light with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("feature-extraction", model="openbmb/MiniCPM-Embedding-Light", trust_remote_code=True)

# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("openbmb/MiniCPM-Embedding-Light", trust_remote_code=True, dtype="auto")
```

- sentence-transformers
How to use openbmb/MiniCPM-Embedding-Light with sentence-transformers:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("openbmb/MiniCPM-Embedding-Light", trust_remote_code=True)

sentences = [
    "The weather is lovely today.",
    "It's so sunny outside!",
    "He drove to the stadium.",
]
embeddings = model.encode(sentences)

similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)  # [3, 3]
```

- Notebooks
- Google Colab
- Kaggle
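In the sentence-transformers snippet above, `model.similarity` defaults to cosine similarity between the encoded vectors. As a rough sketch of what that computes (using toy 4-dimensional vectors in place of real `model.encode` output, which would be much higher-dimensional):

```python
import torch
import torch.nn.functional as F

# Toy stand-ins for sentence embeddings from model.encode()
embeddings = torch.tensor([
    [1.0, 0.0, 0.0, 0.0],
    [1.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
])

# Cosine similarity: L2-normalize each row, then take pairwise dot products
normalized = F.normalize(embeddings, p=2, dim=1)
similarities = normalized @ normalized.T

print(similarities.shape)  # torch.Size([3, 3])
```

The diagonal is always 1.0 (each vector compared with itself); off-diagonal entries fall in [-1, 1] depending on how closely the vectors point in the same direction.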
Update scripts/transformers_demo.py
scripts/transformers_demo.py (changed):

```diff
@@ -2,7 +2,7 @@
 from transformers import AutoModel
 import torch
 
-model_name = "openbmb/
+model_name = "openbmb/MiniCPM-Embedding-Light"
 model = AutoModel.from_pretrained(model_name, trust_remote_code=True, torch_dtype=torch.float16).to("cuda")
 
 # you can use flash_attention_2 for faster inference
```
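The demo above loads the model with `AutoModel`, which returns token-level hidden states. A common pattern for turning those into one vector per sentence is masked mean pooling: average the hidden states of real tokens while ignoring padding. The sketch below uses dummy tensors in place of real model output; note that the model's own remote code may apply its own pooling, so this illustrates the general technique, not necessarily this model's exact behavior:

```python
import torch

# Dummy stand-ins for model output: batch of 2 sequences,
# 5 tokens each, hidden size 8 (the real hidden size is larger)
last_hidden_state = torch.randn(2, 5, 8)
attention_mask = torch.tensor([
    [1, 1, 1, 0, 0],  # 3 real tokens + 2 padding positions
    [1, 1, 1, 1, 1],  # no padding
])

# Masked mean pooling: zero out padding positions, then average
mask = attention_mask.unsqueeze(-1).float()     # (2, 5, 1)
summed = (last_hidden_state * mask).sum(dim=1)  # (2, 8)
counts = mask.sum(dim=1).clamp(min=1e-9)        # (2, 1)
sentence_embeddings = summed / counts

print(sentence_embeddings.shape)  # torch.Size([2, 8])
```

For retrieval use, the pooled vectors are typically L2-normalized afterward so that dot products equal cosine similarities.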