Tags: Sentence Similarity · sentence-transformers · Safetensors · Transformers · Chinese · English · qwen2 · feature-extraction · text-embeddings-inference
Instructions to use BAAI/bge-code-v1 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
  - sentence-transformers
How to use BAAI/bge-code-v1 with sentence-transformers:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("BAAI/bge-code-v1")

sentences = [
    "那是個快樂的人",      # "That is a happy person"
    "那是條快樂的狗",      # "That is a happy dog"
    "那是個非常幸福的人",  # "That is a very happy person"
    "今天是晴天",          # "Today is a sunny day"
]
embeddings = model.encode(sentences)

# Pairwise cosine similarities between all four embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)  # [4, 4]
```

- Transformers
How to use BAAI/bge-code-v1 with Transformers:

```python
# Load model directly
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("BAAI/bge-code-v1")
model = AutoModel.from_pretrained("BAAI/bge-code-v1")
```

- Notebooks
- Google Colab
- Kaggle
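The `similarity()` call in the sentence-transformers snippet above returns a matrix of pairwise cosine scores. As a rough sketch of what that computation amounts to (a plain-Python illustration with stand-in vectors, not the library's actual implementation; real scores come from `model.encode` embeddings):

```python
import math

def cosine_similarity_matrix(embeddings):
    """Pairwise cosine similarities: the dot product of
    L2-normalized vectors, which is what
    SentenceTransformer.similarity computes by default."""
    norms = [math.sqrt(sum(x * x for x in v)) for v in embeddings]
    normalized = [[x / n for x in v] for v, n in zip(embeddings, norms)]
    return [
        [sum(a * b for a, b in zip(u, v)) for v in normalized]
        for u in normalized
    ]

# Stand-in vectors in place of real model.encode(...) output
embeddings = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, 0.0]]
sims = cosine_similarity_matrix(embeddings)
print(len(sims), len(sims[0]))  # 4 4
```

Each row `i` holds sentence `i`'s similarity to every other sentence; the diagonal is 1.0 because every vector is perfectly similar to itself.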
Slightly update Sentence Transformers snippet (#2)
opened by tomaarsen (HF Staff)

README.md
```diff
@@ -61,7 +61,11 @@ from sentence_transformers import SentenceTransformer
 import torch
 
 # Load the model, optionally in float16 precision for faster inference
-model = SentenceTransformer(
+model = SentenceTransformer(
+    "BAAI/bge-code-v1",
+    trust_remote_code=True,
+    model_kwargs={"torch_dtype": torch.float16},
+)
 
 # Prepare a prompt given an instruction
 instruction = 'Given a question in text, retrieve SQL queries that are appropriate responses to the question.'
```
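The `torch_dtype=torch.float16` argument in the snippet loads the weights in half precision, which halves per-parameter storage at a small cost in numeric accuracy. A minimal standard-library sketch of that trade-off (using `struct`'s float16/float32 packing rather than torch itself):

```python
import struct

# float16 ("e") stores each value in 2 bytes; float32 ("f") in 4.
# This is why loading with torch_dtype=torch.float16 roughly halves
# the memory footprint of the model weights.
weights = [0.1, -1.5, 3.25, 0.0]
fp32_bytes = struct.pack(f"{len(weights)}f", *weights)
fp16_bytes = struct.pack(f"{len(weights)}e", *weights)
print(len(fp32_bytes), len(fp16_bytes))  # 16 8

# Half precision is lossy: each value is rounded to the nearest
# representable float16
roundtripped = struct.unpack(f"{len(weights)}e", fp16_bytes)
print(roundtripped[0] == 0.1)  # False: fp16 rounds 0.1
```

Values like `-1.5`, `3.25`, and `0.0` are exactly representable in float16 and survive the round trip unchanged; `0.1` is not, which is the kind of small rounding error the "optionally in float16 precision" comment trades away for speed and memory.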