Tags: Sentence Similarity · sentence-transformers · Safetensors · Transformers · Hebrew · bert · feature-extraction · text-embeddings-inference
Instructions for using MPA/sambert with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
  - sentence-transformers

How to use MPA/sambert with sentence-transformers:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("MPA/sambert")

sentences = [
    "The weather is lovely today.",
    "It's so sunny outside!",
    "He drove to the stadium.",
]
embeddings = model.encode(sentences)

similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)  # [3, 3]
```

- Transformers
How to use MPA/sambert with Transformers:

```python
# Load model directly
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("MPA/sambert")
model = AutoModel.from_pretrained("MPA/sambert")
```

- Notebooks
- Google Colab
- Kaggle
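The plain Transformers snippet above returns token-level hidden states, not sentence embeddings. The model's README (patched in the commit below) pools these with an attention-mask-aware mean; the sketch here is one common way to write that `mean_pooling` helper, assuming a PyTorch backend:

```python
import torch

def mean_pooling(model_output, attention_mask):
    # Average token embeddings, ignoring padding positions.
    token_embeddings = model_output[0]  # first element is last_hidden_state
    mask = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    summed = torch.sum(token_embeddings * mask, dim=1)
    counts = torch.clamp(mask.sum(dim=1), min=1e-9)  # avoid division by zero
    return summed / counts

# Usage with the tokenizer/model loaded above:
# encoded = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# with torch.no_grad():
#     output = model(**encoded)
# embeddings = mean_pooling(output, encoded['attention_mask'])
```

Masking before averaging matters: padded positions would otherwise dilute the embedding of shorter sentences in a batch.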
Update README.md
README.md
CHANGED

```diff
@@ -57,8 +57,8 @@ def mean_pooling(model_output, attention_mask):
 sentences = ["אמא הלכה לגן", "אבא הלך לגן", "ירקוני קונה לנו פיצות"]
 
 # Load model from HuggingFace Hub
-tokenizer = AutoTokenizer.from_pretrained('
-model = AutoModel.from_pretrained('
+tokenizer = AutoTokenizer.from_pretrained('MPA/sambert')
+model = AutoModel.from_pretrained('MPA/sambert')
 
 # Tokenize sentences
 encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
```
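Once sentence embeddings are pooled, the similarity matrix that sentence-transformers computes via `model.similarity` can also be reproduced by hand with plain cosine similarity. A minimal sketch in PyTorch, independent of the model itself (the function name `pairwise_cosine` is illustrative, not from the README):

```python
import torch
import torch.nn.functional as F

def pairwise_cosine(embeddings: torch.Tensor) -> torch.Tensor:
    # L2-normalize each row, then a matrix product yields all pairwise cosines.
    normed = F.normalize(embeddings, p=2, dim=1)
    return normed @ normed.T

# e.g. embeddings of shape [3, hidden] -> a 3x3 similarity matrix,
# with 1.0 on the diagonal (each sentence compared with itself)
```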