Typo fix
#6
by
srijithrajamohan - opened
README.md CHANGED

```diff
@@ -41,7 +41,7 @@ model-index:
 
 # SentenceTransformer based on Alibaba-NLP/gte-modernbert-base
 
-This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Alibaba-NLP/gte-modernbert-base](https://huggingface.co/Alibaba-NLP/gte-modernbert-base) on the
+This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Alibaba-NLP/gte-modernbert-base](https://huggingface.co/Alibaba-NLP/gte-modernbert-base) on the Medical dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity for the purpose of semantic caching in the medical domain.
 
 ## Model Details
 
@@ -84,7 +84,7 @@ Then you can load this model and run inference.
 from sentence_transformers import SentenceTransformer
 
 # Download from the 🤗 Hub
-model = SentenceTransformer("redis/langcache-embed-v1")
+model = SentenceTransformer("redis/langcache-embed-medical-v1")
 # Run inference
 sentences = [
     'Will the value of Indian rupee increase after the ban of 500 and 1000 rupee notes?',
@@ -126,7 +126,7 @@ print(similarities.shape)
 #### Medical
 
 * Dataset: Medical dataset
-* Size:
+* Size:
 * Columns: <code>question_1</code>, <code>question_2</code>, and <code>label</code>
 
 ## Citation
```
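The README snippet being corrected loads the model and encodes sentences, then computes pairwise similarities (the `print(similarities.shape)` context line). The similarity step that a semantic cache thresholds on can be sketched with plain cosine similarity over 768-dimensional vectors. This is a minimal illustration using made-up random embeddings as a stand-in for `model.encode(...)`, not the model's actual output:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    # Normalize each row to unit length, then take pairwise dot products:
    # the resulting scores are what a semantic cache compares to a threshold
    # to decide whether a new query matches a cached one.
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

# Stand-in for embeddings = model.encode(sentences):
# three fake 768-dimensional embeddings.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(3, 768))

similarities = cosine_similarity(embeddings, embeddings)
print(similarities.shape)  # (3, 3)
```

With real embeddings from `redis/langcache-embed-medical-v1`, near-duplicate medical questions would score close to 1.0 on this scale, which is what makes the model usable as a cache key comparator.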