Update README.md
---

# Introduction

We introduce **Elb**edding, *TBD*

For more technical details, refer to our paper: *TBD*

# Model Details

- Base Decoder-only LLM: *TBD*
- Pooling Type: Last EOS Token
- Maximum context length: 512
- Embedding Dimension: 4096
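The last-EOS-token pooling named above can be sketched in a few lines. This is an illustrative implementation, not the model's released code: the `last_eos_pool` helper and the toy tensors are made up for the example, with only the 4096 embedding dimension taken from the card.

```python
import numpy as np

def last_eos_pool(hidden_states: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Take the hidden state at the last non-padding (EOS) position of each sequence.

    hidden_states: (batch, seq_len, hidden_dim)
    attention_mask: (batch, seq_len), 1 for real tokens, 0 for padding
    """
    # Index of the last attended token in each sequence
    last_idx = attention_mask.sum(axis=1) - 1        # (batch,)
    batch_idx = np.arange(hidden_states.shape[0])
    emb = hidden_states[batch_idx, last_idx]         # (batch, hidden_dim)
    # L2-normalize so dot products between embeddings are cosine similarities
    return emb / np.linalg.norm(emb, axis=1, keepdims=True)

# Toy batch: 2 sequences of length 4, hidden dim 4096 as in the card
h = np.random.randn(2, 4, 4096)
m = np.array([[1, 1, 1, 0],   # length 3 -> pool position 2
              [1, 1, 1, 1]])  # length 4 -> pool position 3
embeddings = last_eos_pool(h, m)
print(embeddings.shape)  # (2, 4096)
```

With right-padded batches this picks a different position per sequence, which is why the attention mask (not just index `-1`) is needed.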
## Supported Languages

*TBD*

## MTEB Benchmark Evaluation

*TBD*
## FAQ

Yes, this is how the model is trained; otherwise you will see a performance degradation. On the other hand, there is no need to add instructions on the document side.
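The asymmetry described in that answer can be sketched as follows: an instruction is prepended to queries only, while documents are encoded unchanged. The `format_query` helper and the `Instruct:`/`Query:` template below are hypothetical — this card does not specify the actual instruction format.

```python
def format_query(instruction: str, query: str) -> str:
    # Prepend the task instruction to the query; documents are left as-is.
    return f"Instruct: {instruction}\nQuery: {query}"

instruction = "Given a web search query, retrieve relevant passages."
queries = [format_query(instruction, q) for q in ["what is last-token pooling?"]]
documents = ["Last-token pooling takes the hidden state at the final EOS position."]

print(queries[0])    # instruction-prefixed query text
print(documents[0])  # unchanged document text
```

Both lists would then be passed to the same encoder; only the query side carries the instruction prefix.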
## Citation

*TBD*

## Limitations

*TBD*