Update README.md
README.md CHANGED

@@ -152,7 +152,7 @@ datasets:

 This is a SPLADE sparse retrieval model based on BERT-Tiny (4M) that was trained by distilling a Cross-Encoder on the MSMARCO dataset. The cross-encoder used was [ms-marco-MiniLM-L6-v2](https://huggingface.co/cross-encoder/ms-marco-MiniLM-L6-v2).

-
+This Tiny SPLADE model beats `BM25` by `65.6%` on the MSMARCO benchmark. While this model is `15x` smaller than Naver's official `splade-v3-distilbert`, it possesses `80%` of its performance on MSMARCO. This model is small enough to be used without a GPU on a dataset of a few thousand documents.

 - `Collection:` https://huggingface.co/collections/rasyosef/splade-tiny-msmarco-687c548c0691d95babf65b70
 - `Distillation Dataset:` https://huggingface.co/datasets/yosefw/msmarco-train-distil-v2
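Since the added paragraph claims the model runs without a GPU on a few thousand documents, a minimal retrieval sketch may help readers of the updated README. It assumes the checkpoint loads with the `SparseEncoder` class from `sentence-transformers` (v5+); the model id `rasyosef/splade-tiny` is a placeholder, not confirmed by this diff, so substitute the actual checkpoint name from the collection linked above.

```python
# Minimal sketch: CPU-only sparse retrieval with a SPLADE checkpoint from this
# collection. The model id below is a placeholder -- use the real checkpoint name.
from sentence_transformers import SparseEncoder

model = SparseEncoder("rasyosef/splade-tiny")  # hypothetical id; runs on CPU

query = "how are sparse retrieval models trained"
docs = [
    "SPLADE models expand queries and documents into sparse vocabulary-sized vectors.",
    "BM25 is a classical lexical ranking function based on term statistics.",
    "Cross-encoder distillation transfers a reranker's scores to a smaller retriever.",
]

# Encode the query and documents, then score with sparse dot products.
query_emb = model.encode_query(query)
doc_embs = model.encode_document(docs)
scores = model.similarity(query_emb, doc_embs)
print(scores)  # higher score = better match
```

For a corpus of only a few thousand documents, the sparse document vectors can simply be kept in memory and scored against each query with a dot product; no inverted index or ANN structure is needed at that scale.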