Datasets: Add explanation about the different subsets
#11
by albertvillanova (HF Staff) - opened

README.md CHANGED
```diff
@@ -163,6 +163,17 @@ The wikipedia articles were split into multiple, disjoint text blocks of 100 wor
 
 The wikipedia dump is the one from Dec. 20, 2018.
 
+There are two types of DPR embeddings based on two different models:
+- `nq`: the model is trained on the Natural Questions dataset
+- `multiset`: the model is trained on multiple datasets
+
+Additionally, a FAISS index can be created from the embeddings:
+- `exact`: with an exact FAISS index (high RAM usage)
+- `compressed`: with a compressed FAISS index (approximate, but lower RAM usage)
+- `no_index`: without FAISS index
+
+Finally, there is the possibility of generating the dataset without the embeddings:
+- `no_embeddings`
 
 ### Supported Tasks and Leaderboards
 
```
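The added text describes subsets formed by combining an embeddings model with an index type. A minimal sketch of how those combinations enumerate into config names, assuming the `psgs_w100.<model>.<index>` naming scheme used by the `wiki_dpr` dataset (the prefix and the `psgs_w100.no_embeddings` name are assumptions, not stated in this diff):

```python
# Enumerate the dataset configs implied by the description above.
# Assumed naming scheme: "psgs_w100.<model>.<index>" plus a
# "psgs_w100.no_embeddings" variant without embeddings.
from itertools import product

models = ["nq", "multiset"]                     # DPR embeddings model
indexes = ["exact", "compressed", "no_index"]   # FAISS index variant

configs = [f"psgs_w100.{m}.{i}" for m, i in product(models, indexes)]
configs.append("psgs_w100.no_embeddings")       # subset without embeddings

for name in configs:
    print(name)
```

One such config name would then be passed as the second argument to `datasets.load_dataset`, e.g. `load_dataset("wiki_dpr", "psgs_w100.nq.exact")`, though the exact call depends on the installed `datasets` version.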