Update README.md
README.md CHANGED
@@ -21,4 +21,11 @@ ranking examples (for 250K, the examples file size is a little over 8GB).
 The source of this curated data is [unicamp-dl/mmarco](https://huggingface.co/datasets/unicamp-dl/mmarco) and the full examples json file (27 GB) is
 linked in the [ColBERT V2 repo](https://github.com/stanford-futuredata/ColBERT?tab=readme-ov-file#advanced-training-colbertv2-style).
 
+Following an observation of Arabic tokenization issues (e.g., in BERT models; see https://www.linkedin.com/posts/akhooli_arabic-bert-tokenizers-you-may-need-to-normalize-activity-7225747473523216384-D1oH),
+two new files were uploaded to this dataset (normalized queries and collection). Models based on these files require normalizing the query first:
+```python
+from unicodedata import normalize
+normalized_text = normalize('NFKC', text)
+```
+
 More: https://www.linkedin.com/posts/akhooli_arabic-mmarco-sample-dataset-and-colbert-activity-7225135682044743680-35nN
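
For context on why the NFKC step added above matters: Arabic text often mixes presentation-form ligatures and combining marks with their plain spellings, so visually identical queries can map to different code points and tokenize differently. A minimal sketch of the effect (the sample characters below are illustrative, not taken from the dataset):

```python
from unicodedata import normalize

# U+FEFB is the ARABIC LIGATURE LAM WITH ALEF (a presentation form);
# its plain spelling is LAM (U+0644) followed by ALEF (U+0627).
ligature = "\ufefb"
plain = "\u0644\u0627"

print(ligature == plain)                     # False: distinct code points
print(normalize("NFKC", ligature) == plain)  # True: NFKC folds the ligature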
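```

Without this folding, a query containing the ligature form would not match text stored in the plain spelling, which is the kind of mismatch the normalized queries and collection files are meant to avoid.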