---
license: mit
language:
- ar
configs:
- config_name: pipe
  data_files: "arabic-queries-no-latin.tsv"
  sep: "|"
---

# akhooli/ar-mmarco-sample

This repo has samples from the Arabic (machine-translated) version of the mMARCO dataset, together with mined rankings (the rankings were mined on the English data, but they should apply here since translations are aligned across languages).
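
To peek at the queries, the `pipe` config declared in the metadata above should make the file loadable with the `datasets` library, or the pipe-separated file can be read directly with pandas. A minimal sketch, assuming two columns (query id and query text) and no header row:

```python
from datasets import load_dataset
import pandas as pd

# Load via the Hugging Face datasets library, using the "pipe" config from the card metadata
queries = load_dataset("akhooli/ar-mmarco-sample", "pipe", split="train")
print(queries[0])

# Or read the downloaded file directly; the column names here are assumptions
df = pd.read_csv("arabic-queries-no-latin.tsv", sep="|", names=["qid", "query"])
print(df.head())
```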

The purpose is to train an Arabic ColBERT V2 model (using free compute, so the model is not fully trained).

The original dataset has a little over 800K training queries. I filtered out queries containing English words, leaving around 700K, then sampled 250K of them along with their ranking examples (for the 250K sample, the examples file is a little over 8 GB).
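
The filtering script itself is not part of this repo, but the idea is simple: drop every query that contains Latin letters, then draw a random sample. A rough sketch under assumed file names and column layout (not the exact script used):

```python
import pandas as pd

# Assumed layout: query id and Arabic query text, tab-separated, no header row
queries = pd.read_csv("arabic_queries.train.tsv", sep="\t", names=["qid", "query"])

# Drop queries containing any Latin letter, then sample 250K of the remainder
no_latin = queries[~queries["query"].str.contains(r"[A-Za-z]", regex=True)]
sample = no_latin.sample(n=250_000, random_state=42)
sample.to_csv("arabic-queries-sample.tsv", sep="\t", index=False, header=False)
```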

The source of this curated data is [unicamp-dl/mmarco](https://huggingface.co/datasets/unicamp-dl/mmarco), and the full examples JSON file (27 GB) is linked in the [ColBERT V2 repo](https://github.com/stanford-futuredata/ColBERT?tab=readme-ov-file#advanced-training-colbertv2-style).
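
Training with these files follows the ColBERTv2-style recipe in the linked repo. The sketch below is adapted from the training snippet in that README; the file paths, GPU count, and the base checkpoint are placeholders (an Arabic BERT checkpoint would replace the English one):

```python
from colbert.infra import Run, RunConfig, ColBERTConfig
from colbert import Trainer

def train():
    # nranks = number of GPUs; reduce bsize/nway/accumsteps to fit smaller (free) hardware
    with Run().context(RunConfig(nranks=1)):
        triples = "examples.json"                  # placeholder: mined ranking examples for the sampled queries
        queries = "arabic-queries-no-latin.tsv"    # placeholder: Arabic queries file
        collection = "arabic-collection.tsv"       # placeholder: Arabic passage collection

        config = ColBERTConfig(bsize=32, lr=1e-5, warmup=20_000, doc_maxlen=180,
                               dim=128, nway=64, accumsteps=1,
                               similarity="cosine", use_ib_negatives=True)
        trainer = Trainer(triples=triples, queries=queries, collection=collection, config=config)

        # Start from an Arabic BERT checkpoint of your choice (placeholder name)
        trainer.train(checkpoint="your-arabic-bert-base")

if __name__ == "__main__":
    train()
```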

Following an observation of Arabic tokenization issues (e.g. in BERT models) - see https://www.linkedin.com/posts/akhooli_arabic-bert-tokenizers-you-may-need-to-normalize-activity-7225747473523216384-D1oH -
two new files were uploaded to this dataset (normalized queries and collection; visually they look the same as the originals). Models trained on these files require normalizing the query first:

```python
from unicodedata import normalize

# `text` is the raw query string; apply NFKC normalization before passing it to the model
normalized_text = normalize('NFKC', text)
```
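
The normalized query and collection files amount to applying this same NFKC normalization to every line. A minimal sketch for a tab-separated collection file (file names and layout assumed, not the exact script used):

```python
from unicodedata import normalize

# Normalize every passage in an assumed "pid \t passage" collection file
with open("arabic-collection.tsv", encoding="utf-8") as src, \
     open("arabic-collection-normalized.tsv", "w", encoding="utf-8") as dst:
    for line in src:
        pid, passage = line.rstrip("\n").split("\t", 1)
        dst.write(f"{pid}\t{normalize('NFKC', passage)}\n")
```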

The most recent addition to this repo (the 711K set, i.e. all queries without Latin words) is normalized. The corresponding examples file is 24 GB.

More: https://www.linkedin.com/posts/akhooli_arabic-mmarco-sample-dataset-and-colbert-activity-7225135682044743680-35nN