Update README.md
README.md
CHANGED
@@ -233,20 +233,27 @@ Encompassing a wide spectrum of content, ranging from social media conversations
This corpus offers comprehensive insights into the linguistic diversity and cultural nuances of Arabic expression.

## Usage

If you want to use this dataset, pick one of the available configs:

`Ara--MBZUAI--Bactrian-X` | `Ara--OpenAssistant--oasst1` | `Ary--AbderrahmanSkiredj1--Darija-Wikipedia`

`Ara--Wikipedia` | `Ary--Wikipedia` | `Arz--Wikipedia`

`Ary--Ali-C137--Darija-Stories-Dataset` | `Ara--Ali-C137--Hindawi-Books-dataset`
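
The list of configs can also be retrieved programmatically; a minimal sketch using the `datasets` library's `get_dataset_config_names` helper (assuming a reasonably recent release of the library):

```python
from datasets import get_dataset_config_names

# List every config exposed by the dataset repository
configs = get_dataset_config_names('Ali-C137/Mixed-Arabic-Datasets')
print(configs)
```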

Example of usage:

```python
from datasets import load_dataset

dataset = load_dataset('Ali-C137/Mixed-Arabic-Datasets', 'Ara--MBZUAI--Bactrian-X')
```
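
Once loaded, the object behaves like a standard `DatasetDict`; a quick sanity check (split names and columns vary per config, so treat this as a sketch):

```python
# Inspect splits, column names and row counts
print(dataset)

# Peek at the first record, assuming the config exposes a 'train' split
print(dataset['train'][0])
```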

If you loaded multiple datasets and want to merge them, you can simply leverage `concatenate_datasets()` from `datasets`:

```python
from datasets import concatenate_datasets

# dataset1 and dataset2 are assumed to have been loaded with load_dataset() as above
dataset3 = concatenate_datasets([dataset1['train'], dataset2['train']])
```

Note: process the datasets before merging to make sure the resulting dataset is consistent.
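
For instance, different configs may expose different columns, and `concatenate_datasets()` expects matching schemas. Below is a minimal, hypothetical sketch that keeps only a shared `text` column before merging; the column name is an assumption for illustration, not the dataset's documented schema:

```python
from datasets import concatenate_datasets, load_dataset

dataset1 = load_dataset('Ali-C137/Mixed-Arabic-Datasets', 'Ara--Wikipedia')
dataset2 = load_dataset('Ali-C137/Mixed-Arabic-Datasets', 'Ary--Wikipedia')

def keep_text_only(ds):
    # Drop every column except 'text' so both datasets share one schema
    extra = [col for col in ds.column_names if col != 'text']
    return ds.remove_columns(extra)

dataset3 = concatenate_datasets([
    keep_text_only(dataset1['train']),
    keep_text_only(dataset2['train']),
])
```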

## Dataset Details

@@ -271,6 +278,8 @@ MAD draws from a diverse array of sources, each contributing to its richness and

- [✔] Arabic Wikipedia : [Dataset Link](https://huggingface.co/datasets/wikipedia)
- [✔] Moroccan Arabic Wikipedia : [Dataset Link](https://huggingface.co/datasets/wikipedia)
- [✔] Egyptian Arabic Wikipedia : [Dataset Link](https://huggingface.co/datasets/wikipedia)
- [✔] Darija Stories Dataset : [Dataset Link](https://huggingface.co/datasets/Ali-C137/Darija-Stories-Dataset)
- [✔] Hindawi Books Dataset : [Dataset Link](https://huggingface.co/datasets/Ali-C137/Hindawi-Books-dataset)
- [ ] Pain/ArabicTweets : [Dataset Link](https://huggingface.co/datasets/pain/Arabic-Tweets)
- [ ] Abu-El-Khair Corpus : [Dataset Link](https://huggingface.co/datasets/arabic_billion_words)
- [ ] QuranExe : [Dataset Link](https://huggingface.co/datasets/mustapha/QuranExe)