Update README.md
README.md CHANGED

@@ -75,6 +75,8 @@ Wikipedia dataset containing cleaned articles of the Standard Arabic Wikipedia v
 
 *Gemma Tokenizer
 
+
+### Original model card
 The dataset is built from the Wikipedia dumps (https://dumps.wikimedia.org/)
 with one subset per language, each containing a single train split.
 
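The README excerpt above says the dataset ships one subset per language, each containing a single train split. A minimal sketch of loading such a subset with the Hugging Face `datasets` library, assuming the dataset is hosted on the Hub (the repository id `<user>/<dataset>` and the `"ar"` subset name are hypothetical placeholders, not taken from the diff):

```python
from datasets import load_dataset

# Per the README: one subset ("config") per language, each with a single
# "train" split of cleaned Wikipedia articles. The repository id and the
# "ar" (Arabic) subset name below are placeholders.
ds = load_dataset("<user>/<dataset>", "ar", split="train")

print(ds[0])  # inspect one cleaned article record
```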