---
license: cc-by-nc-4.0
language:
- en
- ar
size_categories:
- 100M<n<1B
task_categories:
- text-generation
---

# DeepLatent Pretraining Data

Combined English and Arabic pretraining data for DeepLatent language models.

## Dataset Statistics

| Language | Rows |
|----------|------|
| English | 40,182,085 |
| Arabic | 91,363,714 |
| **Total** | **131,545,799** |

## Schema

- `text` (string): The document text
- `language` (string): Language code (`en` or `ar`)
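
As a minimal illustration of this two-field schema, here is what individual rows look like (the records below are invented examples, not actual rows from the dataset):

```python
# Hypothetical records matching the schema above (invented for
# illustration, not drawn from the dataset).
rows = [
    {"text": "An example English document.", "language": "en"},
    {"text": "مثال على وثيقة عربية.", "language": "ar"},
]

# Every row carries exactly these two string fields.
for row in rows:
    assert set(row) == {"text", "language"}
    assert row["language"] in {"en", "ar"}
```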

## Data Sources

- **English**: FineWeb-Edu sample
- **Arabic**: Deduplicated Arabic web corpus

## Usage

```python
from datasets import load_dataset

# Load the full dataset
dataset = load_dataset("almaghrabima/deeplatent-pretraining-data")

# Filter to a single language
english = dataset.filter(lambda x: x['language'] == 'en')
arabic = dataset.filter(lambda x: x['language'] == 'ar')
```

## License

Released under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/).