# Pashto Corpus - Training Ready

Clean, preprocessed Pashto text dataset, ready for fine-tuning language models.
## Dataset Description

This dataset contains preprocessed, chunked Pashto text drawn from educational materials, literary works, and academic texts. It is formatted and ready for training or fine-tuning language models such as GPT, Llama, and mT5.
## Splits
| Split | Samples | Purpose |
|---|---|---|
| Train | ~25,000 | Model training |
| Validation | ~2,800 | Hyperparameter tuning |
| Test | ~2,800 | Final evaluation |
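Each split can also be loaded on its own; the split names here are assumed to be `train`, `validation`, and `test`, matching the training example later in this card:

```python
from datasets import load_dataset

# Load only the test split rather than the full DatasetDict.
test = load_dataset("tasal9/Pashto-Corpus-Training-Ready", split="test")
print(test.num_rows)  # expected on the order of ~2,800
```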
## Data Fields

- `text`: cleaned Pashto text chunk
- `source`: source category (`literary_works`, `grammar_books`, `textbooks_kpk`, `academic_research`)
- `length`: character count of the chunk
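For illustration, a single record has the following shape (the values below are hypothetical, not taken from the dataset):

```python
{
    "text": "د پښتو متن بېلګه ...",  # cleaned Pashto text chunk (at most 2000 characters)
    "source": "literary_works",       # one of the four source categories
    "length": 1342,                   # character count of the "text" field
}
```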
## Preprocessing
- Removed PDF artifacts and page numbers
- Normalized Arabic/Pashto characters
- Split long documents into chunks (max 2,000 characters)
- Quality filtering (minimum length, Pashto-script character ratio); see the sketch after this list
- Train/Val/Test split (80/10/10)
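The preprocessing code itself is not published with the dataset; the sketch below is a minimal, hypothetical reconstruction of the chunking and quality-filter steps described above, assuming a sentence-aware character chunker and an Arabic/Pashto-script ratio filter. The constants `MIN_CHUNK_CHARS` and `MIN_SCRIPT_RATIO` are illustrative values, not the ones used to build this corpus.

```python
import re

MAX_CHUNK_CHARS = 2000   # per the card: chunks of at most 2000 characters
MIN_CHUNK_CHARS = 100    # illustrative minimum-length threshold
MIN_SCRIPT_RATIO = 0.7   # illustrative Arabic/Pashto-script ratio threshold

# The Arabic script blocks cover Pashto letters, including
# Pashto-specific ones such as ټ ډ ړ ږ ښ ڼ.
ARABIC_SCRIPT = re.compile(r"[\u0600-\u06FF\u0750-\u077F]")

def script_ratio(text: str) -> float:
    """Fraction of non-space characters that are in the Arabic script blocks."""
    chars = [c for c in text if not c.isspace()]
    if not chars:
        return 0.0
    return sum(1 for c in chars if ARABIC_SCRIPT.match(c)) / len(chars)

def chunk_document(doc: str) -> list[str]:
    """Split a document into chunks of at most MAX_CHUNK_CHARS,
    breaking on sentence-ending punctuation (very long sentences
    are kept intact in this sketch)."""
    sentences = re.split(r"(?<=[.!?؟۔])\s+", doc)
    chunks, current = [], ""
    for sent in sentences:
        if current and len(current) + len(sent) + 1 > MAX_CHUNK_CHARS:
            chunks.append(current.strip())
            current = ""
        current += sent + " "
    if current.strip():
        chunks.append(current.strip())
    return chunks

def keep(chunk: str) -> bool:
    """Quality filter: minimum length and minimum script ratio."""
    return len(chunk) >= MIN_CHUNK_CHARS and script_ratio(chunk) >= MIN_SCRIPT_RATIO
```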
## Usage

### Load with 🤗 Datasets

```python
from datasets import load_dataset

dataset = load_dataset("tasal9/Pashto-Corpus-Training-Ready")
print(dataset["train"][0]["text"][:200])
```
### Tokenization Example

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/mt5-base")

def tokenize(example):
    return tokenizer(
        example["text"],
        padding="max_length",
        truncation=True,
        max_length=512,
    )

tokenized = dataset.map(tokenize, batched=True)
```
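With `batched=True`, `example["text"]` is a list of strings, which the tokenizer accepts directly. `padding="max_length"` pads every chunk to a fixed 512 tokens, which keeps batch shapes uniform but wastes compute on short chunks; dynamic padding via a data collator is a common alternative.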
### Training Example

```python
from transformers import AutoModelForCausalLM, DataCollatorForLanguageModeling, Trainer, TrainingArguments

model = AutoModelForCausalLM.from_pretrained("gpt2")
# The mT5 tokenizer above has a much larger vocabulary than GPT-2's,
# so resize the embedding matrix to avoid out-of-range token IDs.
model.resize_token_embeddings(len(tokenizer))

# For causal LM training, the collator copies input_ids into labels (mlm=False).
data_collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

training_args = TrainingArguments(
    output_dir="./pashto-model",
    per_device_train_batch_size=16,
    learning_rate=5e-5,
    num_train_epochs=3,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=data_collator,
)
trainer.train()
```
## License

CC-BY 4.0: free for commercial and research use with attribution.
## Citation

```bibtex
@dataset{pashto-corpus-training-2026,
  author  = {Tasal},
  title   = {Pashto Corpus - Training Ready},
  year    = {2026},
  url     = {https://huggingface.co/datasets/tasal9/Pashto-Corpus-Training-Ready},
  version = {1.0.0}
}
```
## Contact
- Author: Tasal
- Email: yaqoobtasal@zamai.dev
- HuggingFace: https://huggingface.co/tasal9