---
dataset_info:
  features:
    - name: text
      dtype: string
    - name: id
      dtype: string
    - name: dump
      dtype: string
    - name: url
      dtype: string
    - name: date
      dtype: string
    - name: file_path
      dtype: string
    - name: offset
      dtype: int64
    - name: token_count
      dtype: int64
    - name: language
      dtype: string
    - name: page_average_lid
      dtype: string
    - name: page_average_lid_score
      dtype: float64
    - name: full_doc_lid
      dtype: string
    - name: full_doc_lid_score
      dtype: float64
    - name: per_page_languages
      list: string
    - name: is_truncated
      dtype: bool
    - name: extractor
      dtype: string
    - name: page_ends
      list: int64
  splits:
    - name: train
      num_bytes: 12988288725
      num_examples: 118768
    - name: test
      num_bytes: 110433769
      num_examples: 1010
  download_size: 6214257908
  dataset_size: 13098722494
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
license: apache-2.0
task_categories:
  - text-classification
  - text-generation
language:
  - ur
tags:
  - URDU
  - ur
  - text
---

# urdu_finepdfs

https://huggingface.co/datasets/HuggingFaceFW/finepdfs

This repository contains the Urdu subset of Hugging Face's FinePDFs collection — a large, multilingual corpus of parsed PDF documents. The goal of this repo is to make it easier to find, inspect, and use the Urdu portion of FinePDFs for research and model development.

NOTE: FinePDFs is maintained by Hugging Face (HuggingFaceFW). This repository is a small, focused collection and helper for the Urdu data only; it does not re-host the full FinePDFs dataset. Consult the original FinePDFs dataset page for the full dataset card, license, and details.


## What’s inside

  • data/ (optional) — small example files and scripts [FUTURE]. This repo is primarily a pointer plus helpers to the official FinePDFs Urdu shards.
  • scripts/ [FUTURE] — utility scripts to list, preview, and filter Urdu parquet shards (for example: extract metadata, sample text, convert to plain text).
  • README.md — this file.

If you cloned this repo for downstream work, expect the real Urdu shards to be loaded from the official Hugging Face Hub (see the examples below).


## About FinePDFs

FinePDFs is a massive corpus built from PDF documents. It contains hundreds of millions of PDF-derived documents across many languages and scripts and is made available by Hugging Face. The canonical dataset card, data files, and licensing are hosted on the Hugging Face dataset page for HuggingFaceFW/finepdfs.

Key facts (from the original FinePDFs project):

  • FinePDFs is a multilingual PDF corpus released by Hugging Face.
  • The dataset is very large (multiple terabytes); its core release metadata and changelog are available on Hugging Face. If you plan to work with large portions, prefer streaming or cloud compute.
  • The dataset is published under the ODC-BY-style license shown on the dataset page — check the dataset card before redistributing.

## How to load the Urdu subset (recommended)

Option A — load the Urdu config directly (recommended if the dataset exposes a per-language config):

```python
from datasets import load_dataset

# Load the Urdu config by its language+script code (config names are often like "urd_Arab").
# streaming=True avoids downloading large files locally; iterate and process each example.
dset = load_dataset("HuggingFaceFW/finepdfs", "urd_Arab", split="train", streaming=True)

for example in dset:
    print(example["text"][:400])
    break
```

Option B — load the full dataset and filter (use only for small experiments or when configs are unavailable):

```python
from datasets import load_dataset

# On multi-config datasets you may need to pass a config name here as well.
dset = load_dataset("HuggingFaceFW/finepdfs", split="train", streaming=True)

# Filter by the metadata column that indicates language (often called 'language' or 'lang').
urdu = dset.filter(lambda ex: ex.get("language") in ("urd", "urd_Arab"))

for ex in urdu.take(5):
    print(ex["text"][:200])
```

Notes:

  • FinePDFs uses parquet shards with per-language shard names (e.g. urd_Arab-00000-of-00010.parquet). If a per-language config exists you can pass that config name to load_dataset; the data directory and dataset card on Hugging Face list the available language/shard names.
  • Prefer streaming=True on large datasets to avoid downloading multi-GB/TB data.
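As a sketch of how a shard-listing helper might work, one could enumerate the repo files with `huggingface_hub` and keep only paths that look like Urdu shards. The `data/urd_Arab*/*.parquet` path pattern is an assumption — check the repository's actual layout:

```python
from fnmatch import fnmatch


def filter_urdu_shards(paths, pattern="data/urd_Arab*/*.parquet"):
    """Keep only repository paths that look like Urdu parquet shards."""
    return [p for p in paths if fnmatch(p, pattern)]


# Usage (requires `pip install huggingface_hub` and network access):
#   from huggingface_hub import list_repo_files
#   files = list_repo_files("HuggingFaceFW/finepdfs", repo_type="dataset")
#   print(filter_urdu_shards(files))
```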

## Suggested scripts (examples in scripts/) [FUTURE]

  • list_shards.py — list available Urdu shards on the hub (by reading the dataset data/ directory). Useful to see exact shard filenames (e.g. urd_Arab-*.parquet).
  • sample_text.py — stream N examples and write plain-text samples for quick inspection.
  • parquet_to_txt.py — convert a local Urdu parquet shard to newline-separated UTF-8 text.

(Include any scripts you add here and document their CLI flags.)
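A minimal sketch of what sample_text.py could look like — the helper below is hypothetical, and the streaming call in the usage comment assumes the "urd_Arab" config name:

```python
def write_samples(examples, path, limit=100, max_chars=2000):
    """Stream dict-like examples and write up to `limit` truncated text
    fields to `path`, separated by blank lines. Returns the count written."""
    written = 0
    with open(path, "w", encoding="utf-8") as f:
        for ex in examples:
            text = (ex.get("text") or "").strip()
            if not text:
                continue
            f.write(text[:max_chars] + "\n\n")
            written += 1
            if written >= limit:
                break
    return written


# Usage (requires `pip install datasets` and network access):
#   from datasets import load_dataset
#   stream = load_dataset("HuggingFaceFW/finepdfs", "urd_Arab",
#                         split="train", streaming=True)
#   write_samples(stream, "urdu_samples.txt", limit=20)
```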


## License & citation

This repo is a focused helper for the Hugging Face FinePDFs Urdu subset. The original FinePDFs dataset and its license (ODC-BY; see the dataset card) remain the canonical source — check the dataset page for exact terms before reusing or redistributing. If you use the data in research, please cite the FinePDFs dataset and related paper(s) as described on the Hugging Face dataset card.


## Contributing

If you'd like to help:

  • Open an issue to request helpers or report missing shard names.
  • Submit PRs that add useful scripts (e.g., better samplers, cleaning utilities) and unit tests.
  • If you create smaller curated Urdu splits (cleaned, deduplicated), include exact provenance and obey the original license.

## Caution & ethics

  • PDFs can contain copyrighted text, PII, or sensitive information. Before using or releasing derived subsets, verify that your use case complies with the license and with ethical/data-protection requirements.
  • FinePDFs is built from heterogeneous PDF sources — quality and OCR correctness vary across shards and languages.
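Because quality varies, the schema's language-ID confidence columns (full_doc_lid, full_doc_lid_score) can help screen out noisy documents. A minimal sketch of such a filter — the 0.8 threshold is an arbitrary assumption to tune for your use case:

```python
def keep_confident(ex, lid="urd_Arab", min_score=0.8):
    """True when the full-document language ID matches `lid` with at
    least `min_score` confidence (threshold chosen arbitrarily here)."""
    score = ex.get("full_doc_lid_score") or 0.0
    return ex.get("full_doc_lid") == lid and score >= min_score


# Usage with a streaming dataset (see the loading examples above):
#   confident_urdu = dset.filter(keep_confident)
```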

## Contact

To reach the maintainer (Humair) or suggest contact info, add a MAINTAINER entry or open an issue on the repo.


This README was prepared as a focused guide for the Urdu subset of Hugging Face FinePDFs.