license: apache-2.0
Romanian PDFs - Processed Dataset
This is a processed and filtered version of the Romanian subset of the FinePDFs dataset, containing high-quality Romanian PDF documents extracted from Common Crawl. Documents were filtered for quality (full_doc_lid_score ≥ 0.5), and redundant metadata columns were removed to reduce size.
Dataset Overview
- Total Documents: 3,254,816
- Total Size: ~24.32 GB (compressed parquet with ZSTD)
- Language: Romanian (ron_Latn)
- Source: FinePDFs (PDF documents extracted from Common Crawl)
- Quality Filter: Only documents with full_doc_lid_score ≥ 0.5 retained
- Filtered Out: 10,316 low-quality documents removed
Dataset Structure
Data Instances
Example from the Romanian subset (values truncated for readability):
{
  "text": "| Subd. /ziunea clasificatiei bugetului aprobat (articole / alineate) | Crediti bugetare aprobate | Angajamente bugetare | ...",
  "id": "<urn:uuid:cfca6b5e-ae8c-4a22-8ecb-0b43230b5445>",
  "dump": "CC-MAIN-2024-38",
  "url": "https://cetatenie.just.ro/storage/2019/12/Detaliere_executie",
  "date": "2024-09-15T09:20:48+00:00",
  "token_count": 3078,
  "page_average_lid_score": 0.7427258491516113,
  "full_doc_lid_score": 0.7427258491516113,
  "per_page_languages": ["ron_Latn"],
  "is_truncated": false,
  "extractor": "docling",
  "page_ends": [3078]
}
Data Fields
- text (string): extracted and cleaned text from the PDF document, preserving structure like tables, lists, and formatting where possible
- id (string): unique identifier for the document (UUID format)
- dump (string): Common Crawl dump identifier (e.g., CC-MAIN-2024-38)
- url (string): source URL where the PDF was found
- date (string): ISO-8601 timestamp when the document was crawled
- token_count (int64): number of tokens in the text (pre-computed from original dataset)
- page_average_lid_score (float64): average language identification confidence score across all pages
- full_doc_lid_score (float64): language identification confidence score for the full document (all values ≥ 0.5)
- per_page_languages (list[string]): detected language codes for each page (typically ["ron_Latn"])
- is_truncated (bool): whether the text was truncated due to size limits
- extractor (categorical): extraction method used (e.g., "docling", "rolmOCR")
- page_ends (list[int64]): character offsets marking the end of each page in the text
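Since page_ends holds cumulative end offsets, a document's text can be split back into per-page strings. A minimal sketch, using a synthetic two-page record (not from the dataset) and assuming the offsets index into text as described above:

```python
# Sketch: split a document's text into pages using the page_ends offsets.
# The record below is synthetic; field semantics follow the card's
# description of page_ends as end-of-page offsets into the text.

def split_pages(text: str, page_ends: list[int]) -> list[str]:
    """Return one string per page, slicing text at cumulative end offsets."""
    pages = []
    start = 0
    for end in page_ends:
        pages.append(text[start:end])
        start = end
    return pages

# Synthetic two-page document for illustration.
record = {
    "text": "Page one text.Page two text.",
    "page_ends": [14, 28],
}

pages = split_pages(record["text"], record["page_ends"])
print(pages)  # → ['Page one text.', 'Page two text.']
```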
Removed Columns
The following columns were removed from the original dataset for optimization:
- file_path: internal file path (not useful for end users)
- offset: internal byte offset (not useful for end users)
- language: redundant (all values were "ron_Latn")
- page_average_lid: redundant (all values were "ron_Latn")
- full_doc_lid: redundant (all values were "ron_Latn")
Processing Details
This dataset was created from the original FinePDFs Romanian subset by:
- Quality filtering: Removed 10,316 documents with full_doc_lid_score < 0.5 (0.3% of total)
- Column optimization: Removed 5 redundant/internal columns
- Memory-efficient processing: Processed one file at a time so the ~22 GB of source data could be handled on 32 GB of RAM
- Compression: Re-saved with ZSTD compression (compression_level=3) for compact storage
Quality Statistics
- Mean full_doc_lid_score: 0.986 (very high confidence Romanian content)
- Score range: 0.5000 to 1.0000
- Documents filtered: 10,316 low-quality documents removed
Files
- 000_00000_processed.parquet: 578,527 documents (~4.30 GB)
- 000_00001_processed.parquet: 569,277 documents (~4.26 GB)
- 000_00002_processed.parquet: 564,650 documents (~4.19 GB)
- 000_00003_processed.parquet: 390,862 documents (~2.88 GB)
- 000_00004_processed.parquet: 390,362 documents (~2.98 GB)
- 000_00005_processed.parquet: 384,547 documents (~2.86 GB)
- 000_00006_processed.parquet: 376,591 documents (~2.83 GB)
Use Cases
- Training or fine-tuning Romanian language models on diverse document types
- Domain-specific model training (legal, government, academic, technical documents)
- Document understanding and information extraction
- Question answering systems over Romanian documents
- Text summarization and generation from structured documents
- PDF content analysis and classification
- Building Romanian document embeddings and search systems
Source Dataset
This dataset is derived from FinePDFs, which provides high-quality PDF extractions from Common Crawl. FinePDFs uses advanced extraction methods (Docling, rolmOCR) to preserve document structure and formatting.
Extraction Methods
- Docling: Modern PDF extraction with structure preservation
- rolmOCR: OCR-based extraction for scanned/image PDFs
Citation Information
@dataset{finepdfs2025,
title = {FinePDFs: High-Quality PDF Extraction from Common Crawl},
author = {HuggingFace FW Team},
year = {2025},
publisher = {Hugging Face Datasets},
url = {https://huggingface.co/datasets/HuggingFaceFW/finepdfs},
note = {Source: Common Crawl PDF documents with advanced extraction methods}
}
If you use this processed dataset, please also cite the original FinePDFs dataset.
License
This dataset follows the original FinePDFs license. The PDF content is extracted from publicly available documents on the web via Common Crawl; users should be aware of, and respect, any copyright or usage restrictions on the original documents.
Dataset Creator
Processed and uploaded by Yxanul
Additional Notes
- This dataset complements the Romanian Wikipedia dataset by providing diverse, real-world document content
- PDFs include various domains: legal documents, government publications, academic papers, technical documentation, and more
- The high average LID score (0.986) indicates very clean Romanian language content
- Token counts are pre-computed from the original dataset (tokenization method may vary)