
finepdfs-sample-75k-meta

Overview

This dataset is a metadata-enriched multilingual sample of the original FinePDFs dataset.

FinePDFs is a large-scale collection of document-level texts extracted from PDF files, sourced primarily from Common Crawl. The dataset emphasizes high-quality document extraction, structural coherence, and large-scale coverage of technical, scientific, educational, and administrative content commonly distributed in PDF form.

This release contains a sample of roughly 75,000 documents (75,015 in total) drawn from FinePDFs and enriched with additional symbolic metadata layers designed to improve interpretability, licensing awareness, and semantic analysis of document-based web content.


Language Distribution

The dataset includes five languages, with an approximately equal number of documents per language:

| Language | Subset | Number of Entries | Percentage |
|----------|--------|-------------------|------------|
| English  | en     | 15,000            | 20%        |
| Italian  | it     | 15,009            | 20%        |
| French   | fr     | 15,006            | 20%        |
| German   | de     | 15,003            | 20%        |
| Spanish  | es     | 14,997            | 20%        |

Sampling Strategy

This dataset represents a sample of the original FinePDFs corpus.
The sampling preserves language diversity as well as the filtering and extraction guarantees of the original FinePDFs corpus.

The goal is not to replace FinePDFs, but to provide a research-oriented subset enriched with structured metadata that enables deeper analysis of document properties, licensing signals, and semantic content.


Metadata Enrichment

Each document has been enriched with two complementary categories of metadata.

License-Related Metadata

License-related metadata aims to assess whether the source documents may be considered permissively usable. For each document, we check:

  • whether the original source URL is still reachable,
  • whether the hosting domain’s robots.txt allows crawler access,
  • whether the website mentions any license governing the document content.
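As a rough sketch, the first two checks can be implemented with the Python standard library alone. The timeout, user agent, and error handling below are illustrative assumptions, not a description of the dataset's actual pipeline:

```python
# Sketch of the URL-reachability and robots.txt checks described above,
# using only the Python standard library. All defaults here are assumptions.
import urllib.request
import urllib.robotparser
from urllib.parse import urlparse


def check_url_reachable(url: str, timeout: float = 10.0) -> bool:
    """Return True if the source URL still responds with a non-error status."""
    try:
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except Exception:
        # DNS failure, timeout, HTTP error, etc. all count as unreachable.
        return False


def check_robots_allows(url: str, user_agent: str = "*") -> bool:
    """Return True if the hosting domain's robots.txt permits fetching this URL."""
    parts = urlparse(url)
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    try:
        rp.read()
    except Exception:
        # Treat an unreadable robots.txt conservatively as "not allowed".
        return False
    return rp.can_fetch(user_agent, url)
```

The third check (detecting license mentions on the hosting page) requires fetching and scanning page content and is not shown here.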

These indicators are intended to support large-scale dataset auditing and filtering rather than provide definitive legal conclusions.


Semantics-Related Metadata

Semantics-related metadata provides a symbolic, structured characterization of document content. Four main areas are explored:

  1. Formal document features
    Including length, structural complexity, readability, and informativeness.

  2. Entity-centric analysis
    Detection of people, organizations, temporal references, and geographical locations.

  3. Domain and topic classification
    Assignment of document domains and subdomains reflecting subject matter.

  4. Content risk and sensitivity indicators
    Identification of biased language, sensitive information, opinionated content, negative sentiment, and personal data.

All semantic annotations were extracted using an entirely symbolic processing pipeline, selected for scalability, cost efficiency, and reproducibility.
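As an illustration of the formal document features above, the standard LIX readability index (surfaced later in the `eai_readabilityLIX` field) is the average sentence length plus the percentage of long words (more than six characters). The naive regex tokenization below is an assumption for illustration, not the dataset's actual implementation:

```python
# Minimal sketch of the LIX readability index:
# LIX = words / sentences + 100 * long_words / words,
# where a "long word" has more than six characters.
# The regex tokenization is deliberately naive.
import re


def lix(text: str) -> float:
    words = re.findall(r"\w+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if not words or not sentences:
        return 0.0
    long_words = [w for w in words if len(w) > 6]
    return len(words) / len(sentences) + 100.0 * len(long_words) / len(words)
```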

In particular, semantics-related metadata was derived using the proprietary expert.ai knowledge graph.


Methodology Notes

  • No neural models were used for metadata extraction.
  • All enrichment steps are deterministic and rule-based.
  • The pipeline is designed for efficient processing of large-scale document collections.

Work in Progress

This dataset represents an initial public sample of an ongoing enrichment effort.
Future releases aim to extend the same metadata extraction pipeline to the entirety of the FinePDFs corpus.

A comprehensive technical report will document methodology, coverage, and validation results.


Data Fields

Each entry contains the following fields:

| Field | Type | Description |
|-------|------|-------------|
| `text` | string | Main text content |
| `id` | string | Unique identifier for this sample |
| `dump` | string | Common Crawl dump this sample was part of |
| `url` | string | URL of the original page where the text was present |
| `date` | string | Crawl date (from Common Crawl) |
| `file_path` | string | S3 path for the individual Common Crawl WARC file containing this sample |
| `offset` | int | Offset in the Common Crawl WARC file containing this sample |
| `language` | string | ISO 639-3 code for the language and script of this sample |
| `per_page_languages` | list[string] | Per-page ISO 639-3 language and script codes |
| `page_average_lid` | string | ISO 639-3 language and script detected by averaging LID scores across pages |
| `page_average_lid_score` | float | Score of the top-detected language from page-level averaging |
| `full_doc_lid` | string | ISO 639-3 language and script detected from the first 40k characters |
| `full_doc_lid_score` | float | Score of the top-detected language from full-document LID |
| `is_truncated` | bool | Indicates whether the document is truncated in Common Crawl |
| `extractor` | string | PDF extractor used for this sample (docling or rolmOCR) |
| `page_ends` | list[int] | Indices denoting the end position of each page (exclusive) |
| `token_count` | int | Number of tokens computed using the GPT-2 tokenizer |
| `eai_lenChar` | string | Document length in characters |
| `eai_lenClass` | string | Length-based document class |
| `eai_readabilityLIX` | string | LIX readability index |
| `eai_readabilityEAI` | string | Structural and semantic readability score |
| `eai_readabilityCScore` | string | Informativeness classification |
| `eai_quality` | string | Indicators of text quality issues |
| `eai_geoClass` | string | Detected geographic references |
| `eai_timeRef` | string | Detected temporal references |
| `eai_categories` | string | Domain and topic labels |
| `eai_subject` | string | Referent types present |
| `eai_metrics` | string | Entity relevance metrics |
| `eai_idGeonames` | string | GeoNames identifiers |
| `eai_idWikiPeople` | string | Wikipedia people identifiers |
| `eai_idWikiOrg` | string | Wikipedia organization identifiers |
| `eai_idWikiGeo` | string | Wikipedia location identifiers |
| `eai_idWikiOther` | string | Other Wikipedia entity identifiers |
| `eai_idGoogleKGraph` | string | Google Knowledge Graph identifiers |
| `eai_idIATE` | string | IATE terminology identifiers |
| `eai_bias` | string | Indicators of biased content |
| `eai_sensitiveContent` | string | Sensitive content flags |
| `eai_opinions` | string | Opinionated language indicators |
| `eai_negativity` | string | Negative sentiment |
| `eai_privacy` | string | Presence of personal data |
| `eai_source` | string | Source domain |
| `eai_urlReachable` | string | URL reachability status |
| `eai_robotsTxt` | string | Robots.txt permissions |
| `eai_licenseInfo` | string | License information detected |
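The `page_ends` field makes it straightforward to recover per-page text from the flat `text` string, since each entry is an exclusive end offset. The commented `load_dataset` call below uses a placeholder repo id, which should be replaced with the actual dataset id on the Hub:

```python
# Sketch of reconstructing per-page text from the `page_ends` field.
# The repo id in the commented load call is a placeholder.
#
# from datasets import load_dataset
# ds = load_dataset("<org>/finepdfs-sample-75k-meta", split="train")
# doc = ds[0]
# pages = split_pages(doc["text"], doc["page_ends"])


def split_pages(text: str, page_ends: list[int]) -> list[str]:
    """Split a document's text into pages using exclusive end offsets."""
    pages, start = [], 0
    for end in page_ends:
        pages.append(text[start:end])
        start = end
    return pages
```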

Intended Use

This dataset supports:

  • analysis of document-level web corpora,
  • research on licensing-aware dataset construction,
  • multilingual document quality assessment,
  • development of symbolic and hybrid data auditing tools.

It complements, rather than replaces, the original FinePDFs dataset.


Licensing

This dataset inherits the licensing considerations of the original FinePDFs corpus.
The additional metadata is provided as-is and does not constitute legal advice.

Users are responsible for ensuring compliance with applicable licenses and usage terms.
