---
license: odc-by
task_categories:
  - text-generation
  - feature-extraction
language:
  - en
pretty_name: Open Index
size_categories:
  - 1M<n<10M
tags:
  - common-crawl
  - web-crawl
  - markdown
  - text
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/*/*
  - config_name: CC-MAIN-2026-08
    data_files:
      - split: train
        path: data/CC-MAIN-2026-08/*
---

Open Index

Clean markdown from the web, ready for training and retrieval

What is it?

Open Index is a large-scale web text dataset built from Common Crawl. Every page goes through a pipeline that extracts the main content from raw HTML, converts it to clean Markdown using trafilatura, and packages the result into Parquet files with full WARC metadata preserved.

The dataset currently includes crawl CC-MAIN-2026-08 with 11,157,687 documents across 578 shards. We plan to add more snapshots over time.

Open Index is released under the Open Data Commons Attribution License (ODC-By) v1.0, the same license used by Common Crawl.

What is being released?

Each Common Crawl WARC file (~1 GB of compressed HTML) becomes one Parquet shard. The shards live under a crawl-specific directory so multiple snapshots can coexist:

data/
  CC-MAIN-2026-08/
    00000.parquet
    00001.parquet
    ...

Every row in a Parquet file is one web page. Along with the markdown body, we preserve the key WARC header fields (record ID, refers-to ID, and crawl date) as dedicated columns so you can always trace a document back to its source record.
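
If you want to peek at a shard's schema before downloading anything large, the sketch below (assuming huggingface_hub and pyarrow are installed) fetches the first shard of CC-MAIN-2026-08 and prints its columns:

from huggingface_hub import hf_hub_download
import pyarrow.parquet as pq

# Download a single shard and inspect its column schema.
path = hf_hub_download(
    "open-index/draft",
    "data/CC-MAIN-2026-08/00000.parquet",
    repo_type="dataset",
)
print(pq.read_schema(path))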

How to download and use Open Index

Using datasets

from datasets import load_dataset

# stream the entire dataset
ds = load_dataset("open-index/draft", name="CC-MAIN-2026-08", split="train", streaming=True)
for doc in ds:
    print(doc["url"], len(doc["markdown"]))

# load a single shard into memory
ds = load_dataset(
    "open-index/draft",
    data_files="data/CC-MAIN-2026-08/00000.parquet",
    split="train",
)

Using huggingface_hub

from huggingface_hub import snapshot_download

folder = snapshot_download(
    "open-index/draft",
    repo_type="dataset",
    local_dir="./open-index/",
    allow_patterns="data/CC-MAIN-2026-08/*",
)

For faster downloads, install hf_transfer (pip install huggingface_hub[hf_transfer]) and set HF_HUB_ENABLE_HF_TRANSFER=1.

Using DuckDB

SELECT url, host, markdown_length
FROM read_parquet('hf://datasets/open-index/draft/data/CC-MAIN-2026-08/*.parquet')
WHERE host = 'en.wikipedia.org'
LIMIT 10;

Dataset card for Open Index

Dataset Description

Dataset Structure

Data Instance

The following is an example row from the dataset:

{
  "doc_id": "6aaa5be7-a917-5105-aa60-e39ea1d087fc",
  "url": "https://example.com/article/interesting-topic",
  "host": "example.com",
  "crawl_date": "2026-02-06T18:14:58Z",
  "warc_record_id": "<urn:uuid:a1b2c3d4-e5f6-7890-abcd-ef1234567890>",
  "warc_refers_to": "<urn:uuid:f9e8d7c6-b5a4-3210-fedc-ba0987654321>",
  "html_length": 48210,
  "markdown_length": 3847,
  "markdown": "# Interesting Topic\n\nThis is the main content of the page..."
}

Data Fields

| Column | Type | Description |
|---|---|---|
| doc_id | string | Deterministic UUID v5 derived from the canonical URL (doc_id = UUID5(NamespaceURL, url)); identical URLs always produce the same doc_id across crawls |
| url | string | Original URL of the crawled page |
| host | string | Lowercase hostname extracted from the URL |
| crawl_date | string | RFC 3339 timestamp from the WARC record |
| warc_record_id | string | Full WARC-Record-ID of this conversion record (<urn:uuid:...>) |
| warc_refers_to | string | WARC-Record-ID of the original HTTP response this was converted from |
| html_length | int64 | Byte length of the original HTML body before conversion |
| markdown_length | int64 | Byte length of the converted markdown body |
| markdown | string | Clean markdown content extracted from the page |
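
Because doc_id is a UUID v5 over the URL, you can recompute it yourself with Python's standard uuid module. A minimal check, assuming the pipeline hashes the URL string exactly as stored in the url column:

import uuid

# UUID v5 in the URL namespace: identical URLs always map to the same doc_id.
url = "https://example.com/article/interesting-topic"
doc_id = str(uuid.uuid5(uuid.NAMESPACE_URL, url))
print(doc_id)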

Data Splits

The default subset includes all available data across all crawl snapshots. You can also load a specific crawl by using its ID as the config name (e.g. CC-MAIN-2026-08).

Dataset Creation

Curation Rationale

Most open web datasets either release raw text without structure or keep the HTML and leave parsing to the user. Open Index sits in between: it converts every page to Markdown so the content is immediately usable for training, while preserving the full WARC headers so you can always go back to the source if you need to.

Source Data

The source data consists of web pages crawled by the Common Crawl Foundation. Common Crawl archives billions of pages across the public web and makes the raw WARC files freely available on Amazon S3.

Data Processing Steps

The processing pipeline runs in five stages:

  1. Download raw .warc.gz files from Common Crawl S3 (each file is roughly 1 GB compressed)
  2. Filter to keep only HTTP 200 responses with a text/html content type, discarding images, scripts, redirects, and error pages
  3. Convert HTML to Markdown using trafilatura, which extracts the main content and strips boilerplate, navigation, sidebars, footers, and ads
  4. Pack converted records into seekable .md.warc.gz files where each record is wrapped in its own gzip member, matching Common Crawl's concatenated-gzip format
  5. Export each shard to Apache Parquet with Zstd compression, 100,000 rows per row group, and an 8 MB page buffer

Empty conversions (pages where trafilatura could not extract meaningful content) are dropped.
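
For illustration, here is a minimal sketch of stages 2 and 3 using warcio and trafilatura. The exact filtering rules and trafilatura options of the production pipeline are not spelled out above, so the specifics below (and the reliance on a recent trafilatura with Markdown output) are assumptions:

from warcio.archiveiterator import ArchiveIterator
import trafilatura

def iter_markdown(warc_path):
    """Yield (url, markdown) pairs for HTTP 200 text/html responses in a WARC file."""
    with open(warc_path, "rb") as stream:
        for record in ArchiveIterator(stream):
            # Stage 2: keep only successful HTML responses.
            if record.rec_type != "response" or record.http_headers is None:
                continue
            if record.http_headers.get_statuscode() != "200":
                continue
            content_type = record.http_headers.get_header("Content-Type") or ""
            if "text/html" not in content_type:
                continue
            # Stage 3: extract the main content and convert it to Markdown.
            html = record.content_stream().read().decode("utf-8", errors="replace")
            md = trafilatura.extract(html, output_format="markdown")
            if md:  # empty conversions are dropped
                yield record.rec_headers.get_header("WARC-Target-URI"), md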

Compression Ratios

Numbers below are actual measurements summed across all 578 files of CC-MAIN-2026-08 (11,157,687 pages total), projected to the full crawl of 100,000 WARC files.

| Stage | 578 files (measured) | 100,000 files (projected) | Reduction |
|---|---|---|---|
| Raw WARC (.warc.gz, downloaded) | ~468.5 GB | ~83 TB | |
| HTML extracted (uncompressed) | 1.3 TB | ~295 TB | |
| Packed markdown WARC (.md.warc.gz) | ~23.6 GB | ~3.7 TB | -98.3% vs HTML |
| Final Parquet (Zstd level 19) | 16.0 GB | ~2.9 TB | -32.3% vs packed WARC |

The big win is the HTML → Markdown step: trafilatura strips all tags, scripts, styles, navigation, and ads, keeping only the main content. This cuts 1.3 TB of uncompressed HTML down to 50.3 GB of markdown (a reduction of roughly 96%) before any file-level compression is applied. Parquet with Zstd level 19 then compresses the markdown a further 68.2%.

End to end: ~468.5 GB of raw gzipped WARCs becomes 16.0 GB of Parquet — a 96.6% total reduction — containing 11,157,687 clean markdown documents.
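
For reference, the export settings described above (Zstd level 19, 100,000-row row groups, an 8 MB page buffer) map directly onto pyarrow's Parquet writer. A sketch, assuming the pipeline writes with pyarrow; the real writer and column set may differ:

import pyarrow as pa
import pyarrow.parquet as pq

# Stand-in table; real shards carry the columns listed under "Data Fields".
table = pa.table({"url": ["https://example.com/"], "markdown": ["# Example\n"]})

pq.write_table(
    table,
    "00000.parquet",
    compression="zstd",
    compression_level=19,            # Zstd level 19
    row_group_size=100_000,          # 100,000 rows per row group
    data_page_size=8 * 1024 * 1024,  # 8 MB page buffer
)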

Processing Times

Pipeline timings across 578 shards of CC-MAIN-2026-08:

Download (raw WARC)        ████████████████████████  total 27h 9m 10s    avg 2m 49s
Convert  (HTML → MD)       █░░░░░░░░░░░░░░░░░░░░░░░  total 1h 39m 12s    avg 10s
Export   (Parquet)         █████░░░░░░░░░░░░░░░░░░░  total 6h 10m 25s    avg 38s
Publish  (HuggingFace)     ████░░░░░░░░░░░░░░░░░░░░  total 5h 3m 20s     avg 31s

Dataset Charts

The dataset card includes charts for: total size (HTML vs Markdown vs Parquet), compression breakdown, size per shard (HTML vs Markdown), and pipeline time per shard.

Personal and Sensitive Information

No additional PII filtering is applied beyond what Common Crawl provides. As the dataset is sourced from the public web, it is likely that some personally identifiable information is present. If you find your own PII in the dataset and would like it removed, please open an issue on the repository.

Considerations for Using the Data

Social Impact

By releasing both the dataset and the full processing pipeline, we aim to lower the barrier to training and evaluating language models on high quality web data. Researchers and practitioners who cannot afford to run their own Common Crawl processing pipelines can use Open Index directly.

Discussion of Biases

Open Index inherits the biases present in Common Crawl and the public web at large. The trafilatura extraction step favors article-like pages and may underrepresent content from forums, social media, and non-standard page layouts. We have not applied any machine-learning-based quality or toxicity filters, as such filters have been shown to disproportionately remove content from certain dialects and communities.

Known Limitations

Code-heavy pages may not convert well to Markdown. If you are training a model that needs strong code performance, consider supplementing Open Index with a dedicated code dataset such as The Stack v2. Similarly, highly structured pages like Wikipedia may have better formatting in dedicated Wikipedia dumps than in their Common Crawl versions.

Additional Information

Licensing

The dataset is released under the Open Data Commons Attribution License (ODC-By) v1.0. The use of this dataset is also subject to Common Crawl's Terms of Use. The original content remains subject to the rights and terms of its respective publishers.

Contact

Please open a discussion on the Community tab for questions, feedback, or issues.