---
license: odc-by
task_categories:
- text-generation
- feature-extraction
language:
- en
pretty_name: Open Index
size_categories:
- 10M<n<100M
---

> Clean markdown from the web, ready for training and retrieval

## What is it?

Open Index is a large-scale web text dataset built from [Common Crawl](https://commoncrawl.org). Every page goes through a pipeline that extracts the main content from raw HTML, converts it to clean Markdown using [trafilatura](https://github.com/adbar/trafilatura), and packages the result into Parquet files with full WARC metadata preserved.

The dataset currently includes crawl **CC-MAIN-2026-08** with **18,158,836 documents across 938 shards**. We plan to add more snapshots over time.

Open Index is released under the **Open Data Commons Attribution License (ODC-By) v1.0**, the same license used by Common Crawl.

## What is being released?

Each Common Crawl WARC file (~1 GB of compressed HTML) becomes one Parquet shard. The shards live under a crawl-specific directory so multiple snapshots can coexist:

```
data/
  CC-MAIN-2026-08/
    00000.parquet
    00001.parquet
    ...
```

Every row in a Parquet file is one web page. Along with the markdown body, we preserve the original WARC record identifiers and crawl timestamp as dedicated columns, so you can always trace a document back to its source record.

## How to download and use Open Index

### Using `datasets`

```python
from datasets import load_dataset

# stream a full crawl snapshot
ds = load_dataset("open-index/draft", name="CC-MAIN-2026-08", split="train", streaming=True)
for doc in ds:
    print(doc["url"], len(doc["markdown"]))

# load a single shard into memory
ds = load_dataset(
    "open-index/draft",
    data_files="data/CC-MAIN-2026-08/00000.parquet",
    split="train",
)
```

### Using `huggingface_hub`

```python
from huggingface_hub import snapshot_download

folder = snapshot_download(
    "open-index/draft",
    repo_type="dataset",
    local_dir="./open-index/",
    allow_patterns="data/CC-MAIN-2026-08/*",
)
```

For faster downloads, install the transfer extra (`pip install "huggingface_hub[hf_transfer]"`) and set `HF_HUB_ENABLE_HF_TRANSFER=1`.

### Using DuckDB

```sql
SELECT url, host, markdown_length
FROM read_parquet('hf://datasets/open-index/draft/data/CC-MAIN-2026-08/*.parquet')
WHERE host = 'en.wikipedia.org'
LIMIT 10;
```

# Dataset card for Open Index

## Dataset Description

- **Homepage and Repository:** [https://huggingface.co/datasets/open-index/draft](https://huggingface.co/datasets/open-index/draft)
- **Point of Contact:** please create a discussion on the Community tab
- **License:** Open Data Commons Attribution License (ODC-By) v1.0

## Dataset Structure

### Data Instance

The following is an example row from the dataset:

```json
{
  "doc_id": "6aaa5be7-a917-5105-aa60-e39ea1d087fc",
  "url": "https://example.com/article/interesting-topic",
  "host": "example.com",
  "crawl_date": "2026-02-06T18:14:58Z",
  "warc_record_id": "<urn:uuid:...>",
  "warc_refers_to": "<urn:uuid:...>",
  "html_length": 48210,
  "markdown_length": 3847,
  "markdown": "# Interesting Topic\n\nThis is the main content of the page..."
}
```
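As the field table below describes, the `doc_id` above is not random: it is a UUID v5 derived deterministically from the canonical URL. A minimal sketch of that derivation, assuming Python's standard `uuid` module and its URL namespace (the production pipeline may differ in details such as URL canonicalization):

```python
import uuid

url = "https://example.com/article/interesting-topic"

# UUID v5 hashes (namespace, name) with SHA-1, so the same URL
# always yields the same doc_id across crawls and re-runs.
doc_id = uuid.uuid5(uuid.NAMESPACE_URL, url)
print(doc_id)
```

Because the ID depends only on the URL, rows for the same page in different crawl snapshots can be joined or deduplicated on `doc_id` directly.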
### Data Fields

| Column | Type | Description |
|---|---|---|
| `doc_id` | string | Deterministic UUID v5 derived from the canonical URL: `doc_id = UUID5(NamespaceURL, url)` — identical URLs always produce the same `doc_id` across crawls |
| `url` | string | Original URL of the crawled page |
| `host` | string | Lowercase hostname extracted from the URL |
| `crawl_date` | string | RFC 3339 timestamp from the WARC record |
| `warc_record_id` | string | Full WARC-Record-ID of this conversion record (`<urn:uuid:...>`) |
| `warc_refers_to` | string | WARC-Record-ID of the original HTTP response this record was converted from |
| `html_length` | int64 | Byte length of the original HTML body before conversion |
| `markdown_length` | int64 | Byte length of the converted markdown body |
| `markdown` | string | Clean markdown content extracted from the page |

### Data Splits

The default subset includes all available data across all crawl snapshots. You can also load a specific crawl by using its ID as the config name (e.g. `CC-MAIN-2026-08`).

## Dataset Creation

### Curation Rationale

Most open web datasets either release raw text without structure or keep the HTML and leave parsing to the user. Open Index sits in between: it converts every page to Markdown so the content is immediately usable for training, while preserving the full WARC headers so you can always go back to the source if you need to.

### Source Data

The source data consists of web pages crawled by the [Common Crawl](https://commoncrawl.org) foundation. Common Crawl archives billions of pages across the public web and makes the raw WARC files freely available on Amazon S3.

### Data Processing Steps

The processing pipeline runs in five stages:

1. **Download** raw `.warc.gz` files from Common Crawl S3 (each file is roughly 1 GB compressed)
2. **Filter** to keep only HTTP 200 responses with a text/html content type, discarding images, scripts, redirects, and error pages
3. **Convert** HTML to Markdown using [trafilatura](https://github.com/adbar/trafilatura), which extracts the main content and strips boilerplate, navigation, sidebars, footers, and ads
4. **Pack** converted records into seekable `.md.warc.gz` files where each record is wrapped in its own gzip member, matching Common Crawl's concatenated-gzip format
5. **Export** each shard to Apache Parquet with Zstd compression, 100,000 rows per row group, and an 8 MB page buffer

Empty conversions (pages where trafilatura could not extract meaningful content) are dropped.

### Compression Ratios

Numbers below are actual measurements summed across all 938 files of CC-MAIN-2026-08 (18,158,836 pages total), projected to the full crawl of 100,000 WARC files.

| Stage | 938 files (measured) | 100,000 files (projected) | Reduction |
|---|---|---|---|
| Raw WARC (`.warc.gz`, downloaded) | ~760.3 GB | ~83 TB | — |
| HTML extracted (uncompressed) | 2.2 TB | ~295 TB | — |
| Packed markdown WARC (`.md.warc.gz`) | ~38.5 GB | ~3.7 TB | **-98.3%** vs HTML |
| Final Parquet (Zstd level 19) | 26.0 GB | ~2.9 TB | **-32.4%** vs packed WARC |

The big win is the HTML → Markdown step: trafilatura strips all tags, scripts, styles, navigation, and ads, keeping only the main content. This cuts 2.2 TB of uncompressed HTML down to 81.9 GB of markdown — a reduction of roughly 96% — before any file-level compression is applied. Parquet with Zstd level 19 then compresses the markdown a further 68.2%.

End to end: ~760.3 GB of raw gzipped WARCs becomes **26.0 GB of Parquet** — a **96.6% total reduction** — containing 18,158,836 clean markdown documents.
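For readers who want to adapt the pipeline, the filter, convert, and export stages (steps 2, 3, and 5 above) can be approximated with off-the-shelf tools. The sketch below is an illustration rather than the production code: it assumes `warcio` for WARC parsing, a recent trafilatura release that supports `output_format="markdown"`, and `pyarrow` for the Parquet export; file names are placeholders.

```python
import pyarrow as pa
import pyarrow.parquet as pq
import trafilatura
from warcio.archiveiterator import ArchiveIterator


def iter_markdown(warc_path):
    """Yield (url, markdown) pairs from a raw Common Crawl .warc.gz file."""
    with open(warc_path, "rb") as stream:
        for record in ArchiveIterator(stream):
            # Step 2: keep only HTTP 200 responses with an HTML content type
            if record.rec_type != "response" or record.http_headers is None:
                continue
            if record.http_headers.get_statuscode() != "200":
                continue
            if "text/html" not in (record.http_headers.get_header("Content-Type") or ""):
                continue
            url = record.rec_headers.get_header("WARC-Target-URI")
            html = record.content_stream().read()
            # Step 3: main-content extraction; empty conversions are dropped
            md = trafilatura.extract(html, url=url, output_format="markdown")
            if md:
                yield url, md


# Step 5: export one shard to Parquet with the settings described above
rows = [{"url": u, "markdown": m} for u, m in iter_markdown("example.warc.gz")]  # placeholder path
table = pa.Table.from_pylist(rows)
pq.write_table(
    table,
    "00000.parquet",
    compression="zstd",
    compression_level=19,            # Zstd level 19
    row_group_size=100_000,          # 100,000 rows per row group
    data_page_size=8 * 1024 * 1024,  # ~8 MB page buffer
)
```

The real shards carry the full schema listed above (doc_id, host, lengths, WARC record IDs); only two columns are shown here to keep the sketch short.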
### Processing Times

Pipeline timings across 938 shards of CC-MAIN-2026-08:

```
Download (raw WARC)    ████████████████████████   total 61h 22m 28s   avg 3m 55s
Convert (HTML → MD)    █░░░░░░░░░░░░░░░░░░░░░░░   total  4h 56m 25s   avg 18s
Export (Parquet)       █████░░░░░░░░░░░░░░░░░░░   total 14h  6m 10s   avg 54s
Publish (HuggingFace)  ██░░░░░░░░░░░░░░░░░░░░░░   total  6h 47m 11s   avg 26s
```

### Dataset Charts

![Total size: HTML vs Markdown vs Parquet](charts/totals_chart.png)

![Pipeline stage durations](charts/timing_chart.png)

### Personal and Sensitive Information

No additional PII filtering is applied beyond what Common Crawl provides. As the dataset is sourced from the public web, it is likely that some personally identifiable information is present. If you find your own PII in the dataset and would like it removed, please open a discussion on the Community tab.

## Considerations for Using the Data

### Social Impact

By releasing both the dataset and the full processing pipeline, we aim to lower the barrier to training and evaluating language models on high-quality web data. Researchers and practitioners who cannot afford to run their own Common Crawl processing pipelines can use Open Index directly.

### Discussion of Biases

Open Index inherits the biases present in Common Crawl and the public web at large. The trafilatura extraction step favors article-like pages and may underrepresent content from forums, social media, and non-standard page layouts. We have not applied any machine-learning-based quality or toxicity filters, as such filters have been shown to disproportionately remove content from certain dialects and communities.

### Known Limitations

Code-heavy pages may not convert well to Markdown. If you are training a model that needs strong code performance, consider supplementing Open Index with a dedicated code dataset such as [The Stack v2](https://huggingface.co/datasets/bigcode/the-stack-v2). Similarly, highly structured pages like Wikipedia may have better formatting in dedicated Wikipedia dumps than in their Common Crawl versions.

## Additional Information

### Licensing

The dataset is released under the **Open Data Commons Attribution License (ODC-By) v1.0**. The use of this dataset is also subject to [Common Crawl's Terms of Use](https://commoncrawl.org/terms-of-use). The original content remains subject to the rights and terms of its respective publishers.

### Contact

Please open a discussion on the [Community tab](https://huggingface.co/datasets/open-index/draft/discussions) for questions, feedback, or issues.