---
license: odc-by
task_categories:
- text-generation
- feature-extraction
- text-classification
language:
- en
- mul
pretty_name: OpenHTML
size_categories:
- 100K<n<1M
tags:
- common-crawl
- web-crawl
- html
- text
- metadata
configs:
- config_name: default
  data_files:
  - split: train
    path: data/*/*
- config_name: CC-MAIN-2026-12
  data_files:
  - split: train
    path: data/CC-MAIN-2026-12/*
---

# **OpenHTML**

> Raw HTML from the web with rich structured metadata — ready for training, retrieval, and analysis

## What is it?

**OpenHTML** is a large-scale web dataset built from [Common Crawl](https://commoncrawl.org). Common Crawl is a non-profit that crawls the web and freely provides its archives and datasets to the public — see [their latest crawl announcement](https://commoncrawl.org/blog/march-2026-crawl-archive-now-available) for details on the source data. Every page goes through a pipeline that extracts the raw HTML body along with structured metadata from WARC records, HTTP response headers, and HTML `<head>` tags, then packages everything into Parquet files with 24 columns.
The dataset currently includes crawl **CC-MAIN-2026-12** with **197,357 documents across 10 shards**. The pipeline extracted 34.3 GB of raw HTML, all of which is stored without truncation in the `body` column and compressed to 6.5 GB of Parquet (Zstd). We plan to add more snapshots over time.

**OpenHTML** is released under the **Open Data Commons Attribution License (ODC-By) v1.0**, the same license used by Common Crawl.

## What is being released?

Each Common Crawl WARC file (~1 GB of compressed HTML) becomes one Parquet shard. The shards live under a crawl-specific directory so multiple snapshots can coexist:

```
data/
  CC-MAIN-2026-12/
    00000.parquet
    00001.parquet
    ...
```

Every row in a Parquet file is one web page with **24 columns** of metadata. Each row includes the `warc_record_id` and `warc_date` fields parsed from the original WARC headers, so you can trace any document back to its source record. We also extract HTTP response headers (`content_type`, `charset`, `content_language`, `http_server`, `http_last_modified`) and HTML `<head>` metadata (`title`, `description`, `og:title`, `og:description`, `og:image`, `og:type`, `canonical_url`, `html_lang`). The URL is decomposed into `host`, `domain` (eTLD+1), `path`, and `query`.
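
As an illustration of the URL columns, here is a minimal decomposition sketch using only the standard library. This is not the actual pipeline code: the pipeline derives `domain` from the Public Suffix List, while the naive two-label split below is an approximation that fails for multi-label suffixes like `co.uk`.

```python
from urllib.parse import urlparse

def decompose(url: str) -> dict:
    """Split a URL into the host/domain/path/query columns of the schema."""
    p = urlparse(url)
    host = (p.hostname or "").lower()
    # Naive eTLD+1: keep the last two labels. The real pipeline uses the
    # Public Suffix List, which handles suffixes like `co.uk` correctly.
    domain = ".".join(host.split(".")[-2:]) if host else ""
    return {"host": host, "domain": domain, "path": p.path, "query": p.query}

row = decompose("https://www.example.com/article/interesting-topic?page=2&sort=date")
# row["host"] == "www.example.com", row["domain"] == "example.com"
```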

## How to download and use OpenHTML

### Using `datasets`

```python
from datasets import load_dataset

# stream one crawl snapshot
ds = load_dataset("open-index/open-html", name="CC-MAIN-2026-12", split="train", streaming=True)
for doc in ds:
    print(doc["url"], doc["title"], len(doc["body"]))

# load a single shard into memory
ds = load_dataset(
    "open-index/open-html",
    data_files="data/CC-MAIN-2026-12/00000.parquet",
    split="train",
)
```

### Using `huggingface_hub`

```python
from huggingface_hub import snapshot_download

folder = snapshot_download(
    "open-index/open-html",
    repo_type="dataset",
    local_dir="./open-html/",
    allow_patterns="data/CC-MAIN-2026-12/*",
)
```
| |
For faster downloads, install the `hf_transfer` extra (`pip install huggingface_hub[hf_transfer]`) and set the environment variable `HF_HUB_ENABLE_HF_TRANSFER=1`.

### Using DuckDB

```sql
SELECT url, title, domain, html_lang, html_length
FROM read_parquet('hf://datasets/open-index/open-html/data/CC-MAIN-2026-12/*.parquet')
WHERE domain = 'wikipedia.org'
LIMIT 10;
```
```sql
-- Top domains by page count
SELECT domain, COUNT(*) AS pages, AVG(html_length) AS avg_html_bytes
FROM read_parquet('hf://datasets/open-index/open-html/data/CC-MAIN-2026-12/*.parquet')
GROUP BY domain
ORDER BY pages DESC
LIMIT 20;
```

```sql
-- Pages with Open Graph metadata
SELECT url, og_title, og_description, og_image
FROM read_parquet('hf://datasets/open-index/open-html/data/CC-MAIN-2026-12/*.parquet')
WHERE og_title != '' AND og_image != ''
LIMIT 10;
```

# Dataset card for OpenHTML

## Dataset Description

- **Homepage and Repository:** [https://huggingface.co/datasets/open-index/open-html](https://huggingface.co/datasets/open-index/open-html)
- **Point of Contact:** please create a discussion on the Community tab
- **License:** Open Data Commons Attribution License (ODC-By) v1.0

## Dataset Structure

### Data Instance

The following is an example row from the dataset:

```json
{
  "url": "https://example.com/article/interesting-topic",
  "warc_date": "2026-03-05T07:14:58Z",
  "warc_record_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "warc_filename": "CC-MAIN-20260305070756-20260305100756-00000.warc.gz",
  "http_status": 200,
  "content_type": "text/html",
  "charset": "utf-8",
  "content_language": "en",
  "http_server": "nginx",
  "http_last_modified": "Tue, 04 Mar 2026 12:00:00 GMT",
  "host": "example.com",
  "domain": "example.com",
  "path": "/article/interesting-topic",
  "query": "",
  "html_lang": "en",
  "title": "Interesting Topic - Example",
  "description": "A fascinating article about interesting topics.",
  "og_title": "Interesting Topic",
  "og_description": "A fascinating article about interesting topics.",
  "og_image": "https://example.com/images/topic.jpg",
  "og_type": "article",
  "canonical_url": "https://example.com/article/interesting-topic",
  "html_length": 48210,
  "body": "<!DOCTYPE html><html lang=\"en\"><head>..."
}
```

### Data Fields

| Column | Type | Description |
|---|---|---|
| `url` | string | Full URL of the crawled page |
| `warc_date` | string | Crawl timestamp from the WARC record (RFC 3339) |
| `warc_record_id` | string | UUID from the WARC-Record-ID header, for source traceability |
| `warc_filename` | string | Source WARC file basename from Common Crawl |
| `http_status` | int32 | HTTP response status code (always 200 in this dataset) |
| `content_type` | string | Content-Type from the HTTP response (always starts with `text/html`) |
| `charset` | string | Character encoding from the Content-Type header (e.g., `utf-8`, `iso-8859-1`) |
| `content_language` | string | Content-Language HTTP header (e.g., `en`, `de`, `fr`) |
| `http_server` | string | Server software from the HTTP response (e.g., `nginx`, `Apache`) |
| `http_last_modified` | string | Last-Modified HTTP header — when the page was last changed |
| `host` | string | Lowercase hostname extracted from the URL (e.g., `www.example.com`) |
| `domain` | string | Registered domain (eTLD+1) — groups subdomains together (e.g., `example.com`) |
| `path` | string | URL path component (e.g., `/article/interesting-topic`) |
| `query` | string | URL query string, if any (e.g., `page=2&sort=date`) |
| `html_lang` | string | Language attribute from `<html lang="...">` tag |
| `title` | string | Page title from `<title>` tag in `<head>` |
| `description` | string | Meta description from `<meta name="description">` |
| `og_title` | string | Open Graph title from `<meta property="og:title">` |
| `og_description` | string | Open Graph description from `<meta property="og:description">` |
| `og_image` | string | Open Graph image URL from `<meta property="og:image">` |
| `og_type` | string | Open Graph type from `<meta property="og:type">` (e.g., `article`, `website`) |
| `canonical_url` | string | Canonical URL from `<link rel="canonical">` — the page's preferred URL |
| `html_length` | int64 | Byte length of the raw HTML body |
| `body` | string | Raw HTML body (full content, no truncation) |

### Data Splits

The default subset includes all available data across all crawl snapshots. You can also load a specific crawl by using its ID as the config name (e.g. `CC-MAIN-2026-12`).

## Dataset Creation

### Curation Rationale

Most open web datasets either release raw text (losing structure) or processed markdown (losing metadata). **OpenHTML** takes a different approach: it preserves the **raw HTML** alongside **24 columns of structured metadata** extracted from WARC headers, HTTP response headers, and HTML `<head>` tags. This lets you:

- **Train** models on raw web content with full context
- **Filter** by language, domain, content type, or Open Graph metadata
- **Analyze** web structure, server software distribution, or charset usage
- **Trace** every document back to its exact WARC source record
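
The filtering use case, for instance, reduces to a plain predicate over a row's metadata columns. A minimal sketch — the thresholds and field choices below are illustrative, not a recommendation:

```python
def keep(row: dict) -> bool:
    """Example metadata filter: English article pages of moderate size."""
    return (
        row["html_lang"].startswith("en")       # <html lang> starts with "en"
        and row["og_type"] == "article"         # Open Graph declares an article
        and 0 < row["html_length"] < 2_000_000  # skip empty and giant pages
    )
```

A predicate like this plugs directly into the streaming loader shown earlier, e.g. `ds.filter(keep)`.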

### Source Data

The source data consists of web pages crawled by the [Common Crawl](https://commoncrawl.org) foundation. Common Crawl archives billions of pages across the public web and makes the raw WARC files freely available on Amazon S3.

### Data Processing Steps

The processing pipeline runs as a single-pass extraction:

1. **Download** raw `.warc.gz` files from Common Crawl S3 (each file is roughly 1 GB compressed)
2. **Filter** to keep only HTTP 200 responses with a `text/html` content type, discarding images, scripts, redirects, and error pages
3. **Parse** HTTP response headers to extract `content_type`, `charset`, `content_language`, `server`, and `last_modified`
4. **Decompose** the URL into `host`, `domain` (eTLD+1 via the Public Suffix List), `path`, and `query`
5. **Extract** HTML `<head>` metadata using a streaming tokenizer: `title`, `description`, Open Graph tags (`og:title`, `og:description`, `og:image`, `og:type`), `canonical_url`, and `html_lang`
6. **Store** the full HTML body (no truncation — `html_length` matches `body` size)
7. **Export** directly to Apache Parquet with Zstd compression, 100,000 rows per row group
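
The head-metadata extraction in step 5 can be approximated with the standard library's `html.parser`. This is a simplified sketch, not the pipeline's actual streaming tokenizer:

```python
from html.parser import HTMLParser

class HeadMetaParser(HTMLParser):
    """Collect <head> metadata roughly as step 5 describes (illustrative only)."""

    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.meta = {"title": "", "description": "", "og_title": "",
                     "og_description": "", "og_image": "", "og_type": "",
                     "canonical_url": "", "html_lang": ""}
        self._in_title = False
        self._in_head = True  # stop collecting once </head> is seen

    def handle_starttag(self, tag, attrs):
        if not self._in_head:
            return
        a = dict(attrs)
        if tag == "html" and a.get("lang"):
            self.meta["html_lang"] = a["lang"]
        elif tag == "title":
            self._in_title = True
        elif tag == "meta":
            if a.get("name") == "description":
                self.meta["description"] = a.get("content") or ""
            prop = a.get("property") or ""
            if prop.startswith("og:"):
                key = "og_" + prop[3:]
                if key in self.meta:
                    self.meta[key] = a.get("content") or ""
        elif tag == "link" and a.get("rel") == "canonical":
            self.meta["canonical_url"] = a.get("href") or ""

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False
        elif tag == "head":
            self._in_head = False  # ignore everything after </head>

    def handle_data(self, data):
        if self._in_head and self._in_title:
            self.meta["title"] += data
```

Feeding a page with `HeadMetaParser().feed(html)` fills `meta` with the same field names as the schema; like the pipeline, it ignores `<title>` or `<meta>` tags placed in the `<body>`.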

No intermediate files are created — the pipeline streams from compressed WARC through extraction directly into Parquet. Pages that produce empty HTML bodies are dropped.

### Compression Ratios

Numbers below are actual measurements summed across all 10 files of CC-MAIN-2026-12 (197,357 pages total), projected to the full crawl of 100,000 WARC files.

| Stage | 10 files (measured) | 100,000 files (projected) | Reduction |
|---|---|---|---|
| Raw WARC (.warc.gz, downloaded) | ~8.1 GB | ~79.2 TB | — |
| HTML extracted (uncompressed) | 34.3 GB | ~335.2 TB | — |
| Body stored (full HTML) | 34.3 GB | ~335.2 TB | **-0.0%** vs HTML |
| Final Parquet (Zstd) | 6.5 GB | ~63.8 TB | **-81.0%** vs body |

The body column stores the full raw HTML. Parquet with Zstd then compresses the data further. End to end: ~8.1 GB of raw gzipped WARCs becomes **6.5 GB of Parquet** — a **19.5% total reduction** — containing 197,357 web pages with full metadata.
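
The table's ratios follow from simple arithmetic. A sketch using the rounded figures above — note the table's projected sizes come from unrounded per-file measurements, so they differ slightly from what these rounded inputs give:

```python
# Rounded figures from the table above (measured across 10 WARC files).
body_gb, parquet_gb = 34.3, 6.5

# Parquet + Zstd reduction relative to the stored body column: ~81.0%.
parquet_reduction = 1 - parquet_gb / body_gb

# Projection from 10 measured files to the full crawl of 100,000 files.
# Rounded inputs give ~65 TB; the table's ~63.8 TB uses exact per-file sizes.
scale = 100_000 / 10
projected_parquet_tb = parquet_gb * scale / 1000  # GB -> TB
```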

### Processing Times

Pipeline timings across 10 shards of CC-MAIN-2026-12:

```
Download (raw WARC)               ████████████████████████ 1h 29m 47s
Extract (WARC → HTML + metadata)  ███████████████████████░ 1h 28m 15s
Publish (HuggingFace upload)      ███░░░░░░░░░░░░░░░░░░░░░ 12m 58s
```

### Dataset Charts

![Pipeline timings](charts/timings.png)

![Compression ratios](charts/compression.png)

### Personal and Sensitive Information

No additional PII filtering is applied beyond what Common Crawl provides. As the dataset is sourced from the public web, it is likely that some personally identifiable information is present. If you find your own PII in the dataset and would like it removed, please open an issue on the repository.

## Considerations for Using the Data

### Social Impact

By releasing both the dataset and the full processing pipeline, we aim to lower the barrier to training and evaluating language models on high-quality web data. Researchers and practitioners who cannot afford to run their own Common Crawl processing pipelines can use **OpenHTML** directly.

### Discussion of Biases

**OpenHTML** inherits the biases present in Common Crawl and the public web at large. The filtering step keeps only `text/html` pages, which may underrepresent content served as other content types. We have not applied any machine-learning-based quality or toxicity filters, as such filters have been shown to disproportionately remove content from certain dialects and communities.

### Known Limitations

The full HTML body is stored without truncation. Very large pages (e.g., pages with inline data URIs) will increase shard sizes. The `html_length` field reflects the exact body size in bytes.

Metadata extraction scans only the `<head>` section for performance. Pages that place `<meta>` or `<title>` tags in the `<body>` will have missing metadata.

## Additional Information

### Licensing

The dataset is released under the **Open Data Commons Attribution License (ODC-By) v1.0**. The use of this dataset is also subject to [Common Crawl's Terms of Use](https://commoncrawl.org/terms-of-use). The original content remains subject to the rights and terms of its respective publishers.

### Contact

Please open a discussion on the [Community tab](https://huggingface.co/datasets/open-index/open-html/discussions) for questions, feedback, or issues.
|