---
license: odc-by
task_categories:
- text-generation
- feature-extraction
- text-classification
language:
- en
- mul
pretty_name: OpenHTML
size_categories:
- 100K<n<1M
tags:
- common-crawl
- web-crawl
- html
- text
- metadata
configs:
- config_name: default
  data_files:
  - split: train
    path: data/*/*
- config_name: CC-MAIN-2026-12
  data_files:
  - split: train
    path: data/CC-MAIN-2026-12/*
---

# **OpenHTML**

> Raw HTML from the web with rich structured metadata — ready for training, retrieval, and analysis

## What is it?

**OpenHTML** is a large-scale web dataset built from [Common Crawl](https://commoncrawl.org). Common Crawl is a non-profit that crawls the web and freely provides its archives and datasets to the public — see [their latest crawl announcement](https://commoncrawl.org/blog/march-2026-crawl-archive-now-available) for details on the source data. Every page goes through a pipeline that extracts the raw HTML body along with structured metadata from WARC records, HTTP response headers, and HTML `<head>` tags, then packages everything into Parquet files with 24 columns.

The dataset currently includes crawl **CC-MAIN-2026-12** with **197,357 documents across 10 shards**. The pipeline extracted 34.3 GB of raw HTML and stores the full 34.3 GB as body text, which compresses down to 6.5 GB as Parquet (Zstd). We plan to add more snapshots over time.

**OpenHTML** is released under the **Open Data Commons Attribution License (ODC-By) v1.0**, the same license used by Common Crawl.

## What is being released?

Each Common Crawl WARC file (~1 GB of compressed HTML) becomes one Parquet shard. The shards live under a crawl-specific directory so multiple snapshots can coexist:

```
data/
  CC-MAIN-2026-12/
    00000.parquet
    00001.parquet
    ...
```

Every row in a Parquet file is one web page with **24 columns** of metadata. Each row includes the `warc_record_id` and `warc_date` fields parsed from the original WARC headers, so you can trace any document back to its source record. We also extract HTTP response headers (`content_type`, `charset`, `content_language`, `http_server`, `http_last_modified`) and HTML `<head>` metadata (`title`, `description`, `og:title`, `og:description`, `og:image`, `og:type`, `canonical_url`, `html_lang`). The URL is decomposed into `host`, `domain` (eTLD+1), `path`, and `query`.
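The URL decomposition described above can be sketched with Python's standard library. Note that the `domain` computation here is a naive last-two-labels approximation: the actual pipeline uses the Public Suffix List for eTLD+1 (see the processing steps below), so treat this as an illustration of the column semantics, not the exact algorithm.

```python
from urllib.parse import urlsplit

def decompose(url: str) -> dict:
    """Split a URL into the host/domain/path/query columns of the dataset.

    The `domain` value is a naive last-two-labels approximation of eTLD+1;
    the real pipeline consults the Public Suffix List, which this sketch
    does not.
    """
    parts = urlsplit(url)
    host = (parts.hostname or "").lower()
    labels = host.split(".")
    domain = ".".join(labels[-2:]) if len(labels) >= 2 else host
    return {
        "host": host,       # lowercase hostname, e.g. www.example.com
        "domain": domain,   # registered domain, e.g. example.com
        "path": parts.path,
        "query": parts.query,
    }
```

For example, `decompose("https://www.example.com/article/interesting-topic?page=2")` yields `host` `www.example.com`, `domain` `example.com`, `path` `/article/interesting-topic`, and `query` `page=2`.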

## How to download and use OpenHTML

### Using `datasets`

```python
from datasets import load_dataset

# stream the entire dataset
ds = load_dataset("open-index/open-html", name="CC-MAIN-2026-12", split="train", streaming=True)
for doc in ds:
    print(doc["url"], doc["title"], len(doc["body"]))

# load a single shard into memory
ds = load_dataset(
    "open-index/open-html",
    data_files="data/CC-MAIN-2026-12/00000.parquet",
    split="train",
)
```

### Using `huggingface_hub`

```python
from huggingface_hub import snapshot_download

folder = snapshot_download(
    "open-index/open-html",
    repo_type="dataset",
    local_dir="./open-html/",
    allow_patterns="data/CC-MAIN-2026-12/*",
)
```

For faster downloads, install the transfer backend with `pip install "huggingface_hub[hf_transfer]"` and set `HF_HUB_ENABLE_HF_TRANSFER=1`.
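In shell form, the same setup is:

```shell
pip install "huggingface_hub[hf_transfer]"
export HF_HUB_ENABLE_HF_TRANSFER=1
```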

### Using DuckDB

```sql
SELECT url, title, domain, html_lang, html_length
FROM read_parquet('hf://datasets/open-index/open-html/data/CC-MAIN-2026-12/*.parquet')
WHERE domain = 'wikipedia.org'
LIMIT 10;
```

```sql
-- Top domains by page count
SELECT domain, COUNT(*) as pages, AVG(html_length) as avg_html
FROM read_parquet('hf://datasets/open-index/open-html/data/CC-MAIN-2026-12/*.parquet')
GROUP BY domain
ORDER BY pages DESC
LIMIT 20;
```

```sql
-- Pages with Open Graph metadata
SELECT url, og_title, og_description, og_image
FROM read_parquet('hf://datasets/open-index/open-html/data/CC-MAIN-2026-12/*.parquet')
WHERE og_title != '' AND og_image != ''
LIMIT 10;
```

# Dataset card for OpenHTML

## Dataset Description

- **Homepage and Repository:** [https://huggingface.co/datasets/open-index/open-html](https://huggingface.co/datasets/open-index/open-html)
- **Point of Contact:** please create a discussion on the Community tab
- **License:** Open Data Commons Attribution License (ODC-By) v1.0

## Dataset Structure

### Data Instance

The following is an example row from the dataset:

```json
{
  "url": "https://example.com/article/interesting-topic",
  "warc_date": "2026-03-05T07:14:58Z",
  "warc_record_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "warc_filename": "CC-MAIN-20260305070756-20260305100756-00000.warc.gz",
  "http_status": 200,
  "content_type": "text/html",
  "charset": "utf-8",
  "content_language": "en",
  "http_server": "nginx",
  "http_last_modified": "Tue, 04 Mar 2026 12:00:00 GMT",
  "host": "example.com",
  "domain": "example.com",
  "path": "/article/interesting-topic",
  "query": "",
  "html_lang": "en",
  "title": "Interesting Topic - Example",
  "description": "A fascinating article about interesting topics.",
  "og_title": "Interesting Topic",
  "og_description": "A fascinating article about interesting topics.",
  "og_image": "https://example.com/images/topic.jpg",
  "og_type": "article",
  "canonical_url": "https://example.com/article/interesting-topic",
  "html_length": 48210,
  "body": "<!DOCTYPE html><html lang=\"en\"><head>..."
}
```

### Data Fields

| Column | Type | Description |
|---|---|---|
| `url` | string | Full URL of the crawled page |
| `warc_date` | string | Crawl timestamp from the WARC record (RFC 3339) |
| `warc_record_id` | string | UUID from the WARC-Record-ID header, for source traceability |
| `warc_filename` | string | Source WARC file basename from Common Crawl |
| `http_status` | int32 | HTTP response status code (always 200 in this dataset) |
| `content_type` | string | Content-Type from the HTTP response (always starts with `text/html`) |
| `charset` | string | Character encoding from the Content-Type header (e.g., `utf-8`, `iso-8859-1`) |
| `content_language` | string | Content-Language HTTP header (e.g., `en`, `de`, `fr`) |
| `http_server` | string | Server software from the HTTP response (e.g., `nginx`, `Apache`) |
| `http_last_modified` | string | Last-Modified HTTP header — when the page was last changed |
| `host` | string | Lowercase hostname extracted from the URL (e.g., `www.example.com`) |
| `domain` | string | Registered domain (eTLD+1) — groups subdomains together (e.g., `example.com`) |
| `path` | string | URL path component (e.g., `/article/interesting-topic`) |
| `query` | string | URL query string, if any (e.g., `page=2&sort=date`) |
| `html_lang` | string | Language attribute from `<html lang="...">` tag |
| `title` | string | Page title from `<title>` tag in `<head>` |
| `description` | string | Meta description from `<meta name="description">` |
| `og_title` | string | Open Graph title from `<meta property="og:title">` |
| `og_description` | string | Open Graph description from `<meta property="og:description">` |
| `og_image` | string | Open Graph image URL from `<meta property="og:image">` |
| `og_type` | string | Open Graph type from `<meta property="og:type">` (e.g., `article`, `website`) |
| `canonical_url` | string | Canonical URL from `<link rel="canonical">` — the page's preferred URL |
| `html_length` | int64 | Byte length of the raw HTML body |
| `body` | string | Raw HTML body (full content, no truncation) |

### Data Splits

The default subset includes all available data across all crawl snapshots. You can also load a specific crawl by using its ID as the config name (e.g. `CC-MAIN-2026-12`).

## Dataset Creation

### Curation Rationale

Most open web datasets either release raw text (losing structure) or processed markdown (losing metadata). **OpenHTML** takes a different approach: it preserves the **raw HTML** alongside **24 columns of structured metadata** extracted from WARC headers, HTTP response headers, and HTML `<head>` tags. This lets you:

- **Train** models on raw web content with full context
- **Filter** by language, domain, content type, or Open Graph metadata
- **Analyze** web structure, server software distribution, or charset usage
- **Trace** every document back to its exact WARC source record

### Source Data

The source data consists of web pages crawled by the [Common Crawl](https://commoncrawl.org) foundation. Common Crawl archives billions of pages across the public web and makes the raw WARC files freely available on Amazon S3.

### Data Processing Steps

The processing pipeline runs as a single-pass extraction:

1. **Download** raw .warc.gz files from Common Crawl S3 (each file is roughly 1 GB compressed)
2. **Filter** to keep only HTTP 200 responses with a `text/html` content type, discarding images, scripts, redirects, and error pages
3. **Parse** HTTP response headers to extract `content_type`, `charset`, `content_language`, `server`, and `last_modified`
4. **Decompose** the URL into `host`, `domain` (eTLD+1 via the Public Suffix List), `path`, and `query`
5. **Extract** HTML `<head>` metadata using a streaming tokenizer: `title`, `description`, Open Graph tags (`og:title`, `og:description`, `og:image`, `og:type`), `canonical_url`, and `html_lang`
6. **Store** the full HTML body (no truncation — `html_length` matches `body` size)
7. **Export** directly to Apache Parquet with Zstd compression, 100,000 rows per row group

No intermediate files are created — the pipeline streams from compressed WARC through extraction directly into Parquet. Pages that produce empty HTML bodies are dropped.
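Step 5 can be sketched with Python's standard-library `html.parser`. This is a simplified stand-in for the streaming tokenizer described above, not the pipeline's actual implementation; like the real extractor, it stops scanning once `</head>` is reached:

```python
from html.parser import HTMLParser

class HeadMetaParser(HTMLParser):
    """Collect a subset of the <head> metadata columns from raw HTML."""

    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.meta = {"html_lang": "", "title": "", "description": "",
                     "og_title": "", "canonical_url": ""}
        self._in_title = False
        self._done = False  # set once </head> is seen

    def handle_starttag(self, tag, attrs):
        if self._done:
            return  # mirror the head-only scan: ignore everything in <body>
        a = dict(attrs)
        if tag == "html":
            self.meta["html_lang"] = a.get("lang", "")
        elif tag == "title":
            self._in_title = True
        elif tag == "meta":
            if a.get("name") == "description":
                self.meta["description"] = a.get("content", "")
            elif a.get("property") == "og:title":
                self.meta["og_title"] = a.get("content", "")
        elif tag == "link" and a.get("rel") == "canonical":
            self.meta["canonical_url"] = a.get("href", "")

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False
        elif tag == "head":
            self._done = True

    def handle_data(self, data):
        if self._in_title and not self._done:
            self.meta["title"] += data
```

Because the parser ignores everything after `</head>`, it exhibits the limitation noted later in this card: `<meta>` or `<title>` tags placed in the `<body>` are not picked up.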

### Compression Ratios

Numbers below are actual measurements summed across all 10 files of CC-MAIN-2026-12 (197,357 pages total), projected to the full crawl of 100,000 WARC files.

| Stage | 10 files (measured) | 100,000 files (projected) | Reduction |
|---|---|---|---|
| Raw WARC (.warc.gz, downloaded) | ~8.1 GB | ~79.2 TB | — |
| HTML extracted (uncompressed) | 34.3 GB | ~335.2 TB | — |
| Body stored (full HTML) | 34.3 GB | ~335.2 TB | **-0.0%** vs HTML |
| Final Parquet (Zstd) | 6.5 GB | ~63.8 TB | **-81.0%** vs body |

The body column stores the full raw HTML. Parquet with Zstd then compresses the data further. End to end: ~8.1 GB of raw gzipped WARCs becomes **6.5 GB of Parquet** — a **19.5% total reduction** — containing 197,357 web pages with full metadata.

### Processing Times

Pipeline timings across 10 shards of CC-MAIN-2026-12:

```
Download (raw WARC)                  ████████████████████████  1h 29m 47s
Extract  (WARC → HTML + metadata)    ███████████████████████░  1h 28m 15s
Publish  (HuggingFace upload)        ███░░░░░░░░░░░░░░░░░░░░░  12m 58s
```

### Dataset Charts

![Total size: HTML vs Body vs Parquet](charts/totals_chart.png)

![Pipeline stage durations](charts/timing_chart.png)

### Personal and Sensitive Information

No additional PII filtering is applied beyond what Common Crawl provides. As the dataset is sourced from the public web, it is likely that some personally identifiable information is present. If you find your own PII in the dataset and would like it removed, please open an issue on the repository.

## Considerations for Using the Data

### Social Impact

By releasing both the dataset and the full processing pipeline, we aim to lower the barrier to training and evaluating language models on high-quality web data. Researchers and practitioners who cannot afford to run their own Common Crawl processing pipelines can use **OpenHTML** directly.

### Discussion of Biases

**OpenHTML** inherits the biases present in Common Crawl and the public web at large. The filtering step keeps only `text/html` pages, which may underrepresent content served as other content types. We have not applied any machine-learning-based quality or toxicity filters, as such filters have been shown to disproportionately remove content from certain dialects and communities.

### Known Limitations

The full HTML body is stored without truncation. Very large pages (e.g., pages with inline data URIs) will increase shard sizes. The `html_length` field reflects the exact body size in bytes.

Metadata extraction scans only the `<head>` section for performance. Pages that place `<meta>` or `<title>` tags in the `<body>` will have missing metadata.

## Additional Information

### Licensing

The dataset is released under the **Open Data Commons Attribution License (ODC-By) v1.0**. The use of this dataset is also subject to [Common Crawl's Terms of Use](https://commoncrawl.org/terms-of-use). The original content remains subject to the rights and terms of its respective publishers.

### Contact

Please open a discussion on the [Community tab](https://huggingface.co/datasets/open-index/open-html/discussions) for questions, feedback, or issues.