---
license: odc-by
task_categories:
- text-generation
- feature-extraction
language:
- en
pretty_name: Open Markdown
size_categories:
- 100M<n<1B
tags:
- common-crawl
- web-crawl
- markdown
- text
configs:
- config_name: default
  data_files:
  - split: train
    path: "data/**/*.parquet"
- config_name: CC-MAIN-2026-12
  data_files:
  - split: train
    path: "data/CC-MAIN-2026-12/**/*.parquet"
---

# **Open Markdown**

> Clean markdown from the web, ready for training and retrieval

## What is it?

**Open Markdown** is a large-scale web text dataset built from [Common Crawl](https://commoncrawl.org). Common Crawl is a non-profit that crawls the web and freely provides its archives and datasets to the public — see [their latest crawl announcement](https://commoncrawl.org/blog/march-2026-crawl-archive-now-available) for details on the source data. Every page goes through a pipeline that extracts the main content from raw HTML, converts it to clean Markdown, and packages the result into Parquet files with useful WARC metadata for traceability.

The dataset currently includes crawl **CC-MAIN-2026-12** with **621,996,760 documents across 35,972 shards**. The pipeline processed 80.6 TB of raw HTML into 5.2 TB of clean Markdown, a **93.6% reduction**. We plan to add more snapshots over time.

### Live Progress

Processing at **54.8 shards/hour** — 35,972 of 100,000 done (**35.97%**)

Estimated completion: **June 5, 2026** (49 days)

**Current server:** 6 CPU cores, 12 GB RAM (3.9 GB available), 46 GB disk free

**Memory per session:** avg 577 MB, peak 799 MB (measured via VmRSS)

**With 10 identical servers:** 548 shards/hour → April 22, 2026 (5 days)

**Open Markdown** is released under the **Open Data Commons Attribution License (ODC-By) v1.0**, the same license used by Common Crawl.

## What is being released?

Each Common Crawl WARC file (~1 GB of compressed HTML) becomes one Parquet shard. The shards live under a crawl-specific directory so multiple snapshots can coexist:

```
data/
  CC-MAIN-2026-12/
    00/
      00/
        000000.parquet
        000001.parquet
        ...
      01/
        000100.parquet
        ...
    01/
      ...
```
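
Reading the tree above, each six-digit shard id appears to be bucketed by its first two and next two digits. A tiny helper based on that inference (the grouping rule is read off the example layout, not documented):

```python
def shard_path(crawl: str, shard: int) -> str:
    # Assumption: six-digit zero-padded shard id, bucketed by
    # digits [0:2] and [2:4] -- inferred from the tree above
    s = f"{shard:06d}"
    return f"data/{crawl}/{s[:2]}/{s[2:4]}/{s}.parquet"

assert shard_path("CC-MAIN-2026-12", 100) == "data/CC-MAIN-2026-12/00/01/000100.parquet"
```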

Every row in a Parquet file is one web page. Each row includes the `warc_record_id` and `warc_refers_to` fields parsed from the original WARC headers, so you can trace any document back to its source record. We also store `html_length` and `markdown_length` to measure the compression from raw HTML to clean markdown.
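
For instance, the two length columns let you measure the per-page reduction without reading the `markdown` column at all; a minimal sketch with `pyarrow`, assuming one shard has already been downloaded locally:

```python
import pyarrow.parquet as pq

# Read only the metadata columns from a locally downloaded shard
table = pq.read_table(
    "data/CC-MAIN-2026-12/00/00/000000.parquet",
    columns=["url", "warc_refers_to", "html_length", "markdown_length"],
)
df = table.to_pandas()

# Per-page size reduction from raw HTML to clean markdown
df["reduction"] = 1.0 - df["markdown_length"] / df["html_length"]
print(df[["url", "reduction"]].head())
```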

## How to download and use Open Markdown

### Using `datasets`

```python
from datasets import load_dataset

# stream one crawl snapshot without downloading it first
ds = load_dataset("open-index/open-markdown", name="CC-MAIN-2026-12", split="train", streaming=True)
for doc in ds:
    print(doc["url"], len(doc["markdown"]))

# load a single shard into memory
ds = load_dataset(
    "open-index/open-markdown",
    data_files="data/CC-MAIN-2026-12/00/00/000000.parquet",
    split="train",
)
```

### Using `huggingface_hub`

```python
from huggingface_hub import snapshot_download

folder = snapshot_download(
    "open-index/open-markdown",
    repo_type="dataset",
    local_dir="./open-index/",
    allow_patterns="data/CC-MAIN-2026-12/**/*.parquet",
)
```

For faster downloads, install the `hf_transfer` extra (`pip install "huggingface_hub[hf_transfer]"`) and set `HF_HUB_ENABLE_HF_TRANSFER=1`.

### Using DuckDB

```sql
SELECT url, host, markdown_length
FROM read_parquet('hf://datasets/open-index/open-markdown/data/CC-MAIN-2026-12/**/*.parquet')
WHERE host = 'en.wikipedia.org'
LIMIT 10;
```

# Dataset card for Open Markdown

## Dataset Description

- **Homepage and Repository:** [https://huggingface.co/datasets/open-index/open-markdown](https://huggingface.co/datasets/open-index/open-markdown)
- **Point of Contact:** please create a discussion on the Community tab
- **License:** Open Data Commons Attribution License (ODC-By) v1.0

## Dataset Structure

### Data Instance

The following is an example row from the dataset:

```json
{
  "doc_id": "6aaa5be7-a917-5105-aa60-e39ea1d087fc",
  "url": "https://example.com/article/interesting-topic",
  "host": "example.com",
  "crawl_date": "2026-02-06T18:14:58Z",
  "warc_record_id": "<urn:uuid:a1b2c3d4-e5f6-7890-abcd-ef1234567890>",
  "warc_refers_to": "<urn:uuid:f9e8d7c6-b5a4-3210-fedc-ba0987654321>",
  "html_length": 48210,
  "markdown_length": 3847,
  "markdown": "# Interesting Topic\n\nThis is the main content of the page..."
}
```

### Data Fields

| Column | Type | Description |
|---|---|---|
| `doc_id` | string | Deterministic UUID v5 derived from the canonical URL: `doc_id = UUID5(NamespaceURL, url)` — identical URLs always produce the same `doc_id` across crawls |
| `url` | string | Original URL of the crawled page |
| `host` | string | Lowercase hostname extracted from the URL |
| `crawl_date` | string | RFC 3339 timestamp from the WARC record |
| `warc_record_id` | string | Full WARC-Record-ID of this conversion record (`<urn:uuid:...>`) |
| `warc_refers_to` | string | WARC-Record-ID of the original HTTP response this was converted from |
| `html_length` | int64 | Byte length of the original HTML body before conversion |
| `markdown_length` | int64 | Byte length of the converted markdown body |
| `markdown` | string | Clean markdown content extracted from the page |
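
Since `doc_id` is a deterministic UUID v5, it can be recomputed from a URL and used as a stable join key across crawls. A minimal sketch with Python's standard library (note the pipeline derives the id from the *canonical* URL, and that canonicalization step is not documented here, so this assumes the URL is already canonical):

```python
import uuid

def doc_id(url: str) -> str:
    # UUID5(NamespaceURL, url), per the field table above
    return str(uuid.uuid5(uuid.NAMESPACE_URL, url))

print(doc_id("https://example.com/article/interesting-topic"))
```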

### Data Splits

The default subset includes all available data across all crawl snapshots. You can also load a specific crawl by using its ID as the config name (e.g. `CC-MAIN-2026-12`).

## Dataset Creation

### Curation Rationale

Most open web datasets either release raw text without structure or keep the HTML and leave parsing to the user. **Open Markdown** sits in between: it converts every page to Markdown so the content is immediately usable for training, while preserving key WARC identifiers (`warc_record_id`, `warc_refers_to`) so you can always trace back to the source record.

### Source Data

The source data consists of web pages crawled by the [Common Crawl](https://commoncrawl.org) foundation. Common Crawl archives billions of pages across the public web and makes the raw WARC files freely available on Amazon S3.

### Data Processing Steps

The processing pipeline runs as a single-pass direct conversion:

1. **Download** raw .warc.gz files from Common Crawl S3 (each file is roughly 1 GB compressed)
2. **Filter** to keep only HTTP 200 responses with a text/html content type, discarding images, scripts, redirects, and error pages
3. **Convert** HTML to clean Markdown using a lightweight tokenizer-based extractor that strips tags, scripts, styles, navigation, and boilerplate — keeping only the main content
4. **Export** directly to Apache Parquet with Zstd compression, 100,000 rows per row group

No intermediate files are created — the pipeline streams from compressed WARC through conversion directly into Parquet. Pages that produce empty conversions are dropped.
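
A minimal sketch of that single pass, with `warcio` for WARC parsing and `trafilatura` (recent versions accept `output_format="markdown"`) standing in for the card's own tokenizer-based extractor; both library choices are illustrative assumptions, not the actual pipeline:

```python
import pyarrow as pa
import pyarrow.parquet as pq
import trafilatura
from warcio.archiveiterator import ArchiveIterator

rows = []  # the real pipeline streams; buffering here keeps the sketch short
with open("CC-MAIN-example.warc.gz", "rb") as stream:  # hypothetical local WARC
    for record in ArchiveIterator(stream):
        # Keep only HTTP 200 responses with a text/html content type
        if record.rec_type != "response":
            continue
        if record.http_headers.get_statuscode() != "200":
            continue
        if "text/html" not in (record.http_headers.get_header("Content-Type") or ""):
            continue
        html = record.content_stream().read().decode("utf-8", errors="replace")
        markdown = trafilatura.extract(html, output_format="markdown")
        if not markdown:  # pages with empty conversions are dropped
            continue
        rows.append({
            "url": record.rec_headers.get_header("WARC-Target-URI"),
            # the original response's record id; stored as warc_refers_to
            # (warc_record_id is minted per conversion record)
            "warc_refers_to": record.rec_headers.get_header("WARC-Record-ID"),
            "html_length": len(html.encode()),
            "markdown_length": len(markdown.encode()),
            "markdown": markdown,
        })

# Export to Parquet with Zstd compression and 100,000-row row groups
pq.write_table(pa.Table.from_pylist(rows), "000000.parquet",
               compression="zstd", row_group_size=100_000)
```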

### Compression Ratios

Numbers below are actual measurements summed across all 35,972 files of CC-MAIN-2026-12 (621,996,760 pages total), projected to the full crawl of 100,000 WARC files.

| Stage | 35,972 files (measured) | 100,000 files (projected) | Reduction |
|---|---|---|---|
| Raw WARC (.warc.gz, downloaded) | ~28.5 TB | ~79.2 TB | — |
| HTML extracted (uncompressed) | 80.6 TB | ~224.2 TB | — |
| Markdown (clean text) | 5.2 TB | ~14.4 TB | **-93.6%** vs HTML |
| Final Parquet (Zstd) | 1.6 TB | ~4.5 TB | **-68.9%** vs markdown |

The big win is HTML → Markdown conversion: the tokenizer strips all tags, scripts, styles, navigation, and ads, keeping only the main content. This cuts 80.6 TB of uncompressed HTML down to 5.2 TB of markdown — a **93.6% reduction**. Parquet with Zstd then compresses the markdown a further 68.9%.

End to end: ~28.5 TB of raw gzipped WARCs becomes **1.6 TB of Parquet** — a **94.3% total reduction** — containing 621,996,760 clean markdown documents.

### Processing Times

Pipeline timings across 35,972 shards of CC-MAIN-2026-12:

```
Download (raw WARC)                   ████████████░░░░░░░░░░░░  159h 29m 29s
Convert  (HTML → Markdown → Parquet)  ████████████████████████  316h 3m 54s
Publish  (HuggingFace)                ███████░░░░░░░░░░░░░░░░░  100h 23m 5s
```

### Dataset Charts

![Total size: HTML vs Markdown vs Parquet](charts/totals_chart.png)

![Pipeline stage durations](charts/timing_chart.png)

### Personal and Sensitive Information

No additional PII filtering is applied beyond what Common Crawl provides. As the dataset is sourced from the public web, it is likely that some personally identifiable information is present. If you find your own PII in the dataset and would like it removed, please open an issue on the repository.

## Considerations for Using the Data

### Social Impact

By releasing both the dataset and the full processing pipeline, we aim to lower the barrier to training and evaluating language models on high quality web data. Researchers and practitioners who cannot afford to run their own Common Crawl processing pipelines can use **Open Markdown** directly.

### Discussion of Biases

**Open Markdown** inherits the biases present in Common Crawl and the public web at large. The main-content extraction step favors article-like pages and may underrepresent content from forums, social media, and non-standard page layouts. We have not applied any machine-learning-based quality or toxicity filters, as such filters have been shown to disproportionately remove content from certain dialects and communities.

### Known Limitations

Code-heavy pages may not convert well to Markdown. If you are training a model that needs strong code performance, consider supplementing **Open Markdown** with a dedicated code dataset such as [The Stack v2](https://huggingface.co/datasets/bigcode/the-stack-v2). Similarly, highly structured pages like Wikipedia may have better formatting in dedicated Wikipedia dumps than in their Common Crawl versions.

## Additional Information

### Licensing

The dataset is released under the **Open Data Commons Attribution License (ODC-By) v1.0**. The use of this dataset is also subject to [Common Crawl's Terms of Use](https://commoncrawl.org/terms-of-use). The original content remains subject to the rights and terms of its respective publishers.

### Contact

Please open a discussion on the [Community tab](https://huggingface.co/datasets/open-index/open-markdown/discussions) for questions, feedback, or issues.