tamnd committed on
Commit 3375444 · verified · 1 Parent(s): 2f34f4b

Publish 5 shards CC-MAIN-2026-12/01596–01707
LICENSE CHANGED
@@ -1,11 +1,15 @@
  Open Data Commons Attribution License (ODC-By) v1.0

- You are free to:
- - Share: copy, distribute, and use the database.
- - Create: produce works from the database.
- - Adapt: modify, transform, and build upon the database.
-
- As long as you:
- - Attribute: You must attribute any public use of the database,
-   or works produced from the database, in the manner specified
-   in the ODC-By license.
+ Full text: https://opendatacommons.org/licenses/by/1-0/
+
+ You are free to copy, distribute, use, modify, transform, and build upon
+ this database, as long as you attribute the source.
+
+ Attribution: "Open Markdown, derived from Common Crawl (https://commoncrawl.org)"
+
+ Note: This dataset contains data derived from Common Crawl, which archives
+ third-party web content. The original content remains subject to the rights
+ of its respective publishers. You are responsible for complying with applicable
+ law including downstream licensing obligations, robots.txt restrictions, privacy
+ requirements, and content removal requests. See Common Crawl's Terms of Use:
+ https://commoncrawl.org/terms-of-use
README.md CHANGED
@@ -1,62 +1,233 @@
  ---
  license: odc-by
  task_categories:
- - text-generation
  language:
- - en
  size_categories:
- - 100M<n<1B
  ---

- # Open Markdown — CC-MAIN-2026-12
-
- Common Crawl HTML pages converted to clean Markdown.
-
- ## Stats
-
- | Metric | Value |
- |--------|-------|
- | Shards | 1617 |
- | Documents | 22,897,035 |
- | HTML | 3018.9 GB |
- | Markdown | 197.1 GB |
- | Parquet (zstd) | 61.0 GB |
-
- The dataset currently includes crawl **CC-MAIN-2026-12** with **22,897,035 documents** across **1617 shards**. Processed 3018.9 GB of raw HTML into 197.1 GB of clean Markdown — a **93.5% reduction**.
-
- ### Processing Times
-
- Pipeline timings across 1617 shards of CC-MAIN-2026-12:
-
  ```
- Download (raw WARC)                 █████████░░░░░░░░░░░░░░░  total 11h 24m 48s  avg 25s
- Convert (HTML → Markdown → Parquet) ████████████████████████  total 27h 41m 32s  avg 1m 1s
- Publish (HuggingFace)               ████░░░░░░░░░░░░░░░░░░░░  total 4h 44m 30s   avg 10s
  ```

- ## Format
-
- Each row in the parquet files contains:
-
- | Column | Type | Description |
- |--------|------|-------------|
- | doc_id | string | Deterministic hash of URL |
- | url | string | Original page URL |
- | host | string | Registered domain |
- | crawl_date | string | ISO date from WARC |
- | warc_record_id | string | UUID |
- | warc_refers_to | string | Original WARC record ID |
- | html_length | int64 | Raw HTML bytes |
- | markdown_length | int64 | Converted Markdown bytes |
- | markdown | string | Clean Markdown text |
-
- ## Usage
-
  ```python
  from datasets import load_dataset
- ds = load_dataset("open-index/open-markdown", data_files="data/CC-MAIN-2026-12/*.parquet")
  ```

- ## License
-
- Open Data Commons Attribution License (ODC-By).
  ---
  license: odc-by
  task_categories:
+ - text-generation
+ - feature-extraction
  language:
+ - en
+ pretty_name: Open Markdown
  size_categories:
+ - 10M<n<100M
+ tags:
+ - common-crawl
+ - web-crawl
+ - markdown
+ - text
+ configs:
+ - config_name: default
+   data_files:
+   - split: train
+     path: data/*/*
+ - config_name: CC-MAIN-2026-12
+   data_files:
+   - split: train
+     path: data/CC-MAIN-2026-12/*
  ---

+ # **Open Markdown**
+
+ > Clean markdown from the web, ready for training and retrieval
+
+ ## What is it?
+
+ **Open Markdown** is a large-scale web text dataset built from [Common Crawl](https://commoncrawl.org). Common Crawl is a non-profit that crawls the web and freely provides its archives and datasets to the public — see [their latest crawl announcement](https://commoncrawl.org/blog/march-2026-crawl-archive-now-available) for details on the source data. Every page goes through a pipeline that extracts the main content from raw HTML, converts it to clean Markdown, and packages the result into Parquet files with useful WARC metadata for traceability.
+
+ The dataset currently includes crawl **CC-MAIN-2026-12** with **22,984,862 documents across 1622 shards**. The pipeline processed 3.0 TB of raw HTML into 197.9 GB of clean Markdown, a **93.4% reduction**. We plan to add more snapshots over time.
+
+ ### Live Progress
+
+ Processing at **78.4 shards/hour** — 1,622 of 100,000 shards done (**1.62%**)
+
+ Estimated completion: **May 13, 2026** (52 days)
+
+ **Current server:** 6 CPU cores, 12 GB RAM (11.0 GB available), 40 GB disk free
+
+ **Memory per session:** avg 560 MB, peak 585 MB (measured via VmRSS)
+
+ **With 10 identical servers:** 784 shards/hour, estimated completion **March 27, 2026** (5 days)
+
+ **Open Markdown** is released under the **Open Data Commons Attribution License (ODC-By) v1.0**, the same license used by Common Crawl.
+
+ ## What is being released?
+
+ Each Common Crawl WARC file (~1 GB of compressed HTML) becomes one Parquet shard. The shards live under a crawl-specific directory so multiple snapshots can coexist:

  ```
+ data/
+   CC-MAIN-2026-12/
+     00000.parquet
+     00001.parquet
+     ...
  ```

+ Every row in a Parquet file is one web page. Each row includes the `warc_record_id` and `warc_refers_to` fields parsed from the original WARC headers, so you can trace any document back to its source record. We also store `html_length` and `markdown_length` to measure the compression from raw HTML to clean markdown.

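As a quick illustration of those fields, here is how one might compute per-document compression and recover the source record ID from a single row. The row below is a hypothetical example mirroring the schema, not real data:

```python
# Hypothetical example row following the dataset schema described above.
row = {
    "doc_id": "6aaa5be7-a917-5105-aa60-e39ea1d087fc",
    "warc_refers_to": "<urn:uuid:f9e8d7c6-b5a4-3210-fedc-ba0987654321>",
    "html_length": 48210,
    "markdown_length": 3847,
}

# Fraction of the raw HTML bytes removed by the markdown conversion.
reduction = 1 - row["markdown_length"] / row["html_length"]
print(f"{reduction:.1%} smaller than the source HTML")  # 92.0% smaller

# The WARC record this document was converted from, for tracing back
# to the original Common Crawl response record.
source_record = row["warc_refers_to"].strip("<>")
print(source_record)
```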
+ ## How to download and use Open Markdown
+
+ ### Using `datasets`
+
  ```python
  from datasets import load_dataset
+
+ # stream the entire dataset
+ ds = load_dataset("open-index/open-markdown", name="CC-MAIN-2026-12", split="train", streaming=True)
+ for doc in ds:
+     print(doc["url"], len(doc["markdown"]))
+
+ # load a single shard into memory
+ ds = load_dataset(
+     "open-index/open-markdown",
+     data_files="data/CC-MAIN-2026-12/00000.parquet",
+     split="train",
+ )
+ ```
+
+ ### Using `huggingface_hub`
+
+ ```python
+ from huggingface_hub import snapshot_download
+
+ folder = snapshot_download(
+     "open-index/open-markdown",
+     repo_type="dataset",
+     local_dir="./open-index/",
+     allow_patterns="data/CC-MAIN-2026-12/*",
+ )
+ ```
+
+ For faster downloads, install the transfer extra with `pip install "huggingface_hub[hf_transfer]"` and set `HF_HUB_ENABLE_HF_TRANSFER=1`.
+
+ ### Using DuckDB
+
+ ```sql
+ SELECT url, host, markdown_length
+ FROM read_parquet('hf://datasets/open-index/open-markdown/data/CC-MAIN-2026-12/*.parquet')
+ WHERE host = 'en.wikipedia.org'
+ LIMIT 10;
+ ```
+
+ # Dataset card for Open Markdown
+
+ ## Dataset Description
+
+ - **Homepage and Repository:** [https://huggingface.co/datasets/open-index/open-markdown](https://huggingface.co/datasets/open-index/open-markdown)
+ - **Point of Contact:** please create a discussion on the Community tab
+ - **License:** Open Data Commons Attribution License (ODC-By) v1.0
+
+ ## Dataset Structure
+
+ ### Data Instance
+
+ The following is an example row from the dataset:
+
+ ```json
+ {
+   "doc_id": "6aaa5be7-a917-5105-aa60-e39ea1d087fc",
+   "url": "https://example.com/article/interesting-topic",
+   "host": "example.com",
+   "crawl_date": "2026-02-06T18:14:58Z",
+   "warc_record_id": "<urn:uuid:a1b2c3d4-e5f6-7890-abcd-ef1234567890>",
+   "warc_refers_to": "<urn:uuid:f9e8d7c6-b5a4-3210-fedc-ba0987654321>",
+   "html_length": 48210,
+   "markdown_length": 3847,
+   "markdown": "# Interesting Topic\n\nThis is the main content of the page..."
+ }
+ ```
+
+ ### Data Fields
+
+ | Column | Type | Description |
+ |---|---|---|
+ | `doc_id` | string | Deterministic UUID v5 derived from the canonical URL: `doc_id = UUID5(NamespaceURL, url)` — identical URLs always produce the same `doc_id` across crawls |
+ | `url` | string | Original URL of the crawled page |
+ | `host` | string | Lowercase hostname extracted from the URL |
+ | `crawl_date` | string | RFC 3339 timestamp from the WARC record |
+ | `warc_record_id` | string | Full WARC-Record-ID of this conversion record (`<urn:uuid:...>`) |
+ | `warc_refers_to` | string | WARC-Record-ID of the original HTTP response this was converted from |
+ | `html_length` | int64 | Byte length of the original HTML body before conversion |
+ | `markdown_length` | int64 | Byte length of the converted markdown body |
+ | `markdown` | string | Clean markdown content extracted from the page |
+
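The `doc_id` scheme can be reproduced with Python's standard `uuid` module. A minimal sketch, assuming the `url` column is used verbatim as the UUID v5 name:

```python
import uuid

# doc_id = UUID5(NamespaceURL, url): deterministic, so the same URL always
# maps to the same doc_id across crawls. (Assumes the url is used verbatim
# as the name; any canonicalization would happen before this call.)
def make_doc_id(url: str) -> str:
    return str(uuid.uuid5(uuid.NAMESPACE_URL, url))

doc_id = make_doc_id("https://example.com/article/interesting-topic")
print(doc_id)

# Properties you can rely on:
assert doc_id == make_doc_id("https://example.com/article/interesting-topic")  # deterministic
assert uuid.UUID(doc_id).version == 5
```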
+ ### Data Splits
+
+ The default subset includes all available data across all crawl snapshots. You can also load a specific crawl by using its ID as the config name (e.g. `CC-MAIN-2026-12`).
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ Most open web datasets either release raw text without structure or keep the HTML and leave parsing to the user. **Open Markdown** sits in between: it converts every page to Markdown so the content is immediately usable for training, while preserving key WARC identifiers (`warc_record_id`, `warc_refers_to`) so you can always trace back to the source record.
+
+ ### Source Data
+
+ The source data consists of web pages crawled by the [Common Crawl](https://commoncrawl.org) foundation. Common Crawl archives billions of pages across the public web and makes the raw WARC files freely available on Amazon S3.
+
+ ### Data Processing Steps
+
+ The processing pipeline runs as a single-pass direct conversion:
+
+ 1. **Download** raw `.warc.gz` files from Common Crawl S3 (each file is roughly 1 GB compressed)
+ 2. **Filter** to keep only HTTP 200 responses with a `text/html` content type, discarding images, scripts, redirects, and error pages
+ 3. **Convert** HTML to clean Markdown using a lightweight tokenizer-based extractor that strips tags, scripts, styles, navigation, and boilerplate — keeping only the main content
+ 4. **Export** directly to Apache Parquet with Zstd compression, 100,000 rows per row group
+
+ No intermediate files are created — the pipeline streams from compressed WARC through conversion directly into Parquet. Pages that produce empty conversions are dropped.
+
+ ### Compression Ratios
+
+ Numbers below are actual measurements summed across all 1622 files of CC-MAIN-2026-12 (22,984,862 pages total), projected to the full crawl of 100,000 WARC files.
+
+ | Stage | 1622 files (measured) | 100,000 files (projected) | Reduction |
+ |---|---|---|---|
+ | Raw WARC (.warc.gz, downloaded) | ~1.3 TB | ~79.2 TB | — |
+ | HTML extracted (uncompressed) | 3.0 TB | ~182.5 TB | — |
+ | Markdown (clean text) | 197.9 GB | ~12.2 TB | **-93.4%** vs HTML |
+ | Final Parquet (Zstd) | 61.3 GB | ~3.7 TB | **-69.0%** vs markdown |
+
+ The big win is HTML → Markdown conversion: the tokenizer strips all tags, scripts, styles, navigation, and ads, keeping only the main content. This cuts 3.0 TB of uncompressed HTML down to 197.9 GB of markdown, a **93.4% reduction**. Parquet with Zstd then compresses the markdown a further 69.0%.
+
+ End to end: ~1.3 TB of raw gzipped WARCs becomes **61.3 GB of Parquet**, a **95.3% total reduction**, containing 22,984,862 clean markdown documents.
+
+ ### Processing Times
+
+ Pipeline timings across 1622 shards of CC-MAIN-2026-12:
+
+ ```
+ Download (raw WARC)                 █░░░░░░░░░░░░░░░░░░░░░░░  11h 24m 48s
+ Convert (HTML → Markdown → Parquet) ███░░░░░░░░░░░░░░░░░░░░░  32h 26m 2s
+ Publish (HuggingFace)               ████████████████████████  210h 8m 26s
  ```
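Dividing those stage totals by the 1622 shards gives rough per-shard averages, a back-of-envelope check on the numbers above:

```python
# Back-of-envelope per-shard averages from the stage totals above.
SHARDS = 1622

def to_seconds(h: int, m: int, s: int) -> int:
    return h * 3600 + m * 60 + s

stages = {
    "download": to_seconds(11, 24, 48),   # 41,088 s total
    "convert":  to_seconds(32, 26, 2),    # 116,762 s total
    "publish":  to_seconds(210, 8, 26),   # 756,506 s total
}

for name, total in stages.items():
    print(f"{name}: ~{total / SHARDS:.0f} s per shard")
# download ~25 s, convert ~72 s, publish ~466 s
```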

+ ### Dataset Charts
+
+ ![Total size: HTML vs Markdown vs Parquet](charts/totals_chart.png)
+
+ ![Pipeline stage durations](charts/timing_chart.png)
+
+ ### Personal and Sensitive Information
+
+ No additional PII filtering is applied beyond what Common Crawl provides. As the dataset is sourced from the public web, it is likely that some personally identifiable information is present. If you find your own PII in the dataset and would like it removed, please open an issue on the repository.
+
+ ## Considerations for Using the Data
+
+ ### Social Impact
+
+ By releasing both the dataset and the full processing pipeline, we aim to lower the barrier to training and evaluating language models on high quality web data. Researchers and practitioners who cannot afford to run their own Common Crawl processing pipelines can use **Open Markdown** directly.
+
+ ### Discussion of Biases
+
+ **Open Markdown** inherits the biases present in Common Crawl and the public web at large. The main-content extraction step favors article-like pages and may underrepresent content from forums, social media, and non-standard page layouts. We have not applied any machine-learning-based quality or toxicity filters, as such filters have been shown to disproportionately remove content from certain dialects and communities.
+
+ ### Known Limitations
+
+ Code-heavy pages may not convert well to Markdown. If you are training a model that needs strong code performance, consider supplementing **Open Markdown** with a dedicated code dataset such as [The Stack v2](https://huggingface.co/datasets/bigcode/the-stack-v2). Similarly, highly structured pages like Wikipedia may have better formatting in dedicated Wikipedia dumps than in their Common Crawl versions.
+
+ ## Additional Information
+
+ ### Licensing
+
+ The dataset is released under the **Open Data Commons Attribution License (ODC-By) v1.0**. The use of this dataset is also subject to [Common Crawl's Terms of Use](https://commoncrawl.org/terms-of-use). The original content remains subject to the rights and terms of its respective publishers.
+
+ ### Contact
+
+ Please open a discussion on the [Community tab](https://huggingface.co/datasets/open-index/open-markdown/discussions) for questions, feedback, or issues.
data/CC-MAIN-2026-12/01596.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:40788c2d6a0493278a5c0c2bcd07c4a2c8957c1296bc334dbf14c0e0cc56ad18
+ size 51668899

data/CC-MAIN-2026-12/01617.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cba6980ec3e4166d51b896c799f69aef43b3fa4b4a64f85706e1110d72e9a66d
+ size 52541741

data/CC-MAIN-2026-12/01618.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bb3d21ff378bcad7ca67e39d2568f848b8bcd580289b824655c136e6f21890b1
+ size 50993036

data/CC-MAIN-2026-12/01661.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1532dcf700d3158fe7dc7fd2336fce1584ce377a5f71954dc42131b1f507d669
+ size 48813254

data/CC-MAIN-2026-12/01707.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1041c9fe5b49bc7f46f9ce8c9b821a8201ead24d19ae34a57c4a3a30d5d0ccd9
+ size 50988516
stats.csv CHANGED
The diff for this file is too large to render. See raw diff