Publish shard CC-MAIN-2026-12/00003

Files changed:
- README.md (+12 −12)
- data/CC-MAIN-2026-12/00003.parquet (+3 −0)
- stats.csv (+1 −0)
README.md
CHANGED

@@ -35,7 +35,7 @@ configs:

**OpenHTML** is a large-scale web dataset built from [Common Crawl](https://commoncrawl.org). Common Crawl is a non-profit that crawls the web and freely provides its archives and datasets to the public — see [their latest crawl announcement](https://commoncrawl.org/blog/march-2026-crawl-archive-now-available) for details on the source data. Every page goes through a pipeline that extracts the raw HTML body along with structured metadata from WARC records, HTTP response headers, and HTML `<head>` tags, then packages everything into Parquet files with 24 columns.

+The dataset currently includes crawl **CC-MAIN-2026-12** with **58,915 documents across 3 shards**. Processed 10.2 GB of raw HTML into 10.2 GB of stored body text — 1.9 GB as Parquet (Zstd). We plan to add more snapshots over time.

**OpenHTML** is released under the **Open Data Commons Attribution License (ODC-By) v1.0**, the same license used by Common Crawl.

@@ -221,25 +221,25 @@ No intermediate files are created — the pipeline streams from compressed WARC

### Compression Ratios

+Numbers below are actual measurements summed across all 3 files of CC-MAIN-2026-12 (58,915 pages total), projected to the full crawl of 100,000 WARC files.

+| Stage | 3 files (measured) | 100,000 files (projected) | Reduction |
|---|---|---|---|
+| Raw WARC (.warc.gz, downloaded) | ~2.4 GB | ~79.2 TB | — |
+| HTML extracted (uncompressed) | 10.2 GB | ~332.4 TB | — |
+| Body stored (full HTML) | 10.2 GB | ~332.4 TB | **-0.0%** vs HTML |
+| Final Parquet (Zstd) | 1.9 GB | ~62.9 TB | **-81.1%** vs body |

+The body column stores the full raw HTML. Parquet with Zstd then compresses the data further. End to end: ~2.4 GB of raw gzipped WARCs becomes **1.9 GB of Parquet** — a **20.5% total reduction** — containing 58,915 web pages with full metadata.

### Processing Times

+Pipeline timings across 3 shards of CC-MAIN-2026-12:

```
+Download (raw WARC)              ████████████████████████ 20m 31s
+Extract (WARC → HTML + metadata) ██████████████░░░░░░░░░░ 12m 21s
+Publish (HuggingFace upload)     █████░░░░░░░░░░░░░░░░░░░  4m 27s
```

### Dataset Charts
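The shard totals quoted in the README can be reproduced from the per-shard byte counts recorded in this commit's stats.csv. A minimal sketch (the three tuples below are copied from stats.csv; treating GB/TB as binary units is an assumption):

```python
# Per-shard (rows, html_bytes, parquet_bytes) from stats.csv in this commit,
# for shards 0, 1, and 3 of CC-MAIN-2026-12.
shards = [
    (19785, 3699291685, 714448407),
    (19821, 3723062373, 695149393),
    (19309, 3541948209, 665588917),
]

rows = sum(s[0] for s in shards)
html_bytes = sum(s[1] for s in shards)
parquet_bytes = sum(s[2] for s in shards)

GIB = 1024 ** 3
TIB = 1024 ** 4

print(f"documents: {rows:,}")                                           # 58,915
print(f"html:      {html_bytes / GIB:.1f} GB")                          # 10.2 GB
print(f"parquet:   {parquet_bytes / GIB:.1f} GB")                       # 1.9 GB
print(f"reduction: {(1 - parquet_bytes / html_bytes) * 100:.1f}%")      # 81.1%
# Linear projection from the 3 measured WARC files to the full 100,000:
print(f"parquet @100k files: {parquet_bytes / 3 * 100_000 / TIB:.1f} TB")  # 62.9 TB
```

The measured sums match the README's table; the raw-WARC figures (~2.4 GB, ~79.2 TB) are not in stats.csv and so are not recomputed here.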
data/CC-MAIN-2026-12/00003.parquet
ADDED

@@ -0,0 +1,3 @@

+version https://git-lfs.github.com/spec/v1
+oid sha256:5d60b4e8a4ed3c2ad101018a0f6ed614cb2a0a89b12eb11e68d2d46326e4319e
+size 665588917
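Because the shard is stored via Git LFS, the repository itself holds only the pointer above; the actual Parquet bytes live in LFS storage. A sketch of checking a downloaded file against the pointer's sha256 oid and size; `verify_lfs_pointer` is an illustrative helper name, not part of Git LFS or this repo:

```python
import hashlib
import os

def verify_lfs_pointer(path: str, oid: str, size: int) -> bool:
    """Check a downloaded file against the sha256 oid and byte size
    recorded in its Git LFS pointer (illustrative helper)."""
    if os.path.getsize(path) != size:
        return False
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in 1 MiB chunks to avoid loading a ~665 MB file into memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() == oid

# Usage with the pointer values from this commit:
# verify_lfs_pointer(
#     "data/CC-MAIN-2026-12/00003.parquet",
#     "5d60b4e8a4ed3c2ad101018a0f6ed614cb2a0a89b12eb11e68d2d46326e4319e",
#     665588917,
# )
```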
stats.csv
CHANGED

@@ -1,3 +1,4 @@

crawl_id,file_idx,rows,html_bytes,body_bytes,md_bytes,parquet_bytes,created_at,dur_download_s,dur_extract_s,dur_export_s,dur_publish_s,peak_rss_mb
CC-MAIN-2026-12,0,19785,3699291685,3699291685,3699291685,714448407,2026-03-24T05:15:28Z,325,206,15,145,1167
CC-MAIN-2026-12,1,19821,3723062373,3723062373,3723062373,695149393,2026-03-24T05:32:58Z,509,219,11,122,1240
+CC-MAIN-2026-12,3,19309,3541948209,3541948209,3541948209,665588917,2026-03-24T08:52:08Z,397,266,24,0,1187
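The README's pipeline-timing bars can be re-derived from the duration columns above. A sketch, assuming the Extract bar covers `dur_extract_s` plus `dur_export_s` (that sum, 741 s, matches the quoted 12m 21s, while `dur_extract_s` alone does not), and noting shard 3's `dur_publish_s` is 0 because this commit is itself the publish step:

```python
import csv
import io

# The three stats.csv rows from this commit, embedded verbatim.
STATS = """\
crawl_id,file_idx,rows,html_bytes,body_bytes,md_bytes,parquet_bytes,created_at,dur_download_s,dur_extract_s,dur_export_s,dur_publish_s,peak_rss_mb
CC-MAIN-2026-12,0,19785,3699291685,3699291685,3699291685,714448407,2026-03-24T05:15:28Z,325,206,15,145,1167
CC-MAIN-2026-12,1,19821,3723062373,3723062373,3723062373,695149393,2026-03-24T05:32:58Z,509,219,11,122,1240
CC-MAIN-2026-12,3,19309,3541948209,3541948209,3541948209,665588917,2026-03-24T08:52:08Z,397,266,24,0,1187
"""

def fmt(seconds: int) -> str:
    """Render a duration like the README chart, e.g. 1231 -> '20m 31s'."""
    return f"{seconds // 60}m {seconds % 60:02d}s"

rows = list(csv.DictReader(io.StringIO(STATS)))
download = sum(int(r["dur_download_s"]) for r in rows)
# Assumption: the chart's Extract bar is extract + export time.
extract = sum(int(r["dur_extract_s"]) + int(r["dur_export_s"]) for r in rows)
publish = sum(int(r["dur_publish_s"]) for r in rows)

print(fmt(download))  # 20m 31s
print(fmt(extract))   # 12m 21s
print(fmt(publish))   # 4m 27s
```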