Publish shard CC-MAIN-2026-12/00001
- README.md +12 -12
- data/CC-MAIN-2026-12/00001.parquet +3 -0
- stats.csv +2 -1
README.md
CHANGED

@@ -35,7 +35,7 @@ configs:

 **OpenHTML** is a large-scale web dataset built from [Common Crawl](https://commoncrawl.org). Common Crawl is a non-profit that crawls the web and freely provides its archives and datasets to the public — see [their latest crawl announcement](https://commoncrawl.org/blog/march-2026-crawl-archive-now-available) for details on the source data. Every page goes through a pipeline that extracts the raw HTML body along with structured metadata from WARC records, HTTP response headers, and HTML `<head>` tags, then packages everything into Parquet files with 24 columns.

-The dataset currently includes crawl **CC-MAIN-2026-12** with **
+The dataset currently includes crawl **CC-MAIN-2026-12** with **39,606 documents across 2 shards**. Processed 6.9 GB of raw HTML into 6.9 GB of stored body text — 1.3 GB as Parquet (Zstd). We plan to add more snapshots over time.

 **OpenHTML** is released under the **Open Data Commons Attribution License (ODC-By) v1.0**, the same license used by Common Crawl.

@@ -221,25 +221,25 @@ No intermediate files are created — the pipeline streams from compressed WARC

 ### Compression Ratios

-Numbers below are actual measurements summed across all
+Numbers below are actual measurements summed across all 2 files of CC-MAIN-2026-12 (39,606 pages total), projected to the full crawl of 100,000 WARC files.

-| Stage |
+| Stage | 2 files (measured) | 100,000 files (projected) | Reduction |
 |---|---|---|---|
-| Raw WARC (.warc.gz, downloaded) | ~
+| Raw WARC (.warc.gz, downloaded) | ~1.6 GB | ~79.2 TB | — |
-| HTML extracted (uncompressed) |
+| HTML extracted (uncompressed) | 6.9 GB | ~337.5 TB | — |
-| Body stored (full HTML) |
+| Body stored (full HTML) | 6.9 GB | ~337.5 TB | **-0.0%** vs HTML |
-| Final Parquet (Zstd) |
+| Final Parquet (Zstd) | 1.3 GB | ~64.1 TB | **-81.0%** vs body |

-The body column stores the full raw HTML. Parquet with Zstd then compresses the data further. End to end: ~
+The body column stores the full raw HTML. Parquet with Zstd then compresses the data further. End to end: ~1.6 GB of raw gzipped WARCs becomes **1.3 GB of Parquet** — a **19.0% total reduction** — containing 39,606 web pages with full metadata.

 ### Processing Times

-Pipeline timings across
+Pipeline timings across 2 shards of CC-MAIN-2026-12:

 ```
-Download (raw WARC)              ████████████████████████
+Download (raw WARC)              ████████████████████████ 13m 54s
-Extract (WARC → HTML + metadata) ████████████
+Extract (WARC → HTML + metadata) ████████████░░░░░░░░░░░░ 7m 31s
-Publish (HuggingFace upload)     ░░░░░░░░░░░░░░░░░░░░
+Publish (HuggingFace upload)     ████░░░░░░░░░░░░░░░░░░░░ 2m 25s
 ```

 ### Dataset Charts
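The measured-to-projected figures in the compression table above can be recomputed from the per-shard byte counts in stats.csv. A minimal stdlib check (assuming binary units, GiB/TiB, which is what the rounded "GB"/"TB" figures in the table line up with):

```python
# Per-shard byte counts from stats.csv in this commit.
body_bytes    = 3699291685 + 3723062373   # stored HTML body, both shards (~6.9 GiB)
parquet_bytes = 714448407 + 695149393     # final Parquet (Zstd), both shards (~1.3 GiB)

TiB = 2 ** 40
scale = 100_000 / 2                       # project 2 measured files to the full crawl

reduction = 1 - parquet_bytes / body_bytes
print(f"Parquet vs body:   -{reduction:.1%}")                        # -81.0%
print(f"Projected body:    ~{body_bytes * scale / TiB:.1f} TB")      # ~337.5
print(f"Projected Parquet: ~{parquet_bytes * scale / TiB:.1f} TB")   # ~64.1
```

The projection is a straight linear scale-up (×50,000), so it assumes the two measured WARC files are typical of the crawl.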
data/CC-MAIN-2026-12/00001.parquet
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d60aa1a7e938d183e29627ed2209170ae7457e723bc5791bd144a80dd7cc35be
+size 695149393
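The Parquet shard itself lives in Git LFS storage; the repository versions only the three-line pointer above. A small sketch of parsing that pointer format (the `parse_lfs_pointer` helper is illustrative, not part of the pipeline):

```python
# Pointer file contents from this commit.
POINTER = """version https://git-lfs.github.com/spec/v1
oid sha256:d60aa1a7e938d183e29627ed2209170ae7457e723bc5791bd144a80dd7cc35be
size 695149393
"""

def parse_lfs_pointer(text: str) -> dict:
    # Each line is "<key> <value>"; oid is "<algorithm>:<hex digest>".
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, digest = fields["oid"].split(":", 1)
    return {"version": fields["version"], "algo": algo,
            "digest": digest, "size": int(fields["size"])}

ptr = parse_lfs_pointer(POINTER)
print(ptr["algo"], ptr["size"])  # sha256 695149393 (~695 MB shard)
```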
stats.csv
CHANGED

@@ -1,2 +1,3 @@
 crawl_id,file_idx,rows,html_bytes,body_bytes,md_bytes,parquet_bytes,created_at,dur_download_s,dur_extract_s,dur_export_s,dur_publish_s,peak_rss_mb
-CC-MAIN-2026-12,0,19785,3699291685,3699291685,3699291685,714448407,2026-03-24T05:15:28Z,325,206,15,
+CC-MAIN-2026-12,0,19785,3699291685,3699291685,3699291685,714448407,2026-03-24T05:15:28Z,325,206,15,145,1167
+CC-MAIN-2026-12,1,19821,3723062373,3723062373,3723062373,695149393,2026-03-24T05:32:58Z,509,219,11,0,1240