tamnd committed on
Commit bc609ae · verified · 1 Parent(s): 9a81477

Publish shard CC-MAIN-2026-12/00001

Files changed (3)
  1. README.md +12 -12
  2. data/CC-MAIN-2026-12/00001.parquet +3 -0
  3. stats.csv +2 -1
README.md CHANGED
@@ -35,7 +35,7 @@ configs:
 
  **OpenHTML** is a large-scale web dataset built from [Common Crawl](https://commoncrawl.org). Common Crawl is a non-profit that crawls the web and freely provides its archives and datasets to the public — see [their latest crawl announcement](https://commoncrawl.org/blog/march-2026-crawl-archive-now-available) for details on the source data. Every page goes through a pipeline that extracts the raw HTML body along with structured metadata from WARC records, HTTP response headers, and HTML `<head>` tags, then packages everything into Parquet files with 24 columns.
 
- The dataset currently includes crawl **CC-MAIN-2026-12** with **19,785 documents across 1 shards**. Processed 3.4 GB of raw HTML into 3.4 GB of stored body text — 681.4 MB as Parquet (Zstd). We plan to add more snapshots over time.
+ The dataset currently includes crawl **CC-MAIN-2026-12** with **39,606 documents across 2 shards**. Processed 6.9 GB of raw HTML into 6.9 GB of stored body text — 1.3 GB as Parquet (Zstd). We plan to add more snapshots over time.
 
  **OpenHTML** is released under the **Open Data Commons Attribution License (ODC-By) v1.0**, the same license used by Common Crawl.
 
@@ -221,25 +221,25 @@ No intermediate files are created — the pipeline streams from compressed WARC
 
  ### Compression Ratios
 
- Numbers below are actual measurements summed across all 1 files of CC-MAIN-2026-12 (19,785 pages total), projected to the full crawl of 100,000 WARC files.
+ Numbers below are actual measurements summed across all 2 files of CC-MAIN-2026-12 (39,606 pages total), projected to the full crawl of 100,000 WARC files.
 
- | Stage | 1 files (measured) | 100,000 files (projected) | Reduction |
+ | Stage | 2 files (measured) | 100,000 files (projected) | Reduction |
  |---|---|---|---|
- | Raw WARC (.warc.gz, downloaded) | ~830.0 MB | ~79.2 TB | — |
- | HTML extracted (uncompressed) | 3.4 GB | ~336.4 TB | — |
- | Body stored (full HTML) | 3.4 GB | ~336.4 TB | **-0.0%** vs HTML |
- | Final Parquet (Zstd) | 681.4 MB | ~65.0 TB | **-80.7%** vs body |
+ | Raw WARC (.warc.gz, downloaded) | ~1.6 GB | ~79.2 TB | — |
+ | HTML extracted (uncompressed) | 6.9 GB | ~337.5 TB | — |
+ | Body stored (full HTML) | 6.9 GB | ~337.5 TB | **-0.0%** vs HTML |
+ | Final Parquet (Zstd) | 1.3 GB | ~64.1 TB | **-81.0%** vs body |
 
- The body column stores the full raw HTML. Parquet with Zstd then compresses the data further. End to end: ~830.0 MB of raw gzipped WARCs becomes **681.4 MB of Parquet** — a **17.9% total reduction** — containing 19,785 web pages with full metadata.
+ The body column stores the full raw HTML. Parquet with Zstd then compresses the data further. End to end: ~1.6 GB of raw gzipped WARCs becomes **1.3 GB of Parquet** — a **19.0% total reduction** — containing 39,606 web pages with full metadata.
 
  ### Processing Times
 
- Pipeline timings across 1 shards of CC-MAIN-2026-12:
+ Pipeline timings across 2 shards of CC-MAIN-2026-12:
 
  ```
- Download (raw WARC) ████████████████████████ 5m 25s
- Extract (WARC → HTML + metadata) ████████████████░░░░░░░░ 3m 41s
- Publish (HuggingFace upload) ░░░░░░░░░░░░░░░░░░░░░░░░
+ Download (raw WARC) ████████████████████████ 13m 54s
+ Extract (WARC → HTML + metadata) ████████████░░░░░░░░░░░░ 7m 31s
+ Publish (HuggingFace upload) ████░░░░░░░░░░░░░░░░░░░░ 2m 25s
  ```
 
  ### Dataset Charts
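The projected column in the compression table scales the two measured shards up to the full crawl of 100,000 WARC files. As a sanity check, the figures can be reproduced from the exact byte counts in stats.csv — a minimal sketch, assuming the README's "TB" labels are binary terabytes (TiB) and the scale-up is linear:

```python
# Sketch: reproduce the projected sizes in the compression-ratio table.
# Assumptions: "TB" in the README means binary TiB, and the projection
# scales linearly from the 2 measured WARC files to the full 100,000.

TIB = 1024 ** 4

# Measured byte counts, summed from the two stats.csv rows.
html_bytes = 3_699_291_685 + 3_723_062_373    # extracted HTML (~6.9 GiB)
parquet_bytes = 714_448_407 + 695_149_393     # Zstd Parquet (~1.3 GiB)

scale = 100_000 / 2                           # full crawl / measured shards

print(f"HTML projected:    {html_bytes * scale / TIB:.1f} TiB")     # ~337.5
print(f"Parquet projected: {parquet_bytes * scale / TIB:.1f} TiB")  # ~64.1
print(f"Parquet vs body:   {100 * (1 - parquet_bytes / html_bytes):.1f}% smaller")  # ~81.0
```

All three values match the table, so the projection is a straight linear extrapolation of the measured shards.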
data/CC-MAIN-2026-12/00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d60aa1a7e938d183e29627ed2209170ae7457e723bc5791bd144a80dd7cc35be
+ size 695149393
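The new Parquet shard is tracked with Git LFS, so the diff shows only a three-line pointer file rather than the 695 MB of data. A minimal sketch of reading such a pointer, using the "key value" line layout from the Git LFS pointer spec (`parse_lfs_pointer` is an illustrative helper, not part of any tool used here):

```python
# Sketch: parse a Git LFS pointer file like the one added in this commit.
# Each line is "key value"; parse_lfs_pointer is a hypothetical helper.

POINTER = """\
version https://git-lfs.github.com/spec/v1
oid sha256:d60aa1a7e938d183e29627ed2209170ae7457e723bc5791bd144a80dd7cc35be
size 695149393
"""

def parse_lfs_pointer(text: str) -> dict:
    """Split each 'key value' line into a dict entry."""
    return dict(line.split(" ", 1) for line in text.strip().splitlines())

ptr = parse_lfs_pointer(POINTER)
print(ptr["oid"])              # content hash of the real shard
print(int(ptr["size"]))        # shard size in bytes (695149393 ≈ 695 MB)
```

The `size` field matches the `parquet_bytes` value recorded for shard 1 in stats.csv below.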
stats.csv CHANGED
@@ -1,2 +1,3 @@
  crawl_id,file_idx,rows,html_bytes,body_bytes,md_bytes,parquet_bytes,created_at,dur_download_s,dur_extract_s,dur_export_s,dur_publish_s,peak_rss_mb
- CC-MAIN-2026-12,0,19785,3699291685,3699291685,3699291685,714448407,2026-03-24T05:15:28Z,325,206,15,0,1167
+ CC-MAIN-2026-12,0,19785,3699291685,3699291685,3699291685,714448407,2026-03-24T05:15:28Z,325,206,15,145,1167
+ CC-MAIN-2026-12,1,19821,3723062373,3723062373,3723062373,695149393,2026-03-24T05:32:58Z,509,219,11,0,1240
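The headline numbers in the README diff (39,606 documents, 6.9 GB of HTML, 1.3 GB of Parquet) are column sums over stats.csv. A sketch reproducing them from the two rows above, assuming the README's "GB" figures are binary GiB:

```python
# Sketch: derive the README summary figures from the stats.csv rows
# in this commit. Assumption: "GB" in the README means binary GiB.
import csv
import io

STATS = """\
crawl_id,file_idx,rows,html_bytes,body_bytes,md_bytes,parquet_bytes,created_at,dur_download_s,dur_extract_s,dur_export_s,dur_publish_s,peak_rss_mb
CC-MAIN-2026-12,0,19785,3699291685,3699291685,3699291685,714448407,2026-03-24T05:15:28Z,325,206,15,145,1167
CC-MAIN-2026-12,1,19821,3723062373,3723062373,3723062373,695149393,2026-03-24T05:32:58Z,509,219,11,0,1240
"""

rows = list(csv.DictReader(io.StringIO(STATS)))
docs = sum(int(r["rows"]) for r in rows)
html_gb = sum(int(r["html_bytes"]) for r in rows) / 1024**3
parquet_gb = sum(int(r["parquet_bytes"]) for r in rows) / 1024**3

print(docs)                    # 39606 documents across 2 shards
print(f"{html_gb:.1f} GB")     # ~6.9 GB of raw HTML
print(f"{parquet_gb:.1f} GB")  # ~1.3 GB as Parquet (Zstd)
```

The per-stage durations aggregate the same way: summing `dur_download_s` (325 + 509 = 834 s) gives the 13m 54s download bar in the README's timing chart.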