tamnd committed on
Commit ffc286c · verified
Parent(s): 60eadb3

Publish shard CC-MAIN-2026-12/00002

Files changed (3):
  1. README.md +12 -12
  2. data/CC-MAIN-2026-12/00002.parquet +3 -0
  3. stats.csv +2 -1
README.md CHANGED
@@ -35,7 +35,7 @@ configs:
 
 **OpenHTML** is a large-scale web dataset built from [Common Crawl](https://commoncrawl.org). Common Crawl is a non-profit that crawls the web and freely provides its archives and datasets to the public — see [their latest crawl announcement](https://commoncrawl.org/blog/march-2026-crawl-archive-now-available) for details on the source data. Every page goes through a pipeline that extracts the raw HTML body along with structured metadata from WARC records, HTTP response headers, and HTML `<head>` tags, then packages everything into Parquet files with 24 columns.
 
-The dataset currently includes crawl **CC-MAIN-2026-12** with **58,915 documents across 3 shards**. Processed 10.2 GB of raw HTML into 10.2 GB of stored body text — 1.9 GB as Parquet (Zstd). We plan to add more snapshots over time.
+The dataset currently includes crawl **CC-MAIN-2026-12** with **78,579 documents across 4 shards**. Processed 13.6 GB of raw HTML into 13.6 GB of stored body text — 2.6 GB as Parquet (Zstd). We plan to add more snapshots over time.
 
 **OpenHTML** is released under the **Open Data Commons Attribution License (ODC-By) v1.0**, the same license used by Common Crawl.
 
@@ -221,25 +221,25 @@ No intermediate files are created — the pipeline streams from compressed WARC
 
 ### Compression Ratios
 
-Numbers below are actual measurements summed across all 3 files of CC-MAIN-2026-12 (58,915 pages total), projected to the full crawl of 100,000 WARC files.
+Numbers below are actual measurements summed across all 4 files of CC-MAIN-2026-12 (78,579 pages total), projected to the full crawl of 100,000 WARC files.
 
-| Stage | 3 files (measured) | 100,000 files (projected) | Reduction |
+| Stage | 4 files (measured) | 100,000 files (projected) | Reduction |
 |---|---|---|---|
-| Raw WARC (.warc.gz, downloaded) | ~2.4 GB | ~79.2 TB | — |
-| HTML extracted (uncompressed) | 10.2 GB | ~332.4 TB | — |
-| Body stored (full HTML) | 10.2 GB | ~332.4 TB | **-0.0%** vs HTML |
-| Final Parquet (Zstd) | 1.9 GB | ~62.9 TB | **-81.1%** vs body |
+| Raw WARC (.warc.gz, downloaded) | ~3.2 GB | ~79.2 TB | — |
+| HTML extracted (uncompressed) | 13.6 GB | ~333.1 TB | — |
+| Body stored (full HTML) | 13.6 GB | ~333.1 TB | **-0.0%** vs HTML |
+| Final Parquet (Zstd) | 2.6 GB | ~62.8 TB | **-81.1%** vs body |
 
-The body column stores the full raw HTML. Parquet with Zstd then compresses the data further. End to end: ~2.4 GB of raw gzipped WARCs becomes **1.9 GB of Parquet** — a **20.5% total reduction** — containing 58,915 web pages with full metadata.
+The body column stores the full raw HTML. Parquet with Zstd then compresses the data further. End to end: ~3.2 GB of raw gzipped WARCs becomes **2.6 GB of Parquet** — a **20.7% total reduction** — containing 78,579 web pages with full metadata.
 
 ### Processing Times
 
-Pipeline timings across 3 shards of CC-MAIN-2026-12:
+Pipeline timings across 4 shards of CC-MAIN-2026-12:
 
 ```
-Download (raw WARC)              ████████████████████████ 20m 31s
-Extract (WARC → HTML + metadata) ██████████████░░░░░░░░░░ 12m 21s
-Publish (HuggingFace upload)     ████░░░░░░░░░░░░░░░░░░░░  4m 27s
+Download (raw WARC)              ████████████████████████ 30m 7s
+Extract (WARC → HTML + metadata) █████████████████░░░░░░░ 22m 28s
+Publish (HuggingFace upload)     ████░░░░░░░░░░░░░░░░░░░░  5m 47s
 ```
 
 ### Dataset Charts
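The updated headline numbers can be cross-checked against the per-shard rows published in this commit's stats.csv. A minimal sketch (byte counts copied from the four `rows`/`body_bytes`/`parquet_bytes` columns; note the README's 13.6 GB and 2.6 GB figures match these totals when read as GiB, i.e. bytes / 2³⁰):

```python
# Per-shard values from stats.csv (file_idx 0-3 of CC-MAIN-2026-12).
rows = [19785, 19821, 19664, 19309]
body_bytes = [3699291685, 3723062373, 3686392993, 3541948209]
parquet_bytes = [714448407, 695149393, 686566085, 665588917]

total_rows = sum(rows)
total_body = sum(body_bytes)
total_parquet = sum(parquet_bytes)
reduction = (1 - total_parquet / total_body) * 100

print(total_rows)                        # 78579 documents
print(round(total_body / 2**30, 1))      # 13.6 GiB of stored body text
print(round(total_parquet / 2**30, 1))   # 2.6 GiB of Parquet (Zstd)
print(f"-{reduction:.1f}% vs body")      # -81.1%, matching the table
```

The -81.1% Parquet-vs-body figure falls out of the exact byte counts, so the table and stats.csv agree.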
data/CC-MAIN-2026-12/00002.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:771d7daf5b50284394ee9a33f8f6094629b50aa0e427315db6bc10a7b5440c67
+size 686566085
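The file above is a Git LFS pointer: the repo stores only the sha256 `oid` and byte `size`, while the actual Parquet object lives in LFS storage. A downloaded copy can be checked against the pointer with a small sketch (the local path is an assumption; only the oid and size come from this commit):

```python
import hashlib

def verify_lfs_pointer(path, expected_oid, expected_size):
    """Check a downloaded file against its Git LFS pointer (sha256 oid + byte size)."""
    h = hashlib.sha256()
    size = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # stream in 1 MiB chunks
            h.update(chunk)
            size += len(chunk)
    return h.hexdigest() == expected_oid and size == expected_size

# Values from the pointer file added in this commit (path is hypothetical):
# verify_lfs_pointer(
#     "data/CC-MAIN-2026-12/00002.parquet",
#     "771d7daf5b50284394ee9a33f8f6094629b50aa0e427315db6bc10a7b5440c67",
#     686566085,
# )
```

Streaming in chunks keeps memory flat even for a ~686 MB shard.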
stats.csv CHANGED
@@ -1,4 +1,5 @@
 crawl_id,file_idx,rows,html_bytes,body_bytes,md_bytes,parquet_bytes,created_at,dur_download_s,dur_extract_s,dur_export_s,dur_publish_s,peak_rss_mb
 CC-MAIN-2026-12,0,19785,3699291685,3699291685,3699291685,714448407,2026-03-24T05:15:28Z,325,206,15,145,1167
 CC-MAIN-2026-12,1,19821,3723062373,3723062373,3723062373,695149393,2026-03-24T05:32:58Z,509,219,11,122,1240
-CC-MAIN-2026-12,3,19309,3541948209,3541948209,3541948209,665588917,2026-03-24T08:52:08Z,397,266,24,0,1187
+CC-MAIN-2026-12,2,19664,3686392993,3686392993,3686392993,686566085,2026-03-24T09:00:28Z,576,576,31,0,1144
+CC-MAIN-2026-12,3,19309,3541948209,3541948209,3541948209,665588917,2026-03-24T08:52:08Z,397,266,24,80,1187
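The duration columns in stats.csv also reproduce the README's Processing Times bars. A small sketch (values copied from the four rows above; the Extract bar appears to combine `dur_extract_s` and `dur_export_s`, which is an inference from the arithmetic, not something the README states):

```python
# Per-shard durations in seconds, from stats.csv (file_idx 0-3).
dur_download = [325, 509, 576, 397]
dur_extract = [206, 219, 576, 266]
dur_export = [15, 11, 31, 24]
dur_publish = [145, 122, 0, 80]

def fmt(seconds):
    """Format a second count as the README's 'Xm Ys' style."""
    return f"{seconds // 60}m {seconds % 60}s"

print(fmt(sum(dur_download)))                   # 30m 7s  (Download bar)
print(fmt(sum(dur_extract) + sum(dur_export)))  # 22m 28s (Extract bar)
print(fmt(sum(dur_publish)))                    # 5m 47s  (Publish bar)
```

The publish total only matches once file_idx 3's `dur_publish_s` is updated from 0 to 80, which is exactly what this commit's stats.csv change does.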