Publish shard CC-MAIN-2026-12/00002
Browse files

- README.md +12 -12
- data/CC-MAIN-2026-12/00002.parquet +3 -0
- stats.csv +2 -1
README.md
CHANGED

@@ -35,7 +35,7 @@ configs:
 
 **OpenHTML** is a large-scale web dataset built from [Common Crawl](https://commoncrawl.org). Common Crawl is a non-profit that crawls the web and freely provides its archives and datasets to the public — see [their latest crawl announcement](https://commoncrawl.org/blog/march-2026-crawl-archive-now-available) for details on the source data. Every page goes through a pipeline that extracts the raw HTML body along with structured metadata from WARC records, HTTP response headers, and HTML `<head>` tags, then packages everything into Parquet files with 24 columns.
 
-The dataset currently includes crawl **CC-MAIN-2026-12** with **
+The dataset currently includes crawl **CC-MAIN-2026-12** with **78,579 documents across 4 shards**. Processed 13.6 GB of raw HTML into 13.6 GB of stored body text — 2.6 GB as Parquet (Zstd). We plan to add more snapshots over time.
 
 **OpenHTML** is released under the **Open Data Commons Attribution License (ODC-By) v1.0**, the same license used by Common Crawl.

@@ -221,25 +221,25 @@ No intermediate files are created — the pipeline streams from compressed WARC
 
 ### Compression Ratios
 
-Numbers below are actual measurements summed across all
+Numbers below are actual measurements summed across all 4 files of CC-MAIN-2026-12 (78,579 pages total), projected to the full crawl of 100,000 WARC files.
 
-| Stage |
+| Stage | 4 files (measured) | 100,000 files (projected) | Reduction |
 |---|---|---|---|
-| Raw WARC (.warc.gz, downloaded) | ~
+| Raw WARC (.warc.gz, downloaded) | ~3.2 GB | ~79.2 TB | — |
-| HTML extracted (uncompressed) |
+| HTML extracted (uncompressed) | 13.6 GB | ~333.1 TB | — |
-| Body stored (full HTML) |
+| Body stored (full HTML) | 13.6 GB | ~333.1 TB | **-0.0%** vs HTML |
-| Final Parquet (Zstd) |
+| Final Parquet (Zstd) | 2.6 GB | ~62.8 TB | **-81.1%** vs body |
 
-The body column stores the full raw HTML. Parquet with Zstd then compresses the data further. End to end: ~
+The body column stores the full raw HTML. Parquet with Zstd then compresses the data further. End to end: ~3.2 GB of raw gzipped WARCs becomes **2.6 GB of Parquet** — a **20.7% total reduction** — containing 78,579 web pages with full metadata.
 
 ### Processing Times
 
-Pipeline timings across
+Pipeline timings across 4 shards of CC-MAIN-2026-12:
 
 ```
-Download (raw WARC)              ████████████████████████
+Download (raw WARC)              ████████████████████████ 30m 7s
-Extract (WARC → HTML + metadata) ██████████████░░░░░░░
+Extract (WARC → HTML + metadata) █████████████████░░░░░░░ 22m 28s
-Publish (HuggingFace upload)     ████
+Publish (HuggingFace upload)     ████░░░░░░░░░░░░░░░░░░░░ 5m 47s
 ```
 
 ### Dataset Charts
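The README's compression figures can be cross-checked against the per-shard byte counts recorded in stats.csv. A minimal sketch, with the four (body_bytes, parquet_bytes) pairs copied from the stats.csv rows in this commit; dividing by 2^30 reproduces the README's "GB" figures, which appear to be binary gigabytes:

```python
# Per-shard (body_bytes, parquet_bytes) pairs, copied from stats.csv
# in this commit (shards 0-3 of CC-MAIN-2026-12).
SHARDS = [
    (3699291685, 714448407),
    (3723062373, 695149393),
    (3686392993, 686566085),
    (3541948209, 665588917),
]

body_total = sum(b for b, _ in SHARDS)
parquet_total = sum(p for _, p in SHARDS)
reduction = (1 - parquet_total / body_total) * 100

print(f"{body_total / 2**30:.1f} GB body")        # 13.6 GB
print(f"{parquet_total / 2**30:.1f} GB parquet")  # 2.6 GB
print(f"{reduction:.1f}% reduction")              # 81.1%, matching the table
```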
data/CC-MAIN-2026-12/00002.parquet
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:771d7daf5b50284394ee9a33f8f6094629b50aa0e427315db6bc10a7b5440c67
+size 686566085
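The 00002.parquet entry is committed as a Git LFS pointer rather than the file itself: the actual Parquet object lives in LFS storage, addressed by its sha256 oid. A minimal sketch of reading such a pointer (the format is simple `key value` lines per the LFS pointer spec):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into a key/value dict.

    Each non-empty line is "<key> <value>"; the size value is the
    byte length of the real object stored in LFS.
    """
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    fields["size"] = int(fields["size"])
    return fields

# The exact pointer committed for data/CC-MAIN-2026-12/00002.parquet.
POINTER = """\
version https://git-lfs.github.com/spec/v1
oid sha256:771d7daf5b50284394ee9a33f8f6094629b50aa0e427315db6bc10a7b5440c67
size 686566085
"""

info = parse_lfs_pointer(POINTER)
print(info["size"])  # 686566085 bytes, i.e. roughly 655 MiB for this shard
```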
stats.csv
CHANGED

@@ -1,4 +1,5 @@
 crawl_id,file_idx,rows,html_bytes,body_bytes,md_bytes,parquet_bytes,created_at,dur_download_s,dur_extract_s,dur_export_s,dur_publish_s,peak_rss_mb
 CC-MAIN-2026-12,0,19785,3699291685,3699291685,3699291685,714448407,2026-03-24T05:15:28Z,325,206,15,145,1167
 CC-MAIN-2026-12,1,19821,3723062373,3723062373,3723062373,695149393,2026-03-24T05:32:58Z,509,219,11,122,1240
-CC-MAIN-2026-12,
+CC-MAIN-2026-12,2,19664,3686392993,3686392993,3686392993,686566085,2026-03-24T09:00:28Z,576,576,31,0,1144
+CC-MAIN-2026-12,3,19309,3541948209,3541948209,3541948209,665588917,2026-03-24T08:52:08Z,397,266,24,80,1187