Update README for CC-MAIN-2026-12
README.md (CHANGED)
@@ -183,8 +183,8 @@ The following is an example row from the dataset:
 | `og_image` | string | Open Graph image URL from `<meta property="og:image">` |
 | `og_type` | string | Open Graph type from `<meta property="og:type">` (e.g., `article`, `website`) |
 | `canonical_url` | string | Canonical URL from `<link rel="canonical">` — the page's preferred URL |
-| `html_length` | int64 | Byte length of the
-| `body` | string | Raw HTML body
+| `html_length` | int64 | Byte length of the raw HTML body |
+| `body` | string | Raw HTML body (full content, no truncation) |

 ### Data Splits
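A quick sketch of how the two new columns relate (a hypothetical row, assuming `html_length` is the UTF-8 byte length of the `body` string, per the descriptions above):

```python
# Hypothetical row illustrating the two new columns. Assumption (from the
# descriptions above): `html_length` is the byte length of the raw `body`
# string, which differs from its character length for non-ASCII pages.
body = "<html><head><title>Café</title></head><body>…</body></html>"

html_length = len(body.encode("utf-8"))  # byte length, as stored in the column

# 'é' is 2 bytes in UTF-8 and '…' is 3, so bytes exceed characters by 3 here.
print(html_length - len(body))  # 3
```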
@@ -214,7 +214,7 @@ The processing pipeline runs as a single-pass extraction:
 3. **Parse** HTTP response headers to extract `content_type`, `charset`, `content_language`, `server`, and `last_modified`
 4. **Decompose** the URL into `host`, `domain` (eTLD+1 via the Public Suffix List), `path`, and `query`
 5. **Extract** HTML `<head>` metadata using a streaming tokenizer: `title`, `description`, Open Graph tags (`og:title`, `og:description`, `og:image`, `og:type`), `canonical_url`, and `html_lang`
-6. **
+6. **Store** the full HTML body (no truncation — `html_length` matches `body` size)
 7. **Export** directly to Apache Parquet with Zstd compression, 100,000 rows per row group

 No intermediate files are created — the pipeline streams from compressed WARC through extraction directly into Parquet. Pages that produce empty HTML bodies are dropped.
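Step 4 above can be sketched with the standard library alone; note that the `domain` (eTLD+1) field additionally requires a Public Suffix List lookup, which this sketch deliberately leaves out:

```python
# Sketch of step 4 (URL decomposition), not the project's actual code.
# host/path/query come straight from the standard library; eTLD+1 ("domain")
# needs the Public Suffix List and is omitted here.
from urllib.parse import urlsplit

def decompose(url: str) -> dict:
    parts = urlsplit(url)
    return {
        "host": parts.hostname or "",
        "path": parts.path or "/",
        "query": parts.query,
    }

print(decompose("https://news.example.co.uk/2026/03/post.html?id=7"))
# {'host': 'news.example.co.uk', 'path': '/2026/03/post.html', 'query': 'id=7'}
```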
@@ -227,10 +227,10 @@ Numbers below are actual measurements summed across all 1 files of CC-MAIN-2026-12
 |---|---|---|---|
 | Raw WARC (.warc.gz, downloaded) | ~830.0 MB | ~79.2 TB | — |
 | HTML extracted (uncompressed) | 2.2 GB | ~216.3 TB | — |
-| Body stored (
+| Body stored (full HTML) | 2.2 GB | ~214.2 TB | **-1.0%** vs HTML |
 | Final Parquet (Zstd) | 476.6 MB | ~45.5 TB | **-78.8%** vs body |

-The body column stores the raw HTML
+The body column stores the full raw HTML. Parquet with Zstd then compresses the data further. End to end: ~830.0 MB of raw gzipped WARCs becomes **476.6 MB of Parquet** — a **42.6% total reduction** — containing 19,785 web pages with full metadata.

 ### Processing Times
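The headline percentages can be re-derived from the absolute sizes in the table (a quick sanity check; it assumes the table's GB figures mean 1024 MB, which is what makes the quoted numbers line up):

```python
# Sanity-check the quoted reductions against the absolute sizes in the table.
# Assumption: 1 GB = 1024 MB, matching how the table's percentages round.
raw_warc_mb = 830.0           # Raw WARC (.warc.gz, downloaded)
body_stored_mb = 2.2 * 1024   # Body stored (full HTML), 2.2 GB
parquet_mb = 476.6            # Final Parquet (Zstd)

end_to_end = 1 - parquet_mb / raw_warc_mb
vs_body = 1 - parquet_mb / body_stored_mb

print(f"{end_to_end:.1%}")  # 42.6% — matches the "total reduction" figure
print(f"{vs_body:.1%}")     # 78.8% — matches the "-78.8% vs body" cell
```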
@@ -264,7 +264,7 @@ By releasing both the dataset and the full processing pipeline, we aim to lower
 ### Known Limitations

-The HTML body is
+The full HTML body is stored without truncation. Very large pages (e.g., pages with inline data URIs) will increase shard sizes. The `html_length` field reflects the exact body size in bytes.

 Metadata extraction scans only the `<head>` section for performance. Pages that place `<meta>` or `<title>` tags in the `<body>` will have missing metadata.