tamnd committed · verified
Commit 8c15773 · Parent(s): 7ae9a14

Update README for CC-MAIN-2026-12

Files changed (1):
  1. README.md +6 -6
README.md CHANGED
@@ -183,8 +183,8 @@ The following is an example row from the dataset:
  | `og_image` | string | Open Graph image URL from `<meta property="og:image">` |
  | `og_type` | string | Open Graph type from `<meta property="og:type">` (e.g., `article`, `website`) |
  | `canonical_url` | string | Canonical URL from `<link rel="canonical">` — the page's preferred URL |
- | `html_length` | int64 | Byte length of the original HTML body before truncation |
- | `body` | string | Raw HTML body, truncated at 256 KB per record |
+ | `html_length` | int64 | Byte length of the raw HTML body |
+ | `body` | string | Raw HTML body (full content, no truncation) |

  ### Data Splits
 
@@ -214,7 +214,7 @@ The processing pipeline runs as a single-pass extraction:
  3. **Parse** HTTP response headers to extract `content_type`, `charset`, `content_language`, `server`, and `last_modified`
  4. **Decompose** the URL into `host`, `domain` (eTLD+1 via the Public Suffix List), `path`, and `query`
  5. **Extract** HTML `<head>` metadata using a streaming tokenizer: `title`, `description`, Open Graph tags (`og:title`, `og:description`, `og:image`, `og:type`), `canonical_url`, and `html_lang`
- 6. **Truncate** the HTML body at 256 KB (the full `html_length` is preserved for reference)
+ 6. **Store** the full HTML body without truncation (`html_length` matches the `body` size)
  7. **Export** directly to Apache Parquet with Zstd compression, 100,000 rows per row group

  No intermediate files are created — the pipeline streams from compressed WARC through extraction directly into Parquet. Pages that produce empty HTML bodies are dropped.
@@ -227,10 +227,10 @@ Numbers below are actual measurements summed across all 1 files of CC-MAIN-2026-
  |---|---|---|---|
  | Raw WARC (.warc.gz, downloaded) | ~830.0 MB | ~79.2 TB | — |
  | HTML extracted (uncompressed) | 2.2 GB | ~216.3 TB | — |
- | Body stored (truncated at 256 KB) | 2.2 GB | ~214.2 TB | **-1.0%** vs HTML |
+ | Body stored (full HTML) | 2.2 GB | ~214.2 TB | **-1.0%** vs HTML |
  | Final Parquet (Zstd) | 476.6 MB | ~45.5 TB | **-78.8%** vs body |

- The body column stores the raw HTML truncated at 256 KB. Parquet with Zstd then compresses the data further. End to end: ~830.0 MB of raw gzipped WARCs becomes **476.6 MB of Parquet** — a **42.6% total reduction** — containing 19,785 web pages with full metadata.
+ The body column stores the full raw HTML. Parquet with Zstd then compresses the data further. End to end: ~830.0 MB of raw gzipped WARCs becomes **476.6 MB of Parquet** — a **42.6% total reduction** — containing 19,785 web pages with full metadata.

  ### Processing Times
 
@@ -264,7 +264,7 @@ By releasing both the dataset and the full processing pipeline, we aim to lower

  ### Known Limitations

- The HTML body is truncated at 256 KB per record. Very long pages (e.g., full-text articles with inline images as data URIs) may be incomplete. The `html_length` field always reflects the true size. If you need complete HTML for specific pages, use the `warc_record_id` and `warc_filename` to retrieve the original from Common Crawl.
+ The full HTML body is stored without truncation. Very large pages (e.g., pages with inline data URIs) will increase shard sizes. The `html_length` field reflects the exact body size in bytes.

  Metadata extraction scans only the `<head>` section for performance. Pages that place `<meta>` or `<title>` tags in the `<body>` will have missing metadata.
 
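The URL decomposition in step 4 of the pipeline (eTLD+1 via the Public Suffix List) can be sketched as follows. This is an illustrative stand-in, not the pipeline's actual code: the real pipeline consults the full Public Suffix List, while the two-entry `SUFFIXES` table and the `etld_plus_one` helper below exist only for the example.

```python
# Hedged sketch of step 4: split a URL into host, domain (eTLD+1), path, query.
# SUFFIXES is a tiny stand-in for the real Public Suffix List.
from urllib.parse import urlparse

SUFFIXES = {"co.uk", "com"}  # illustrative subset of the PSL

def etld_plus_one(host: str) -> str:
    """Return the registrable domain: longest public suffix plus one label."""
    labels = host.split(".")
    for i in range(len(labels)):
        if ".".join(labels[i:]) in SUFFIXES:
            return ".".join(labels[max(i - 1, 0):])
    return host  # no known suffix: fall back to the full host

parts = urlparse("https://news.example.co.uk/tech/article?id=42")
host = parts.hostname                  # news.example.co.uk
domain = etld_plus_one(host)           # example.co.uk (eTLD+1)
path, query = parts.path, parts.query  # /tech/article, id=42
```

A production implementation would use a PSL-backed library (e.g. `tldextract`) rather than a hand-rolled suffix table, since the real list contains thousands of multi-label suffixes.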
 
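The headline figure in the updated storage table is internally consistent; a quick arithmetic check of the end-to-end reduction quoted in the README:

```python
# Sanity check: ~830.0 MB of raw gzipped WARCs -> 476.6 MB of Zstd Parquet
# should be the "42.6% total reduction" the README claims.
raw_warc_mb = 830.0       # raw .warc.gz input
final_parquet_mb = 476.6  # final Parquet (Zstd) output

total_reduction = (1 - final_parquet_mb / raw_warc_mb) * 100
print(f"total reduction: {total_reduction:.1f}%")  # total reduction: 42.6%
```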