tamnd committed on
Commit 5e996e6 · verified · 1 Parent(s): c9c4f70

Publish data/CC-MAIN-2026-08/00000.parquet

Files changed (3)
  1. LICENSE +16 -10
  2. README.md +114 -15
  3. data/CC-MAIN-2026-08/00000.parquet +3 -0
LICENSE CHANGED
@@ -1,15 +1,21 @@
- Common Crawl License Notice
-
- This repository contains data derived from Common Crawl.
-
- Common Crawl makes its datasets publicly available subject to its Terms of Use:
- https://commoncrawl.org/terms-of-use
-
- Important:
-
- 1. Common Crawl is an archive of third-party web content.
- 2. The original content remains subject to the rights and terms of its respective publishers.
- 3. You are responsible for complying with applicable law, downstream licensing obligations,
-    robots restrictions, privacy requirements, and content removal requests.
-
- Refer to the Common Crawl Terms of Use for the governing terms for the crawl data itself.
+ Open Data Commons Attribution License (ODC-By) v1.0
+
+ This dataset is made available under the Open Data Commons Attribution License:
+ https://opendatacommons.org/licenses/by/1-0/
+
+ You are free to share, create, and adapt this data even for commercial purposes —
+ as long as you attribute the source.
+
+ Attribution requirements:
+ - Cite "Open Index, derived from Common Crawl (https://commoncrawl.org)"
+ - Include a link to this dataset when used in publications or products
+
+ Additional notices:
+
+ 1. This dataset contains data derived from Common Crawl, which archives third-party
+    web content. The original content remains subject to the rights of its respective
+    publishers and the Common Crawl Terms of Use: https://commoncrawl.org/terms-of-use
+
+ 2. You are responsible for complying with applicable law including downstream licensing
+    obligations, robots.txt restrictions, privacy requirements, and content removal
+    requests from original publishers.
README.md CHANGED
@@ -1,26 +1,125 @@
  ---
- license: other
- pretty_name: Open Index Draft
  ---

- # Open Index Draft

- This dataset contains markdown exports derived from Common Crawl shard CC-MAIN-2026-08.

- Layout:

- - data/*.parquet: one parquet file per packed markdown WARC shard
- - README.md: dataset description
- - LICENSE: Common Crawl licensing and usage notice

- Parquet schema:

- - doc_id, url, host, crawl_date
- - warc_type, warc_record_id, warc_refers_to
- - content_type, content_length, markdown_length
- - warc_headers_json: all WARC header metadata serialized as JSON
- - markdown_body: markdown body extracted from the packed WARC record

- Source:

  - Common Crawl: [https://commoncrawl.org](https://commoncrawl.org)
  ---
+ license: odc-by
+ pretty_name: Open Index
+ language:
+ - en
+ tags:
+ - common-crawl
+ - web-crawl
+ - markdown
+ - text
+ size_categories:
+ - 10B<n<100B
  ---

+ # Open Index

+ **Open Index** is a large-scale web text dataset derived from [Common Crawl](https://commoncrawl.org) with HTML converted to clean Markdown. Designed for language model training, information retrieval research, and web-scale NLP.

+ This snapshot is built from crawl **CC-MAIN-2026-08**.

+ ---
+
+ ## Dataset Summary
+
+ | Property | Value |
+ |---|---|
+ | Source | Common Crawl (CC-MAIN-2026-08) |
+ | Format | Apache Parquet (Zstd compressed) |
+ | Content | Markdown-converted web pages |
+ | License | [ODC-By 1.0](https://opendatacommons.org/licenses/by/1-0/) |
+
+ ---
+
+ ## Dataset Structure
+
+ Parquet files are organised by crawl ID:
+
+ ```
+ data/
+ └── CC-MAIN-2026-08/
+     ├── 00000.parquet
+     ├── 00001.parquet
+     └── ...
+ ```
+
+ Each file corresponds to one packed WARC shard (~1 GB source WARC).
+
+ ### Data Fields
+
+ | Field | Type | Description |
+ |---|---|---|
+ | `doc_id` | string | UUID derived from the WARC-Record-ID |
+ | `url` | string | Original URL of the crawled page |
+ | `host` | string | Lowercase hostname extracted from the URL |
+ | `crawl_date` | string | RFC3339 timestamp from the WARC record |
+ | `warc_type` | string | WARC record type (conversion, response, …) |
+ | `warc_record_id` | string | Original `<urn:uuid:…>` WARC record identifier |
+ | `warc_refers_to` | string | Record ID of the source response record |
+ | `content_type` | string | HTTP Content-Type of the original response |
+ | `html_length` | int64 | Byte length of the original HTML body |
+ | `markdown_length` | int64 | Byte length of the converted Markdown body |
+ | `warc_headers_json` | string | All WARC headers as stable-key JSON |
+ | `markdown_body` | string | Clean Markdown text converted from HTML |
+ | `source_warc_file` | string | Source packed .md.warc.gz shard filename |
+ | `source_file_index` | int32 | Index of the source file in the crawl manifest |
+
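Two of the derived fields above can be illustrated with a short sketch. This is editorial illustration only, not the pipeline's actual code: `derive_host` and `derive_doc_id` are hypothetical helper names, and the `doc_id` derivation (stripping the `<urn:uuid:…>` wrapper) is an assumption based on the field descriptions, not something this commit confirms.

```python
import re
from urllib.parse import urlsplit

def derive_host(url: str) -> str:
    # `host` field: lowercase hostname extracted from the URL.
    return (urlsplit(url).hostname or "").lower()

def derive_doc_id(warc_record_id: str) -> str:
    # `doc_id` field: one plausible reading of "UUID derived from the
    # WARC-Record-ID" -- the bare UUID inside the <urn:uuid:...> wrapper.
    m = re.search(r"urn:uuid:([0-9a-fA-F-]+)", warc_record_id)
    return m.group(1).lower() if m else warc_record_id

print(derive_host("https://En.Wikipedia.org/wiki/WARC"))
# -> en.wikipedia.org
print(derive_doc_id("<urn:uuid:0F96B9D0-1234-5678-9ABC-DEF012345678>"))
# -> 0f96b9d0-1234-5678-9abc-def012345678
```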
+ ---
+
+ ## Usage
+
+ ### Hugging Face datasets
+
+ ```python
+ from datasets import load_dataset
+
+ # Stream the full snapshot
+ ds = load_dataset("open-index/draft", split="train", streaming=True)
+ for doc in ds:
+     print(doc["url"], doc["markdown_body"][:200])
+
+ # Load a single shard
+ ds = load_dataset(
+     "open-index/draft",
+     data_files="data/CC-MAIN-2026-08/00000.parquet",
+     split="train",
+ )
+ ```
+
+ ### DuckDB
+
+ ```sql
+ SELECT url, host, markdown_length
+ FROM read_parquet('hf://datasets/open-index/draft/data/CC-MAIN-2026-08/*.parquet')
+ WHERE host LIKE '%wikipedia.org'
+ LIMIT 10;
+ ```
+
+ ### pandas
+
+ ```python
+ import pandas as pd
+
+ df = pd.read_parquet(
+     "hf://datasets/open-index/draft/data/CC-MAIN-2026-08/00000.parquet",
+     columns=["url", "host", "crawl_date", "markdown_body"],
+ )
+ ```
+
+ ---
+
+ ## Data Processing Pipeline
+
+ 1. **Download** — Raw `.warc.gz` files from Common Crawl S3.
+ 2. **Filter** — HTTP 200 responses with `text/html` content only.
+ 3. **Convert** — HTML → Markdown via [trafilatura](https://github.com/adbar/trafilatura) (removes boilerplate, navigation, ads).
+ 4. **Pack** — Seekable `.md.warc.gz` files (one gzip member per record, CC-compatible format).
+ 5. **Export** — Parquet with Zstd compression, 100K rows per row group.
+
+ ---
+
+ ## Source & License
+
  - Common Crawl: [https://commoncrawl.org](https://commoncrawl.org)
+ - Terms of Use: [https://commoncrawl.org/terms-of-use](https://commoncrawl.org/terms-of-use)
+ - This dataset is released under the [Open Data Commons Attribution License (ODC-By) v1.0](https://opendatacommons.org/licenses/by/1-0/)
data/CC-MAIN-2026-08/00000.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f4b1f55ef10025539e07eb92ae726a39073a80ac09c7d25d7ae9c6ec8abfaaa8
+ size 33308582
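The `+` lines above are a Git LFS pointer (the Parquet bytes themselves live in LFS storage, not in git). The pointer's key-value format can be read with a few lines of Python; a sketch, with the pointer text copied from this commit:

```python
def parse_lfs_pointer(text: str) -> dict[str, str]:
    # Git LFS pointer files are "key value" lines, one field per line.
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:f4b1f55ef10025539e07eb92ae726a39073a80ac09c7d25d7ae9c6ec8abfaaa8
size 33308582
"""
info = parse_lfs_pointer(pointer)
print(info["oid"])
print(int(info["size"]))  # 33308582
```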