---
license: odc-by
task_categories:
- text-generation
- feature-extraction
language:
- en
pretty_name: Open Index
size_categories:
- 1M<n<10M
tags:
- common-crawl
- web-crawl
- markdown
- text
configs:
- config_name: default
  data_files:
  - split: train
    path: data/*/*
- config_name: CC-MAIN-2026-08
  data_files:
  - split: train
    path: data/CC-MAIN-2026-08/*
---

# Open Index

> Clean markdown from the web, ready for training and retrieval

## What is it?

Open Index is a large-scale web text dataset built from [Common Crawl](https://commoncrawl.org). Every page goes through a pipeline that extracts the main content from raw HTML, converts it to clean Markdown using [trafilatura](https://github.com/adbar/trafilatura), and packages the result into Parquet files with full WARC metadata preserved.

The dataset currently includes crawl **CC-MAIN-2026-08** with **55,486,267 documents across 2,863 shards**. We plan to add more snapshots over time.

Open Index is released under the **Open Data Commons Attribution License (ODC-By) v1.0**, the same license used by Common Crawl.

## What is being released?

Each Common Crawl WARC file (~1 GB of compressed HTML) becomes one Parquet shard. The shards live under a crawl-specific directory so multiple snapshots can coexist:

```
data/
  CC-MAIN-2026-08/
    00000.parquet
    00001.parquet
    ...
```

Every row in a Parquet file is one web page. Along with the markdown body, we preserve the key WARC headers (record ID, refers-to ID, crawl date) as dedicated columns, so you can always trace a document back to its source record.

## How to download and use Open Index

### Using `datasets`

```python
from datasets import load_dataset

# stream the entire dataset
ds = load_dataset("open-index/draft", name="CC-MAIN-2026-08", split="train", streaming=True)
for doc in ds:
    print(doc["url"], len(doc["markdown"]))

# load a single shard into memory
ds = load_dataset(
    "open-index/draft",
    data_files="data/CC-MAIN-2026-08/00000.parquet",
    split="train",
)
```

### Using `huggingface_hub`

```python
from huggingface_hub import snapshot_download

folder = snapshot_download(
    "open-index/draft",
    repo_type="dataset",
    local_dir="./open-index/",
    allow_patterns="data/CC-MAIN-2026-08/*",
)
```

For faster downloads, install the transfer extra (`pip install huggingface_hub[hf_transfer]`) and set `HF_HUB_ENABLE_HF_TRANSFER=1`.

### Using DuckDB

```sql
SELECT url, host, markdown_length
FROM read_parquet('hf://datasets/open-index/draft/data/CC-MAIN-2026-08/*.parquet')
WHERE host = 'en.wikipedia.org'
LIMIT 10;
```

# Dataset card for Open Index

## Dataset Description

- **Homepage and Repository:** [https://huggingface.co/datasets/open-index/draft](https://huggingface.co/datasets/open-index/draft)
- **Point of Contact:** please create a discussion on the Community tab
- **License:** Open Data Commons Attribution License (ODC-By) v1.0

## Dataset Structure

### Data Instance

The following is an example row from the dataset:

```json
{
  "doc_id": "6aaa5be7-a917-5105-aa60-e39ea1d087fc",
  "url": "https://example.com/article/interesting-topic",
  "host": "example.com",
  "crawl_date": "2026-02-06T18:14:58Z",
  "warc_record_id": "<urn:uuid:a1b2c3d4-e5f6-7890-abcd-ef1234567890>",
  "warc_refers_to": "<urn:uuid:f9e8d7c6-b5a4-3210-fedc-ba0987654321>",
  "html_length": 48210,
  "markdown_length": 3847,
  "markdown": "# Interesting Topic\n\nThis is the main content of the page..."
}
```

### Data Fields

| Column | Type | Description |
|---|---|---|
| `doc_id` | string | Deterministic UUID v5 derived from the canonical URL: `doc_id = UUID5(NamespaceURL, url)` — identical URLs always produce the same `doc_id` across crawls |
| `url` | string | Original URL of the crawled page |
| `host` | string | Lowercase hostname extracted from the URL |
| `crawl_date` | string | RFC 3339 timestamp from the WARC record |
| `warc_record_id` | string | Full WARC-Record-ID of this conversion record (`<urn:uuid:...>`) |
| `warc_refers_to` | string | WARC-Record-ID of the original HTTP response this was converted from |
| `html_length` | int64 | Byte length of the original HTML body before conversion |
| `markdown_length` | int64 | Byte length of the converted markdown body |
| `markdown` | string | Clean markdown content extracted from the page |
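
The `doc_id` derivation described in the table can be reproduced with Python's standard `uuid` module. This is a sketch based on the stated formula `UUID5(NamespaceURL, url)`; any URL canonicalization the pipeline applies before hashing is not shown here:

```python
import uuid

def doc_id(url: str) -> str:
    # UUID v5 in the standard URL namespace: a deterministic hash,
    # so identical URLs always map to the same doc_id across crawls
    return str(uuid.uuid5(uuid.NAMESPACE_URL, url))

# Same URL, same ID; different URLs, different IDs
assert doc_id("https://example.com/a") == doc_id("https://example.com/a")
assert doc_id("https://example.com/a") != doc_id("https://example.com/b")
```

Because the ID is content-independent, re-crawled versions of the same URL can be deduplicated or joined across snapshots by `doc_id` alone.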

### Data Splits

The default subset includes all available data across all crawl snapshots. You can also load a specific crawl by using its ID as the config name (e.g. `CC-MAIN-2026-08`).

## Dataset Creation

### Curation Rationale

Most open web datasets either release raw text without structure or keep the HTML and leave parsing to the user. Open Index sits in between: it converts every page to Markdown so the content is immediately usable for training, while preserving the full WARC headers so you can always go back to the source if you need to.

### Source Data

The source data consists of web pages crawled by the [Common Crawl](https://commoncrawl.org) foundation. Common Crawl archives billions of pages across the public web and makes the raw WARC files freely available on Amazon S3.

### Data Processing Steps

The processing pipeline runs in five stages:

1. **Download** raw .warc.gz files from Common Crawl S3 (each file is roughly 1 GB compressed)
2. **Filter** to keep only HTTP 200 responses with a text/html content type, discarding images, scripts, redirects, and error pages
3. **Convert** HTML to Markdown using [trafilatura](https://github.com/adbar/trafilatura), which extracts the main content and strips boilerplate, navigation, sidebars, footers, and ads
4. **Pack** converted records into seekable .md.warc.gz files where each record is wrapped in its own gzip member, matching Common Crawl's concatenated-gzip format
5. **Export** each shard to Apache Parquet with Zstd compression, 100,000 rows per row group, and an 8 MB page buffer

Empty conversions (pages where trafilatura could not extract meaningful content) are dropped.
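
The filter and convert stages (steps 2–3) can be sketched as follows. This is a minimal illustration assuming [warcio](https://github.com/webrecorder/warcio) for WARC parsing; the actual pipeline's code and trafilatura settings may differ:

```python
def is_convertible(status: str, content_type: str) -> bool:
    """Step 2: keep only HTTP 200 responses with an HTML content type."""
    return status == "200" and "text/html" in (content_type or "")

def convert_warc(path):
    """Yield (url, markdown) pairs for each usable page in a WARC file."""
    # Third-party dependencies, imported here so the filter above
    # stays usable without them installed
    import trafilatura
    from warcio.archiveiterator import ArchiveIterator

    with open(path, "rb") as stream:
        for record in ArchiveIterator(stream):
            if record.rec_type != "response":
                continue
            http = record.http_headers
            if http is None or not is_convertible(
                http.get_statuscode(), http.get_header("Content-Type")
            ):
                continue
            html = record.content_stream().read()
            # Step 3: extract the main content, stripping boilerplate;
            # trafilatura returns None when nothing meaningful is found
            markdown = trafilatura.extract(html, output_format="markdown")
            if markdown:  # empty conversions are dropped
                yield record.rec_headers.get_header("WARC-Target-URI"), markdown
```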

### Compression Ratios

Numbers below are actual measurements summed across all 2,863 files of CC-MAIN-2026-08 (55,486,267 pages total), projected to the full crawl of 100,000 WARC files.

| Stage | 2,863 files (measured) | 100,000 files (projected) | Reduction |
|---|---|---|---|
| Raw WARC (.warc.gz, downloaded) | ~2.3 TB | ~83 TB | — |
| HTML extracted (uncompressed) | 6.7 TB | ~295 TB | — |
| Packed markdown WARC (.md.warc.gz) | ~117.3 GB | ~3.7 TB | **-98.3%** vs HTML |
| Final Parquet (Zstd level 19) | 79.1 GB | ~2.9 TB | **-32.5%** vs packed WARC |

The big win is the HTML → Markdown step: trafilatura strips all tags, scripts, styles, navigation, and ads, keeping only the main content. This cuts 6.7 TB of uncompressed HTML down to 249.7 GB of markdown (a **96.3% reduction**) before any file-level compression is applied. Parquet with Zstd level 19 then compresses the markdown by a further 68.3%.

End to end: ~2.3 TB of raw gzipped WARCs becomes **79.1 GB of Parquet** — a **96.6% total reduction** — containing 55,486,267 clean markdown documents.
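
As a sanity check, the headline percentages follow directly from the measured sizes (taking 1 TB = 1,000 GB):

```python
# Measured totals for the 2,863 shards of CC-MAIN-2026-08
html_gb = 6.7 * 1000      # uncompressed HTML
markdown_gb = 249.7       # uncompressed markdown
parquet_gb = 79.1         # final Parquet (Zstd level 19)
raw_warc_gb = 2.3 * 1000  # downloaded .warc.gz

def reduction(before: float, after: float) -> float:
    """Percentage size reduction going from `before` to `after`."""
    return (1 - after / before) * 100

print(f"HTML -> markdown:    {reduction(html_gb, markdown_gb):.1f}%")    # 96.3%
print(f"markdown -> Parquet: {reduction(markdown_gb, parquet_gb):.1f}%") # 68.3%
print(f"raw WARC -> Parquet: {reduction(raw_warc_gb, parquet_gb):.1f}%") # 96.6%
```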

### Processing Times

Pipeline timings across 2,863 shards of CC-MAIN-2026-08:

```
Download (raw WARC)        ████████████████████████  total 258h 46m 22s  avg 5m 25s
Convert  (HTML → MD)       █░░░░░░░░░░░░░░░░░░░░░░░  total 13h 28m 50s   avg 16s
Export   (Parquet)         ██████░░░░░░░░░░░░░░░░░░  total 66h 59m 28s   avg 1m 24s
Publish  (HuggingFace)     █░░░░░░░░░░░░░░░░░░░░░░░  total 20h 29m 38s   avg 25s
```

### Dataset Charts

![Total size: HTML vs Markdown vs Parquet](charts/totals_chart.png)

![Pipeline stage durations](charts/timing_chart.png)

### Personal and Sensitive Information

No additional PII filtering is applied beyond what Common Crawl provides. As the dataset is sourced from the public web, it is likely that some personally identifiable information is present. If you find your own PII in the dataset and would like it removed, please open an issue on the repository.

## Considerations for Using the Data

### Social Impact

By releasing both the dataset and the full processing pipeline, we aim to lower the barrier to training and evaluating language models on high quality web data. Researchers and practitioners who cannot afford to run their own Common Crawl processing pipelines can use Open Index directly.

### Discussion of Biases

Open Index inherits the biases present in Common Crawl and the public web at large. The trafilatura extraction step favors article-like pages and may underrepresent content from forums, social media, and non-standard page layouts. We have not applied any machine-learning-based quality or toxicity filters, as such filters have been shown to disproportionately remove content from certain dialects and communities.

### Known Limitations

Code-heavy pages may not convert well to Markdown. If you are training a model that needs strong code performance, consider supplementing Open Index with a dedicated code dataset such as [The Stack v2](https://huggingface.co/datasets/bigcode/the-stack-v2). Similarly, highly structured pages like Wikipedia may have better formatting in dedicated Wikipedia dumps than in their Common Crawl versions.

## Additional Information

### Licensing

The dataset is released under the **Open Data Commons Attribution License (ODC-By) v1.0**. The use of this dataset is also subject to [Common Crawl's Terms of Use](https://commoncrawl.org/terms-of-use). The original content remains subject to the rights and terms of its respective publishers.

### Contact

Please open a discussion on the [Community tab](https://huggingface.co/datasets/open-index/draft/discussions) for questions, feedback, or issues.