tamnd committed (verified)
Commit 1e6f320 · 1 Parent(s): 19123ab

Publish data/CC-MAIN-2026-08/00000.parquet

Files changed (2)
  1. README.md +20 -14
  2. data/CC-MAIN-2026-08/00000.parquet +2 -2
README.md CHANGED
@@ -40,13 +40,13 @@ Open Index is released under the **Open Data Commons Attribution License (ODC-By
 
 Each Common Crawl WARC file (~1 GB of compressed HTML) becomes one Parquet shard. The shards live under a crawl-specific directory so multiple snapshots can coexist:
 
-`
+```
 data/
   CC-MAIN-2026-08/
     00000.parquet
     00001.parquet
     ...
-`
+```
 
 Every row in a Parquet file is one web page. Along with the markdown body, we preserve the original WARC headers as a JSON column so you can always trace a document back to its source record.
 
@@ -54,13 +54,13 @@ Every row in a Parquet file is one web page. Along with the markdown body, we pr
 
 ### Using `datasets`
 
-`python
+```python
 from datasets import load_dataset
 
 # stream the entire dataset
 ds = load_dataset("open-index/draft", name="CC-MAIN-2026-08", split="train", streaming=True)
 for doc in ds:
-    print(doc["url"], len(doc["markdown_body"]))
+    print(doc["url"], len(doc["markdown"]))
 
 # load a single shard into memory
 ds = load_dataset(
@@ -68,11 +68,11 @@ ds = load_dataset(
     data_files="data/CC-MAIN-2026-08/00000.parquet",
     split="train",
 )
-`
+```
 
 ### Using `huggingface_hub`
 
-`python
+```python
 from huggingface_hub import snapshot_download
 
 folder = snapshot_download(
@@ -81,28 +81,34 @@ folder = snapshot_download(
     local_dir="./open-index/",
     allow_patterns="data/CC-MAIN-2026-08/*",
 )
-`
+```
 
 For faster downloads, install `pip install huggingface_hub[hf_transfer]` and set `HF_HUB_ENABLE_HF_TRANSFER=1`.
 
 ### Using DuckDB
 
-`sql
+```sql
 SELECT url, host, markdown_length
 FROM read_parquet('hf://datasets/open-index/draft/data/CC-MAIN-2026-08/*.parquet')
 WHERE host = 'en.wikipedia.org'
 LIMIT 10;
-`
+```
 
 # Dataset card for Open Index
 
+## Dataset Description
+
+- **Homepage and Repository:** [https://huggingface.co/datasets/open-index/draft](https://huggingface.co/datasets/open-index/draft)
+- **Point of Contact:** please create a discussion on the Community tab
+- **License:** Open Data Commons Attribution License (ODC-By) v1.0
+
 ## Dataset Structure
 
 ### Data Instance
 
 The following is an example row from the dataset:
 
-`json
+```json
 {
   "doc_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
   "url": "https://example.com/article/interesting-topic",
@@ -114,12 +120,12 @@ The following is an example row from the dataset:
   "content_type": "text/markdown",
   "html_length": 48210,
   "markdown_length": 3847,
-  "warc_headers_json": "{\"Content-Length\": \"3847\", \"Content-Type\": \"text/markdown\", ...}",
-  "markdown_body": "# Interesting Topic\n\nThis is the main content of the page...",
+  "warc_headers_json": "{\"Content-Length\": \"3847\", ...}",
+  "markdown": "# Interesting Topic\n\nThis is the main content of the page...",
   "source_warc_file": "00000.md.warc.gz",
   "source_file_index": 0
 }
-`
+```
 
 ### Data Fields
 
@@ -134,7 +140,7 @@ The following is an example row from the dataset:
 - `html_length` (int64): byte length of the original HTML body before conversion
 - `markdown_length` (int64): byte length of the converted markdown body
 - `warc_headers_json` (string): all WARC headers serialized as a JSON object with sorted keys, preserving every header from the packed record for full provenance
-- `markdown_body` (string): the cleaned markdown content extracted from the HTML page
+- `markdown` (string): the cleaned markdown content extracted from the HTML page
 - `source_warc_file` (string): filename of the packed .md.warc.gz shard this record came from
 - `source_file_index` (int32): zero-based index of the source file in the crawl manifest
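The README above states that every row keeps its WARC headers as a JSON string column for provenance. A minimal sketch of decoding that column and tracing a row back to its source record — using an illustrative row copied from the dataset card's example instance, so no download is needed:

```python
import json

# Illustrative row, shaped like the example instance in the dataset card.
row = {
    "url": "https://example.com/article/interesting-topic",
    "markdown": "# Interesting Topic\n\nThis is the main content of the page...",
    "warc_headers_json": "{\"Content-Length\": \"3847\", \"Content-Type\": \"text/markdown\"}",
    "source_warc_file": "00000.md.warc.gz",
    "source_file_index": 0,
}

# warc_headers_json is stored as a string, so decode it before use.
headers = json.loads(row["warc_headers_json"])

# Provenance: which packed shard the record came from, plus its headers.
print(row["source_warc_file"], row["source_file_index"], headers["Content-Type"])
```

Note that this commit renames the body column from `markdown_body` to `markdown`, so code written against earlier revisions needs that key updated.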
data/CC-MAIN-2026-08/00000.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:5ad96a87fe820a6af315e29e16e1f179778ef4bb5930a2ff4285046a7897813e
-size 33495070
+oid sha256:7854c258a10297d8d3b994a473fbba8bfd693db12422dc2e62c0e9d3d280fe74
+size 33495060
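The parquet shard is stored via Git LFS, so what the repository actually versions is the small pointer file diffed above; the real bytes live in LFS storage, addressed by the sha256 oid. A sketch of parsing such a pointer (values taken from this commit):

```python
# A Git LFS pointer file is plain "key value" lines, one per line.
pointer_text = """\
version https://git-lfs.github.com/spec/v1
oid sha256:7854c258a10297d8d3b994a473fbba8bfd693db12422dc2e62c0e9d3d280fe74
size 33495060
"""

# Split each line on the first space to get key/value pairs.
fields = dict(line.split(" ", 1) for line in pointer_text.splitlines())

oid = fields["oid"].removeprefix("sha256:")  # 64 hex chars identifying the blob
size = int(fields["size"])                   # byte size of the real parquet file
print(oid[:8], size)
```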