tamnd committed
Commit e44cb0d · verified · 1 Parent(s): d21ba38

add comments/2006-04 2006/04 (1 shards, 19.1K rows)

Files changed (4)
  1. README.md +371 -68
  2. data/comments/2006/04/000.parquet +3 -0
  3. states.json +10 -10
  4. stats.csv +2 -1
README.md CHANGED
@@ -16,113 +16,416 @@ tags:
16
  - social-media
17
  - arctic-shift
18
  - pushshift
19
  pretty_name: Arctic Shift Reddit Archive
20
  size_categories:
21
- - 100B<n<1T
22
  ---
23
 
24
  # Arctic Shift Reddit Archive
25
 
26
- Full Reddit dataset (comments + submissions) sourced from the
27
- [Arctic Shift](https://github.com/ArthurHeitmann/arctic_shift) project,
28
- covering all subreddits from 2005-06 through **2006-03**.
29
 
30
- Data is organized as monthly parquet shards by type, making it easy to load
31
- specific time ranges or work with comments and submissions independently.
32
 
33
- ## Quick Start
34
 
35
  ```python
36
  from datasets import load_dataset
37
 
38
- # Stream all comments (recommended dataset is very large)
39
- comments = load_dataset("open-index/arctic", "comments", streaming=True)
40
- for item in comments["train"]:
41
- print(item["author"], item["body"][:80])
42
 
43
  # Load submissions for a specific year
44
- subs = load_dataset("open-index/arctic", "submissions",
45
- data_files="data/submissions/2020/**/*.parquet")
46
  ```
47
 
48
- ## Dataset Stats
49
 
50
- | Type | Months | Rows | Parquet Size |
51
- |-------------|--------|------|--------------|
52
- | comments | 4 | 27.7K | 2.9 MB |
53
  | submissions | 4 | 35.4K | 2.0 MB |
54
 
55
- *Updated: 2026-03-15*
56
 
57
 
58
- ## 🔄 Live Session
59
 
60
- > Auto-updated every ~5 minutes by the running ingestion job. Last update: 2026-03-15 01:09 UTC
61
 
62
- **Started:** 2026-03-15 01:07 UTC · **Elapsed:** 2m · **Committed this session:** 2
63
 
64
  | | |
65
  |:---|:---|
66
  | Phase | committing |
67
- | Month | **2006-03** — submissions |
68
- | Progress | committing to HuggingFace… |
69
-
70
 
71
- **Overall:** `░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░` 7 / 488 (1.4%)
72
 
73
  | Metric | This Session |
74
- |--------|-------------|
75
- | Months committed | 2 |
76
- | Total rows | 23.4K |
77
- | Total size | 1.9 MB |
78
 
79
- ## Growth (rows per year, comments + submissions combined)
80
81
  ```
82
- 2005 ████ 6.4K
83
- 2006 ████████████████████████████████████████ 56.7K
84
  ```
85
 
86
- ## Schema
87
 
88
- ### Comments
89
 
90
  | Column | Type | Description |
91
  |--------|------|-------------|
92
- | id | VARCHAR | Comment ID |
93
- | author | VARCHAR | Username |
94
- | subreddit | VARCHAR | Subreddit name |
95
- | body | VARCHAR | Comment text |
96
- | score | BIGINT | Net upvotes |
97
- | created_utc | BIGINT | Unix timestamp |
98
- | created_at | TIMESTAMP | Derived from created_utc |
99
- | body_length | BIGINT | Character count of body |
100
- | link_id | VARCHAR | Parent submission ID |
101
- | parent_id | VARCHAR | Parent comment or submission ID |
102
- | distinguished | VARCHAR | mod/admin/null |
103
- | author_flair_text | VARCHAR | Author flair |
104
-
105
- ### Submissions
106
 
107
  | Column | Type | Description |
108
  |--------|------|-------------|
109
- | id | VARCHAR | Submission ID |
110
- | author | VARCHAR | Username |
111
- | subreddit | VARCHAR | Subreddit name |
112
- | title | VARCHAR | Post title |
113
- | selftext | VARCHAR | Post body (self posts) |
114
- | score | BIGINT | Net upvotes |
115
- | created_utc | BIGINT | Unix timestamp |
116
- | created_at | TIMESTAMP | Derived from created_utc |
117
- | title_length | BIGINT | Character count of title |
118
- | num_comments | BIGINT | Comment count |
119
- | url | VARCHAR | External URL or permalink |
120
- | over_18 | BOOLEAN | NSFW flag |
121
- | link_flair_text | VARCHAR | Post flair |
122
- | author_flair_text | VARCHAR | Author flair |
123
-
124
- ## Source & License
125
-
126
- Repackaged from [Arctic Shift](https://github.com/ArthurHeitmann/arctic_shift) monthly dumps,
127
- which re-process the [PushShift](https://pushshift.io) Reddit archive.
128
- Original content by Reddit users.
16
  - social-media
17
  - arctic-shift
18
  - pushshift
19
+ - comments
20
+ - submissions
21
+ - parquet
22
+ - community
23
  pretty_name: Arctic Shift Reddit Archive
24
  size_categories:
25
+ - 1B<n<10B
26
+ task_categories:
27
+ - text-generation
28
+ - text-classification
29
+ - feature-extraction
30
  ---
31
 
32
  # Arctic Shift Reddit Archive
33
 
34
+ > Every Reddit comment and submission since 2005, organized as monthly Parquet shards
 
 
35
 
36
+ ## Table of Contents
 
37
 
38
+ - [What is it?](#what-is-it)
39
+ - [What is being released?](#what-is-being-released)
40
+ - [Breakdown by year](#breakdown-by-year)
41
+ - [How to download and use this dataset](#how-to-download-and-use-this-dataset)
42
+ - [Dataset statistics](#dataset-statistics)
43
+ - [Pipeline status](#-pipeline-status)
44
+ - [Dataset card](#dataset-card-for-arctic-shift-reddit-archive)
45
+ - [Dataset summary](#dataset-summary)
46
+ - [Dataset structure](#dataset-structure)
47
+ - [Dataset creation](#dataset-creation)
48
+ - [Considerations for using the data](#considerations-for-using-the-data)
49
+ - [Additional information](#additional-information)
50
+
51
+ ## What is it?
52
+
53
+ This dataset contains the complete [Reddit](https://www.reddit.com) archive of comments and submissions, sourced from the [Arctic Shift](https://github.com/ArthurHeitmann/arctic_shift) project which re-processes the historical [PushShift](https://pushshift.io) Reddit dumps. It covers **every public subreddit** from the earliest available data in **2005-12** through **2006-04**.
54
+
55
+ The archive currently contains **82.2K items** (46.8K comments + 35.4K submissions) totaling **7.0 MB** of compressed Parquet data. The data is organized as two independent datasets — `comments` and `submissions` — each split into monthly shards that can be loaded independently or streamed together.
56
+
57
+ Reddit is one of the largest and most diverse online communities, with millions of users discussing everything from programming and science to cooking and local news. This makes it a valuable resource for language model training, sentiment analysis, community dynamics research, and information retrieval. Unlike many Reddit datasets that focus on specific subreddits or time periods, this archive aims to be comprehensive: all subreddits, all months, all public content.
58
+
59
+ ## What is being released?
60
+
61
+ The dataset is organized as monthly Parquet files by type (comments or submissions), with each month split into one or more shards. Early months (pre-2010) typically fit in a single shard; recent months with millions of posts produce multiple shards of ~200 MB each.
62
+
63
+ ```
64
+ data/
65
+   comments/
66
+     2005/12/000.parquet      earliest month with data
67
+     2006/01/000.parquet
68
+     ...
69
+     2023/06/000.parquet
70
+             001.parquet      large months have multiple shards
71
+             002.parquet
72
+   submissions/
73
+     2005/12/000.parquet
74
+     2006/01/000.parquet
75
+     ...
76
+ stats.csv                    one row per committed (month, type) pair
77
+ states.json                  live pipeline state (updated every ~5 min)
78
+ ```
79
+
80
+ Along with the Parquet files, we include `stats.csv` which tracks every committed (month, type) pair with its row count, shard count, file size, processing duration, and commit timestamp. This makes it easy to verify completeness and track ingestion progress.
81
+
82
+ ## Breakdown by year
83
+
84
+ The chart below shows the total number of items (comments + submissions combined) committed per year.
85
+
86
+ ```
87
+ 2005 ██░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 6.4K
88
+ 2006 ██████████████████████████████ 75.8K
89
+ ```
90
+
91
+ ## How to download and use this dataset
92
+
93
+ You can load comments or submissions independently, filter by year or month, or stream the entire archive. The dataset uses the standard Hugging Face Parquet layout, so it works out of the box with DuckDB, the `datasets` library, `pandas`, and `huggingface_hub`.
94
+
95
+ ### Using DuckDB
96
+
97
+ DuckDB can read Parquet files directly from Hugging Face without downloading anything first. This is the fastest way to explore the data:
98
+
99
+ ```sql
100
+ -- Top 20 subreddits by comment volume (all time)
101
+ SELECT subreddit, count(*) AS comments
102
+ FROM read_parquet('hf://datasets/open-index/arctic/data/comments/**/*.parquet')
103
+ GROUP BY subreddit
104
+ ORDER BY comments DESC
105
+ LIMIT 20;
106
+ ```
107
+
108
+ ```sql
109
+ -- Monthly submission volume for a specific year
110
+ SELECT
111
+     strftime(created_at, '%Y-%m') AS month,
112
+     count(*) AS submissions,
113
+     sum(num_comments) AS total_comments
114
+ FROM read_parquet('hf://datasets/open-index/arctic/data/submissions/2023/**/*.parquet')
115
+ GROUP BY month
116
+ ORDER BY month;
117
+ ```
118
+
119
+ ```sql
120
+ -- Most active authors across all comments
121
+ SELECT author, count(*) AS comments, avg(score) AS avg_score
122
+ FROM read_parquet('hf://datasets/open-index/arctic/data/comments/**/*.parquet')
123
+ WHERE author != '[deleted]'
124
+ GROUP BY author
125
+ ORDER BY comments DESC
126
+ LIMIT 20;
127
+ ```
128
+
129
+ ```sql
130
+ -- Average comment length by year — how has Reddit writing changed?
131
+ SELECT
132
+     extract(year FROM created_at) AS year,
133
+     avg(body_length) AS avg_length,
134
+     count(*) AS comments
135
+ FROM read_parquet('hf://datasets/open-index/arctic/data/comments/**/*.parquet')
136
+ GROUP BY year
137
+ ORDER BY year;
138
+ ```
139
+
140
+ ```sql
141
+ -- Top linked domains in submissions
142
+ SELECT
143
+     regexp_extract(url, 'https?://([^/]+)', 1) AS domain,
144
+     count(*) AS posts
145
+ FROM read_parquet('hf://datasets/open-index/arctic/data/submissions/**/*.parquet')
146
+ WHERE url IS NOT NULL AND url != ''
147
+ GROUP BY domain
148
+ ORDER BY posts DESC
149
+ LIMIT 20;
150
+ ```
151
+
152
+ ### Using `datasets`
153
 
154
  ```python
155
  from datasets import load_dataset
156
 
157
+ # Stream all comments without downloading everything
158
+ comments = load_dataset("open-index/arctic", "comments", split="train", streaming=True)
159
+ for item in comments:
160
+     print(item["author"], item["subreddit"], item["body"][:80])
161
 
162
  # Load submissions for a specific year
163
+ subs = load_dataset(
164
+     "open-index/arctic", "submissions",
165
+     data_files="data/submissions/2023/**/*.parquet",
166
+     split="train",
167
+ )
168
+ print(f"{len(subs):,} submissions in 2023")
169
+ ```
170
+
171
+ ### Using `huggingface_hub`
172
+
173
+ ```python
174
+ from huggingface_hub import snapshot_download
175
+
176
+ # Download only 2023 comments
177
+ snapshot_download(
178
+     "open-index/arctic",
179
+     repo_type="dataset",
180
+     local_dir="./arctic/",
181
+     allow_patterns="data/comments/2023/**/*",
182
+ )
183
+ ```
184
+
185
+ For faster downloads, install the optional transfer backend with `pip install "huggingface_hub[hf_transfer]"` and set `HF_HUB_ENABLE_HF_TRANSFER=1`.
186
+
187
+ ### Using the CLI
188
+
189
+ ```bash
190
+ # Download a single month of submissions
191
+ huggingface-cli download open-index/arctic \
192
+     --include "data/submissions/2024/01/*" \
193
+     --repo-type dataset --local-dir ./arctic/
194
  ```
195
 
196
+ ## Dataset statistics
197
 
198
+ | Type | Months | Rows | Parquet Size |
199
+ |------|-------:|-----:|-------------:|
200
+ | comments | 5 | 46.8K | 5.0 MB |
201
  | submissions | 4 | 35.4K | 2.0 MB |
202
+ | **Total** | **5** | **82.2K** | **7.0 MB** |
203
+
204
+ You can query the per-month statistics directly from the `stats.csv` file:
205
+
206
+ ```sql
207
+ SELECT year, month, type, shards, count, size_bytes
208
+ FROM read_csv_auto('hf://datasets/open-index/arctic/stats.csv')
209
+ ORDER BY year, month, type;
210
+ ```
211
 
212
+ The `stats.csv` file tracks each committed (month, type) pair with the following columns:
213
+
214
+ | Column | Description |
215
+ |--------|-------------|
216
+ | `year`, `month` | Calendar month |
217
+ | `type` | `comments` or `submissions` |
218
+ | `shards` | Number of Parquet files for this (month, type) |
219
+ | `count` | Total number of rows across all shards |
220
+ | `size_bytes` | Total Parquet size across all shards |
221
+ | `dur_download_s` | Seconds to download the source .zst file |
222
+ | `dur_process_s` | Seconds to decompress and convert to Parquet |
223
+ | `dur_commit_s` | Seconds to commit to Hugging Face |
224
+ | `committed_at` | ISO 8601 timestamp of when this pair was committed |
225
 
226
 
227
+ ## 🔄 Pipeline Status
228
 
229
+ > The ingestion pipeline is actively running. This section auto-updates every ~5 minutes.
230
 
231
+ **Started:** 2026-03-15 01:26 UTC · **Elapsed:** 1m · **Committed this session:** 0
232
 
233
  | | |
234
  |:---|:---|
235
  | Phase | committing |
236
+ | Month | **2006-04** — comments |
237
+ | Progress | committing to Hugging Face… |
 
238
 
239
+ `░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░` 8 / 488 (1.6%)
240
 
241
  | Metric | This Session |
242
+ |--------|-------------:|
243
+ | Months committed | 0 |
244
+ | Rows processed | 0 |
245
+ | Data committed | 0 B |
246
+
247
+ *Last update: 2026-03-15 01:27 UTC*
248
+
249
 
250
+ # Dataset card for Arctic Shift Reddit Archive
251
 
252
+ ## Dataset summary
253
+
254
+ This dataset is a complete repackaging of the [Arctic Shift](https://github.com/ArthurHeitmann/arctic_shift) monthly Reddit dumps into analysis-ready Parquet files. Arctic Shift itself re-processes the historical [PushShift](https://pushshift.io) Reddit archive, which captured the vast majority of public Reddit content from the site's early days through the API changes in 2023.
255
+
256
+ The data covers every public subreddit, every month, and includes both comments and submissions. It is intended for research, analysis, and training. Common use cases include:
257
+
258
+ - **Language model pretraining and fine-tuning** on one of the largest sources of natural human conversation
259
+ - **Sentiment and trend analysis** across two decades of online discourse
260
+ - **Community dynamics research** across thousands of subreddits with different cultures and norms
261
+ - **Information retrieval** benchmarks using real-world questions and answers from r/AskReddit, r/explainlikeimfive, and others
262
+ - **Content moderation research** using the moderation signals present in the data
263
+
264
+ ## Dataset structure
265
+
266
+ ### Data instances
267
+
268
+ Here is an example comment:
269
+
270
+ ```json
271
+ {
272
+   "id": "c0001",
273
+   "author": "spez",
274
+   "subreddit": "reddit.com",
275
+   "body": "Welcome to Reddit!",
276
+   "score": 42,
277
+   "created_utc": 1134028003,
278
+   "created_at": "2005-12-08T07:46:43",
279
+   "body_length": 18,
280
+   "link_id": "t3_17",
281
+   "parent_id": "t3_17",
282
+   "distinguished": null,
283
+   "author_flair_text": null
284
+ }
285
  ```
286
+
287
+ And here is an example submission:
288
+
289
+ ```json
290
+ {
291
+   "id": "abc123",
292
+   "author": "kn0thing",
293
+   "subreddit": "reddit.com",
294
+   "title": "The Downing Street Memo",
295
+   "selftext": "",
296
+   "score": 15,
297
+   "created_utc": 1118895720,
298
+   "created_at": "2005-06-16T04:22:00",
299
+   "title_length": 23,
300
+   "num_comments": 3,
301
+   "url": "http://www.timesonline.co.uk/...",
302
+   "over_18": false,
303
+   "link_flair_text": null,
304
+   "author_flair_text": null
305
+ }
306
  ```
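The derived columns (`created_at`, `body_length`, `title_length`) can be reproduced from the raw fields. A minimal sketch of the derivation, assuming the pipeline renders `created_at` as an ISO 8601 UTC wall time and `*_length` as a plain character count:

```python
from datetime import datetime, timezone

def derive(created_utc: int, text: str) -> tuple[str, int]:
    # created_at: the Unix timestamp rendered as ISO 8601 UTC wall time
    created_at = datetime.fromtimestamp(created_utc, tz=timezone.utc).strftime(
        "%Y-%m-%dT%H:%M:%S"
    )
    # *_length: plain character count of the text field
    return created_at, len(text)

created_at, body_length = derive(1134028003, "Welcome to Reddit!")
print(created_at, body_length)
```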
307
 
308
+ ### Data fields
309
 
310
+ #### Comments (`data/comments/YYYY/MM/NNN.parquet`)
311
 
312
  | Column | Type | Description |
313
  |--------|------|-------------|
314
+ | `id` | VARCHAR | Reddit's base-36 comment ID |
315
+ | `author` | VARCHAR | Username of the commenter. `[deleted]` if account was removed |
316
+ | `subreddit` | VARCHAR | Subreddit name (without `r/` prefix) |
317
+ | `body` | VARCHAR | Comment text in Markdown format |
318
+ | `score` | BIGINT | Net upvotes at time of archival |
319
+ | `created_utc` | BIGINT | Unix timestamp of comment creation |
320
+ | `created_at` | TIMESTAMP | Derived from `created_utc` for easier querying |
321
+ | `body_length` | BIGINT | Character count of `body` |
322
+ | `link_id` | VARCHAR | ID of the parent submission (`t3_...` format) |
323
+ | `parent_id` | VARCHAR | ID of the parent comment or submission |
324
+ | `distinguished` | VARCHAR | `moderator`, `admin`, or null |
325
+ | `author_flair_text` | VARCHAR | Author's flair text in this subreddit |
326
+
327
+ #### Submissions (`data/submissions/YYYY/MM/NNN.parquet`)
328
 
329
  | Column | Type | Description |
330
  |--------|------|-------------|
331
+ | `id` | VARCHAR | Reddit's base-36 submission ID |
332
+ | `author` | VARCHAR | Username of the poster |
333
+ | `subreddit` | VARCHAR | Subreddit name |
334
+ | `title` | VARCHAR | Post title |
335
+ | `selftext` | VARCHAR | Post body for self/text posts (empty for link posts) |
336
+ | `score` | BIGINT | Net upvotes at time of archival |
337
+ | `created_utc` | BIGINT | Unix timestamp |
338
+ | `created_at` | TIMESTAMP | Derived from `created_utc` |
339
+ | `title_length` | BIGINT | Character count of `title` |
340
+ | `num_comments` | BIGINT | Number of comments on this post |
341
+ | `url` | VARCHAR | External URL for link posts, permalink for self posts |
342
+ | `over_18` | BOOLEAN | NSFW flag |
343
+ | `link_flair_text` | VARCHAR | Post flair text |
344
+ | `author_flair_text` | VARCHAR | Author's flair text |
345
+
346
+ ### Data splits
347
+
348
+ The dataset has two named configurations: `comments` and `submissions`. Each configuration loads all monthly shards for that type as a single `train` split.
349
+
350
+ You can also load individual years or months using `data_files`:
351
+
352
+ ```python
+ from datasets import load_dataset
+
353
+ # Load just January 2020 comments
354
+ ds = load_dataset("open-index/arctic", data_files="data/comments/2020/01/*.parquet", split="train")
355
+
356
+ # Load all 2023 submissions
357
+ ds = load_dataset("open-index/arctic", data_files="data/submissions/2023/**/*.parquet", split="train")
358
+ ```
359
+
360
+ ## Dataset creation
361
+
362
+ ### Curation rationale
363
+
364
+ Reddit is one of the most valuable sources of natural human conversation on the internet, but accessing the full archive has become increasingly difficult since Reddit restricted API access in 2023. The Arctic Shift project preserves this data as monthly .zst-compressed JSONL dumps. We repackage these dumps into Parquet files on Hugging Face to make the data immediately queryable with DuckDB, streamable with the `datasets` library, and downloadable in bulk — no custom tooling required.
365
+
366
+ ### Source data
367
+
368
+ All data is sourced from [Arctic Shift](https://github.com/ArthurHeitmann/arctic_shift) monthly torrent archives, which re-process the historical [PushShift](https://pushshift.io) Reddit dumps. The source files are .zst-compressed JSONL, with one JSON object per line representing a single comment or submission.
369
+
370
+ - **2005-12 through 2023-12:** Sourced from the Arctic Shift bundle torrent
371
+ - **2024-01 onward:** Sourced from individual monthly torrents published by Arctic Shift
372
+
373
+ ### Data processing steps
374
+
375
+ The pipeline is built in Go and uses [DuckDB](https://duckdb.org) for Parquet conversion. For each (month, type) pair:
376
+
377
+ 1. **Download** the .zst file via BitTorrent using selective file priority (only the target file is downloaded from the bundle torrent, not the entire archive)
378
+ 2. **Stream** the .zst through a [klauspost/compress](https://github.com/klauspost/compress) zstd decoder with a 2 GB decode window
379
+ 3. **Chunk** the JSONL stream into batches of ~2 million lines, writing each batch to a temporary file
380
+ 4. **Convert** each chunk to Parquet using DuckDB's `read_json_auto` with explicit column selection and `TRY_CAST`, exporting as Zstandard-compressed Parquet with 131,072-row row groups
381
+ 5. **Delete** each temporary chunk immediately after its shard is written (disk space is constrained)
382
+ 6. **Commit** all shards for this (month, type) to Hugging Face along with updated `stats.csv` and `README.md`
383
+ 7. **Clean up** all local Parquet shards after the commit succeeds
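Step 3 above (bounding memory and disk by splitting the decompressed JSONL stream into fixed-size batches) can be sketched in Python as a stand-in for the Go implementation; the function name and the toy batch size are illustrative only:

```python
import json
import os
import tempfile
from typing import Iterator

def chunk_jsonl(lines: Iterator[str], batch_size: int) -> Iterator[str]:
    """Write batches of JSONL lines to temp files, yielding each file's path."""
    out, path, n = None, None, 0
    for line in lines:
        if out is None:
            fd, path = tempfile.mkstemp(suffix=".jsonl")
            out = os.fdopen(fd, "w")
        out.write(line if line.endswith("\n") else line + "\n")
        n += 1
        if n == batch_size:  # batch full: hand it off for Parquet conversion
            out.close()
            yield path
            out, n = None, 0
    if out is not None:      # partial final batch
        out.close()
        yield path

# 5 toy records with a batch size of 2 -> 3 chunk files (2 + 2 + 1 rows)
records = [json.dumps({"id": f"c{i}"}) for i in range(5)]
paths = list(chunk_jsonl(iter(records), batch_size=2))
print(len(paths))  # 3
for p in paths:
    os.remove(p)  # the real pipeline deletes each chunk right after conversion
```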
384
+
385
+ The pipeline is fully resumable: `stats.csv` tracks which (month, type) pairs have been committed, and they are skipped on restart. Disk space is managed aggressively — at most one .zst download, one JSONL chunk, and the current month's shards exist on disk at any time.
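The resume check itself only needs the set of committed (year, month, type) keys from `stats.csv`. An illustrative sketch (the actual pipeline is Go; the names here are hypothetical, and the keys are copied from rows in this repo's `stats.csv`):

```python
# Keys as parsed from stats.csv rows (values copied from this repo)
committed = {
    ("2006", "3", "submissions"),
    ("2006", "4", "comments"),
}

def should_skip(year: str, month: str, kind: str) -> bool:
    # A (month, type) pair already recorded in stats.csv was fully committed
    return (year, month, kind) in committed

print(should_skip("2006", "3", "submissions"))  # True
print(should_skip("2006", "4", "submissions"))  # False
```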
386
+
387
+ All Parquet files use **Zstandard compression** and are written with DuckDB's default row ordering. No filtering, deduplication, or content modification is applied — the data is preserved exactly as it appears in the Arctic Shift dumps.
388
+
389
+ ### Personal and sensitive information
390
+
391
+ This dataset contains Reddit usernames and user-generated text as they appeared publicly on Reddit at the time of archival. Deleted accounts show as `[deleted]` and deleted content shows as `[removed]`, reflecting Reddit's own deletion semantics at the time the data was captured.
392
+
393
+ No additional PII processing has been applied. Given the scale of the dataset, it likely contains personally identifiable information that users posted publicly on Reddit. If you find content that you believe should be removed, please open a discussion on the [Community tab](https://huggingface.co/datasets/open-index/arctic/discussions).
394
+
395
+ ## Considerations for using the data
396
+
397
+ ### Social impact
398
+
399
+ By providing the complete Reddit archive in an accessible format, we hope to enable research into online community dynamics, language evolution, and the social fabric of one of the internet's largest platforms. The dataset is particularly valuable for training language models on diverse, natural human conversation spanning many topics and communities.
400
+
401
+ ### Discussion of biases
402
+
403
+ Reddit's user base has well-documented demographic skews, primarily toward younger, male, English-speaking users in North America and Europe. Different subreddits have very different community cultures, moderation styles, and levels of toxicity. The voting system amplifies content that appeals to each community's sensibilities, which can reinforce echo chambers.
404
+
405
+ We have not applied any filtering, toxicity scoring, or quality assessment to the data. All content — including controversial, toxic, or NSFW material — is preserved as it appeared in the source archive. Researchers should apply their own filtering appropriate to their use case.
406
+
407
+ ### Known limitations
408
+
409
+ - **Data completeness depends on PushShift.** PushShift did not capture 100% of Reddit content, and there are known gaps, particularly in the earliest months and during periods when the PushShift ingestion pipeline was down.
410
+ - **Scores are point-in-time snapshots.** The `score` field reflects the value at the time PushShift captured the item, not the final score.
411
+ - **Deleted content.** Items deleted before PushShift captured them are not present. Items deleted after capture may still contain the original text.
412
+ - **No user profiles.** This dataset contains posts and comments only, not user profiles, karma, or account metadata.
413
+ - **Text may contain Markdown or HTML.** Comment bodies and submission selftexts use Reddit's Markdown variant. Some older content may contain raw HTML.
414
+
415
+ ## Additional information
416
+
417
+ ### Licensing
418
+
419
+ The original Reddit content is subject to [Reddit's Terms of Service](https://www.redditinc.com/policies/user-agreement). The Arctic Shift archive is distributed under permissive terms for research purposes. This repackaging is released as-is for research and educational use.
420
+
421
+ This is an independent community project. It is not affiliated with or endorsed by Reddit, Inc. or the Arctic Shift project.
422
+
423
+ ### Thanks
424
+
425
+ The data in this dataset comes from the [Arctic Shift](https://github.com/ArthurHeitmann/arctic_shift) project, which re-processes and distributes the historical [PushShift](https://pushshift.io) Reddit archive via Academic Torrents. Without their work preserving and distributing this data, building a complete Reddit archive would not be practical.
426
+
427
+ ### Contact
428
+
429
+ For questions, feedback, or issues, please open a discussion on the [Community tab](https://huggingface.co/datasets/open-index/arctic/discussions).
430
+
431
+ *Last updated: 2026-03-15 01:27 UTC*
data/comments/2006/04/000.parquet ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f64bc0546059ff34ba2feeec69e17f7fabcd938be263136e0fc87a190b3160d0
3
+ size 2233676
states.json CHANGED
@@ -1,20 +1,20 @@
1
  {
2
- "session_id": "2026-03-15T01:07:17Z",
3
- "started_at": "2026-03-15T01:07:17.795651697Z",
4
- "updated_at": "2026-03-15T01:09:06.99699167Z",
5
  "phase": "committing",
6
  "current": {
7
- "ym": "2006-03",
8
- "type": "submissions",
9
  "phase": "committing",
10
  "shard": 1,
11
- "rows": 12525
12
  },
13
  "stats": {
14
- "committed": 2,
15
- "skipped": 5,
16
- "total_rows": 23360,
17
- "total_bytes": 1992358,
18
  "total_months": 488
19
  }
20
  }
 
1
  {
2
+ "session_id": "2026-03-15T01:26:43Z",
3
+ "started_at": "2026-03-15T01:26:43.998983213Z",
4
+ "updated_at": "2026-03-15T01:27:18.990192704Z",
5
  "phase": "committing",
6
  "current": {
7
+ "ym": "2006-04",
8
+ "type": "comments",
9
  "phase": "committing",
10
  "shard": 1,
11
+ "rows": 19090
12
  },
13
  "stats": {
14
+ "committed": 0,
15
+ "skipped": 8,
16
+ "total_rows": 0,
17
+ "total_bytes": 0,
18
  "total_months": 488
19
  }
20
  }
stats.csv CHANGED
@@ -6,4 +6,5 @@ year,month,type,shards,count,size_bytes,dur_download_s,dur_process_s,dur_commit_
6
  2006,2,comments,1,9095,1046086,24.34,6.18,21.04,2026-03-15T00:54:59Z
7
  2006,2,submissions,1,9501,573721,26.13,2.03,10.24,2026-03-15T01:07:46Z
8
  2006,3,comments,1,13859,1418637,17.65,1.09,7.38,2026-03-15T01:08:16Z
9
- 2006,3,submissions,1,12525,742070,29.43,11.93,0.00,2026-03-15T01:09:06Z
 
 
6
  2006,2,comments,1,9095,1046086,24.34,6.18,21.04,2026-03-15T00:54:59Z
7
  2006,2,submissions,1,9501,573721,26.13,2.03,10.24,2026-03-15T01:07:46Z
8
  2006,3,comments,1,13859,1418637,17.65,1.09,7.38,2026-03-15T01:08:16Z
9
+ 2006,3,submissions,1,12525,742070,29.43,11.93,11.99,2026-03-15T01:09:06Z
10
+ 2006,4,comments,1,19090,2233676,30.67,2.78,0.00,2026-03-15T01:27:18Z