tamnd committed on
Commit f6bca32 · verified · 1 Parent(s): 9701a72

add submissions/2006-06 2006/06 (1 shards, 16.9K rows)

Files changed (4)
  1. README.md +10 -10
  2. data/submissions/2006/06/000.parquet +3 -0
  3. states.json +6 -6
  4. stats.csv +2 -1
README.md CHANGED
@@ -52,7 +52,7 @@ task_categories:
 
 This dataset contains the complete [Reddit](https://www.reddit.com) archive of comments and submissions, sourced from the [Arctic Shift](https://github.com/ArthurHeitmann/arctic_shift) project which re-processes the historical [PushShift](https://pushshift.io) Reddit dumps. It covers **every public subreddit** from the earliest available data in **2005-12** through **2006-06**.
 
-The archive currently contains **165.5K items** (102.8K comments + 62.7K submissions) totaling **14.5 MB** of compressed Parquet data. The data is organized as two independent datasets — `comments` and `submissions` — each split into monthly shards that can be loaded independently or streamed together.
+The archive currently contains **182.4K items** (102.8K comments + 79.6K submissions) totaling **15.5 MB** of compressed Parquet data. The data is organized as two independent datasets — `comments` and `submissions` — each split into monthly shards that can be loaded independently or streamed together.
 
 Reddit is one of the largest and most diverse online communities, with millions of users discussing everything from programming and science to cooking and local news. This makes it a valuable resource for language model training, sentiment analysis, community dynamics research, and information retrieval. Unlike many Reddit datasets that focus on specific subreddits or time periods, this archive aims to be comprehensive: all subreddits, all months, all public content.
 
@@ -85,7 +85,7 @@ The chart below shows the total number of items (comments + submissions combined
 
 ```
 2005 █░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 6.4K
-2006 ██████████████████████████████ 159.1K
+2006 ██████████████████████████████ 176.0K
 ```
 
 ## How to download and use this dataset
@@ -198,8 +198,8 @@ huggingface-cli download open-index/arctic \
 | Type | Months | Rows | Parquet Size |
 |------|-------:|-----:|-------------:|
 | comments | 7 | 102.8K | 11.0 MB |
-| submissions | 6 | 62.7K | 3.5 MB |
-| **Total** | **7** | **165.5K** | **14.5 MB** |
+| submissions | 7 | 79.6K | 4.5 MB |
+| **Total** | **7** | **182.4K** | **15.5 MB** |
 
 You can query the per-month statistics directly from the `stats.csv` file:
 
@@ -228,21 +228,21 @@ The `stats.csv` file tracks each committed (month, type) pair with the following
 
 > The ingestion pipeline is actively running. This section auto-updates every ~5 minutes.
 
-**Started:** 2026-03-15 01:26 UTC · **Elapsed:** 3m · **Committed this session:** 4
+**Started:** 2026-03-15 01:26 UTC · **Elapsed:** 4m · **Committed this session:** 5
 
 | | |
 |:---|:---|
 | Phase | committing |
-| Month | **2006-06** — comments |
+| Month | **2006-06** — submissions |
 | Progress | committing to Hugging Face… |
 
-`░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░` 12 / 488 (2.5%)
+`░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░` 13 / 488 (2.7%)
 
 | Metric | This Session |
 |--------|-------------:|
-| Months committed | 4 |
-| Rows processed | 73.2K |
-| Data committed | 6.6 MB |
+| Months committed | 5 |
+| Rows processed | 102.4K |
+| Data committed | 9.7 MB |
 
 *Last update: 2026-03-15 01:30 UTC*

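The shard committed here follows the `data/<type>/<YYYY>/<MM>/<NNN>.parquet` layout visible throughout this commit. A minimal sketch of building such a path from a month string and a zero-based shard index — the helper name `shard_path` is illustrative, not part of the repository:

```python
def shard_path(dtype: str, ym: str, shard: int = 0) -> str:
    """Build a shard path like data/submissions/2006/06/000.parquet
    from a dataset type ('comments' or 'submissions'), a YYYY-MM
    month string, and a zero-based shard index."""
    year, month = ym.split("-")
    # Shard indices are zero-padded to three digits in the repo layout.
    return f"data/{dtype}/{year}/{month}/{shard:03d}.parquet"

print(shard_path("submissions", "2006-06"))  # data/submissions/2006/06/000.parquet
```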
data/submissions/2006/06/000.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:29ed1f5b5f0166e9d228aeb2d2215da03659394854925debc615b3dd1e203661
+size 981744
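The Parquet file itself is stored via Git LFS, so only this three-line pointer (version, `oid`, `size`) enters the Git history. A minimal sketch of parsing those fields, with the pointer text inlined and the helper name `parse_lfs_pointer` chosen for illustration:

```python
# The LFS pointer committed for data/submissions/2006/06/000.parquet.
POINTER = """version https://git-lfs.github.com/spec/v1
oid sha256:29ed1f5b5f0166e9d228aeb2d2215da03659394854925debc615b3dd1e203661
size 981744
"""

def parse_lfs_pointer(text: str) -> dict:
    """Split each 'key value' line of a Git LFS pointer into a dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

ptr = parse_lfs_pointer(POINTER)
print(ptr["size"])  # 981744 — matches the size_bytes recorded in stats.csv
```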
states.json CHANGED
@@ -1,20 +1,20 @@
 {
   "session_id": "2026-03-15T01:26:43Z",
   "started_at": "2026-03-15T01:26:43.998983213Z",
-  "updated_at": "2026-03-15T01:30:05.497409121Z",
+  "updated_at": "2026-03-15T01:30:54.486808488Z",
   "phase": "committing",
   "current": {
     "ym": "2006-06",
-    "type": "comments",
+    "type": "submissions",
     "phase": "committing",
     "shard": 1,
-    "rows": 29163
+    "rows": 16942
   },
   "stats": {
-    "committed": 4,
+    "committed": 5,
     "skipped": 8,
-    "total_rows": 73206,
-    "total_bytes": 6884271,
+    "total_rows": 102369,
+    "total_bytes": 10125089,
     "total_months": 488
   }
 }
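The README's progress line can be reproduced from these counters, assuming (as the numbers here suggest) that progress counts both committed and skipped months against `total_months`:

```python
import json

# The stats block from the updated states.json, inlined for illustration.
STATE = json.loads('{"stats": {"committed": 5, "skipped": 8, "total_months": 488}}')

stats = STATE["stats"]
done = stats["committed"] + stats["skipped"]  # 5 + 8 = 13 months handled
pct = 100 * done / stats["total_months"]      # 13 / 488
print(f"{done} / {stats['total_months']} ({pct:.1f}%)")  # 13 / 488 (2.7%)
```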
stats.csv CHANGED
@@ -11,4 +11,5 @@ year,month,type,shards,count,size_bytes,dur_download_s,dur_process_s,dur_commit_
 2006,4,submissions,1,12556,742851,25.62,3.02,17.11,2026-03-15T01:28:04Z
 2006,5,comments,1,26859,3036859,24.10,5.24,5.96,2026-03-15T01:28:52Z
 2006,5,submissions,1,14701,870885,23.95,2.49,5.72,2026-03-15T01:29:26Z
-2006,6,comments,1,29163,3240818,23.45,9.12,0.00,2026-03-15T01:30:05Z
+2006,6,comments,1,29163,3240818,23.45,9.12,12.10,2026-03-15T01:30:05Z
+2006,6,submissions,1,16942,981744,21.90,13.34,0.00,2026-03-15T01:30:54Z
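The two updated rows can be cross-checked against the other files in this commit: the submissions row's `count` (16942) matches `current.rows` in states.json, and its `size_bytes` (981744) matches the LFS pointer's `size`. A sketch using the stdlib `csv` module, with the rows inlined and columns addressed by position since the header's final (timestamp) column name is truncated in the hunk header:

```python
import csv
import io

# The two rows touched by this commit; columns follow the visible header:
# year,month,type,shards,count,size_bytes,dur_download_s,dur_process_s,dur_commit_s,<timestamp>
ROWS = """2006,6,comments,1,29163,3240818,23.45,9.12,12.10,2026-03-15T01:30:05Z
2006,6,submissions,1,16942,981744,21.90,13.34,0.00,2026-03-15T01:30:54Z
"""

by_type = {}
for row in csv.reader(io.StringIO(ROWS)):
    # index 2 = type, 4 = count, 5 = size_bytes
    by_type[row[2]] = {"count": int(row[4]), "size_bytes": int(row[5])}

print(by_type["submissions"]["count"])       # 16942
print(by_type["submissions"]["size_bytes"])  # 981744
```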