tamnd committed on
Commit b2ce08d · verified · 1 Parent(s): 59c45bd

add comments/2006-05 (1 shard, 26.9K rows)

Files changed (4)
  1. README.md +12 -12
  2. data/comments/2006/05/000.parquet +3 -0
  3. states.json +7 -7
  4. stats.csv +2 -1
README.md CHANGED
@@ -50,9 +50,9 @@ task_categories:
 
 ## What is it?
 
- This dataset contains the complete [Reddit](https://www.reddit.com) archive of comments and submissions, sourced from the [Arctic Shift](https://github.com/ArthurHeitmann/arctic_shift) project which re-processes the historical [PushShift](https://pushshift.io) Reddit dumps. It covers **every public subreddit** from the earliest available data in **2005-12** through **2006-04**.
+ This dataset contains the complete [Reddit](https://www.reddit.com) archive of comments and submissions, sourced from the [Arctic Shift](https://github.com/ArthurHeitmann/arctic_shift) project which re-processes the historical [PushShift](https://pushshift.io) Reddit dumps. It covers **every public subreddit** from the earliest available data in **2005-12** through **2006-05**.
 
- The archive currently contains **94.8K items** (46.8K comments + 48.0K submissions) totaling **7.7 MB** of compressed Parquet data. The data is organized as two independent datasets — `comments` and `submissions` — each split into monthly shards that can be loaded independently or streamed together.
+ The archive currently contains **121.6K items** (73.6K comments + 48.0K submissions) totaling **10.6 MB** of compressed Parquet data. The data is organized as two independent datasets — `comments` and `submissions` — each split into monthly shards that can be loaded independently or streamed together.
 
 Reddit is one of the largest and most diverse online communities, with millions of users discussing everything from programming and science to cooking and local news. This makes it a valuable resource for language model training, sentiment analysis, community dynamics research, and information retrieval. Unlike many Reddit datasets that focus on specific subreddits or time periods, this archive aims to be comprehensive: all subreddits, all months, all public content.
 
@@ -84,8 +84,8 @@ Along with the Parquet files, we include `stats.csv` which tracks every committe
 The chart below shows the total number of items (comments + submissions combined) committed per year.
 
 ```
- 2005 █░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 6.4K
- 2006 ██████████████████████████████ 88.3K
+ 2005 █░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 6.4K
+ 2006 ██████████████████████████████ 115.2K
 ```
 
 ## How to download and use this dataset
@@ -197,9 +197,9 @@ huggingface-cli download open-index/arctic \
 
 | Type | Months | Rows | Parquet Size |
 |------|-------:|-----:|-------------:|
- | comments | 5 | 46.8K | 5.0 MB |
+ | comments | 6 | 73.6K | 7.9 MB |
 | submissions | 5 | 48.0K | 2.7 MB |
- | **Total** | **5** | **94.8K** | **7.7 MB** |
+ | **Total** | **6** | **121.6K** | **10.6 MB** |
 
 You can query the per-month statistics directly from the `stats.csv` file:
 
@@ -228,21 +228,21 @@ The `stats.csv` file tracks each committed (month, type) pair with the following
 
 > The ingestion pipeline is actively running. This section auto-updates every ~5 minutes.
 
- **Started:** 2026-03-15 01:26 UTC · **Elapsed:** 1m · **Committed this session:** 1
+ **Started:** 2026-03-15 01:26 UTC · **Elapsed:** 2m · **Committed this session:** 2
 
 | | |
 |:---|:---|
 | Phase | committing |
- | Month | **2006-04** — submissions |
+ | Month | **2006-05** — comments |
 | Progress | committing to Hugging Face… |
 
- `░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░` 9 / 488 (1.8%)
+ `░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░` 10 / 488 (2.0%)
 
 | Metric | This Session |
 |--------|-------------:|
- | Months committed | 1 |
- | Rows processed | 19.1K |
- | Data committed | 2.1 MB |
+ | Months committed | 2 |
+ | Rows processed | 31.6K |
+ | Data committed | 2.8 MB |
 
 *Last update: 2026-03-15 01:28 UTC*
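The README describes the archive as two datasets split into monthly Parquet shards. Judging from the single file added in this commit (`data/comments/2006/05/000.parquet`), the repo layout appears to be `data/<type>/<year>/<month>/<shard>.parquet`. A minimal sketch of that convention, with the helper name and zero-padding widths inferred from this one path:

```python
def shard_path(kind: str, year: int, month: int, shard: int = 0) -> str:
    """Repo-relative path of one monthly Parquet shard.

    Layout inferred from this commit: months zero-padded to two digits,
    shard numbers to three.
    """
    return f"data/{kind}/{year}/{month:02d}/{shard:03d}.parquet"


# Reproduces the path of the shard added in this commit.
print(shard_path("comments", 2006, 5))  # data/comments/2006/05/000.parquet
```

Paths built this way can be passed to any Parquet reader or to a Hub download call, one month at a time.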
data/comments/2006/05/000.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cd16aaa7c7b1aa3b2f14962db17fc2fcac240814e1ee5e28e5fba8657ffad0ce
+ size 3036859
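The `.parquet` file above is tracked with Git LFS, so the diff shows only a three-line pointer file (spec version, content hash, byte size) rather than the Parquet binary itself. A small sketch of parsing such a pointer (the function is ours, not part of any LFS tooling):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer file into its key/value lines."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields


POINTER = """\
version https://git-lfs.github.com/spec/v1
oid sha256:cd16aaa7c7b1aa3b2f14962db17fc2fcac240814e1ee5e28e5fba8657ffad0ce
size 3036859
"""

info = parse_lfs_pointer(POINTER)
print(info["size"])  # 3036859
```

Note that the pointer's `size` (3,036,859 bytes) matches the `size_bytes` recorded for the 2006/05 comments shard in `stats.csv` below.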
states.json CHANGED
@@ -1,20 +1,20 @@
 {
   "session_id": "2026-03-15T01:26:43Z",
   "started_at": "2026-03-15T01:26:43.998983213Z",
-  "updated_at": "2026-03-15T01:28:04.640859055Z",
+  "updated_at": "2026-03-15T01:28:52.271078102Z",
   "phase": "committing",
   "current": {
-    "ym": "2006-04",
-    "type": "submissions",
+    "ym": "2006-05",
+    "type": "comments",
     "phase": "committing",
     "shard": 1,
-    "rows": 12556
+    "rows": 26859
   },
   "stats": {
-    "committed": 1,
+    "committed": 2,
     "skipped": 8,
-    "total_rows": 19090,
-    "total_bytes": 2233676,
+    "total_rows": 31646,
+    "total_bytes": 2976527,
     "total_months": 488
   }
 }
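The README's live-status progress line, `10 / 488 (2.0%)`, appears to be derived from the counters in `states.json`: `committed` (2) plus `skipped` (8) out of `total_months` (488). A sketch of that assumed formula:

```python
import json

# Stats block copied from the new states.json above.
STATE = json.loads("""
{
  "phase": "committing",
  "stats": {
    "committed": 2,
    "skipped": 8,
    "total_rows": 31646,
    "total_bytes": 2976527,
    "total_months": 488
  }
}
""")


def progress(state: dict) -> tuple:
    """Return (done, total, percent), assuming done = committed + skipped."""
    s = state["stats"]
    done = s["committed"] + s["skipped"]
    return done, s["total_months"], round(100 * done / s["total_months"], 1)


print(progress(STATE))  # (10, 488, 2.0)
```

This reproduces the 10 / 488 (2.0%) figure shown in the README diff, which supports (but does not prove) the committed-plus-skipped interpretation.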
stats.csv CHANGED
@@ -8,4 +8,5 @@ year,month,type,shards,count,size_bytes,dur_download_s,dur_process_s,dur_commit_
 2006,3,comments,1,13859,1418637,17.65,1.09,7.38,2026-03-15T01:08:16Z
 2006,3,submissions,1,12525,742070,29.43,11.93,11.99,2026-03-15T01:09:06Z
 2006,4,comments,1,19090,2233676,30.67,2.78,15.08,2026-03-15T01:27:18Z
- 2006,4,submissions,1,12556,742851,25.62,3.02,0.00,2026-03-15T01:28:04Z
+ 2006,4,submissions,1,12556,742851,25.62,3.02,17.11,2026-03-15T01:28:04Z
+ 2006,5,comments,1,26859,3036859,24.10,5.24,0.00,2026-03-15T01:28:52Z
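The header row in the hunk above is cut off after `dur_commit_`; the final column plainly holds a commit timestamp, so the field name `committed_at` in the sketch below is our guess, as is the helper itself. One way to aggregate per-month row counts from `stats.csv` content:

```python
import csv
import io

# Field names per the (truncated) header above; "committed_at" is a guessed
# name for the cut-off final column.
FIELDS = ["year", "month", "type", "shards", "count", "size_bytes",
          "dur_download_s", "dur_process_s", "dur_commit_s", "committed_at"]

# Rows copied from the diff above.
SAMPLE = """\
2006,4,comments,1,19090,2233676,30.67,2.78,15.08,2026-03-15T01:27:18Z
2006,4,submissions,1,12556,742851,25.62,3.02,17.11,2026-03-15T01:28:04Z
2006,5,comments,1,26859,3036859,24.10,5.24,0.00,2026-03-15T01:28:52Z
"""


def rows_per_month(text: str) -> dict:
    """Sum the `count` column per (year, month)."""
    totals: dict = {}
    for rec in csv.DictReader(io.StringIO(text), fieldnames=FIELDS):
        key = (int(rec["year"]), int(rec["month"]))
        totals[key] = totals.get(key, 0) + int(rec["count"])
    return totals


print(rows_per_month(SAMPLE))  # {(2006, 4): 31646, (2006, 5): 26859}
```

The 2006-04 total of 31,646 rows matches `total_rows` in the new `states.json` above, a useful cross-check between the two files.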