nick007x committed on
Commit 0d31d5a · 1 Parent(s): 11498bf

Refactor Reddit data pipeline structure

Files changed (1)
  1. README.md +49 -83
README.md CHANGED
@@ -6,63 +6,62 @@ A comprehensive pipeline for archiving, processing, and analyzing Reddit data fr
 
  ## 🗂️ Repository Structure
 
- ```
- ├── analyzed_subreddits/              # Case studies of specific subreddits
- │   ├── comments/                     # Subreddit-specific comment archives
- │   │   └── RC_funny.parquet          # r/funny comments (example case study)
- │   ├── reddit-media/                 # Media organized by subreddit
- │   │   ├── content-hashed/           # Content-addressable media storage
- │   │   ├── images/
- │   │   │   └── r_funny/2025/01/01    # Daily images by subreddit
- │   │   └── videos/
- │   │       └── r_funny/2025/01/01    # Daily videos by subreddit
- │   └── submissions/                  # Subreddit-specific submission archives
- │       └── RS_funny.parquet          # r/funny submissions (example)
  │
- ├── converted_parquet/                # Optimized Parquet format (year-partitioned)
- │   ├── comments/                     # Comments 2005-2025
- │   │   └── 2005/ ── 2025/            # Year partitions for efficient querying
- │   └── submissions/                  # Submissions 2005-2025
- │       └── 2005/ ── 2025/            # Year partitions
  │
- ├── figures/                          # Analysis visualizations
- │   ├── parquet_subreddits_analysis/  # Analysis of Parquet-converted data
- │   │   ├── comments/
- │   │   └── submissions/
- │   └── original_schema_analysis/     # Schema evolution analysis
- │       ├── comments/
- │       └── submissions/
  │
- ├── original_dump/                    # Raw downloaded data
- │   ├── comments/                     # Monthly comment archives (ZST compressed)
- │   │   ├── RC_2005-12.zst ── RC_2025-12.zst  # 20 years of comments
- │   │   └── schema_analysis/          # Schema analysis reports
- │   └── submissions/                  # Monthly submission archives
- │       ├── RS_2005-06.zst ── RS_2025-12.zst  # 20+ years of submissions
- │       └── schema_analysis/
  │           ├── analysis_report_2005.txt
- │           └── ... (2006-2018)
  │
- ├── reddit-media/                     # Comprehensive media archive
- │   ├── content-hashed/               # Deduplicated media storage
- │   ├── images/                       # Image media organized by date
- │   │   └── 2025/
- │   │       └── 01/
- │   │           └── 01/               # Daily image collections
- │   ├── thumbnails/                   # Thumbnail versions
- │   └── videos/                       # Video media
  │
- ├── subreddits_2025-01_*              # Subreddit metadata (Jan 2025 snapshot)
- │   ├── type_public.jsonl             # 2.78M public subreddits
- │   ├── type_restricted.jsonl         # 1.92M restricted subreddits
- │   ├── type_private.jsonl            # 182K private subreddits
- │   ├── type_user.jsonl               # 16.98M user subreddits (profiles)
- │   └── type_other.jsonl              # 100 other/archived
- │
- ├── .gitattributes                    # Git LFS configuration
- └── README.md                         # This file
  ```
 
  ## 📈 Dataset Statistics
 
  ### Subreddit Ecosystem (January 2025)
@@ -94,7 +93,7 @@ A comprehensive pipeline for archiving, processing, and analyzing Reddit data fr
  - Field-by-field statistical analysis
  - Type distribution tracking
  - Null/empty value profiling
- - Schema evolution tracking (2005-2018 complete)
 
  ### Stage 3: Format Conversion
  - ZST → JSONL decompression
@@ -180,33 +179,6 @@ A comprehensive pipeline for archiving, processing, and analyzing Reddit data fr
  - **Moderation scale:** 32% removed by Reddit, 36% by moderators
  - **Media evolution:** Video posts growing (3% in Jan 2025)
 
- ## 🚀 Getting Started
-
- ### Prerequisites
- - 100+ TB storage (recommended)
- - Python 3.9+ with pandas, pyarrow, zstandard
- - Apache Spark (optional, for large-scale processing)
- - Git LFS for media storage
-
- ### Basic Usage
- ```bash
- # Analyze schema of a monthly file
- python analyze_schema.py original_dump/comments/RC_2025-01.zst
-
- # Convert to Parquet format
- python convert_to_parquet.py original_dump/comments/RC_2025-01.zst converted_parquet/comments/2025/
-
- # Analyze subreddit distribution
- python analyze_subreddits.py subreddits_2025-01.jsonl
- ```
-
- ### Processing Pipeline
- 1. Download monthly archives to `original_dump/`
- 2. Run schema analysis for quality assessment
- 3. Convert to partitioned Parquet format
- 4. Categorize and analyze subreddit metadata
- 5. Extract and organize media content
- 6. Conduct focused community studies
 
  ## 📄 License & Attribution
 
@@ -252,9 +224,3 @@ tools for Reddit historical data (2005-2025). GitHub Repository.
  - Cross-platform comparative studies
 
  ---
- **Maintained by:** Reddit Research Group
- **Last Updated:** January 2026
- **Data Coverage:** December 2005 - December 2025
- **Total Volume:** ~150 TB (raw), ~40 TB (processed)
 
 
  ## 🗂️ Repository Structure
 
+ ```bash
+ ├── scripts/                          # Analysis scripts
+ ├── analysis/                         # Analysis methods and visualizations
+ │   ├── original_schema_analysis/     # Schema evolution analysis
+ │   │   ├── comments/                 # Comment schema analysis results
+ │   │   ├── figures/                  # Schema analysis visualizations
+ │   │   └── submissions/              # Submission schema analysis results
+ │   │       ├── analysis_report_2005.txt
+ │   │       ├── analysis_report_2006.txt
+ │   │       └── ...
+ │   └── parquet_subreddits_analysis/  # Analysis of Parquet-converted data
+ │       ├── comments/                 # Comment data analysis
+ │       ├── figures/                  # Subreddit analysis visualizations
+ │       └── submissions/              # Submission data analysis
  │
+ ├── analyzed_subreddits/              # Focused subreddit case studies
+ │   ├── comments/                     # Subreddit-specific comment archives
+ │   │   └── RC_funny.parquet          # r/funny comments (currently empty)
+ │   ├── reddit-media/                 # Media organized by subreddit and date
+ │   │   ├── content-hashed/           # Deduplicated media (content addressing)
+ │   │   ├── images/                   # Image media
+ │   │   │   └── r_funny/              # Organized by subreddit
+ │   │   │       └── 2025/01/01/       # Daily structure for temporal analysis
+ │   │   └── videos/                   # Video media
+ │   │       └── r_funny/              # Organized by subreddit
+ │   │           └── 2025/01/01/       # Daily structure
+ │   └── submissions/                  # Subreddit-specific submission archives
+ │       └── RS_funny.parquet          # r/funny submissions (currently empty)
  │
+ ├── converted_parquet/                # Optimized Parquet format (year-partitioned)
+ │   ├── comments/                     # Comments 2005-2025
+ │   │   └── 2005/ ── 2025/            # Year partitions for efficient querying
+ │   └── submissions/                  # Submissions 2005-2025
+ │       └── 2005/ ── 2025/            # Year partitions
  │
+ ├── original_dump/                    # Raw downloaded Reddit archives
+ │   ├── comments/                     # Monthly comment archives (ZST compressed)
+ │   │   ├── RC_2005-12.zst ── RC_2025-12.zst  # Complete 2005-2025 coverage
+ │   │   └── schema_analysis/          # Schema analysis directory
+ │   └── submissions/                  # Monthly submission archives
+ │       ├── RS_2005-06.zst ── RS_2025-12.zst  # Complete 2005-2025 coverage
+ │       └── schema_analysis/          # Schema evolution analysis reports
  │           ├── analysis_report_2005.txt
+ │           └── ...
  │
+ ├── subreddits_2025-01_*              # Subreddit metadata (January 2025 snapshot)
+ │   ├── type_public.jsonl             # 2.78M public subreddits
+ │   ├── type_restricted.jsonl         # 1.92M restricted subreddits
+ │   ├── type_private.jsonl            # 182K private subreddits
+ │   └── type_other.jsonl              # 100 other/archived subreddits
  │
+ ├── .gitattributes                    # Git LFS configuration for large files
+ └── README.md                         # This documentation file
  ```
 
+
  ## 📈 Dataset Statistics
 
  ### Subreddit Ecosystem (January 2025)
 
  - Field-by-field statistical analysis
  - Type distribution tracking
  - Null/empty value profiling
+ - Schema evolution tracking (2005-2018 complete; more coming soon)
 
  ### Stage 3: Format Conversion
  - ZST → JSONL decompression
 
  - **Moderation scale:** 32% removed by Reddit, 36% by moderators
  - **Media evolution:** Video posts growing (3% in Jan 2025)
 
  ## 📄 License & Attribution
 
 
  - Cross-platform comparative studies
 
  ---