Update README.md

README.md
@@ -51,7 +51,7 @@ A comprehensive pipeline for archiving, processing, and analyzing Reddit data fr
│   ├── analysis_report_2005.txt
│   └── ...
│
-├── subreddits_2025-01_*       # Subreddit metadata (January 2025 snapshot)
+├── subreddits_2025-01_*       # Subreddit metadata (outdated January 2025 snapshot)
│   ├── type_public.jsonl      # 2.78M public subreddits
│   ├── type_restricted.jsonl  # 1.92M restricted subreddits
│   └── type_private.jsonl     # 182K private subreddits
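The `type_*.jsonl` files are plain JSON Lines, so they can be inspected with the standard library alone. A minimal sketch (the `subreddit_type` field name and the example path are assumptions for illustration, not taken from this repo):

```python
import json
from collections import Counter

def tally_field(path, field):
    """Tally values of `field` across a JSONL file, streaming line by line."""
    counts = Counter()
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            if line.strip():
                counts[json.loads(line).get(field)] += 1
    return counts

# Hypothetical usage:
# tally_field("type_public.jsonl", "subreddit_type")
```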
@@ -145,6 +145,37 @@ A comprehensive pipeline for archiving, processing, and analyzing Reddit data fr
- **Categorization:** Subreddit type classification
- **Timestamps:** Unix epoch seconds
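Since timestamps are plain epoch seconds, they convert to readable UTC dates with the standard library alone (`created_utc` is Reddit's usual field name; the value here is illustrative):

```python
from datetime import datetime, timezone

created_utc = 1136073600  # illustrative epoch value
dt = datetime.fromtimestamp(created_utc, tz=timezone.utc)
print(dt.isoformat())  # 2006-01-01T00:00:00+00:00
```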
## 🔧 Technical Design Decisions

### Compression Strategy: Why ZST → JSONL → Parquet?

This pipeline employs a tiered compression strategy based on access patterns:

#### **Original Archives (ZST Compressed)**
- **Format:** `.zst` (Zstandard) compressed JSONL
- **Why ZST?** 6:1 compression ratio (36GB → 6GB) vs gzip's 4:1
- **Trade-off:** 39.7s decompression time vs 26.8s raw read
- **Decision:** Keep only for archival; decompress once to JSONL
#### **Analytical Storage (Parquet)**
- **Format:** Apache Parquet with zstd compression
- **Why Parquet?** Columnar storage enables:
  - Selective column reads (reading 2 of 17 columns: 15s vs 55s)
  - Built-in compression (36GB → 7GB = 5:1 ratio)
  - Predicate pushdown (skip irrelevant rows)
- **Benchmark:** Metadata read in 4.9s vs 26.8s full JSONL read
## ⚡ Performance Benchmarks

### File Processing Speeds (36GB Reddit Comments, Nov 2016)

| Format | Size | Read Time | Compression | Notes |
|--------|------|-----------|-------------|-------|
| **ZST Compressed** | 6.0GB | 39.7s | 6:1 | Requires decompression penalty |
| **JSONL Raw** | 36GB | 26.8s | 1:1 | Fastest for repeated access |
| **Parquet** | 7.0GB | 4.9s* | 5:1 | Metadata only; full queries 2-15s |
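Figures like these can be checked with a small wall-clock harness. A stdlib-only sketch for the JSONL row (the path is a placeholder):

```python
import json
import time

def time_jsonl_scan(path):
    """Return (record_count, elapsed_seconds) for a full parse of a JSONL file."""
    start = time.perf_counter()
    n = 0
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            if line.strip():
                json.loads(line)  # parse every record, as a full scan would
                n += 1
    return n, time.perf_counter() - start

# Hypothetical usage:
# records, secs = time_jsonl_scan("RC_2016-11.jsonl")
```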
## 🎯 Research Applications

### Community Studies
|