hblim committed · Commit 24bc15e · verified · 1 Parent(s): b895a1d

Update README.md

Files changed (1)
  1. README.md +35 -22
README.md CHANGED
@@ -36,18 +36,35 @@ configs:
 # Top Reddit Posts Daily
 
 ## Dataset Summary
-A continuously-updated collection of daily “top” posts and their top comments from configurable subreddits, scraped via PRAW (the Reddit API). Each day’s data is stored as a separate Parquet file under `data_raw/{YYYY-MM-DD}.parquet`.
 
-- **Source:** Reddit (via PRAW)
-- **Frequency:** Daily
-- **Data files:** Parquet (`.parquet`)
-- **Total records per day:** Varies by subreddit and limits
+A continuously-updated snapshot of public Reddit discourse on AI news. Each night a GitHub Actions cron job:
 
-Currently configured to pull top posts from r/Apple, r/Android, and r/GooglePixel
+1. **Scrapes** new submissions from a configurable list of subreddits (→ `data_raw/`)
+2. **Classifies** each post with a DistilBERT sentiment model served on Replicate (→ `data_scored/`)
+3. **Summarises** daily trends for lightweight front-end consumption (→ `daily_summary/`)
 
-Source code https://github.com/halstonblim/reddit_scraper
+The result is an easy-to-query, time-stamped record of Reddit sentiment that can be used for NLP research, social-media trend analysis, or as a teaching dataset for end-to-end MLOps.
 
-## Supported Tasks and Leaderboards
+Source code: https://github.com/halstonblim/reddit_sentiment_pipeline
+
+Currently configured to scrape only the top daily posts and comments, to respect rate limits:
+```yaml
+subreddits:
+  - name: artificial
+    post_limit: 100
+    comment_limit: 10
+  - name: LocalLLaMA
+    post_limit: 100
+    comment_limit: 10
+  - name: singularity
+    post_limit: 100
+    comment_limit: 10
+  - name: OpenAI
+    post_limit: 100
+    comment_limit: 10
+```
+
+## Supported Tasks
 This dataset can be used for:
 - Text classification (e.g., sentiment analysis)
 - Topic modeling
@@ -56,16 +73,21 @@ This dataset can be used for:
 
 
 ## Languages
-- English (primary; non-English text may appear depending on subreddit content)
+- English; no language filtering is currently applied to the raw text
 
 ## Dataset Structure
 
 ```
 hblim/top_reddit_posts_daily/
-└── data_raw/
-    ├── 2025-04-15.parquet
-    ├── 2025-04-16.parquet
-    └── …
+├── data_raw/                     # raw data scraped from Reddit
+│   ├── 2025-05-01.parquet
+│   ├── 2025-05-02.parquet
+│   └── …
+├── data_scored/                  # same rows as the raw data, with sentiment scores added
+│   ├── 2025-05-01.parquet
+│   ├── 2025-05-02.parquet
+│   └── …
+└── subreddit_daily_summary.csv   # daily sentiment averages grouped by (day, subreddit)
 ```
 
 ### Data Fields
@@ -96,23 +118,15 @@ Example entry:
 
 
 ## Data Splits
-There are no explicit train/test splits. Data is organized by date under the `data_raw/` folder.
+There are no explicit train/test splits. Data is organized by date under the `data_raw/` or `data_scored/` folder.
 
 ## Dataset Creation
 
-1. **Curation**
 - A Python script (`scrape.py`) runs daily, fetching the top N posts and top M comments per subreddit.
 - Posts are retrieved via PRAW’s `subreddit.top(time_filter="day")`.
 - Data is de-duplicated against the previous day’s `post_id` values.
 - Stored as Parquet under `data_raw/{YYYY-MM-DD}.parquet`.
 
-2. **Source Data**
-   - Reddit’s public API (PRAW), subject to Reddit rate limits and API terms.
-
-3. **Recommendations**
-   - Respect Reddit’s API rate limits and community rules.
-   - Consider throttling or caching for large-scale usage.
-
 ## License
 This dataset is released under the [MIT License](https://opensource.org/licenses/MIT).
 
@@ -177,5 +191,4 @@ print(f"Total records across {len(dfs)} days: {len(df_all)}")
 
 ## Limitations & Ethics
 - **Bias:** Data reflects Reddit’s user base and community norms, which may not generalize.
-- **Rate Limits:** Excessive scraping may violate Reddit API terms.
 - **Privacy:** Only public content is collected; no personally identifiable information is stored.
 
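The updated README describes a three-step nightly pipeline (scrape, classify, summarise) that is easy to picture in code. First, the `subreddits` block added above is plain YAML, so iterating over it takes only a few lines. A minimal sketch, assuming PyYAML and a hypothetical `config.yaml` holding the block shown in the diff:

```python
# Minimal sketch: read the subreddit config shown in the README.
# "config.yaml" is an assumed file name, for illustration only.
import yaml

with open("config.yaml") as f:
    cfg = yaml.safe_load(f)

for sub in cfg["subreddits"]:
    print(sub["name"], sub["post_limit"], sub["comment_limit"])
```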
 
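The scrape step itself is pinned down by the unchanged bullets under "Dataset Creation": top posts via PRAW's `subreddit.top(time_filter="day")`, top comments per post, de-duplication on `post_id`, and a dated Parquet file. Below is a hedged sketch of that flow, not the repository's actual `scrape.py`; the credential variables and column names are assumptions:

```python
# Illustrative sketch of the daily scrape described in the README;
# not the repository's scrape.py. Credentials and column names are assumed.
import datetime
import os

import pandas as pd
import praw

reddit = praw.Reddit(
    client_id=os.environ["REDDIT_CLIENT_ID"],
    client_secret=os.environ["REDDIT_CLIENT_SECRET"],
    user_agent="top_reddit_posts_daily (illustrative)",
)

rows = []
for post in reddit.subreddit("artificial").top(time_filter="day", limit=100):
    rows.append({"post_id": post.id, "subreddit": "artificial", "kind": "post",
                 "text": post.title, "score": post.score,
                 "created_utc": post.created_utc})
    post.comments.replace_more(limit=0)        # drop "load more comments" stubs
    for comment in list(post.comments)[:10]:   # comment_limit from the config
        rows.append({"post_id": post.id, "subreddit": "artificial",
                     "kind": "comment", "text": comment.body,
                     "score": comment.score, "created_utc": comment.created_utc})

today = datetime.date.today().isoformat()      # YYYY-MM-DD, matching data_raw/
pd.DataFrame(rows).to_parquet(f"data_raw/{today}.parquet", index=False)
```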
 
 
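Finally, `subreddit_daily_summary.csv` (described as sentiment averages grouped by day and subreddit) reduces to a single pandas groupby. Column names again follow the assumptions above:

```python
# Sketch of the (day, subreddit) aggregation behind subreddit_daily_summary.csv.
import pandas as pd

df = pd.read_parquet("data_scored/2025-05-01.parquet")
df["day"] = pd.to_datetime(df["created_utc"], unit="s").dt.date
summary = (df.groupby(["day", "subreddit"])["sentiment_score"]
             .mean()
             .reset_index())
summary.to_csv("subreddit_daily_summary.csv", index=False)
```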