---
license: mit
dataset_info:
  features:
    - name: subreddit
      dtype: string
    - name: created_at
      dtype: timestamp[ns, tz=US/Central]
    - name: retrieved_at
      dtype: timestamp[ns, tz=US/Central]
    - name: type
      dtype: string
    - name: text
      dtype: string
    - name: score
      dtype: int64
    - name: post_id
      dtype: string
    - name: parent_id
      dtype: string
---
# Top Reddit Posts Daily

## Dataset Summary
A continuously updated snapshot of public Reddit discourse on AI news. Each night a GitHub Actions cron job:

- Scrapes new submissions from a configurable list of subreddits (→ `data_raw/`)
- Classifies each post with a DistilBERT sentiment model served on Replicate (→ `data_scored/`)
- Summarises daily trends for lightweight front-end consumption (→ `daily_summary/`; see the sketch below)

The result is an easy-to-query, time-stamped record of Reddit sentiment that can be used for NLP research, social-media trend analysis, or as a teaching dataset for end-to-end MLOps.
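The summarisation step can be approximated in a few lines of pandas. This is a minimal sketch, assuming a scored shard with the schema below plus a `sentiment` column added by the scoring step (the exact column name in `data_scored/` is an assumption; see the linked repository for the authoritative implementation):

```python
import pandas as pd

# Load one scored shard; the `sentiment` column name is an assumption —
# check the actual schema of data_scored/ before relying on it.
df = pd.read_parquet("data_scored/2025-05-01.parquet")

# Average sentiment per (day, subreddit), mirroring subreddit_daily_summary.csv
df["day"] = df["created_at"].dt.date
summary = (
    df.groupby(["day", "subreddit"])["sentiment"]
      .agg(["mean", "count"])
      .reset_index()
)
summary.to_csv("subreddit_daily_summary.csv", index=False)
```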
Source code: https://github.com/halstonblim/reddit_sentiment_pipeline

The pipeline is currently configured to scrape only the top daily posts and comments in order to respect rate limits:
```yaml
subreddits:
  - name: artificial
    post_limit: 100
    comment_limit: 10
  - name: LocalLLaMA
    post_limit: 100
    comment_limit: 10
  - name: singularity
    post_limit: 100
    comment_limit: 10
  - name: OpenAI
    post_limit: 100
    comment_limit: 10
```
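A sketch of how such a config might drive the scraper, assuming it is stored as `config.yaml` (the file name and loader are assumptions; the repository above has the actual implementation):

```python
import yaml  # pip install pyyaml

# Load the scrape configuration (file name is an assumption)
with open("config.yaml") as f:
    config = yaml.safe_load(f)

for sub in config["subreddits"]:
    print(f"r/{sub['name']}: top {sub['post_limit']} posts, "
          f"up to {sub['comment_limit']} comments")
```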
## Supported Tasks
This dataset can be used for:
- Text classification (e.g., sentiment analysis; see the example after this list)
- Topic modeling
- Language generation and summarization
- Time‑series analysis of Reddit activity
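For instance, a lightweight sentiment pass over the `text` field, using an off-the-shelf DistilBERT checkpoint from the Hugging Face Hub as a stand-in for the pipeline's actual Replicate-hosted model:

```python
from transformers import pipeline
import pandas as pd

# Off-the-shelf DistilBERT sentiment model (a stand-in, not necessarily
# the exact model used by the scoring pipeline)
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

df = pd.read_parquet("data_raw/2025-04-18.parquet")
# Truncate long posts to the model's maximum input length
results = classifier(df["text"].head(5).tolist(), truncation=True)
for text, res in zip(df["text"].head(5), results):
    print(res["label"], round(res["score"], 3), text[:60])
```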
## Languages

- English; no language filtering is currently applied to the raw text.
## Dataset Structure

```
hblim/top_reddit_posts_daily/
├── data_raw/                      # raw data scraped from Reddit
│   ├── 2025-05-01.parquet
│   ├── 2025-05-02.parquet
│   └── …
├── data_scored/                   # same rows as data_raw, with sentiment scores added
│   ├── 2025-05-01.parquet
│   ├── 2025-05-02.parquet
│   └── …
└── subreddit_daily_summary.csv    # daily sentiment averages grouped by (day, subreddit)
```
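The summary table can be pulled directly from the Hub, e.g.:

```python
from huggingface_hub import hf_hub_download
import pandas as pd

# Download the daily summary (one row per (day, subreddit), as described above)
path = hf_hub_download(
    repo_id="hblim/top_reddit_posts_daily",
    filename="subreddit_daily_summary.csv",
    repo_type="dataset",
)
print(pd.read_csv(path).tail())
```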
## Data Fields

| Name | Type | Description |
|---|---|---|
| `subreddit` | string | Name of the subreddit (e.g. `"GooglePixel"`) |
| `created_at` | datetime | When the post/comment was originally created (stored in the US/Central timezone, per the schema above) |
| `retrieved_at` | datetime | Local-timezone (US/Central) timestamp recording when the data was scraped |
| `type` | string | `"post"` or `"comment"` |
| `text` | string | For posts: `title + "\n\n" + selftext`; for comments: the comment body |
| `score` | int | Reddit score (upvotes − downvotes) |
| `post_id` | string | Unique Reddit ID for the post or comment |
| `parent_id` | string | For comments: the parent comment/post ID; null for top-level posts |
Example entry:
| Field | Value |
|---|---|
| subreddit | apple |
| created_at | 2025-04-17 19:59:44-05:00 |
| retrieved_at | 2025-04-18 12:46:10.631577-05:00 |
| type | post |
| text | Apple wanted people to vibe code Vision Pro apps with Siri |
| score | 427 |
| post_id | 1k1sn9w |
| parent_id | None |
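Because top-level comments carry their post's ID in `parent_id`, a day's comments can be re-attached to their posts with a simple merge. A sketch; the merge may need adjusting if `parent_id` is stored with Reddit's `t1_`/`t3_` fullname prefixes:

```python
import pandas as pd

df = pd.read_parquet("data_raw/2025-04-18.parquet")
posts = df[df["type"] == "post"]
comments = df[df["type"] == "comment"]

# If parent_id carries Reddit's "t3_" fullname prefix, strip it first:
# comments = comments.assign(parent_id=comments["parent_id"].str.removeprefix("t3_"))
threads = comments.merge(
    posts[["post_id", "text"]].rename(columns={"text": "post_text"}),
    left_on="parent_id",
    right_on="post_id",
    suffixes=("", "_post"),
)
print(threads[["post_text", "text", "score"]].head())
```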
## Data Splits

There are no explicit train/test splits. Data is organized by date under the `data_raw/` or `data_scored/` folder.
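If a split is needed, the date-sharded layout makes a chronological hold-out straightforward, e.g. training on all but the most recent day:

```python
from huggingface_hub import HfApi
import pandas as pd

api = HfApi()
files = sorted(
    f for f in api.list_repo_files("hblim/top_reddit_posts_daily", repo_type="dataset")
    if f.startswith("data_raw/") and f.endswith(".parquet")
)

# Hold out the most recent day as a test set
train_files, test_file = files[:-1], files[-1]
test_df = pd.read_parquet(
    api.hf_hub_download("hblim/top_reddit_posts_daily", test_file, repo_type="dataset")
)
print(f"train shards: {len(train_files)}, test rows: {len(test_df)}")
```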
## Dataset Creation

- A Python script (`scrape.py`) runs daily, fetching the top N posts and top M comments per subreddit (a condensed sketch follows this list).
- Posts are retrieved via PRAW's `subreddit.top(time_filter="day")`.
- Data is de-duplicated against the previous day's `post_id` values.
- Shards are stored as Parquet under `data_raw/{YYYY-MM-DD}.parquet`.
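A condensed sketch of that loop, assuming PRAW credentials are configured (illustrative only; `scrape.py` in the linked repository is the authoritative implementation):

```python
import praw
import pandas as pd

# Credentials are placeholders; PRAW can also read them from praw.ini
reddit = praw.Reddit(
    client_id="...", client_secret="...", user_agent="reddit_sentiment_pipeline"
)

rows = []
for submission in reddit.subreddit("LocalLLaMA").top(time_filter="day", limit=100):
    rows.append({
        "subreddit": "LocalLLaMA",
        "type": "post",
        "text": f"{submission.title}\n\n{submission.selftext}",
        "score": submission.score,
        "post_id": submission.id,
        "parent_id": None,
        # created_at / retrieved_at columns omitted for brevity
    })

df = pd.DataFrame(rows)

# De-duplicate against the previous day's shard before writing
prev = pd.read_parquet("data_raw/2025-04-17.parquet")
df = df[~df["post_id"].isin(prev["post_id"])]
df.to_parquet("data_raw/2025-04-18.parquet", index=False)
```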
## License
This dataset is released under the MIT License.
## Citation

If you use this dataset, please cite it as:

```bibtex
@misc{lim_top_reddit_posts_daily_2025,
  title        = {Top Reddit Posts Daily: Scraped Daily Top Posts and Comments from Subreddits},
  author       = {Halston Lim},
  year         = {2025},
  publisher    = {Hugging Face Datasets},
  howpublished = {\url{https://huggingface.co/datasets/hblim/top_reddit_posts_daily}}
}
```
## Usage Example

### Example A: Download and load a single day via the HF Hub

```python
from huggingface_hub import HfApi
import pandas as pd

api = HfApi()
repo_id = "hblim/top_reddit_posts_daily"
date_str = "2025-04-18"

today_path = api.hf_hub_download(
    repo_id=repo_id,
    filename=f"data_raw/{date_str}.parquet",
    repo_type="dataset",
)
df_today = pd.read_parquet(today_path)

print(f"Records for {date_str}:")
print(df_today.head())
```
### Example B: List, download, and concatenate all days

```python
from huggingface_hub import HfApi
import pandas as pd

api = HfApi()
repo_id = "hblim/top_reddit_posts_daily"

# 1. List all parquet files in the dataset repo
all_files = api.list_repo_files(repo_id, repo_type="dataset")
parquet_files = sorted(
    f for f in all_files if f.startswith("data_raw/") and f.endswith(".parquet")
)

# 2. Download each shard and load with pandas
dfs = []
for shard in parquet_files:
    local_path = api.hf_hub_download(repo_id=repo_id, filename=shard, repo_type="dataset")
    dfs.append(pd.read_parquet(local_path))

# 3. Concatenate into one DataFrame
df_all = pd.concat(dfs, ignore_index=True)
print(f"Total records across {len(dfs)} days: {len(df_all)}")
```
## Limitations & Ethics
- Bias: Data reflects Reddit’s user base and community norms, which may not generalize.
- Privacy: Only public content is collected; no personally identifiable information is stored.