---
pretty_name: Amazon Products 2025 (Top Sellers, Curated)
tags:
- ecommerce
- product
- amazon
- metadata
- text
- regression
- classification
- recommendation
- ranking
license: mit
task_categories:
- text-classification
- text-regression
- recommendation
task_ids:
- topic-classification
- sentiment-classification
- rating-prediction
- item-ranking
language:
- en
multilinguality: monolingual
source_datasets: original
size_categories:
- n<1K
dataset_name: aslan-ng/amazon_products_2025
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
## Dataset Summary

A compact, hand-curated dataset of Amazon top-selling products (2025).
Each record contains a product `title`, `description`, `average_rating`, `rating_number`, `main_category`, and a derived `product_quality_score`: a Bayesian-adjusted rating that estimates true product quality.
This dataset is designed for rapid prototyping of:
- product topic classification (e.g., infer `main_category` from text)
- rating prediction / quality regression
- ranking / recommendation experiments
- lightweight product text understanding benchmarks
Note: The dataset is small (~500 samples) for clarity, reproducibility, and teaching use.
## Supported Tasks

- Text Classification (`title`, `description` → `main_category`)
- Text Regression (`title`, `description` → `product_quality_score`)
- Item Ranking (based on Bayesian-adjusted product quality)
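For the ranking task, a minimal pure-Python sketch is shown below. The records and scores are illustrative only (shaped like the instance under Data Instances); in practice they would be loaded from the dataset itself:

```python
# Illustrative records using the dataset's fields (values are made up here).
records = [
    {"title": "USB-C Charging Cable", "product_quality_score": 0.874},
    {"title": "Wireless Noise-Cancelling Headphones", "product_quality_score": 0.918},
    {"title": "Portable Bluetooth Speaker", "product_quality_score": 0.902},
]

# Rank items by the precomputed Bayesian-adjusted quality score, best first.
ranked = sorted(records, key=lambda r: r["product_quality_score"], reverse=True)
print(ranked[0]["title"])  # → Wireless Noise-Cancelling Headphones
```

Because `product_quality_score` already folds review volume into the rating, a plain sort gives a ranking that is robust to low-volume outliers.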
## Languages

- English (`en`)
## Dataset Structure

### Data Instances

```json
{
  "title": "Wireless Noise-Cancelling Headphones",
  "description": "Over-ear design, 30 hours of playtime, adaptive sound control.",
  "average_rating": 4.7,
  "rating_number": 20431,
  "main_category": "Electronics",
  "product_quality_score": 0.918
}
```
### Data Fields

| Field | Type | Description |
|---|---|---|
| `title` | string | Product title as listed by the marketplace. |
| `description` | string | Short marketing text or summary of product features. |
| `average_rating` | float64 | Average user rating (0–5). |
| `rating_number` | int64 | Total number of user ratings. |
| `main_category` | string | Coarse product category label. |
| `product_quality_score` | float64 | Bayesian-adjusted product score, normalized to [0, 1], combining `average_rating` and `rating_number`. |
## Calculation of `product_quality_score`

`product_quality_score` is computed using a Bayesian average formula to reduce bias toward products with few ratings.
The score combines each product’s observed average (R) and the global mean (C), weighted by the number of ratings (v) and a prior constant (m):

score = (v / (v + m)) × R + (m / (v + m)) × C
where:
- R = product’s average rating
- v = number of ratings
- C = mean rating across all products
- m = minimum votes threshold (prior weight; empirically chosen, e.g. 100)
The resulting score is then normalized to [0, 1] for consistency across tasks.
This approach favors well-rated products that also have sufficient review volume, mitigating small-sample noise.
### Example Python Implementation

```python
def bayesian_score(R, v, C, m=100):
    """
    Compute the Bayesian-adjusted product quality score.

    Args:
        R (float): product average rating
        v (int): number of ratings
        C (float): mean rating across all products
        m (int): minimum votes threshold (prior weight)

    Returns:
        float: normalized Bayesian quality score in [0, 1]
    """
    bayes = (v / (v + m)) * R + (m / (v + m)) * C
    return bayes / 5.0  # normalize to [0, 1], assuming a 0–5 rating scale
```
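As a quick sanity check of the shrinkage behavior (using an assumed global mean of C = 4.3 for illustration; the true value must be computed from the full dataset), the same observed average earns a lower score when review volume is small:

```python
def bayesian_score(R, v, C, m=100):
    # Repeated here so the snippet runs standalone; identical to the
    # implementation above.
    return ((v / (v + m)) * R + (m / (v + m)) * C) / 5.0

C = 4.3  # assumed global mean rating (illustrative value)

high_volume = bayesian_score(4.7, 20431, C)  # many ratings: stays near 4.7 / 5
low_volume = bayesian_score(4.7, 5, C)       # few ratings: pulled toward C / 5
```

Here `high_volume` stays close to 0.94 (≈ 4.7 / 5), while `low_volume` is pulled down toward 0.86 (= C / 5), which is exactly the small-sample correction the formula is meant to provide.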
## Limitations
- Small scale; not representative of the entire Amazon catalog.
- Single-snapshot dataset (no time evolution).
- Derived Bayesian prior constant (m) is approximate.
- Some descriptions truncated to fit Hugging Face tokenization limits.
## Maintainers
- Maintainer: aslan-ng
- Contributions: PRs welcome for new categories, Bayesian hyperparameter tuning, or evaluation splits.