Local Datasets: Download Links & Building from Raw
This document lists the datasets under RecBole/dataset/ used in this project: their download links (RecBole does not auto-download them) and how to build each one from raw data using the scripts in this repository.
Summary
| Dataset (folder) | Task | Main file | Raw source |
|---|---|---|---|
| 30music | SBR | `30music.inter` | ReMAP Lab (link below) |
| nowp | SBR | `nowp.inter` | Zenodo |
| amazon_reviews_books | SBR | `amazon_reviews_books.inter` | Amazon Reviews (Books) |
| amazon_reviews_grocery_and_gourmet_food | CF | `amazon_reviews_grocery_and_gourmet_food.inter` | Amazon Reviews (Grocery) |
| movielens | CF | `movielens.inter` | MovieLens (Kaggle/GroupLens) |
| rsc15 | SBR | `rsc15.inter` | RecSys Challenge 2015 (Kaggle) |
| tafeng | NBR | `tafeng_merged.json` | Ta-Feng (Kaggle) |
| dunnhumby | NBR | `dunnhumby_merged.json` | dunnhumby / Kaggle |
| instacart | NBR | `instacart_merged.json` | Instacart / Kaggle |
1. 30music
- Folder: `dataset/30music/`
- Task: Sequential recommendation (SBR)
- Main file: `30music.inter`
- Header: `user_id:token session_id:token item_id:token timestamp:float`
Download
- Source: ReMAP Lab, Politecnico di Milano (30Music listening and playlists dataset, RecSys 2015).
- Link: 30Music dataset (SharePoint)
- Citation: Turrin, R., Quadrana, M., Condorelli, A., Pagano, R., & Cremonesi, P. "30Music listening and playlists dataset", RecSys 2015.
Building from raw (this repo)
- This repo does not contain a conversion script from 30music raw files to `30music.inter`. You need to produce a tab-separated `.inter` file with the header above (one row per user/session/item/timestamp) and save it as `dataset/30music/30music.inter`.
- After the main file exists, forget sets can be created with `dataset/30music/create_forget_sets.py` (requires `30music.inter` and tracks/tags files as in the script).
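Since no converter ships with the repo, the conversion above can be sketched in Python. The raw column names used here (`user_id`, `session_id`, `track_id`, `ts`) are assumptions; rename them to match your actual 30Music export:

```python
import csv

def convert_30music(raw_path: str, out_path: str) -> None:
    """Write a RecBole .inter file from a raw 30Music events CSV.

    Sketch only: the raw column names (user_id, session_id, track_id, ts)
    are assumptions -- adjust them to your actual export.
    """
    header = ["user_id:token", "session_id:token", "item_id:token", "timestamp:float"]
    with open(raw_path, newline="") as src, open(out_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.writer(dst, delimiter="\t")
        writer.writerow(header)
        for row in reader:
            # One output row per (user, session, item, timestamp) event.
            writer.writerow([row["user_id"], row["session_id"],
                             row["track_id"], row["ts"]])

# Example: convert_30music("events.csv", "dataset/30music/30music.inter")
```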
2. NowP (NowPlaying)
- Folder: `dataset/nowp/`
- Task: SBR
- Main file: `nowp.inter`
- Header: `user_id:token session_id:token item_id:token timestamp:float`
Download
- Source: NowP (Zenodo), a music listening dataset.
- Place the CSV that contains session-level data in `dataset/nowp/`. The script expects `sessions_2018.csv` with columns including `user_id`, `session_id`, `timestamp`, and an item identifier (e.g. `musicbrainz_id`).
Building from raw (this repo)
- Download from Zenodo and put `sessions_2018.csv` in `dataset/nowp/`.
- From `dataset/nowp/` run: `python preprocess_nowp.py`
  - Step 1: Reads `sessions_2018.csv`, writes `nowp_temp.inter` with columns `user_id`, `session_id`, `item_id` (`musicbrainz_id`), `timestamp`.
  - Step 2: Sorts and deduplicates (external `sort`) into `nowp.inter`.
  - Step 3: Removes the temp file.
- Output: `dataset/nowp/nowp.inter` in RecBole atomic format.
Amazon Reviews (shared source for Books and Grocery)
Both amazon_reviews_books and amazon_reviews_grocery_and_gourmet_food use the same data family; only the category (and thus the files/folder) differs.
Download (one source, pick your category)
- Source: Amazon product reviews. Provides per-category review data as `*.jsonl.gz` (and optionally `meta_*.jsonl.gz` for metadata).
- Format: Each review JSONL line has `user_id`, `asin` (product id), `rating`, `timestamp`. Scripts ignore `meta_*` files and use only the review `*.jsonl.gz` for the chosen category.
- Put the category's review files in the matching dataset folder: `dataset/amazon_reviews_books/` or `dataset/amazon_reviews_grocery_and_gourmet_food/`.
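Reading the review files needs only the standard library. The sketch below assumes each line carries exactly the four fields listed above and ignores any extra keys:

```python
import gzip
import json

def iter_reviews(path):
    """Yield (user_id, asin, rating, timestamp) tuples from one review
    *.jsonl.gz file, using the field names described above."""
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            if not line.strip():
                continue  # skip blank lines defensively
            r = json.loads(line)
            yield r["user_id"], r["asin"], r["rating"], r["timestamp"]
```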
3. Books
- Folder: `dataset/amazon_reviews_books/`
- Task: SBR (with sessions and rating)
- Main file: `amazon_reviews_books.inter`
- Header: `user_id:token session_id:token item_id:token rating:float timestamp:float`
Download: Use the Books category from the Amazon Reviews source above. Place the Books review `*.jsonl.gz` (and optionally `meta_Books.jsonl.gz` for sensitive-item scripts) in `dataset/amazon_reviews_books/`.
Building from raw (this repo)
Step 1 – Raw .inter (no sessions):
From `dataset/amazon_reviews_books/`:
`python convert_to_inter.py [--output_file amazon_reviews_books_raw.inter] [--files file1.jsonl.gz file2.jsonl.gz ...]`
Reads all non-meta `.jsonl.gz` files, extracts `user_id`, `asin`, `rating`, `timestamp`, writes `amazon_reviews_books_raw.inter` (tab-separated, no type suffixes).

Step 2 – Deduplicate and filter:
Same directory:
`python deduplicate.py --input amazon_reviews_books_raw.inter --output amazon_reviews_books_clean.inter`
Writes a RecBole-style header and deduplicated, sorted data; keeps only users/items with at least 5 interactions (default).

Step 3 – Add sessions:
Same directory:
`python add_sessions.py --input_file amazon_reviews_books_clean.inter --output_file amazon_reviews_books.inter --time_window_hours 1.0`
Groups by user and time window (default 1 hour), assigns `session_id`, writes `amazon_reviews_books.inter` with the full header above.

Optional: sensitive items and forget sets use `identify_sensitive_items.py` and `generate_forget_sets.py` (see the scripts' help and `meta_Books.jsonl.gz` for metadata).
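The time-window idea behind Step 3 can be sketched as follows. This is an illustration, not the actual `add_sessions.py` implementation; the real script's session naming and edge cases may differ:

```python
def assign_sessions(events, window_hours=1.0):
    """events: iterable of (user_id, item_id, timestamp) tuples.

    Sorts per user by time and starts a new session whenever the gap to
    the previous interaction exceeds the window. Returns tuples of
    (user_id, session_id, item_id, timestamp).
    """
    window = window_hours * 3600.0
    out = []
    prev_user, prev_ts, sess_no = None, None, 0
    for user, item, ts in sorted(events, key=lambda e: (e[0], e[2])):
        if user != prev_user:
            sess_no = 0                      # first session of a new user
        elif ts - prev_ts > window:
            sess_no += 1                     # gap too large: new session
        out.append((user, f"{user}_{sess_no}", item, ts))
        prev_user, prev_ts = user, ts
    return out
```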
4. Food
- Folder: `dataset/amazon_reviews_grocery_and_gourmet_food/`
- Task: CF (no sessions)
- Main file: `amazon_reviews_grocery_and_gourmet_food.inter`
- Header: `user_id:token item_id:token rating:float timestamp:float`
Download: Use the Grocery and Gourmet Food category from the Amazon Reviews source above. Place that category's review `*.jsonl.gz` in `dataset/amazon_reviews_grocery_and_gourmet_food/`.
Building from raw (this repo)
Step 1 – Raw .inter:
From `dataset/amazon_reviews_grocery_and_gourmet_food/`:
`python preprocess_data.py --output_file amazon_reviews_grocery_and_gourmet_food_raw.inter [--files ...]`
Reads all non-meta `.jsonl.gz` in the directory (or the files you pass), extracts `user_id`, `asin` (as item_id), `rating`, `timestamp`; writes a tab-separated file with a simple header. Default: all `*.jsonl.gz` except `meta_*`.

Step 2 – Deduplicate:
Same directory:
`python deduplicate.py --input amazon_reviews_grocery_and_gourmet_food_raw.inter --output amazon_reviews_grocery_and_gourmet_food.inter`
Writes the RecBole header and deduplicated/filtered data (min 5 interactions per user/item by default). The result is the main CF `.inter` (no session_id).
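The "at least 5 interactions" filter is often applied iteratively, because dropping a user can push an item below the threshold and vice versa. A k-core-style sketch of that idea (the repo's `deduplicate.py` may implement it differently, e.g. in a single pass):

```python
from collections import Counter

def filter_min_interactions(rows, min_count=5):
    """rows: list of (user_id, item_id, ...) tuples.

    Repeatedly drops rows of users and items with fewer than min_count
    interactions until the set is stable.
    """
    while True:
        users = Counter(r[0] for r in rows)
        items = Counter(r[1] for r in rows)
        kept = [r for r in rows
                if users[r[0]] >= min_count and items[r[1]] >= min_count]
        if len(kept) == len(rows):
            return kept          # nothing dropped this pass: stable
        rows = kept
```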
5. MovieLens
- Folder: `dataset/movielens/`
- Task: CF
- Main file: `movielens.inter`
- Header: `user_id:token item_id:token rating:float timestamp:float`
Download
- Source: MovieLens 20M (Kaggle); use `rating.csv`. Place it in `dataset/movielens/`.
Building from raw (this repo)
Step 1 – Raw .inter:
From `dataset/movielens/`:
`python preprocess_movielens.py --input-file rating.csv --output-file movielens_raw.inter`
Reads `userId`, `movieId`, `rating`, `timestamp`; converts the timestamp to Unix time; writes tab-separated `movielens_raw.inter` (with a simple header line).

Step 2 – Deduplicate and filter:
Same directory:
`python deduplicate.py --input movielens_raw.inter --output movielens.inter`
Writes the RecBole header and deduplicated/filtered data (min 5 interactions per user/item by default). Output: `movielens.inter`.

Optional: sensitive movies and forget sets use `identify_sensitive_movies.py` and `generate_forget_sets.py` in the same folder.
6. RSC15 (RecSys Challenge 2015)
- Folder: `dataset/rsc15/`
- Task: SBR (session-based; session = user_id in some configs)
- Main file: `rsc15.inter`
- Header: `session_id:token item_id:token timestamp:float` (some scripts use `user_id:token` for the session id).
Download
- Source: RSC15 (Kaggle) — RecSys Challenge 2015 / YOOCHOOSE-style session data.
Building from raw (this repo)
Convert the challenge CSV into two RecBole `.inter` files with header `session_id:token item_id:token timestamp:float` (or `user_id:token item_id:token timestamp:float` if your pipeline uses user_id for session):
- `rsc15.train.inter`
- `rsc15.test.inter`

One row per (session, item, timestamp). How you split train/test is up to you (e.g. by time or as in the challenge).
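A minimal sketch of that conversion, assuming the YOOCHOOSE clicks layout (columns `session_id`, ISO timestamp, `item_id`, `category`, with no header row); verify this against your actual download:

```python
import csv
from datetime import datetime

def clicks_to_inter(csv_path, out_path):
    """Convert a YOOCHOOSE-style clicks CSV into a RecBole .inter file.

    Assumed columns (no header): session_id, ISO timestamp, item_id,
    category. Timestamps are converted to Unix seconds.
    """
    with open(csv_path, newline="") as src, open(out_path, "w") as dst:
        dst.write("session_id:token\titem_id:token\ttimestamp:float\n")
        for sess, ts, item, *_ in csv.reader(src):
            # fromisoformat on older Pythons needs "+00:00" instead of "Z"
            unix = datetime.fromisoformat(ts.replace("Z", "+00:00")).timestamp()
            dst.write(f"{sess}\t{item}\t{unix}\n")
```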
Merge train + test:
From `dataset/rsc15/`:
`python merge_train_and_test.py`
Reads `rsc15.train.inter` and `rsc15.test.inter`, concatenates, sorts by first column and timestamp, writes `rsc15.inter`. Note: the script currently writes the header `user_id:token item_id:token timestamp:float`. If your SBR config expects `session_id:token`, either rename the column in the script or ensure your config treats that column as the session id.

For train/val/test splits from raw CSV, see `dataset/rsc15/train_val_test_unlearn_split_v2.py` (reads a CSV with `session_id`, `item_id`, `timestamp` and produces splits).
7. TaFeng (NBR)
- Folder: `dataset/tafeng/`
- Task: Next-basket recommendation (NBR)
- Main file: `tafeng_merged.json` (not `.inter`)
Download
- Source: Ta-Feng (Kaggle); use `tafeng_all_months_merged.csv.zip`.
Building from raw (this repo)
- This repo does not include a script that converts the Ta-Feng CSV to `tafeng_merged.json`. You must build the JSON yourself (or adapt the dunnhumby script logic).
- Required JSON format: one object mapping `user_id` (string) to a list of baskets, where each basket is a list of item IDs (ints or strings) in chronological order. Example: `{ "user_1": [[1, 2, 3], [4, 5]], "user_2": [[10, 20]] }`
- Save as `dataset/tafeng/tafeng_merged.json`. NBR models expect at least 4 baskets per user (configurable). After that, fraud baskets (for poisoning experiments) can be generated with `dataset/create_fraud_baskets_nbr.py`.
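One way to build that JSON, treating all products a customer bought on one date as one basket. The column names (`TRANSACTION_DT`, `CUSTOMER_ID`, `PRODUCT_ID`) are assumptions based on the Kaggle export; verify them against your file:

```python
import csv
import json
from collections import defaultdict

def build_tafeng_json(csv_path, out_path, min_baskets=4):
    """Build the user -> list-of-baskets JSON from the merged Ta-Feng CSV.

    Dates are sorted lexicographically, so an ISO-style date format
    (YYYY-MM-DD) is assumed; convert TRANSACTION_DT first if needed.
    """
    baskets = defaultdict(lambda: defaultdict(list))  # user -> date -> items
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            baskets[row["CUSTOMER_ID"]][row["TRANSACTION_DT"]].append(row["PRODUCT_ID"])
    merged = {
        user: [by_date[d] for d in sorted(by_date)]   # chronological baskets
        for user, by_date in baskets.items()
        if len(by_date) >= min_baskets                # NBR minimum baskets
    }
    with open(out_path, "w") as f:
        json.dump(merged, f)
```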
8. Dunnhumby (NBR)
- Folder: `dataset/dunnhumby/`
- Task: NBR
- Main file: `dunnhumby_merged.json`
Download
- Source: dunnhumby source files or dunnhumby The Complete Journey (Kaggle); use `transaction_data.csv`. Place it in `dataset/dunnhumby/`.
Building from raw (this repo)
- From `dataset/dunnhumby/`: `python create_dunnhumby_merged_v2.py`
  - Reads `transaction_data.csv`.
  - Mapping: `household_key` → user_id, `BASKET_ID` → basket, `PRODUCT_ID` → item_id, `TRANS_TIME` → ordering.
  - Groups by household, then by basket; sorts baskets by time; outputs a list of product lists per user.
  - Keeps only users with at least 4 baskets.
- The script writes `dunnhumby_merged_v2.json`. To use it as the main NBR file, either rename/copy it to `dunnhumby_merged.json` or point the config to `dunnhumby_merged_v2.json` if supported.
- Fraud baskets: `dataset/create_fraud_baskets_nbr.py --dataset dunnhumby` (reads `dunnhumby_merged.json` or the path you use).
9. Instacart (NBR)
- Folder: `dataset/instacart/`
- Task: NBR
- Main file: `instacart_merged.json`
Download
- Source: Instacart Market Basket Analysis (Kaggle). If the official data is unavailable: instacart-orders (GitHub).
Building from raw (this repo)
- This repo does not include a script that converts the Instacart CSV tables to `instacart_merged.json`. Build the JSON as for tafeng/dunnhumby.
- Required JSON format: same as tafeng: `user_id` (string) → list of baskets, each basket a list of item IDs, chronological. Save as `dataset/instacart/instacart_merged.json`. The minimum number of baskets per user (e.g. 4) is enforced by the NBR data loader. Then you can run `dataset/create_fraud_baskets_nbr.py --dataset instacart`.
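For Instacart the baskets live in two tables, so a sketch needs a join: `orders.csv` maps orders to users and order numbers, while an order-products table lists the items per order. File and column names below follow the Kaggle release but should be treated as assumptions:

```python
import csv
import json
from collections import defaultdict

def build_instacart_json(orders_csv, order_products_csv, out_path, min_baskets=4):
    """Join orders (order_id, user_id, order_number) with order-products
    (order_id, product_id) into the user -> list-of-baskets JSON."""
    order_meta = {}  # order_id -> (user_id, order_number)
    with open(orders_csv, newline="") as f:
        for row in csv.DictReader(f):
            order_meta[row["order_id"]] = (row["user_id"], int(row["order_number"]))
    per_user = defaultdict(dict)  # user -> order_number -> basket
    with open(order_products_csv, newline="") as f:
        for row in csv.DictReader(f):
            meta = order_meta.get(row["order_id"])
            if meta:
                user, num = meta
                per_user[user].setdefault(num, []).append(row["product_id"])
    merged = {
        user: [baskets[n] for n in sorted(baskets)]   # by order_number
        for user, baskets in per_user.items()
        if len(baskets) >= min_baskets
    }
    with open(out_path, "w") as f:
        json.dump(merged, f)
```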
RecBole atomic format (quick reference)
- Separator: Tab between columns.
- Header: First line = column names with type suffix, e.g. `user_id:token`, `item_id:token`, `rating:float`, `timestamp:float`, `session_id:token`.
- File naming: Main interaction file is `{dataset_name}.inter` in `dataset/{dataset_name}/`.
- NBR: Uses `{dataset_name}_merged.json` (user → list of baskets), not `.inter`, when using `NextBasketDataset`.
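For example, the first lines of a minimal CF `.inter` file look like this (values are illustrative, columns are tab-separated):

```
user_id:token	item_id:token	rating:float	timestamp:float
u1	i1	5.0	1375228800.0
u1	i2	3.0	1375232400.0
```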
For more on RecBole data format, see the official docs (e.g. atomic files and dataset list).