
Local Datasets: Download Links & Building from Raw

This document lists the datasets under RecBole/dataset/ used in this project, with download links (RecBole auto-download is not used) and instructions for building each dataset from raw data using the scripts in this repository.


Summary

| Dataset (folder) | Task | Main file | Raw source |
|---|---|---|---|
| 30music | SBR | 30music.inter | ReMAP Lab (link below) |
| nowp | SBR | nowp.inter | Zenodo |
| amazon_reviews_books | SBR | amazon_reviews_books.inter | Amazon Reviews (Books) |
| amazon_reviews_grocery_and_gourmet_food | CF | amazon_reviews_grocery_and_gourmet_food.inter | Amazon Reviews (Grocery) |
| movielens | CF | movielens.inter | MovieLens (Kaggle/GroupLens) |
| rsc15 | SBR | rsc15.inter | RecSys Challenge 2015 (Kaggle) |
| tafeng | NBR | tafeng_merged.json | Ta-Feng (Kaggle) |
| dunnhumby | NBR | dunnhumby_merged.json | dunnhumby / Kaggle |
| instacart | NBR | instacart_merged.json | Instacart / Kaggle |

1. 30music

  • Folder: dataset/30music/
  • Task: Sequential recommendation (SBR)
  • Main file: 30music.inter
    Header: user_id:token session_id:token item_id:token timestamp:float

Download

  • Source: ReMAP Lab, Politecnico di Milano (30Music listening and playlists dataset, RecSys 2015).
  • Link: 30Music dataset (SharePoint)
  • Citation: Turrin, R., Quadrana, M., Condorelli, A., Pagano, R., & Cremonesi, P. "30Music listening and playlists dataset", RecSys 2015.

Building from raw (this repo)

  • This repo does not contain a conversion script from 30music raw files to 30music.inter. You need to produce a tab-separated .inter with the header above (one row per user/session/item/timestamp) and save it as dataset/30music/30music.inter.
  • After the main file exists, forget sets can be created with:
    • dataset/30music/create_forget_sets.py (requires 30music.inter and tracks/tags files as in the script).
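
A minimal sketch of that manual conversion, assuming the raw export provides user, session, item, and timestamp fields (the actual 30Music dump uses its own schema, so the column names here are placeholders):

```python
import csv

# Demo raw events (stand-ins for the real 30Music export; field names are assumptions).
raw_rows = [
    {"user_id": "u1", "session_id": "s1", "item_id": "i9", "timestamp": "1420000100"},
    {"user_id": "u1", "session_id": "s1", "item_id": "i3", "timestamp": "1420000050"},
    {"user_id": "u2", "session_id": "s2", "item_id": "i7", "timestamp": "1420000000"},
]

header = ["user_id:token", "session_id:token", "item_id:token", "timestamp:float"]

# Sort by user, then time, and write the tab-separated .inter file.
rows = sorted(raw_rows, key=lambda r: (r["user_id"], float(r["timestamp"])))
with open("30music.inter", "w", newline="") as f:
    w = csv.writer(f, delimiter="\t")
    w.writerow(header)
    for r in rows:
        w.writerow([r["user_id"], r["session_id"], r["item_id"], float(r["timestamp"])])
```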

2. NowP (NowPlaying)

  • Folder: dataset/nowp/
  • Task: SBR
  • Main file: nowp.inter
    Header: user_id:token session_id:token item_id:token timestamp:float

Download

  • Source: NowP (Zenodo) — music listening dataset.
  • Place the CSV that contains session-level data in dataset/nowp/. The script expects sessions_2018.csv with columns including user_id, session_id, timestamp, and an item identifier (e.g. musicbrainz_id).

Building from raw (this repo)

  1. Download from Zenodo and put sessions_2018.csv in dataset/nowp/.
  2. From dataset/nowp/ run:
    python preprocess_nowp.py
    
    • Step 1: Reads sessions_2018.csv, writes nowp_temp.inter with columns user_id, session_id, item_id (musicbrainz_id), timestamp.
    • Step 2: Sorts and deduplicates (external sort) into nowp.inter.
    • Step 3: Removes the temp file.
  3. Output: dataset/nowp/nowp.inter in RecBole atomic format.

Amazon Reviews (shared source for Books and Grocery)

Both amazon_reviews_books and amazon_reviews_grocery_and_gourmet_food use the same data family; only the category (and thus the files/folder) differs.

Download (one source, pick your category)

  • Source: Amazon product reviews. Provides per-category review data as *.jsonl.gz (and optionally meta_*.jsonl.gz for metadata).
  • Format: Each review JSONL line has user_id, asin (product id), rating, timestamp. Scripts ignore meta_* files and use only the review *.jsonl.gz for the chosen category.
  • Put the category’s review files in the matching dataset folder: dataset/amazon_reviews_books/ or dataset/amazon_reviews_grocery_and_gourmet_food/.
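
A sketch of reading such a review file, assuming the four documented fields per JSONL line (a tiny demo file is written first so the snippet is self-contained):

```python
import gzip, json

# Demo: two reviews in the documented JSONL shape (field names from this README).
reviews = [
    {"user_id": "A1", "asin": "B000001", "rating": 5.0, "timestamp": 1600000000},
    {"user_id": "A2", "asin": "B000002", "rating": 3.0, "timestamp": 1600000500},
]
with gzip.open("demo_reviews.jsonl.gz", "wt") as f:
    for r in reviews:
        f.write(json.dumps(r) + "\n")

# Each line is one JSON object; keep only the four fields the conversion scripts use.
records = []
with gzip.open("demo_reviews.jsonl.gz", "rt") as f:
    for line in f:
        obj = json.loads(line)
        records.append((obj["user_id"], obj["asin"], float(obj["rating"]), int(obj["timestamp"])))
```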

3. Books

  • Folder: dataset/amazon_reviews_books/
  • Task: SBR (with sessions and rating)
  • Main file: amazon_reviews_books.inter
    Header: user_id:token session_id:token item_id:token rating:float timestamp:float

Download: Use the Books category from the Amazon Reviews source above. Place the Books review *.jsonl.gz (and optionally meta_Books.jsonl.gz for sensitive-item scripts) in dataset/amazon_reviews_books/.

Building from raw (this repo)

  1. Step 1 – Raw .inter (no sessions):
    From dataset/amazon_reviews_books/:

    python convert_to_inter.py [--output_file amazon_reviews_books_raw.inter] [--files file1.jsonl.gz file2.jsonl.gz ...]
    

    Reads all non-meta .jsonl.gz files, extracts user_id, asin, rating, timestamp, writes amazon_reviews_books_raw.inter (tab-separated, no type suffixes).

  2. Step 2 – Deduplicate and filter:
    Same directory:

    python deduplicate.py --input amazon_reviews_books_raw.inter --output amazon_reviews_books_clean.inter
    

    Writes RecBole-style header and deduplicated, sorted data; keeps only users/items with at least 5 interactions (default).

  3. Step 3 – Add sessions:
    Same directory:

    python add_sessions.py --input_file amazon_reviews_books_clean.inter --output_file amazon_reviews_books.inter --time_window_hours 1.0
    

    Groups by user and time window (default 1 hour), assigns session_id, writes amazon_reviews_books.inter with the full header above.

  4. Optional: sensitive items and forget sets use identify_sensitive_items.py and generate_forget_sets.py (see scripts’ help and meta_Books.jsonl.gz for metadata).
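
One plausible gap-based sessionization matching Step 3's description (the actual add_sessions.py may differ in details, e.g. fixed windows instead of inactivity gaps):

```python
WINDOW_S = 3600  # 1 hour, matching --time_window_hours 1.0

# (user_id, item_id, timestamp) rows, assumed already sorted by user and time.
inter = [
    ("u1", "i1", 1000), ("u1", "i2", 1500),  # 500 s gap -> same session
    ("u1", "i3", 9000),                      # 7500 s gap -> new session
    ("u2", "i4", 2000),
]

out, prev_ts, cur_sid, next_sid = [], {}, {}, 0
for user, item, ts in inter:
    # Open a new session on first sight of a user, or when the gap exceeds the window.
    if user not in cur_sid or ts - prev_ts[user] > WINDOW_S:
        cur_sid[user] = next_sid
        next_sid += 1
    prev_ts[user] = ts
    out.append((user, cur_sid[user], item, ts))
```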


4. Grocery and Gourmet Food

  • Folder: dataset/amazon_reviews_grocery_and_gourmet_food/
  • Task: CF (no sessions)
  • Main file: amazon_reviews_grocery_and_gourmet_food.inter
    Header: user_id:token item_id:token rating:float timestamp:float

Download: Use the Grocery and Gourmet Food category from the Amazon Reviews source above. Place that category’s review *.jsonl.gz in dataset/amazon_reviews_grocery_and_gourmet_food/.

Building from raw (this repo)

  1. Step 1 – Raw .inter:
    From dataset/amazon_reviews_grocery_and_gourmet_food/:

    python preprocess_data.py --output_file amazon_reviews_grocery_and_gourmet_food_raw.inter [--files ...]
    

    Reads all non-meta .jsonl.gz in the directory (or the files you pass), extracts user_id, asin (as item_id), rating, timestamp; writes a tab-separated file with a simple header. Default: all *.jsonl.gz except meta_*.

  2. Step 2 – Deduplicate:
    Same directory:

    python deduplicate.py --input amazon_reviews_grocery_and_gourmet_food_raw.inter --output amazon_reviews_grocery_and_gourmet_food.inter
    

    Writes the RecBole header and deduplicated/filtered data (min 5 interactions per user/item by default). Result is the main CF .inter (no session_id).
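
The deduplicate-and-filter step could be sketched as follows (a small in-memory version; whether the real deduplicate.py iterates the filter to a stable k-core is an assumption):

```python
from collections import Counter

# (user, item, rating, timestamp) rows, including one exact duplicate.
rows = [
    ("u1", "i1", 5.0, 100), ("u1", "i1", 5.0, 100),
    ("u1", "i2", 4.0, 200),
    ("u2", "i1", 3.0, 300), ("u2", "i2", 2.0, 400),
]
MIN_INTER = 2  # the scripts default to 5; 2 keeps this demo small

# Deduplicate, then sort by user and timestamp.
deduped = sorted(set(rows), key=lambda r: (r[0], r[3]))

# Drop users/items below the threshold; repeat until counts stabilize.
while True:
    u_cnt = Counter(r[0] for r in deduped)
    i_cnt = Counter(r[1] for r in deduped)
    kept = [r for r in deduped if u_cnt[r[0]] >= MIN_INTER and i_cnt[r[1]] >= MIN_INTER]
    if len(kept) == len(deduped):
        break
    deduped = kept
```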


5. MovieLens

  • Folder: dataset/movielens/
  • Task: CF
  • Main file: movielens.inter
    Header: user_id:token item_id:token rating:float timestamp:float

Download

  • Source: MovieLens ratings (Kaggle/GroupLens). Place the ratings CSV in dataset/movielens/ as rating.csv.

Building from raw (this repo)

  1. Step 1 – Raw .inter:
    From dataset/movielens/:

    python preprocess_movielens.py --input-file rating.csv --output-file movielens_raw.inter
    

    Reads userId, movieId, rating, timestamp; converts timestamp to Unix; writes tab-separated movielens_raw.inter (with a simple header line).

  2. Step 2 – Deduplicate and filter:
    Same directory:

    python deduplicate.py --input movielens_raw.inter --output movielens.inter
    

    Writes RecBole header and deduplicated/filtered data (min 5 interactions per user/item by default). Output: movielens.inter.

  3. Optional: sensitive movies and forget sets use identify_sensitive_movies.py and generate_forget_sets.py in the same folder.
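
Step 1's timestamp conversion might look like this, assuming the Kaggle CSV stores timestamps as datetime strings (GroupLens releases already ship Unix seconds, in which case the conversion is a no-op):

```python
from datetime import datetime, timezone

def to_unix(ts: str) -> int:
    # Parse "YYYY-MM-DD HH:MM:SS" as UTC and return Unix seconds.
    dt = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S").replace(tzinfo=timezone.utc)
    return int(dt.timestamp())

# One rating.csv row in the assumed Kaggle layout.
row = {"userId": "1", "movieId": "2", "rating": "3.5", "timestamp": "2005-04-02 23:53:47"}
converted = (row["userId"], row["movieId"], float(row["rating"]), to_unix(row["timestamp"]))
```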


6. RSC15 (RecSys Challenge 2015)

  • Folder: dataset/rsc15/
  • Task: SBR (session-based; some configs use user_id as the session identifier)
  • Main file: rsc15.inter
    Header: session_id:token item_id:token timestamp:float (some scripts write user_id:token for the session id column).

Download

  • Source: RSC15 (Kaggle) — RecSys Challenge 2015 / YOOCHOOSE-style session data.

Building from raw (this repo)

  1. Convert the challenge CSV into two RecBole .inter files with header session_id:token item_id:token timestamp:float (or user_id:token item_id:token timestamp:float if your pipeline uses user_id for session):

    • rsc15.train.inter
    • rsc15.test.inter
      One row per (session, item, timestamp). How you split train/test is up to you (e.g. by time or as in the challenge).
  2. Merge train + test:
    From dataset/rsc15/:

    python merge_train_and_test.py
    

    Reads rsc15.train.inter and rsc15.test.inter, concatenates, sorts by first column and timestamp, writes rsc15.inter. Note: The script currently writes header user_id:token item_id:token timestamp:float. If your SBR config expects session_id:token, either rename the column in the script or ensure your config treats that column as session id.

  3. For train/val/test splits from raw CSV, see dataset/rsc15/train_val_test_unlearn_split_v2.py (reads CSV with session_id, item_id, timestamp and produces splits).
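
The merge in Step 2 boils down to concatenate, sort, write; a self-contained sketch with toy rows:

```python
import csv

header = ["session_id:token", "item_id:token", "timestamp:float"]
train_rows = [("s2", "i1", 100.0), ("s1", "i2", 50.0)]
test_rows = [("s1", "i3", 60.0)]

# Concatenate train + test, sort by the first column and timestamp, write rsc15.inter.
merged = sorted(train_rows + test_rows, key=lambda r: (r[0], r[2]))
with open("rsc15.inter", "w", newline="") as f:
    w = csv.writer(f, delimiter="\t")
    w.writerow(header)
    w.writerows(merged)
```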


7. TaFeng (NBR)

  • Folder: dataset/tafeng/
  • Task: Next-basket recommendation (NBR)
  • Main file: tafeng_merged.json (not .inter)

Download

  • Source: Ta-Feng grocery transactions dataset (Kaggle).

Building from raw (this repo)

  • This repo does not include a script that converts Ta-Feng CSV → tafeng_merged.json. You must build the JSON yourself (or adapt the dunnhumby script logic).
  • Required JSON format: One object: user_id (string) → list of baskets. Each basket = list of item IDs (ints or strings), in chronological order. Example:
    { "user_1": [[1, 2, 3], [4, 5]], "user_2": [[10, 20]] }
    
  • Save as dataset/tafeng/tafeng_merged.json. NBR models expect at least 4 baskets per user (configurable). After that, fraud baskets (for poisoning experiments) can be generated with dataset/create_fraud_baskets_nbr.py.
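
A hypothetical basket builder illustrating the required format (mapping the Ta-Feng CSV columns onto user/basket/item/time is up to you; the 4-basket minimum mirrors the note above):

```python
import json
from collections import defaultdict

# Demo transactions: (user, basket_id, item, time); adapt to the actual CSV columns.
txns = [
    ("u1", "b1", 1, 10), ("u1", "b1", 2, 10),
    ("u1", "b2", 4, 20), ("u1", "b3", 5, 30), ("u1", "b4", 6, 40),
    ("u2", "b5", 10, 5),
]

baskets = defaultdict(dict)  # user -> {basket_id: (time, [items])}
for user, bid, item, t in txns:
    baskets[user].setdefault(bid, (t, []))[1].append(item)

# Sort each user's baskets chronologically, then keep users with >= 4 baskets.
merged = {u: [items for _, items in sorted(bs.values())] for u, bs in baskets.items()}
merged = {u: bl for u, bl in merged.items() if len(bl) >= 4}

with open("tafeng_merged.json", "w") as f:
    json.dump(merged, f)
```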

8. Dunnhumby (NBR)

  • Folder: dataset/dunnhumby/
  • Task: NBR
  • Main file: dunnhumby_merged.json

Download

  • Source: dunnhumby (Kaggle). The build script expects transaction_data.csv in dataset/dunnhumby/.

Building from raw (this repo)

  1. From dataset/dunnhumby/:
    python create_dunnhumby_merged_v2.py
    
    • Reads transaction_data.csv.
    • Mapping: household_key → user_id, BASKET_ID → basket, PRODUCT_ID → item_id, TRANS_TIME → ordering.
    • Groups by household, then by basket; sorts baskets by time; outputs list of product lists per user.
    • Keeps only users with at least 4 baskets.
  2. Script writes dunnhumby_merged_v2.json. To use it as the main NBR file, either rename/copy to dunnhumby_merged.json or point config to dunnhumby_merged_v2.json if supported.
  3. Fraud baskets: dataset/create_fraud_baskets_nbr.py --dataset dunnhumby (reads dunnhumby_merged.json or the path you use).

9. Instacart (NBR)

  • Folder: dataset/instacart/
  • Task: NBR
  • Main file: instacart_merged.json

Download

  • Source: Instacart Market Basket Analysis (Kaggle).

Building from raw (this repo)

  • This repo does not include a script that converts Instacart CSV/tables → instacart_merged.json. Build the JSON as for tafeng/dunnhumby.
  • Required JSON format: Same as tafeng: user_id (string) → list of baskets, each basket = list of item IDs, chronological. Save as dataset/instacart/instacart_merged.json. Minimum baskets per user (e.g. 4) is enforced by the NBR data loader. Then you can run dataset/create_fraud_baskets_nbr.py --dataset instacart.

RecBole atomic format (quick reference)

  • Separator: Tab between columns.
  • Header: First line = column names with type suffix, e.g. user_id:token, item_id:token, rating:float, timestamp:float, session_id:token.
  • File naming: Main interaction file is {dataset_name}.inter in dataset/{dataset_name}/.
  • NBR: Uses {dataset_name}_merged.json (user → list of baskets), not .inter, when using NextBasketDataset.
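
As a quick illustration of the format contract (this is not a RecBole API, just a sketch), a header line can be parsed into typed columns like so:

```python
# Split a RecBole atomic-format header line into (column, type) pairs.
header = "user_id:token\titem_id:token\trating:float\ttimestamp:float"
fields = [tuple(col.split(":")) for col in header.split("\t")]

# Map each declared type to a Python converter and apply it to a data row.
casts = {"token": str, "float": float}
row = "u1\ti9\t4.5\t1112486027.0".split("\t")
parsed = {name: casts[typ](val) for (name, typ), val in zip(fields, row)}
```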

For more on RecBole data format, see the official docs (e.g. atomic files and dataset list).