---
task_categories:
- other
configs:
- config_name: AmazonReviews2014-Beauty
  data_files:
  - split: val
    path: AmazonReviews2014-Beauty/val.jsonl
  - split: test
    path: AmazonReviews2014-Beauty/test.jsonl
- config_name: AmazonReviews2014-Sports_and_Outdoors
  data_files:
  - split: val
    path: AmazonReviews2014-Sports_and_Outdoors/val.jsonl
  - split: test
    path: AmazonReviews2014-Sports_and_Outdoors/test.jsonl
- config_name: AmazonReviews2023-Industrial_and_Scientific
  data_files:
  - split: val
    path: AmazonReviews2023-Industrial_and_Scientific/val.jsonl
  - split: test
    path: AmazonReviews2023-Industrial_and_Scientific/test.jsonl
- config_name: AmazonReviews2023-Musical_Instruments
  data_files:
  - split: val
    path: AmazonReviews2023-Musical_Instruments/val.jsonl
  - split: test
    path: AmazonReviews2023-Musical_Instruments/test.jsonl
- config_name: AmazonReviews2023-Office_Products
  data_files:
  - split: val
    path: AmazonReviews2023-Office_Products/val.jsonl
  - split: test
    path: AmazonReviews2023-Office_Products/test.jsonl
- config_name: Steam
  data_files:
  - split: val
    path: Steam/val.jsonl
  - split: test
    path: Steam/test.jsonl
- config_name: Yelp-Yelp_2020
  data_files:
  - split: val
    path: Yelp-Yelp_2020/val.jsonl
  - split: test
    path: Yelp-Yelp_2020/test.jsonl
---
# MemGen Annotations

This is the annotation dataset for the paper *How Well Does Generative Recommendation Generalize?*.
The annotations categorize evaluation instances under the leave-one-out protocol:
- the `test` split uses the last item in each user's history sequence as the target,
- the `val` split uses the second-to-last item as the target.
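The protocol above can be sketched as a small helper (a hypothetical function for illustration, not part of the dataset's tooling):

```python
# Minimal sketch of the leave-one-out protocol: for a user's chronologically
# ordered item history, the test target is the last item and the val target is
# the second-to-last; everything before the target serves as input.

def leave_one_out(history):
    """Split one user's item sequence into val/test evaluation instances."""
    assert len(history) >= 3, "need at least three interactions"
    test = {"input": history[:-1], "target": history[-1]}
    val = {"input": history[:-2], "target": history[-2]}
    return {"val": val, "test": test}

splits = leave_one_out(["i1", "i2", "i3", "i4"])
# test: input ["i1", "i2", "i3"], target "i4"
# val:  input ["i1", "i2"],       target "i3"
```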
## Columns

- `sample_id`: row index within the split in the original dataset.
- `user_id`: raw user identifier (join key).
- `master`: one of `memorization`, `generalization`, `uncategorized`.
- `subcategories`: list of `{rule, hop}` entries for fine-grained generalization types.
- `all_labels`: all string labels (e.g., `["generalization", "symmetry_3"]`).
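For reference, a record with these fields might look like the following. The values, and the exact types inside `{rule, hop}`, are illustrative guesses based on the `symmetry_3` label example, not actual dataset rows:

```python
# Illustrative annotation record (field names from the column list above;
# values are made up for the sketch, not drawn from the dataset).
record = {
    "sample_id": 0,
    "user_id": "A1B2C3",
    "master": "generalization",
    "subcategories": [{"rule": "symmetry", "hop": 3}],
    "all_labels": ["generalization", "symmetry_3"],
}

# The master label always comes from a fixed three-value vocabulary.
assert record["master"] in {"memorization", "generalization", "uncategorized"}
```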
## Load the M&G annotations

```python
from datasets import load_dataset

labels = load_dataset(
    "jamesding0302/memgen-annotations",
    "AmazonReviews2014-Beauty",
    split="test",
)
print(labels[0])
```
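Once a split is loaded, a quick sanity check is the distribution of the `master` label. A sketch with a stand-in list (in practice, replace `master_column` with `labels["master"]` from the snippet above):

```python
from collections import Counter

# Stand-in for labels["master"]; the real column is iterated the same way.
master_column = ["memorization", "generalization", "generalization", "uncategorized"]

dist = Counter(master_column)
# Counter({'generalization': 2, 'memorization': 1, 'uncategorized': 1})
```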
## Merge with a processed dataset

```python
# 1) Load your processed dataset split (must be aligned with labels by row order)
ds = pipeline.split_datasets["test"]

# 2) Append the label columns to the original dataset
ds = (ds
    .add_column("master", labels["master"])
    .add_column("subcategories", labels["subcategories"])
    .add_column("all_labels", labels["all_labels"]))
```
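Because this merge relies purely on row order, it is worth verifying alignment with the `user_id` join key before trusting the result. A minimal sketch with stand-in lists (in practice, compare `ds["user_id"]` against `labels["user_id"]`):

```python
# Stand-ins for ds["user_id"] and labels["user_id"].
ds_user_ids = ["u1", "u2", "u3"]
label_user_ids = ["u1", "u2", "u3"]

# A row-order merge is only valid when the join keys line up exactly.
assert len(ds_user_ids) == len(label_user_ids), "split sizes differ"
assert ds_user_ids == label_user_ids, "row order mismatch; re-align before merging"

aligned = ds_user_ids == label_user_ids
```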