---
language:
- en
- zh
license: apache-2.0
pretty_name: AL-GR Raw Sequences 📜
tags:
- sequential-recommendation
- raw-data
- anonymized
- e-commerce
- next-item-prediction
- generative-retrieval
- semantic-identifiers
task_categories:
- text-generation
- text-retrieval
---
# AL-GR/Origin-Sequence-Data: Raw User Behavior Sequences 📜
[Paper](https://huggingface.co/papers/2509.20904) | [Project Page](https://huggingface.co/AL-GR) | [Code](https://github.com/selous123/al_sid)
## About the Dataset
This dataset is part of **FORGE** (**For**ming semantic identifiers for **G**enerative r**E**trieval), a comprehensive benchmark introduced in the paper [FORGE: Forming Semantic Identifiers for Generative Retrieval in Industrial Datasets](https://huggingface.co/papers/2509.20904). FORGE addresses the challenges of building semantic identifiers (SIDs) for generative retrieval (GR) by providing a large-scale public dataset with multimodal features.
Specifically, this `AL-GR/Origin-Sequence-Data` repository contains the foundational **raw user behavior sequences** for the `AL-GR` ecosystem: the data *before* it is formatted into the instruction-following prompts used to train Large Language Models (LLMs) for generative retrieval. The full FORGE dataset comprises 14 billion user interactions and multimodal features for 250 million items sampled from Taobao, one of the largest e-commerce platforms in China.
Each row in this dataset (`Origin-Sequence-Data`) represents a step in a user's journey, consisting of a sequence of previously interacted items (`user_history`) and the next item they interacted with (`target_item`). All item IDs have been anonymized into short, unique strings.
This dataset is ideal for:
- πŸ§‘β€πŸ”¬ Researchers who want to design their own data processing or prompting strategies for generative retrieval.
- πŸ“ˆ Training and evaluating traditional sequential recommendation models (e.g., GRU4Rec, SASRec, etc.).
- πŸ”Ž Understanding the source data from which the main `AL-GR` generative dataset was built.
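For the sequential-recommendation use case, a minimal next-item evaluation can be sketched with a popularity baseline. The rows below are toy examples shaped like this dataset's records (single-letter IDs are illustrative, not real AL-GR IDs):

```python
from collections import Counter

# Toy rows shaped like this dataset's records (illustrative IDs only).
rows = [
    {"user_history": "a b c", "target_item": "c"},
    {"user_history": "b c d", "target_item": "c"},
    {"user_history": "a c", "target_item": "b"},
]

# Popularity baseline: always predict the globally most frequent history item.
counts = Counter(item for r in rows for item in r["user_history"].split())
most_popular = counts.most_common(1)[0][0]

# Hit@1: fraction of rows whose target matches the baseline's prediction.
hits = sum(r["target_item"] == most_popular for r in rows)
print(f"popularity hit@1: {hits / len(rows):.2f}")
```

Stronger models (GRU4Rec, SASRec) would replace the popularity prediction with a learned scorer over the encoded history.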
## 🚀 Sample Usage
The data is structured in multiple folders (`s1_splits`, `s2_splits`, etc.), which is a non-standard format for the `datasets` library. To make loading seamless, a **loading script** is required.
#### Step 1: Create the Loading Script
Create a Python file named `origin-sequence-data.py` in your local directory and paste the following code into it.
```python
import csv
import datasets
import glob
_DESCRIPTION = "Raw user behavior sequences for the AL-GR project, split into history and target item."
_CITATION = """
@misc{fu2025forgeformingsemanticidentifiers,
title={FORGE: Forming Semantic Identifiers for Generative Retrieval in Industrial Datasets},
author={Kairui Fu and Tao Zhang and Shuwen Xiao and Ziyang Wang and Xinming Zhang and Chenchi Zhang and Yuliang Yan and Junjun Zheng and Yu Li and Zhihong Chen and Jian Wu and Xiangheng Kong and Shengyu Zhang and Kun Kuang and Yuning Jiang and Bo Zheng},
year={2025},
eprint={2509.20904},
archivePrefix={arXiv},
primaryClass={cs.IR},
url={https://arxiv.org/abs/2509.20904},
}
"""
class OriginSequenceData(datasets.GeneratorBasedBuilder):
    """A loader for the AL-GR Raw User Behavior Sequences."""

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=datasets.Features({
                "user_history": datasets.Value("string"),
                "target_item": datasets.Value("string"),
            }),
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        # The CSV files ship with the repository. `manual_dir` is populated
        # from the `data_dir=` argument of `load_dataset`; fall back to the
        # current directory when it is not provided.
        repo_path = dl_manager.manual_dir or "."
        split_dirs = {"s1": "s1_splits", "s2": "s2_splits", "s3": "s3_splits", "test": "test"}
        return [
            datasets.SplitGenerator(
                name=name,
                gen_kwargs={"filepaths": sorted(glob.glob(f"{repo_path}/{folder}/*.csv"))},
            )
            for name, folder in split_dirs.items()
        ]

    def _generate_examples(self, filepaths):
        """Yields examples from the data files."""
        key = 0
        for filepath in filepaths:
            with open(filepath, "r", encoding="utf-8") as f:
                # Assumes the CSV has headers: 'user_history', 'target_item'.
                # If not, switch to csv.reader and access columns by index.
                reader = csv.DictReader(f)
                for row in reader:
                    yield key, {
                        "user_history": row["user_history"],
                        "target_item": row["target_item"],
                    }
                    key += 1
```
#### Step 2: Upload the Script
Upload the `origin-sequence-data.py` file to the **root directory** of this dataset repository on the Hugging Face Hub.
#### Step 3: Load the Dataset with One Command!
Once the script is uploaded, you (and anyone else) can load the entire dataset effortlessly:
```python
from datasets import load_dataset

# The loading script is detected and executed automatically.
# Recent versions of `datasets` require opting in to dataset scripts:
dataset = load_dataset("AL-GR/Origin-Sequence-Data", trust_remote_code=True)

# Access different splits
print("Sample from s1 split:")
print(dataset['s1'][0])
print("\nSample from test split:")
print(dataset['test'][0])
```
## πŸ—οΈ Dataset Structure
### Data Fields
- `user_history` (string) 🕒: A space-separated sequence of anonymized item IDs representing the user's past interactions.
- `target_item` (string) 🎯: The single anonymized item ID that the user interacted with next.
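Since item IDs arrive as anonymized strings, traditional recommenders typically need them mapped to contiguous integer indices first. A minimal sketch, using the illustrative IDs from the example row shown later in this card:

```python
# Map anonymized string IDs to contiguous integer indices, as most
# sequential recommenders expect. The row values are illustrative.
rows = [
    {"user_history": "AdPxq 6Vf1Re WkQqK", "target_item": "ECZSq"},
]

vocab = {}

def encode(item_id):
    # Assign the next free integer the first time an ID is seen.
    return vocab.setdefault(item_id, len(vocab))

for row in rows:
    history = [encode(i) for i in row["user_history"].split()]
    target = encode(row["target_item"])
```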
### Data Splits
The dataset is partitioned into four main parts, stored in separate folders:
- `s1_splits`, `s2_splits`, `s3_splits`: Three chronological training splits. They support time-aware training and evaluation, allowing models to be trained on older data and tested on newer data.
- `test`: A dedicated test set for final model evaluation.
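The chronological splits suggest a rolling protocol: train on all splits up to some point in time, then evaluate on the next one. A minimal sketch of that pairing (split names only, no data loaded):

```python
# Rolling time-aware protocol over the chronological splits:
# train on all splits up to time t, evaluate on the next one.
splits = ["s1", "s2", "s3"]
rounds = [(splits[:i], splits[i]) for i in range(1, len(splits))]
# Each round is (training splits, evaluation split).
```

The held-out `test` folder then serves as the final evaluation set after the last round.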
## 🔗 Relationship to `AL-GR`
This dataset is the direct precursor to the main `AL-GR` generative dataset. The transformation is as follows:
- **`Origin-Sequence-Data` (This dataset):**
- `user_history`: "AdPxq 6Vf1Re WkQqK..."
- `target_item`: "ECZSq"
- **`AL-GR` (Generative dataset):**
- `system`: "You are a recommendation system..."
- `user`: "The current user's historical behavior is as follows: C...C..." (IDs might be re-mapped)
- `answer`: "C..." (The target item, re-mapped)
This dataset provides the raw material for anyone wishing to replicate or create variants of the `AL-GR` prompt format.
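A transformation in that spirit can be sketched as below. The exact prompt wording and the ID re-mapping used by `AL-GR` are elided above, so the template strings and the identity `remap` default here are placeholders, not the official conversion:

```python
# Hypothetical sketch of turning a raw row into an AL-GR-style prompt.
# Template wording and ID re-mapping are placeholders, not the official ones.
def to_prompt(row, remap=lambda item_id: item_id):
    history = " ".join(remap(i) for i in row["user_history"].split())
    return {
        "system": "You are a recommendation system...",
        "user": f"The current user's historical behavior is as follows: {history}",
        "answer": remap(row["target_item"]),
    }

example = to_prompt({"user_history": "AdPxq 6Vf1Re", "target_item": "ECZSq"})
```

Passing a real `remap` function (e.g., the SID assignment from FORGE) would reproduce the re-mapped identifiers seen in the generative dataset.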
## ✍️ Citation
If you use this dataset in your research, please cite:
```bibtex
@misc{fu2025forgeformingsemanticidentifiers,
title={FORGE: Forming Semantic Identifiers for Generative Retrieval in Industrial Datasets},
author={Kairui Fu and Tao Zhang and Shuwen Xiao and Ziyang Wang and Xinming Zhang and Chenchi Zhang and Yuliang Yan and Junjun Zheng and Yu Li and Zhihong Chen and Jian Wu and Xiangheng Kong and Shengyu Zhang and Kun Kuang and Yuning Jiang and Bo Zheng},
year={2025},
eprint={2509.20904},
archivePrefix={arXiv},
primaryClass={cs.IR},
url={https://arxiv.org/abs/2509.20904},
}
```
## 📜 License
This dataset is licensed under the [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0).