---
language:
- en
- zh
license: apache-2.0
pretty_name: AL-GR Raw Sequences πŸ“œ
tags:
- sequential-recommendation
- raw-data
- anonymized
- e-commerce
- next-item-prediction
- generative-retrieval
- semantic-identifiers
task_categories:
- text-generation
- text-retrieval
---

# AL-GR/Origin-Sequence-Data: Raw User Behavior Sequences πŸ“œ

[Paper](https://huggingface.co/papers/2509.20904) | [Project Page](https://huggingface.co/AL-GR) | [Code](https://github.com/selous123/al_sid)

## About the Dataset

This dataset is part of **FORGE**, a comprehensive benchmark for **FO**rming **R**aw user behavior sequences and **G**enerative r**E**trieval in Industrial Datasets, as presented in the paper [FORGE: Forming Semantic Identifiers for Generative Retrieval in Industrial Datasets](https://huggingface.co/papers/2509.20904). The FORGE benchmark aims to address challenges in semantic identifiers (SIDs) for generative retrieval (GR) by providing a large-scale public dataset with multimodal features.

Specifically, this `AL-GR/Origin-Sequence-Data` repository contains the foundational **raw user behavior sequences** for the `AL-GR` ecosystem. It represents the data *before* it is formatted into the instruction-following prompts used for training Large Language Models (LLMs) in generative retrieval tasks. The full FORGE dataset comprises 14 billion user interactions and multimodal features of 250 million items sampled from Taobao, one of the biggest e-commerce platforms in China.

Each row in this dataset (`Origin-Sequence-Data`) represents a step in a user's journey, consisting of a sequence of previously interacted items (`user_history`) and the next item they interacted with (`target_item`). All item IDs have been anonymized into short, unique strings.

This dataset is ideal for:
- πŸ§‘β€πŸ”¬ Researchers who want to design their own data processing or prompting strategies for generative retrieval.
- πŸ“ˆ Training and evaluating traditional sequential recommendation models (e.g., GRU4Rec, SASRec, etc.).
- πŸ”Ž Understanding the source data from which the main `AL-GR` generative dataset was built.
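For traditional sequential recommenders such as GRU4Rec or SASRec, the anonymized string IDs usually need to be mapped to a contiguous integer vocabulary first. A minimal sketch of that preprocessing step, using hypothetical rows with the same anonymized-ID format shown later in this card:

```python
def build_training_pairs(rows):
    """Map anonymized item IDs to integers and emit (history, target) pairs."""
    vocab = {}  # item ID -> integer index (0 left free for padding)

    def to_id(item):
        if item not in vocab:
            vocab[item] = len(vocab) + 1
        return vocab[item]

    pairs = []
    for row in rows:
        history = [to_id(i) for i in row["user_history"].split()]
        target = to_id(row["target_item"])
        pairs.append((history, target))
    return pairs, vocab

# Hypothetical rows in the Origin-Sequence-Data schema
rows = [
    {"user_history": "AdPxq 6Vf1Re WkQqK", "target_item": "ECZSq"},
    {"user_history": "6Vf1Re ECZSq", "target_item": "AdPxq"},
]
pairs, vocab = build_training_pairs(rows)
```

The resulting `(history, target)` integer pairs can be fed directly into a standard next-item-prediction training loop.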

## πŸš€ Sample Usage

The data is structured in multiple folders (`s1_splits`, `s2_splits`, etc.), which is a non-standard format for the `datasets` library. To make loading seamless, a **loading script** is required.

#### Step 1: Create the Loading Script

Create a Python file named `origin-sequence-data.py` in your local directory and paste the following code into it.

```python
import csv
import datasets
import glob

_DESCRIPTION = "Raw user behavior sequences for the AL-GR project, split into history and target item."
_CITATION = """
@misc{fu2025forgeformingsemanticidentifiers,
      title={FORGE: Forming Semantic Identifiers for Generative Retrieval in Industrial Datasets}, 
      author={Kairui Fu and Tao Zhang and Shuwen Xiao and Ziyang Wang and Xinming Zhang and Chenchi Zhang and Yuliang Yan and Junjun Zheng and Yu Li and Zhihong Chen and Jian Wu and Xiangheng Kong and Shengyu Zhang and Kun Kuang and Yuning Jiang and Bo Zheng},
      year={2025},
      eprint={2509.20904},
      archivePrefix={arXiv},
      primaryClass={cs.IR},
      url={https://arxiv.org/abs/2509.20904}, 
}
"""

class OriginSequenceData(datasets.GeneratorBasedBuilder):
    """A loader for the AL-GR Raw User Behavior Sequences."""

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=datasets.Features({
                "user_history": datasets.Value("string"),
                "target_item": datasets.Value("string"),
            }),
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        # Data files live in this repository. dl_manager.manual_dir is only
        # set when load_dataset(..., data_dir=...) is given, so fall back to
        # the current directory otherwise.
        repo_path = dl_manager.manual_dir or "."

        return [
            datasets.SplitGenerator(
                name="s1",
                gen_kwargs={"filepaths": sorted(glob.glob(f"{repo_path}/s1_splits/*.csv"))},
            ),
            datasets.SplitGenerator(
                name="s2",
                gen_kwargs={"filepaths": sorted(glob.glob(f"{repo_path}/s2_splits/*.csv"))},
            ),
            datasets.SplitGenerator(
                name="s3",
                gen_kwargs={"filepaths": sorted(glob.glob(f"{repo_path}/s3_splits/*.csv"))},
            ),
            datasets.SplitGenerator(
                name="test",
                gen_kwargs={"filepaths": sorted(glob.glob(f"{repo_path}/test/*.csv"))},
            ),
        ]

    def _generate_examples(self, filepaths):
        """Yields examples from the data files."""
        key = 0
        for filepath in filepaths:
            with open(filepath, "r", encoding="utf-8") as f:
                # Assuming the CSV has headers: 'user_history', 'target_item'
                # If not, you might need to use csv.reader and access by index.
                reader = csv.DictReader(f)
                for row in reader:
                    yield key, {
                        "user_history": row["user_history"],
                        "target_item": row["target_item"],
                    }
                    key += 1
```

#### Step 2: Upload the Script

Upload the `origin-sequence-data.py` file to the **root directory** of this dataset repository on the Hugging Face Hub.

#### Step 3: Load the Dataset with One Command!

Once the script is uploaded, you (and anyone else) can load the entire dataset effortlessly:

```python
from datasets import load_dataset

# The loading script will be automatically detected and executed.
dataset = load_dataset("AL-GR/Origin-Sequence-Data")

# Access different splits
print("Sample from s1 split:")
print(dataset['s1'][0])

print("\nSample from test split:")
print(dataset['test'][0])
```

## πŸ—οΈ Dataset Structure

### Data Fields

- `user_history` (string) πŸ•’: A space-separated sequence of anonymized item IDs representing the user's past interactions.
- `target_item` (string) 🎯: The single anonymized item ID that the user interacted with next.
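Because `user_history` is a plain space-separated string, no special tokenizer is needed to recover the individual IDs. A quick illustration, using the anonymized example IDs from the section below:

```python
# A single row in the Origin-Sequence-Data schema
row = {
    "user_history": "AdPxq 6Vf1Re WkQqK",  # space-separated anonymized item IDs
    "target_item": "ECZSq",                # the next item the user interacted with
}

# Recover the individual item IDs with a plain string split
history_items = row["user_history"].split()
```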

### Data Splits

The dataset is partitioned into four main parts, stored in separate folders:
- `s1_splits`, `s2_splits`, `s3_splits`: Three chronological training splits, enabling time-aware training and evaluation (train on older data, test on newer data).
- `test`: A dedicated test set for final model evaluation.

## πŸ”— Relationship to `AL-GR`

This dataset is the direct precursor to the main `AL-GR` generative dataset. The transformation is as follows:

- **`Origin-Sequence-Data` (This dataset):**
  - `user_history`: "AdPxq 6Vf1Re WkQqK..."
  - `target_item`: "ECZSq"

- **`AL-GR` (Generative dataset):**
  - `system`: "You are a recommendation system..."
  - `user`: "The current user's historical behavior is as follows: C...C..." (IDs might be re-mapped)
  - `answer`: "C..." (The target item, re-mapped)

This dataset provides the raw material for anyone wishing to replicate or create variants of the `AL-GR` prompt format.
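The transformation above can be sketched in a few lines. Note that the `system`/`user` wording here is illustrative and the real `AL-GR` dataset applies its own prompt templates and ID re-mapping; `remap` is a hypothetical hook for that re-mapping step:

```python
def to_generative_example(row, remap=None):
    """Turn a raw (history, target) row into an AL-GR-style prompt record.

    The prompt wording is illustrative only; the actual AL-GR dataset uses
    its own templates and re-maps item IDs.
    """
    remap = remap or (lambda item: item)  # identity when no re-mapping given
    history = " ".join(remap(i) for i in row["user_history"].split())
    return {
        "system": "You are a recommendation system...",
        "user": f"The current user's historical behavior is as follows: {history}",
        "answer": remap(row["target_item"]),
    }

example = to_generative_example(
    {"user_history": "AdPxq 6Vf1Re WkQqK", "target_item": "ECZSq"}
)
```

Swapping in a different template or `remap` function is all that is needed to produce a custom variant of the prompt format.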

## ✍️ Citation

If you use this dataset in your research, please cite:

```bibtex
@misc{fu2025forgeformingsemanticidentifiers,
      title={FORGE: Forming Semantic Identifiers for Generative Retrieval in Industrial Datasets}, 
      author={Kairui Fu and Tao Zhang and Shuwen Xiao and Ziyang Wang and Xinming Zhang and Chenchi Zhang and Yuliang Yan and Junjun Zheng and Yu Li and Zhihong Chen and Jian Wu and Xiangheng Kong and Shengyu Zhang and Kun Kuang and Yuning Jiang and Bo Zheng},
      year={2025},
      eprint={2509.20904},
      archivePrefix={arXiv},
      primaryClass={cs.IR},
      url={https://arxiv.org/abs/2509.20904}, 
}
```

## πŸ“œ License

This dataset is licensed under the [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0).