---
license: apache-2.0
language:
- en
- zh
pretty_name: "AL-GR Raw Sequences πŸ“œ"
tags:
- sequential-recommendation
- raw-data
- anonymized
- e-commerce
- next-item-prediction
---

# AL-GR/Origin-Sequence-Data: Raw User Behavior Sequences πŸ“œ

## πŸ“ Dataset Summary

This repository, `AL-GR/Origin-Sequence-Data`, contains the foundational **raw user behavior sequences** for the `AL-GR` ecosystem. It represents the data *before* it is formatted into the instruction-following prompts used for training Large Language Models (LLMs).

Each row in the dataset represents a step in a user's journey, consisting of a sequence of previously interacted items (`user_history`) and the next item the user interacted with (`target_item`). All item IDs have been anonymized into short, unique strings.

This dataset is ideal for:
- πŸ§‘β€πŸ”¬ Researchers who want to design their own data processing or prompting strategies.
- πŸ“ˆ Training and evaluating traditional sequential recommendation models (e.g., GRU4Rec, SASRec).
- πŸ”Ž Understanding the source data from which the main `AL-GR` dataset was built.
## πŸš€ How to Use

The data is organized in multiple folders (`s1_splits`, `s2_splits`, etc.), a non-standard layout for the `datasets` library. To make loading seamless, a **loading script** is required.

#### Step 1: Create the Loading Script

Create a Python file named `origin-sequence-data.py` in your local directory and paste the following code into it.
```python
import csv
import glob

import datasets

_DESCRIPTION = "Raw user behavior sequences for the AL-GR project, split into history and target item."
_CITATION = """
@misc{al-gr-origin-sequence,
  author = {[Your Name or Team Name]},
  title = {AL-GR/Origin-Sequence-Data: Raw User Behavior Sequences},
  year = {[Year]},
  # ... other citation info
}
"""


class OriginSequenceData(datasets.GeneratorBasedBuilder):
    """A loader for the AL-GR Raw User Behavior Sequences."""

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=datasets.Features({
                "user_history": datasets.Value("string"),
                "target_item": datasets.Value("string"),
            }),
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        # The data files live alongside this script in the repository, so
        # paths are resolved against the builder's base path (falling back
        # to the current directory for purely local runs). Note: glob-based
        # discovery works on local/cloned copies but not in streaming mode.
        repo_path = self.base_path or "."

        return [
            datasets.SplitGenerator(
                name="s1",
                gen_kwargs={"filepaths": sorted(glob.glob(f"{repo_path}/s1_splits/*.csv"))},
            ),
            datasets.SplitGenerator(
                name="s2",
                gen_kwargs={"filepaths": sorted(glob.glob(f"{repo_path}/s2_splits/*.csv"))},
            ),
            datasets.SplitGenerator(
                name="s3",
                gen_kwargs={"filepaths": sorted(glob.glob(f"{repo_path}/s3_splits/*.csv"))},
            ),
            datasets.SplitGenerator(
                name="test",
                gen_kwargs={"filepaths": sorted(glob.glob(f"{repo_path}/test/*.csv"))},
            ),
        ]

    def _generate_examples(self, filepaths):
        """Yields examples from the data files."""
        key = 0
        for filepath in filepaths:
            with open(filepath, "r", encoding="utf-8") as f:
                # Assuming the CSV has headers: 'user_history', 'target_item'.
                # If not, switch to csv.reader and access columns by index.
                reader = csv.DictReader(f)
                for row in reader:
                    yield key, {
                        "user_history": row["user_history"],
                        "target_item": row["target_item"],
                    }
                    key += 1
```

#### Step 2: Upload the Script

Upload the `origin-sequence-data.py` file to the **root directory** of this dataset repository on the Hugging Face Hub.

#### Step 3: Load the Dataset with One Command

Once the script is uploaded, you (and anyone else) can load the entire dataset effortlessly:

```python
from datasets import load_dataset

# The loading script is detected and executed automatically; recent versions
# of `datasets` require explicit opt-in to run repository code.
dataset = load_dataset("AL-GR/Origin-Sequence-Data", trust_remote_code=True)

# Access different splits
print("Sample from s1 split:")
print(dataset["s1"][0])

print("\nSample from test split:")
print(dataset["test"][0])
```

## πŸ—οΈ Dataset Structure

### Data Fields

- `user_history` (string) πŸ•’: A space-separated sequence of anonymized item IDs representing the user's past interactions.
- `target_item` (string) 🎯: The single anonymized item ID that the user interacted with next.

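Each row can be turned into a list of history IDs plus the target with a simple whitespace split. A minimal sketch (the example row uses made-up anonymized IDs of the kind shown later in this card):

```python
def parse_row(row: dict) -> tuple[list[str], str]:
    """Split the space-separated history into individual item IDs."""
    history = row["user_history"].split()
    return history, row["target_item"]

# Hypothetical row with made-up anonymized IDs.
row = {"user_history": "AdPxq 6Vf1Re WkQqK", "target_item": "ECZSq"}
history, target = parse_row(row)
print(history)  # ['AdPxq', '6Vf1Re', 'WkQqK']
print(target)   # ECZSq
```
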
### Data Splits

The dataset is partitioned into four main parts, stored in separate folders:
- `s1_splits`, `s2_splits`, `s3_splits`: Three chronological training splits, useful for time-aware training and evaluation: models can be trained on older data and tested on newer data.
- `test`: A dedicated test set for final model evaluation.

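The time-aware protocol can be sketched with tiny in-memory stand-ins for the split folders, using a hypothetical popularity baseline (not part of the dataset) as the model:

```python
from collections import Counter

# Tiny stand-ins for the chronological splits; the real data lives in the
# s1_splits / s2_splits / s3_splits and test folders.
s1 = [{"user_history": "a b", "target_item": "c"},
      {"user_history": "a c", "target_item": "c"}]
s2 = [{"user_history": "b c", "target_item": "d"}]
test = [{"user_history": "c d", "target_item": "c"}]

# Train on the older splits...
train_rows = s1 + s2

# ...e.g. fit a trivial popularity baseline on the target items...
most_popular, _ = Counter(r["target_item"] for r in train_rows).most_common(1)[0]

# ...and evaluate hit@1 on the held-out newest data.
hits = sum(r["target_item"] == most_popular for r in test)
print(most_popular, hits / len(test))  # c 1.0
```
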
## πŸ”— Relationship to `AL-GR`

This dataset is the direct precursor to the main `AL-GR` generative dataset. The transformation is as follows:

- **`Origin-Sequence-Data` (this dataset):**
  - `user_history`: "AdPxq 6Vf1Re WkQqK..."
  - `target_item`: "ECZSq"

- **`AL-GR` (generative dataset):**
  - `system`: "You are a recommendation system..."
  - `user`: "The current user's historical behavior is as follows: C...C..." (IDs may be re-mapped)
  - `answer`: "C..." (the target item, re-mapped)

This dataset provides the raw material for anyone wishing to replicate or create variants of the `AL-GR` prompt format.
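A sketch of that transformation, assuming a hypothetical prompt template pieced together from the snippets above (the real `AL-GR` build also re-maps item IDs, which is omitted here):

```python
# Hypothetical template; the actual AL-GR wording and ID re-mapping may differ.
SYSTEM_PROMPT = "You are a recommendation system..."

def to_generative(row: dict) -> dict:
    """Convert a raw (history, target) row into a prompt-style example."""
    return {
        "system": SYSTEM_PROMPT,
        "user": "The current user's historical behavior is as follows: "
                + row["user_history"],
        "answer": row["target_item"],
    }

example = to_generative({"user_history": "AdPxq 6Vf1Re WkQqK", "target_item": "ECZSq"})
print(example["answer"])  # ECZSq
```
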

## ✍️ Citation

If you use this dataset in your research, please cite:

```bibtex
@misc{fu2025forgeformingsemanticidentifiers,
  title={FORGE: Forming Semantic Identifiers for Generative Retrieval in Industrial Datasets},
  author={Kairui Fu and Tao Zhang and Shuwen Xiao and Ziyang Wang and Xinming Zhang and Chenchi Zhang and Yuliang Yan and Junjun Zheng and Yu Li and Zhihong Chen and Jian Wu and Xiangheng Kong and Shengyu Zhang and Kun Kuang and Yuning Jiang and Bo Zheng},
  year={2025},
  eprint={2509.20904},
  archivePrefix={arXiv},
  primaryClass={cs.IR},
  url={https://arxiv.org/abs/2509.20904}
}
```

## πŸ“œ License

This dataset is licensed under the Apache License 2.0.