| Notation | Explanation |
| --- | --- |
| $i$, $i_1$, $i_j$ | Item IDs |
| $t$ | The number of actions in the input action sequence, i.e., the timestamp at which the model makes a prediction |
| $i_{t+1}$ | The ground-truth next item |
| $\hat{i}_{t+1}$ | The predicted next item |
| $S = \{i_1, i_2, \ldots, i_t\}$ | The action sequence where each action is represented by the interacted item ID |
| $A$, $A_1$, $A_j$ | A set of item features (tokens) |
| $m = \|A_j\|$ | The number of features associated with each item |
| $f_{j,k}$ | The $k$-th feature of item $i_j$ |
| $F_k$ | The collection of all possible values of the $k$-th feature |
| $S' = \{A_1, A_2, \ldots, A_t\}$ | The action sequence where each action is represented by a set of item features |
| $c$, $c_1$, $c_j$ | Input and generated tokens |
| $l$ | The number of tokens in the token sequence |
| $C = \{c_1, c_2, \ldots, c_l\}$ | The token sequence tokenized from the input action sequence $S'$ |
| $\{c_{l+1}, \ldots, c_q\}$ | The tokens generated by the GR model |
| $V$ | The vocabulary of the ActionPiece tokenizer |
| $R$ | The merge rules of the ActionPiece tokenizer |
| $\{(c_u, c_v) \to c_{\text{new}}\}$ | A merge rule indicating that two adjacent tokens $c_u$ and $c_v$ can be replaced by a new token $c_{\text{new}}$ |
| $Q = \|V\|$ | The size of the ActionPiece vocabulary |
| $P(c, c')$ | The probability that tokens $c$ and $c'$ are adjacent when flattening a sequence of sets into a token sequence |
| $N$ | The number of action sequences in the training corpus |
| $L$ | The average length of action sequences in the training corpus |
| $H$ | The maximal heap size, $O(NLm)$ |
| $q$ | The number of segmentations produced by set permutation regularization during inference |
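As an aside (not part of the paper's exposition), the within-set weight $2/|A_k|$ used in Algorithm 2 is exactly the probability that two fixed tokens end up adjacent under a uniformly random permutation of a set of size $|A_k|$. A minimal enumeration check in Python (the function name `adjacency_probability` is ours):

```python
# Exhaustively verify that two fixed tokens are adjacent with
# probability 2/m in a uniformly random permutation of m tokens.
from fractions import Fraction
from itertools import permutations

def adjacency_probability(m: int) -> Fraction:
    """Exact probability that tokens 0 and 1 are adjacent in a
    random permutation of m distinct tokens."""
    hits, total = 0, 0
    for perm in permutations(range(m)):
        total += 1
        # Positions of the two fixed tokens in this permutation.
        if abs(perm.index(0) - perm.index(1)) == 1:
            hits += 1
    return Fraction(hits, total)

for m in range(2, 6):
    assert adjacency_probability(m) == Fraction(2, m)
```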
# Appendices
# A. Notations
We summarize the notations used in this paper in Table 6.
# B. Algorithmic Details
In this section, we provide detailed algorithms for vocabulary construction and segmentation.
# B.1. Vocabulary Construction Algorithm
The overall procedure for vocabulary construction is illustrated in Algorithm 1. As described in Section 3.2.1, this process involves iterative Count (Algorithm 2) and Update (Algorithm 3) operations.
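For intuition, the iterative Count/Update structure can be sketched in Python. This is a simplified, unweighted sketch that treats each action as an already-flattened token list; the actual algorithm counts weighted co-occurrences over sets of tokens (Algorithm 2), and the names `build_vocab` and `target_size` are illustrative:

```python
# Simplified BPE-style vocabulary construction sketch.
# Assumption: integer token IDs and flattened token lists, ignoring
# the set structure and co-occurrence weighting of the real algorithm.
from collections import Counter

def build_vocab(corpus, target_size):
    """corpus: list of lists of int token IDs (mutated in place).
    Returns (vocab, merge_rules) with rules as (u, v) -> new_id."""
    vocab = {t for seq in corpus for t in seq}
    rules = {}
    while len(vocab) < target_size:
        # Count step: tally adjacent token pairs over the corpus.
        counts = Counter()
        for seq in corpus:
            for u, v in zip(seq, seq[1:]):
                counts[(u, v)] += 1
        if not counts:
            break
        # Pick the most frequent pair and mint a new token for it.
        (u, v), _ = counts.most_common(1)[0]
        new_id = max(vocab) + 1
        rules[(u, v)] = new_id
        vocab.add(new_id)
        # Update step: replace every occurrence of (u, v) with new_id.
        for i, seq in enumerate(corpus):
            merged, k = [], 0
            while k < len(seq):
                if k + 1 < len(seq) and (seq[k], seq[k + 1]) == (u, v):
                    merged.append(new_id)
                    k += 2
                else:
                    merged.append(seq[k])
                    k += 1
            corpus[i] = merged
    return vocab, rules
```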
# B.2. Segmentation with Set Permutation Regularization Algorithm
The detailed algorithm for segmenting action sequences into token sequences using set permutation regularization (SPR) is shown in Algorithm 4. In practice, we often run Algorithm 4 multiple times to augment the training corpus or ensemble recommendation outputs, as described in Sections 3.3.1 and 3.3.2.
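A minimal sketch of the SPR idea: permute the tokens within each action's feature set at random before flattening, then apply the learned merge rules to the flattened sequence. The greedy, priority-ordered rule application and the function name are our simplification, not the paper's exact procedure (Algorithm 4):

```python
# Sketch of segmentation with set permutation regularization (SPR).
# Assumption: integer token IDs; merge rules stored in a dict in
# priority (insertion) order, applied greedily one rule at a time.
import random

def segment_with_spr(action_seq, rules, rng=random):
    """action_seq: list of sets of int tokens.
    rules: dict mapping (u, v) -> new_id.
    Returns one token sequence; repeated calls yield different
    segmentations because of the random within-set ordering."""
    tokens = []
    for feature_set in action_seq:
        perm = list(feature_set)
        rng.shuffle(perm)        # random order within each set (SPR)
        tokens.extend(perm)
    # Apply merge rules in priority order to the flattened sequence.
    for (u, v), new_id in rules.items():
        merged, k = [], 0
        while k < len(tokens):
            if k + 1 < len(tokens) and (tokens[k], tokens[k + 1]) == (u, v):
                merged.append(new_id)
                k += 2
            else:
                merged.append(tokens[k])
                k += 1
        tokens = merged
    return tokens
```

Running this several times over the same action sequence gives the multiple segmentations used to augment training data or ensemble inference outputs.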
# C. Efficient Vocabulary Construction Implementation
To construct the ActionPiece vocabulary efficiently, we propose using data structures such as heaps with a lazy update trick, linked lists, and inverted indices to speed up each iteration of the construction process. The key idea is to avoid recomputing token co-occurrences from scratch in every iteration and instead update these data structures incrementally. The pseudocode is shown in Figure 7.
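The lazy-update heap trick can be sketched as follows: rather than deleting a heap entry whose count has changed, we push a fresh entry and discard stale ones at pop time. A minimal Python sketch (the class name `LazyMaxHeap` is ours):

```python
# Lazy-update max-heap sketch: heapq is a min-heap, so counts are
# negated; stale entries are detected and skipped when popped.
import heapq

class LazyMaxHeap:
    def __init__(self):
        self._heap = []      # entries: (-count, pair)
        self._current = {}   # pair -> latest count

    def update(self, pair, count):
        # Instead of deleting the old entry, record the new count and
        # push a fresh entry; any older entries for pair become stale.
        self._current[pair] = count
        heapq.heappush(self._heap, (-count, pair))

    def pop_max(self):
        # Pop until we find an entry matching the current count table.
        while self._heap:
            neg, pair = heapq.heappop(self._heap)
            if self._current.get(pair) == -neg:  # entry is up to date
                del self._current[pair]
                return pair, -neg
            # Otherwise the entry is stale: discard and keep popping.
        return None
```

Each `update` costs $O(\log H)$ and no random-access deletion is ever needed; the price is that stale entries linger in the heap, which is why the heap size is bounded by $O(NLm)$ rather than the number of distinct pairs.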
Algorithm 2 ActionPiece Vocabulary Construction - Count (Figure 2)
input Action sequence corpus $\mathcal{S}'$, current vocabulary $V$
output Accumulated weighted token co-occurrence counts count$(\cdot, \cdot)$
1: for all token pairs $c_i, c_j \in V$ do
2: count$(c_i, c_j) \gets 0$
3: end for
4: for all sequences $S' \in \mathcal{S}'$ do
5: $t \gets \text{length}(S')$ # Number of action nodes in the sequence
6: for $k \gets 0$ to $t-1$ do
7: $A_k \gets S'[k]$ # Current action node
8: # Process all unordered token pairs within $A_k$
9: for all $c_i, c_j \in A_k$, $i \neq j$ do
10: count$(c_i, c_j) \gets \text{count}(c_i, c_j) + 2/|A_k|$ # Weight of token pairs within a single set (Equation (1))
11: count$(c_j, c_i) \gets \text{count}(c_j, c_i) + 2/|A_k|$ # Symmetric update
12: end for
13: # Process all ordered token pairs between $A_k$ and $A_{k+1}$
14: if $k < t-1$ then