Title: Byte BPE Tokenization as an Inverse string Homomorphism

URL Source: https://arxiv.org/html/2412.03160

Published Time: Thu, 05 Dec 2024 01:34:06 GMT

Q1: Given a context-free or regular language $L$ over the character alphabet, what is the structure of the token language $L'$ after tokenization?

This question naturally arises when we want to study the recognition power of large language models (LLMs) for a certain category, such as context-free languages, by writing a context-free grammar in the character space and feeding it directly to the LLM without worrying about the structure being lost after tokenization. To answer this question, we first formalize the tokenization process as a mapping from the character alphabet to the token ID alphabet. Counterintuitively, we first show that tokenization is not a homomorphic mapping, i.e., it does not preserve the structure of the input string language. However, we show that the inverse of the tokenization process, which we refer to as detokenization, is a homomorphic mapping. This homomorphic property of detokenization allows us to establish a connection between the token language $L'$ and the original string language $L$. We thus obtain a preliminary answer to Q1: the token language $L'$ retains the structure of the original string language $L$. If the original language $L$ is a context-free (regular) language, then the token language $L'$ is also a context-free (regular) language. We then extend our discussion to the following question:

Q2: Is the answer to Q1 affected by the presence of Unicode characters in the source language?

We show that the above homomorphic property of detokenization breaks down when the source language contains Unicode characters, because a single Unicode character can be represented by multiple token IDs. However, a closer look reveals that the tokenization process actually operates on byte-level representations of the Unicode characters. We show that support for Unicode characters can be naturally integrated into the homomorphic framework by transforming a grammar with Unicode characters into a new grammar over the byte-level alphabet. We thus obtain the answer to Q2: the presence of Unicode characters does not affect the homomorphic property of tokenization. Finally, we extend our discussion to the following question:

Q3: The tokenization mentioned above is actually a one-to-many mapping. But in practice, the tokenizer selects one of the possible tokenizations based on some rules. How does this affect the structure of the token language?

We refer to the particular tokenization selected by the tokenizer as the proper tokenization. The proper tokenization language is a special subset of the tokenization language. We provide some insights into the structure of the proper tokenization language, while not providing a complete answer to Q3.

| Depth | String | Tokenization | Tokens |
| --- | --- | --- | --- |
| 0 | "" | [1] | BOS = 1 |
| 1 | "[]" | [1, 5159] | ␣[ = 518 |
| 2 | "[[]]" | [1, 518, 2636, 29962] | [] = 2636 |
| 3 | "[[[]]]" | [1, 5519, 2636, 5262] | ␣[[ = 5519 |
| 4 | "[[[[]]]]" | [1, 5519, 29961, 2636, 5262, 29962] | [ = 29961 |
| 5 | "[[[[[]]]]]" | [1, 5519, 8999, 2636, 5262, 5262] | [[ = 8999 |
| 6 | "[[[[[[]]]]]]" | [1, 5519, 8999, 29961, 2636, 5262, 5262, 29962] | ] = 29962 |
| 7 | "[[[[[[[]]]]]]]" | [1, 5519, 8999, 8999, 2636, 5262, 5262, 5262] | ]] = 5262 |
| 8 | "[[[[[[[[]]]]]]]]" | [1, 5519, 8999, 8999, 29961, 2636, 5262, 5262, 5262, 29962] | ␣[] = 5159 |

Figure 2: Tokenization Output for Nested Brackets Using LLaMA-2 Tokenizer
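
The figure can be reproduced in a few lines; the sketch below is an assumption in that it requires access to the gated `meta-llama/Llama-2-7b-hf` checkpoint on the Hugging Face Hub.

```python
from transformers import AutoTokenizer

# Sketch: reproduce Figure 2 by tokenizing nested brackets with LLaMA-2.
# Access to the gated LLaMA-2 checkpoint is assumed.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

for depth in range(9):
    s = "[" * depth + "]" * depth
    print(depth, repr(s), tokenizer.encode(s))  # ID 1 is the BOS token
```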

### Contributions

*   Tokenization Formalization: We formalize tokenization as a mapping between character and token ID alphabets. While tokenization itself is not homomorphic, we show that its inverse, detokenization, preserves the structure of the original string language.
*   Language Preservation: We demonstrate that token languages retain the structure of the original context-free or regular languages, answering our first research question (Q1).
*   Unicode Integration: We extend the homomorphic framework to handle Unicode characters by treating tokenization at the byte level, ensuring that Unicode support does not affect the structure of the token language (Q2).
*   Proper Tokenization: We introduce proper tokenization and analyze some of its structural properties, providing insights into the structure of the proper tokenization language (Q3).

These contributions offer a framework to understand tokenization's role in preserving language structure in large models.

2 Preliminaries
---------------

### 2.1 Context-free grammar and language

###### Definition 2.1 (Context-free Grammar).

A _context-free grammar (CFG)_ is a 4-tuple $G=(V,\Sigma,P,S)$, where

*   $V$ is a finite set of non-terminal symbols (variables),
*   $\Sigma$ is a finite set of terminal symbols,
*   $P$ is a finite set of production rules, each of the form $A \rightarrow \alpha$, where $A \in V$ and $\alpha \in (V \cup \Sigma)^{*}$,
*   $S \in V$ is the start symbol.

###### Definition 2.2 (Formal Language).

A _formal language_ $L$ is a set of strings over an alphabet $\Sigma$, where a string is a finite sequence of symbols from $\Sigma$.

If $G=(V,\Sigma,P,S)$ is a CFG, the language of $G$, denoted $L(G)$, is the set of all strings of terminal symbols that can be derived from the start symbol $S$. If a language $L$ is the language of some CFG, then $L$ is called a _context-free language (CFL)_.

###### Definition 2.3 (Pushdown Automaton).

A _pushdown automaton (PDA)_ is a 7-tuple $M=(Q,\Sigma,\Gamma,\delta,q_{0},Z_{0},F)$, where

*   $Q$ is a finite set of states,
*   $\Sigma$ is a finite set of input symbols,
*   $\Gamma$ is a finite set of stack symbols,
*   $\delta: Q \times (\Sigma \cup \{\epsilon\}) \times \Gamma \rightarrow 2^{Q \times \Gamma^{*}}$ is the transition function,
*   $q_{0} \in Q$ is the start state,
*   $Z_{0} \in \Gamma$ is the initial stack symbol,
*   $F \subseteq Q$ is the set of accepting states.

###### Theorem 2.1 (Pushdown Automaton and Context-free Grammar).

For every context-free grammar $G$, there exists a pushdown automaton $M$ that accepts the language $L(G)$.

Thm. [2.1](https://arxiv.org/html/2412.03160v1#S2.Thmtheorem1) implies that one can always construct a PDA to decide whether a given string belongs to a context-free language.

###### Definition 2.4 (String Homomorphism).

Given two concatenation operations $\oplus$ and $\odot$ on $\Sigma^{*}$ and $\mathbb{N}^{*}$ respectively, a function $h: \Sigma^{*} \rightarrow \mathbb{N}^{*}$ is a string homomorphism if $\forall u, v \in \Sigma^{*},\ h(u \oplus v) = h(u) \odot h(v)$.

In the following, we assume that $\oplus$ and $\odot$ are both string concatenation operations and use $xy$ to denote the concatenation of two elements $x$ and $y$. Thus, a mapping $h$ is a string homomorphism if it preserves the concatenation of strings. One can apply a homomorphism to a language $L$ by applying it to each string in the language, which results in a new language $h(L)$. That is, $h(L) = \{h(w) \mid w \in L\}$ is the image of $L$ under $h$.

###### Definition 2.5 (Inverse Homomorphism).

Given a string homomorphism $h: \Sigma^{*} \rightarrow \mathbb{N}^{*}$, the inverse function $h^{-1}: \mathbb{N}^{*} \rightarrow \Sigma^{*}$ is called an inverse homomorphism.

The inverse homomorphic image $h^{-1}(L)$ includes all strings in $\Sigma^{*}$ that map to strings in $L$ under $h$.

###### Theorem 2.2 (Closure under Inverse Homomorphism).

If $L$ is a context-free (regular) language and $h: \Sigma^{*} \rightarrow \mathbb{N}^{*}$ is a homomorphism, then the inverse homomorphic image $h^{-1}(L)$ is also a context-free (regular) language (Hopcroft et al., [2006](https://arxiv.org/html/2412.03160v1#bib.bib5), Theorem 7.30).

###### Theorem 2.3 (Closure under Intersection).

If $L_{1}$ is a context-free (regular) language and $L_{2}$ is a regular language, then the intersection $L_{1} \cap L_{2}$ is also a context-free (regular) language (Hopcroft et al., [2006](https://arxiv.org/html/2412.03160v1#bib.bib5), Theorem 7.27).

### 2.2 Tokenization

In the context of LLMs, we have two alphabets:

1.  the character alphabet $\Sigma$, which is typically a charset, i.e. Unicode characters or ASCII;
2.  the token ID alphabet $\mathbb{N}$, which is the set of all possible token IDs in a language model's vocabulary, i.e. $\mathbb{N} = \{0, 1, \ldots, |V|-1\}$ where $V$ is the vocabulary of the language model's tokenizer.

###### Definition 2.6 (Tokenization).

The tokenization function[^2] $f_{\text{tok}}: \Sigma^{*} \rightarrow \mathbb{N}^{*}$ maps a string to a sequence of sub-word units, or tokens, which are indexed by token IDs and can be fed into a model.

[^2]: In some literature, tokenization refers specifically to the process of splitting text, while the term encoding is used for the mapping from tokens to IDs. In this work, we use tokenization to cover both processes.

The function $f_{\text{tok}}$ is injective but not surjective, since not every sequence in $\mathbb{N}^{*}$ corresponds to a string in $\Sigma^{*}$.

###### Definition 2.7 (Detokenization).

The detokenization function $f_{\text{detok}}: \mathbb{N}^{*} \rightarrow \Sigma^{*}$ does the opposite of tokenization. It reconstructs the original string by converting the token IDs back into their respective sub-word tokens and concatenating them.

By construction, $f_{\text{detok}}(f_{\text{tok}}(x)) = x$ for all $x \in \Sigma^{*}$.
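
As a quick sanity check of this identity, the following sketch runs the round trip with the GPT-2 tokenizer; any Hugging Face tokenizer would do.

```python
from transformers import GPT2Tokenizer

# Definitions 2.6-2.7 in code: f_tok maps strings to token IDs,
# f_detok inverts it, so the round trip is the identity on strings.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

x = "tokenization preserves structure"
ids = tokenizer.encode(x)          # f_tok(x): a sequence of token IDs
assert tokenizer.decode(ids) == x  # f_detok(f_tok(x)) == x
```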

###### Definition 2.8 (Extended Tokenization).

The extended tokenization function $F_{\text{tok}}: \Sigma^{*} \rightarrow \mathbb{N}^{*}$ is a surjective extension of $f_{\text{tok}}$, mapping a string to all possible valid tokenizations.
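
To make the one-to-many nature of $F_{\text{tok}}$ concrete, here is a toy sketch that enumerates every valid tokenization of a string; the small vocabulary is a made-up assumption, not a real tokenizer's.

```python
# Enumerate all valid tokenizations of a string over a toy vocabulary.
def extended_tokenizations(s: str, vocab: set[str]) -> list[list[str]]:
    if not s:
        return [[]]  # the empty string has exactly one (empty) tokenization
    results = []
    for i in range(1, len(s) + 1):
        head = s[:i]
        if head in vocab:  # try every vocabulary item that prefixes s
            for rest in extended_tokenizations(s[i:], vocab):
                results.append([head] + rest)
    return results

vocab = {"a", "aa", "aaa", "aaaa"}
print(extended_tokenizations("aaaa", vocab))
# 8 tokenizations, e.g. ['aaaa'], ['a', 'aaa'], ['aa', 'aa'], ...
```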

An illustration of the difference between a proper tokenization and an extended tokenization is shown in Listing [1](https://arxiv.org/html/2412.03160v1#LST1).

###### Proposition 2.1.

The detokenization function $f_{\text{detok}}$ is the inverse of the tokenization function $F_{\text{tok}}$.

Given a string $s \in \Sigma^{*}$, we can distinguish between three types of tokenizations:

1.  $f_{\text{tok}}(s)$ as the proper tokenization of $s$,
2.  $F_{\text{tok}}(s)$ as the extended tokenization of $s$,
3.  $F_{\text{tok}}(s) \setminus f_{\text{tok}}(s)$ as the improper tokenizations of $s$.

In practice, the proper tokenization is the unique tokenization of a string that is directly returned by the tokenizer. As $f_{\text{tok}}$ is not surjective, we define the image of $f_{\text{tok}}$, which is a strict subset of $\mathbb{N}^{*}$, as the proper tokenization space $\mathbb{N}_{\text{proper}}^{*}$.

```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

# Proper tokenizations returned by the tokenizer:
tokenizer.encode("a")     # [64]
tokenizer.encode("aa")    # [7252]
tokenizer.encode("aaa")   # [46071]
tokenizer.encode("aaaa")  # [24794]

# All of the following token ID sequences decode to "aaaa",
# but only the first one is the proper tokenization:
tokenizer.decode([24794])
tokenizer.decode([46071, 64])
tokenizer.decode([7252, 7252])
tokenizer.decode([7252, 64, 64])
tokenizer.decode([64, 7252, 64])
tokenizer.decode([64, 64, 64, 64])
```

Listing 1: The text "aaaa" has more than one tokenization, while only the first one is the proper tokenization.

**Unicode Support.** Byte-level tokenization[^3] is the standard way to provide support for Unicode characters. Unlike traditional character-level tokenization, byte-level tokenization first converts the text into a byte sequence according to a format such as UTF-8, and then applies tokenization to the byte sequence. The resulting tokens are chunks of bytes instead of characters, which allows the model to support effectively any Unicode character.

[^3]: Also known as byte-level encoding (Wang et al., [2019](https://arxiv.org/html/2412.03160v1#bib.bib14); Radford et al., [2019](https://arxiv.org/html/2412.03160v1#bib.bib9)).

### 2.3 Tokenization languages

Given a formal language $L$ defined on an alphabet $\Sigma$ (either ASCII or Unicode), with a context-free grammar $G$ generating $L$, we investigate the structure of the token language $L'$ after tokenization.

Aligned with the terms used in the previous sections, we define the following terms:

*   Source language $L$ is a context-free or regular language over the character alphabet $\Sigma$.
*   Extended tokenization language $L'_{E}$ is the set of all possible tokenizations of strings in $L$, i.e. the image of $L$ under the extended tokenization $F_{\text{tok}}$.
*   Proper tokenization language $L'$ is the set of tokenizations returned by the tokenizer, i.e. the image of $L$ under the proper tokenization $f_{\text{tok}}$.
*   Improper tokenization language $L'_{I}$ is the set of tokenizations that are not returned by the tokenizer, i.e. $L'_{I} = L'_{E} \setminus L'$.

3 Extended Tokenization preserving Language Structure
-----------------------------------------------------

In this section, we study Q1: Given a context-free or regular language $L$ over the character alphabet, what is the structure of the token language $L'$ after tokenization? We start by showing that extended tokenization can be seen as an inverse homomorphism from $\Sigma^{*}$ to $\mathbb{N}^{*}$, which implies that the structure of the source language is preserved. At the end, we show a construction of a pushdown automaton (PDA) that recognizes the token language.

### 3.1 Extended Tokenization is an inverse homomorphism

###### Proposition 3.1.

The tokenization function $f_{\text{tok}}$ is not homomorphic from $\Sigma^{*}$ to $\mathbb{N}^{*}$ under the concatenation operation.

One can easily verify this by considering the following counterexample.

###### Example 3.1.

With the GPT-3 tokenizer, the brackets [ and ] are individually tokenized as 58 and 60, respectively, but the combined [] is tokenized as 21737.
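
The counterexample is easy to check directly; the sketch below uses the GPT-2 tokenizer from Listing 1, which is assumed here to share these token IDs with the GPT-3 vocabulary quoted above.

```python
from transformers import GPT2Tokenizer

# Checking Example 3.1 with GPT-2 (IDs 58/60/21737 as quoted above).
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

left = tokenizer.encode("[")   # expected: [58]
right = tokenizer.encode("]")  # expected: [60]
both = tokenizer.encode("[]")  # expected: [21737]

# Non-homomorphism: f_tok("[" + "]") != f_tok("[") + f_tok("]")
assert both != left + right
```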

In contrast, the detokenization function is homomorphic, i.e. $F_{\text{detok}}([n_{1}, n_{2}]) = [F_{\text{detok}}(n_{1}), F_{\text{detok}}(n_{2})]$ for all $n_{1}, n_{2} \in \mathbb{N}^{*}$, as shown in Fig. [1](https://arxiv.org/html/2412.03160v1#S1). This is not surprising, as the detokenization function, as the name suggests, performs the following steps:

1.  Mapping token IDs back to their corresponding tokens.
2.  Concatenating these tokens to reconstruct the string.
3.  Performing necessary post-processing to restore the original string format.

We can now state the following proposition:

###### Proposition 3.2.

The detokenization function $F_{\text{detok}}: \mathbb{N}^{*} \rightarrow \Sigma^{*}$ is homomorphic under the concatenation operation.

As a direct consequence of Proposition [3.2](https://arxiv.org/html/2412.03160v1#S3.Thmproposition2), we have the following corollary:

###### Corollary 3.0.1.

The extended tokenization function $F_{\text{tok}}$ is an inverse homomorphism from $\Sigma^{*}$ to $\mathbb{N}^{*}$ under the concatenation operation.

All major tokenization schemes, including Byte Pair Encoding (BPE) (Sennrich et al., [2016](https://arxiv.org/html/2412.03160v1#bib.bib10)), WordPiece, and SentencePiece (Kudo and Richardson, [2018](https://arxiv.org/html/2412.03160v1#bib.bib8)), exhibit this homomorphic property in their detokenization functions.

Using the closure properties of context-free languages under inverse homomorphism (Theorem [2.2](https://arxiv.org/html/2412.03160v1#S2.Thmtheorem2)), we can state the following proposition:

###### Proposition 3.3.

The extended token language $L'_{E} \subseteq \mathbb{N}^{*}$ is a context-free language if the original language $L \subseteq \Sigma^{*}$ is a context-free language.

### 3.2 Token-space automata construction

In this section, we explain how to construct a recognizer for the extended token language $L'_{E}$ based on the recognizer for the string language $L$. The main idea is based on the construction of a pushdown automaton (PDA) for the inverse homomorphism of a context-free language sketched in Hopcroft et al. ([2006](https://arxiv.org/html/2412.03160v1#bib.bib5), Theorem 7.30). Given a homomorphism $h$ from alphabet $\mathbb{N}$ to alphabet $\Sigma$, and $L$ being a context-free language over $\Sigma$, the construction of a PDA to accept the language $L' = h^{-1}(L)$ is shown in Fig. [3](https://arxiv.org/html/2412.03160v1#S3.F3). As stated in Thm. [2.1](https://arxiv.org/html/2412.03160v1#S2.Thmtheorem1), we can always construct a PDA $M$ which reads the input string in the alphabet $\Sigma$ and accepts the language $L$. The construction of such a PDA is standard and well known in the literature (Hopcroft et al., [2006](https://arxiv.org/html/2412.03160v1#bib.bib5), Chap. 6.3.1). We then construct a PDA $M'$ which reads the input string in the alphabet $\mathbb{N}$ (token IDs in our case) and accepts the language $L'_{E} = F_{\text{tok}}(L)$. The working of the PDA $M'$ is as follows:

1.  It applies the homomorphism $h$ (detokenization in our case) to the input token ID $a$ and puts the result $h(a)$ into a buffer, i.e. mapping the token ID to the corresponding string in the character space.
2.  The underlying PDA $M$ in the character space reads the input characters $h(a)$ and updates its state and stack accordingly.

The resulting PDA $M'$ reads the token IDs as input and decides whether the token IDs form a valid string in the token language $F_{\text{tok}}(L)$.
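
The following is a minimal sketch of this simulation, assuming a toy detokenization table and a hand-written character-level PDA for balanced brackets; it is an illustration of the construction, not the paper's implementation.

```python
# Sketch of the PDA M' built from a character-level recognizer M.
# `vocab` plays the role of h: token ID -> string (a made-up assumption).
vocab = {0: "[", 1: "]", 2: "[]", 3: "[[", 4: "]]"}

def char_pda_step(stack: list[str], ch: str) -> bool:
    """One step of the character-space PDA M for balanced brackets."""
    if ch == "[":
        stack.append("[")
        return True
    if ch == "]" and stack:
        stack.pop()
        return True
    return False  # reject: unmatched "]" or unknown character

def token_pda_accepts(token_ids: list[int]) -> bool:
    """The PDA M': detokenize each token into a buffer, then let M consume it."""
    stack: list[str] = []
    for a in token_ids:
        for ch in vocab[a]:  # buffer h(a), fed character by character
            if not char_pda_step(stack, ch):
                return False
    return not stack  # accept iff the stack is empty at the end

print(token_pda_accepts([3, 2, 4]))  # "[[" + "[]" + "]]" -> True
print(token_pda_accepts([3, 4, 1]))  # "[[" + "]]" + "]"  -> False
```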

Figure 3: Construction of a PDA $M'$ to accept the language $h^{-1}(L)$. In the context of LLMs, the input $a$ is a token ID, the homomorphism $h$ is detokenization, the buffer is used to store the token $h(a)$, the PDA state is the current state of the PDA in the character space, and the PDA stack is the stack of the PDA in the character space.

4 Tokenization with Unicode characters
--------------------------------------

In this section, we answer Q2: Is the presence of Unicode characters in the input language a challenge for the structure-preserving property of tokenization?

When the source language contains Unicode characters, the tokenization process becomes more complex because a single character can be represented by multiple tokens which are not detokenizable independently. For example, the Chinese character 你 (U+4F60, "you") is tokenized as [19526, 254] by the GPT-2 tokenizer, but the token 19526 or 254 alone does not correspond to any character. Knowing only the token 19526 is insufficient to determine the character 你, as the context provided by the token 254 is also necessary, as illustrated in Fig. [1](https://arxiv.org/html/2412.03160v1#S1). This dependency on the next token seems to break the homomorphic property of the detokenization function, as shown in Fig. [1](https://arxiv.org/html/2412.03160v1#S1). However, considering that the tokenization function is actually operating on byte-level representations of the Unicode characters, we can prove that the token language still retains the structure of the original language.
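
The byte-level view can be made concrete with the GPT-2 tokenizer; the IDs in the comments are the expected values from the example above.

```python
from transformers import GPT2Tokenizer

# The byte-level view of a multi-byte Unicode character.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

print("你".encode("utf-8"))       # b'\xe4\xbd\xa0' -- three UTF-8 bytes
print(tokenizer.encode("你"))     # expected: [19526, 254]

# Each token covers a chunk of bytes rather than whole characters,
# so decoding a single token need not yield a valid character:
print(tokenizer.decode([19526]))  # a partial (invalid) character
```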

###### Proposition 4.1.

Byte-level tokenization is inverse-homomorphic from token IDs to byte sequences.

With exactly the same argument as in Proposition [3.2](https://arxiv.org/html/2412.03160v1#S3.Thmproposition2), we can show that byte-level tokenization is inverse-homomorphic from token IDs to byte sequences. Character-level tokenization on ASCII characters is a special case of byte-level tokenization where each character is represented by a single byte.

###### Lemma 4.1 (Character Encoding Scheme Closure).

Context-free languages are closed under any finite-length character encoding scheme, such as UTF-8 or UTF-16, which maps characters to byte sequences.

Lemma [4.1](https://arxiv.org/html/2412.03160v1#S4.Thmtheorem1) is trivially true because a finite-length character encoding scheme is a string replacement operation: it maps each character to a fixed byte sequence, i.e. it is a string homomorphism, and context-free languages are closed under homomorphism.

Combining Proposition [4.1](https://arxiv.org/html/2412.03160v1#S4.Thmproposition1) and Lemma [4.1](https://arxiv.org/html/2412.03160v1#S4.Thmtheorem1), we can show that the following three languages all have the same structure:

*   The source language $L$ over the character alphabet.
*   The byte-level language $L_{b}$, which is the image of $L$ under a character encoding scheme such as UTF-8.
*   The token language $L_{t}$, which is the image of $L_{b}$ under the byte-level tokenization function.
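
The first step, viewing the encoding scheme as a per-character string homomorphism, can be checked directly in code:

```python
# UTF-8 encoding acts as a string homomorphism from characters to bytes:
# encoding a concatenation equals concatenating the encodings.
u, v = "a你", "b界"
assert (u + v).encode("utf-8") == u.encode("utf-8") + v.encode("utf-8")

# Each character maps to a fixed, finite byte sequence:
for ch in u + v:
    print(ch, "->", list(ch.encode("utf-8")))
```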

5 Proper tokenization language
------------------------------

In the previous section, we have shown that the image of a context-free language under extended tokenization, either byte-level or character-level, is still a context-free language. In this section, we investigate Q3: Given a context-free or regular source language $L$, what is the structure of the proper tokenization language $L'$ after tokenization?

| Model | BPE |
| --- | --- |
| Training | Starts from a basic token vocabulary, i.e. 256 byte values as the initial tokens, and learns rules to merge tokens |
| Learns | An ordered list of binary merges and a vocabulary of final tokens |
| Tokenizing | Given a text, maps it to a sequence of bytes and iteratively applies the merge rules (in order) to the byte sequence until no more merges can be performed |

Table 1: Byte Pair Encoding (BPE) Algorithm

### 5.1 Byte Pair Encoding (BPE)

As described in Table [1](https://arxiv.org/html/2412.03160v1#S5.T1), the BPE algorithm iteratively applies the merge operations (in order) to the token sequence until no more merge operations can be performed. The token sequence after the last merge operation is the proper tokenization of the input string.

**Two types of improper tokenization:** We define two types of improper tokenization for BPE:

*   Mergeable tokenization: A tokenization $t_{1}, t_{2}, \cdots, t_{n}$ is called mergeable if there exists a token $t_{i+1}$ that can be merged with $t_{i}$.
*   Tokenization with wrong merge order: A tokenization $t_{1}, t_{2}, \cdots, t_{n}$ is called a tokenization with the wrong merge order if it is not the proper tokenization and cannot be merged further.

###### Example 5.1.

Consider a learnt BPE tokenizer with vocabulary $\{a, b, aa, aaa, ab, bb\}$ and the following merge operations in order:

1.  $a, a \rightarrow aa$
2.  $aa, a \rightarrow aaa$
3.  $a, b \rightarrow ab$
4.  $b, b \rightarrow bb$

Given the input string $aaabb$, the proper tokenization is $aaa, bb$. However, if we exchange the order of merge rules 2 and 3, we get the tokenization $aa, ab, b$, which is an improper tokenization with the wrong merge order.
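
A minimal sketch of this ordered-merge procedure, reproducing Example 5.1, is given below; the code is an illustrative assumption rather than a production BPE implementation.

```python
# BPE tokenization with an ordered merge list (highest priority first).
def bpe_tokenize(text: str, merges: list[tuple[str, str]]) -> list[str]:
    tokens = list(text)  # start from single characters (bytes in real BPE)
    while True:
        # find the highest-priority merge that applies anywhere
        best = None
        for rank, (x, y) in enumerate(merges):
            for i in range(len(tokens) - 1):
                if tokens[i] == x and tokens[i + 1] == y:
                    best = (rank, i)
                    break
            if best:
                break
        if best is None:
            return tokens  # no merge applies: this is the proper tokenization
        rank, i = best
        x, y = merges[rank]
        tokens[i:i + 2] = [x + y]  # apply the merge

merges = [("a", "a"), ("aa", "a"), ("a", "b"), ("b", "b")]
print(bpe_tokenize("aaabb", merges))  # ['aaa', 'bb'] -- proper tokenization

swapped = [("a", "a"), ("a", "b"), ("aa", "a"), ("b", "b")]
print(bpe_tokenize("aaabb", swapped))  # ['aa', 'ab', 'b'] -- wrong merge order
```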

It's straightforward to detect mergeable tokens in a token sequence by looping through the adjacent pairs of tokens and checking if they can be merged. To detect a tokenization with the wrong merge order, one can use the unmerge-remerge method (a sketch follows the list):

1.  Unmerge: Given a token sequence $t_{1}, t_{2}, \cdots, t_{n}$, unmerge the tokens in the reverse order of the merge operations to get the sequence of bytes $b_{1}, b_{2}, \cdots, b_{m}$.
2.  Remerge: Apply the BPE algorithm to $b_{1}, b_{2}, \cdots, b_{m}$ to get the proper tokenization $s_{1}, s_{2}, \cdots, s_{k}$.
3.  Check: If $t_{1}, t_{2}, \cdots, t_{n} \neq s_{1}, s_{2}, \cdots, s_{k}$, then $t_{1}, t_{2}, \cdots, t_{n}$ is an improper tokenization with the wrong merge order.
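
A sketch of both checks, reusing `bpe_tokenize` and `merges` from the previous listing (both helpers are illustrative assumptions):

```python
# Detecting the two kinds of improper tokenization from Section 5.1.
def is_mergeable(tokens: list[str], merges: list[tuple[str, str]]) -> bool:
    """Detect a mergeable tokenization by scanning adjacent pairs."""
    pairs = set(merges)
    return any((tokens[i], tokens[i + 1]) in pairs
               for i in range(len(tokens) - 1))

def has_wrong_merge_order(tokens: list[str],
                          merges: list[tuple[str, str]]) -> bool:
    """Unmerge down to the base alphabet, remerge properly, and compare."""
    chars = "".join(tokens)                 # unmerge
    proper = bpe_tokenize(chars, merges)    # remerge
    return tokens != proper and not is_mergeable(tokens, merges)

print(is_mergeable(["aa", "a", "bb"], merges))           # True: (aa, a) merges
print(has_wrong_merge_order(["aa", "ab", "b"], merges))  # True: proper is ['aaa', 'bb']
```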

### 5.2 Is the Proper Tokenization Language Context-Free?

###### Proposition 5.1.

The proper tokenization language $L'$ is a subset of the extended tokenization language $L'_{E}$. More specifically, it is the intersection of $L'_{E}$ and the proper tokenization space $\mathbb{N}_{\text{proper}}^{*}$.

A subset of a context-free language is not necessarily a context-free language. However, if we can show that the proper tokenization space $\mathbb{N}_{\text{proper}}^{*}$ is a regular language, then we can use Theorem [2.3](https://arxiv.org/html/2412.03160v1#S2.Thmtheorem3) to conclude that the proper tokenization language $L'$ is also a context-free language.

Due to the closure of regular languages under complementation, it is sufficient to show that the improper tokenization language $L'_{I}$ is a regular language. Due to the closure of regular languages under finite union, it is in turn sufficient to show that both the mergeable tokenizations and the tokenizations with the wrong merge order form regular languages. While the first is straightforward, the second is more challenging.

The unmerge-remerge method is a multi-pass algorithm that is not compatible with the finite-state automaton (FSA) construction.

We leave this as an open problem for future study.

6 Discussion
------------

Suppose that we train a language model $M$ as an acceptor to recognize a context-free language $L$. There are two tokenization languages:

*   Proper tokenization language $L'$
*   Extended tokenization language $L'_{E}$

Which one does the model $M$ learn? It is actually sufficient for the model to learn the extended tokenization language $L'_{E}$. Given two strings $s_{1} \in L$ and $s_{2} \notin L$, the model $M$ can differentiate between them by checking whether the tokenization is in $L'_{E}$ or not. We conclude that learning the extended tokenization language is sufficient for the model to recognize the context-free language $L$ correctly. Therefore, the expressiveness of the neural network model is not limited by the tokenization algorithm used.

7 Related Work
--------------

#### Study of Tokenization

Kudo ([2018](https://arxiv.org/html/2412.03160v1#bib.bib7)) introduced the concept of proper tokenization versus general tokenization, referring to the ambiguity present in tokenization algorithms. They suggested that this ambiguity could be leveraged during training to improve language model robustness. Singh and Strouse ([2024](https://arxiv.org/html/2412.03160v1#bib.bib12)) investigated how different tokenization schemes affect arithmetic tasks in large language models. In their analysis of subword tokenization methods, Bostrom and Durrett ([2020](https://arxiv.org/html/2412.03160v1#bib.bib1)) found that unigram tokenization aligns better with morphological structures and often surpasses byte-pair encoding (BPE) in downstream tasks. Further, Zouhar et al. ([2023](https://arxiv.org/html/2412.03160v1#bib.bib16)) examined the link between tokenization and channel efficiency, proposing that an efficient tokenizer maximizes the channel's usage from an information-theoretic perspective.

#### Enforcing Output Structure in LLMs

Another area of research focuses on constraining the outputs of large language models (LLMs) to adhere to specific grammatical structures, enhancing performance in tasks like code synthesis and semantic parsing. Deutsch et al. ([2019](https://arxiv.org/html/2412.03160v1#bib.bib2)) introduced a method to constrain language model outputs using pushdown automata, suitable for generating context-free languages. Kuchnik et al. ([2023](https://arxiv.org/html/2412.03160v1#bib.bib6)) and Willard and Louf ([2023](https://arxiv.org/html/2412.03160v1#bib.bib15)) explored techniques for restricting LLM outputs to regular languages, while Shin et al. ([2021](https://arxiv.org/html/2412.03160v1#bib.bib11)), Geng et al. ([2024](https://arxiv.org/html/2412.03160v1#bib.bib3)), and Wang et al. ([2023](https://arxiv.org/html/2412.03160v1#bib.bib13)) proposed methods for constraining outputs to context-free grammars. Our work, in contrast, focuses on understanding the properties of tokenization algorithms and the structure of the resulting token language. Additionally, regarding proper tokenization, guidance-ai ([2024](https://arxiv.org/html/2412.03160v1#bib.bib4)) introduced Token Healing, a method designed to iteratively retokenize input strings, aiming to recover the correct tokenization.

8 Conclusion
------------

In conclusion, our work formalizes the tokenization process as an inverse homomorphism for context-free languages, providing a rigorous framework for understanding its structural properties. We demonstrated that the structure of context-free and regular languages is preserved through detokenization, ensuring that the expressiveness of neural architectures is not compromised by tokenization. This property holds even in the presence of Unicode characters, with byte-level tokenization allowing for the preservation of language structure. Moreover, we introduced the concept of proper tokenization and highlighted its implications for the structure of the resulting token languages.

Our findings underscore the importance of tokenization as more than a mere preprocessing step; it is a structural operation that affects the language processing capabilities of large language models. Future work could address the complexity of improper tokenizations, especially in practical implementations, and explore the implications of these findings for optimizing language models further.

References
----------

*   Bostrom and Durrett (2020) Kaj Bostrom and Greg Durrett. 2020. [Byte Pair Encoding is Suboptimal for Language Model Pretraining](https://doi.org/10.18653/v1/2020.findings-emnlp.414). In _Findings of the Association for Computational Linguistics: EMNLP 2020_, pages 4617–4624, Online. Association for Computational Linguistics.
*   Deutsch et al. (2019) Daniel Deutsch, Shyam Upadhyay, and Dan Roth. 2019. [A general-purpose algorithm for constrained sequential inference](https://doi.org/10.18653/v1/K19-1045). In _Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)_, pages 482–492, Hong Kong, China. Association for Computational Linguistics.
*   Geng et al. (2024) Saibo Geng, Martin Josifoski, Maxime Peyrard, and Robert West. 2024. [Grammar-constrained decoding for structured NLP tasks without finetuning](http://arxiv.org/abs/2305.13971).
*   guidance-ai (2024) guidance-ai. 2024. Guidance. [https://github.com/guidance-ai/guidance](https://github.com/guidance-ai/guidance). Accessed: 2024-03-12.
*   Hopcroft et al. (2006) John E. Hopcroft, Rajeev Motwani, and Jeffrey D. Ullman. 2006. _Introduction to Automata Theory, Languages, and Computation (3rd Edition)_. Addison-Wesley Longman Publishing Co., Inc., USA.
*   Kuchnik et al. (2023) Michael Kuchnik, Virginia Smith, and George Amvrosiadis. 2023. [Validating large language models with ReLM](http://arxiv.org/abs/2211.15458).
*   Kudo (2018) Taku Kudo. 2018. [Subword Regularization: Improving Neural Network Translation Models with Multiple Subword Candidates](https://doi.org/10.18653/v1/P18-1007). In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_, pages 66–75, Melbourne, Australia. Association for Computational Linguistics.
*   Kudo and Richardson (2018) Taku Kudo and John Richardson. 2018. [SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing](http://arxiv.org/abs/1808.06226).
*   Radford et al. (2019) Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
*   Sennrich et al. (2016) Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. [Neural machine translation of rare words with subword units](https://doi.org/10.18653/v1/P16-1162). In _Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_, pages 1715–1725, Berlin, Germany. Association for Computational Linguistics.
*   Shin et al. (2021) Richard Shin, Christopher Lin, Sam Thomson, Charles Chen, Subhro Roy, Emmanouil Antonios Platanios, Adam Pauls, Dan Klein, Jason Eisner, and Benjamin Van Durme. 2021. [Constrained language models yield few-shot semantic parsers](https://doi.org/10.18653/v1/2021.emnlp-main.608). In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_, pages 7699–7715, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
*   Singh and Strouse (2024) Aaditya K. Singh and D.J. Strouse. 2024. [Tokenization counts: the impact of tokenization on arithmetic in frontier LLMs](https://doi.org/10.48550/arXiv.2402.14903). ArXiv:2402.14903 [cs].
*   Wang et al. (2023) Bailin Wang, Zi Wang, Xuezhi Wang, Yuan Cao, Rif A. Saurous, and Yoon Kim. 2023. [Grammar prompting for domain-specific language generation with large language models](https://proceedings.neurips.cc/paper_files/paper/2023/file/cd40d0d65bfebb894ccc9ea822b47fa8-Paper-Conference.pdf). In _Advances in Neural Information Processing Systems_, volume 36, pages 65030–65055. Curran Associates, Inc.
*   Wang et al. (2019) Changhan Wang, Kyunghyun Cho, and Jiatao Gu. 2019. [Neural machine translation with byte-level subwords](http://arxiv.org/abs/1909.03341).
*   Willard and Louf (2023) Brandon T. Willard and Rémi Louf. 2023. [Efficient guided generation for large language models](http://arxiv.org/abs/2307.09702).
*   Zouhar et al. (2023) Vilém Zouhar, Clara Meister, Juan Gastaldi, Li Du, Mrinmaya Sachan, and Ryan Cotterell. 2023. [Tokenization and the noiseless channel](https://doi.org/10.18653/v1/2023.acl-long.284). In _Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_, pages 5184–5207, Toronto, Canada. Association for Computational Linguistics.

Appendix A Example of Homomorphic Tokenization API
--------------------------------------------------

In this section, we investigate implementations of tokenization in the real world and show that they still preserve the context-free property of the source language.

Recall that a function $f: \Sigma^{*} \rightarrow \Gamma^{*}$ is homomorphic if $f(x \oplus y) = f(x) \oplus f(y)$ for any $x, y \in \Sigma^{*}$. In the context of LMs, we want to know whether the decoding function `def tokenizer_decode(token_ids: List[int]) -> str:` is homomorphic. In the following, we will use the API of the tokenizers library[^4] to illustrate the tokenization process. Generally speaking, the decoding function consists of two steps:

[^4]: [https://github.com/huggingface/tokenizers](https://github.com/huggingface/tokenizers)

1.  Convert the token IDs to tokens: `tokenizer.convert_ids_to_tokens(token_ids: List[int]) -> List[str]`
2.  Join the tokens to form a string and apply some post-processing if needed: `tokenizer.convert_tokens_to_string(tokens: List[str]) -> str`

We will show that step (2) can cause the homomorphism to break, as the snippet below illustrates.
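
The two steps can be observed directly with a GPT-2 tokenizer; the token strings in the comments are expected values (GPT-2 marks a leading space with `Ġ`), not guaranteed outputs.

```python
from transformers import GPT2Tokenizer

# The two decoding steps, made explicit.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

ids = tokenizer.encode("Hello World")
tokens = tokenizer.convert_ids_to_tokens(ids)       # step (1): e.g. ['Hello', 'ĠWorld']
text = tokenizer.convert_tokens_to_string(tokens)   # step (2): "Hello World"
print(ids, tokens, repr(text))
```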

Appendix B Leading space in tokenization
----------------------------------------

Many tokenizers, including those of LLaMA and T5, employ a longstanding practice of distinguishing between prefix tokens and non-prefix tokens by baking the space character into the prefix token. This heuristic breaks the homomorphism because the leading space in the token will be lost if the token is at the beginning of a string. An example of "Hello World" tokenized by LLaMA is given below:

"Hello World" is tokenized as [22172, 3186], i.e. ["␣Hello", "␣World"], by LLaMA.

We define $h$ as the detokenization function and $h^{-1}$ as the tokenization function. Given

$$h(22172) = \text{"␣Hello"}, \qquad h(3186) = \text{"␣World"},$$

we see that the homomorphism is broken:

$$h(22172, 3186) = \text{"Hello␣World"} \neq h(22172) + h(3186) = \text{"␣Hello␣World"}.$$

And if we reverse the order of the tokens, we still get the same problem:

$$h(3186, 22172) = \text{"World␣Hello"} \neq h(3186) + h(22172) = \text{"␣World␣Hello"}.$$

The above example shows that the detokenization process is not homomorphic here and depends on the context of the token in the string, i.e. whether the token is at the beginning of the string or not.
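
This behavior can be observed with the Hugging Face LLaMA-2 tokenizer; the sketch below assumes access to the gated checkpoint, and the exact strings returned may depend on the tokenizer version.

```python
from transformers import AutoTokenizer

# The token IDs 22172 ("␣Hello") and 3186 ("␣World") are taken from the
# example above; access to the gated LLaMA-2 checkpoint is assumed.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

joint = tokenizer.decode([22172, 3186])
parts = tokenizer.decode([22172]) + tokenizer.decode([3186])

print(repr(joint))     # expected: 'Hello World'
print(repr(parts))     # differs: the leading-space handling depends on position
print(joint == parts)  # False -- the homomorphism is broken
```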

However, this break is relatively easy to fix by simply considering an intermediate CFL, i.e. the language with a leading space.

As the operation of adding a leading space to a string is a regular operation, we still obtain a CFL.
|