Upload folder using huggingface_hub
- README.md +127 -0
- json_tokenizer_vocab.json +0 -0
- tokenizer_config.json +43 -0
README.md
ADDED
@@ -0,0 +1,127 @@
---
library_name: transformers
license: mit
language:
- en
tags:
- tokenizer
- json
- bpe
- structured-data
- llm
pipeline_tag: text-generation
---

# json-tokenizer: Structure-Aware Tokenization for JSON

A structure-aware tokenizer that assigns dedicated single tokens to JSON grammar elements, learns a compact key vocabulary from training data, and applies byte-pair encoding (BPE) only to value content.

**Paper:** [Structure-Aware Tokenization for JSON: Exploiting Schema Repetition for Compact Token Sequences with a 90x Smaller Vocabulary](https://doi.org/10.5281/zenodo.XXXXXXX)

**Code:** [github.com/anthony-maio/json-tokenizer](https://github.com/anthony-maio/json-tokenizer)

## Key Results

| Metric | Value |
|--------|-------|
| Token savings vs cl100k_base | **5-15%** on schema-repetitive JSON |
| Vocabulary size | **4,190 tokens** (vs 100,256 for cl100k_base) |
| Vocab reduction | **~90x smaller** |
| Roundtrip fidelity | **100% lossless** across 4,200+ test objects |
| Crossover point | Beats cl100k_base at just **558 tokens** |

## Architecture

Three-tier vocabulary:
1. **Structural tokens** (IDs 0-15): `{`, `}`, `[`, `]`, `:`, `,`, `true`, `false`, `null`, type markers
2. **Key vocabulary** (IDs 32-N): Learned single-token keys from training data (65 keys)
3. **BPE subwords** (IDs N+1 to N+B): Byte-pair encoding trained on JSON value strings (4,096 tokens)

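The practical consequence of tier 2 is that a learned key costs one token per occurrence, no matter how long the key string is. A minimal sketch of checking this yourself (the load path is a placeholder, and `temperature_celsius` is only assumed to be in the learned key vocabulary):

```python
from json_tokenizer import JSONTokenizer

# Placeholder path: any tokenizer previously saved with tok.save(...).
tok = JSONTokenizer.load("./path/to/saved/tokenizer")

# 100 records sharing one schema. If "temperature_celsius" is in the key
# vocabulary, each occurrence costs a single token; a general-purpose BPE
# pays for the full key string in every record.
records = [{"temperature_celsius": i} for i in range(100)]
print(len(tok.encode(records)))
```
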
## This Model

This pretrained tokenizer was trained on four structured JSON datasets:
- GeoJSON city features (geographic data)
- Observability telemetry logs (monitoring data)
- Kubernetes manifests (infrastructure config)
- Structured API outputs

**Total training objects:** 10,530

**Vocabulary:** 4,190 tokens (16 structural + 16 reserved + 65 keys + 4,096 BPE)

## Usage

### With HuggingFace Transformers

```python
# Requires: pip install json-tokenizer[huggingface]
from json_tokenizer.hf_compat import JSONPreTrainedTokenizer

tokenizer = JSONPreTrainedTokenizer.from_pretrained("anthonym21/json-tokenizer-structured")

# Encode JSON
output = tokenizer('{"name": "Alice", "age": 30, "active": true}')
print(output["input_ids"])

# Decode back to JSON (lossless)
decoded = tokenizer.decode(output["input_ids"])
print(decoded)  # {"name": "Alice", "age": 30, "active": true}
```
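
If `JSONPreTrainedTokenizer` inherits the standard `PreTrainedTokenizer` batching behavior (an assumption, not something this card documents), batch encoding with padding and tensor output should work the usual way:

```python
# Hedged sketch: standard transformers-style batch call, assuming the wrapper
# supports padding and return_tensors like any PreTrainedTokenizer subclass.
# The <pad> token defined in tokenizer_config.json is what makes padding possible.
batch = tokenizer(
    ['{"name": "Alice"}', '{"name": "Bob", "age": 30}'],
    padding=True,
    return_tensors="pt",  # requires: pip install torch
)
print(batch["input_ids"].shape)  # (2, longest_sequence_in_batch)
```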

### With Core Library

```python
# Requires: pip install json-tokenizer
from json_tokenizer import JSONTokenizer

tokenizer = JSONTokenizer.load("./path/to/saved/tokenizer")

# Encode (accepts Python dicts, lists, or JSON strings)
ids = tokenizer.encode({"name": "Alice", "age": 30})

# Decode (lossless roundtrip)
json_str = tokenizer.decode(ids)
```

## Training Your Own

```python
from json_tokenizer import JSONTokenizer

tok = JSONTokenizer(bpe_vocab_size=4096, max_key_vocab=512)
tok.train_from_json_files(["your_data.jsonl"])
tok.save("./my_tokenizer")

# Convert to HF format
from json_tokenizer.hf_compat import JSONPreTrainedTokenizer
hf_tok = JSONPreTrainedTokenizer.from_json_tokenizer(tok)
hf_tok.save_pretrained("./my_hf_tokenizer")
```

## Where It Wins / Where It Loses

| Scenario | json-tokenizer | cl100k_base |
|----------|---------------|-------------|
| GeoJSON (schema-repetitive) | **+7.8% savings** | baseline |
| Telemetry logs | **+5.5% savings** | baseline |
| Batch JSON arrays | **+9.3% savings** | baseline |
| Config objects | **+12.3% savings** | baseline |
| Prose-heavy JSON (Alpaca) | -26.2% | **wins** |
| K8s manifests (deep nesting) | break-even | break-even |

**Best for:** API responses, observability logs, function calling, structured outputs

**Not for:** Prose-heavy JSON, general-purpose text

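To see where your own data lands in this table, a direct token count against cl100k_base is easy to run. A minimal sketch, assuming `tiktoken` is installed (`pip install tiktoken`) and the placeholder load path is filled in:

```python
import json

import tiktoken
from json_tokenizer import JSONTokenizer

doc = {"service": "api", "status": "ok", "latency_ms": 12}

json_tok = JSONTokenizer.load("./path/to/saved/tokenizer")  # placeholder path
cl100k = tiktoken.get_encoding("cl100k_base")

ours = len(json_tok.encode(doc))
theirs = len(cl100k.encode(json.dumps(doc)))
print(f"json-tokenizer: {ours} tokens, cl100k_base: {theirs} tokens")
print(f"savings vs cl100k_base: {100 * (theirs - ours) / theirs:.1f}%")
```
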
## Citation

```bibtex
@article{maio2026json,
  title={Structure-Aware Tokenization for JSON: Exploiting Schema Repetition for Compact Token Sequences with a 90x Smaller Vocabulary},
  author={Maio, Anthony},
  year={2026},
  url={https://doi.org/10.5281/zenodo.XXXXXXX}
}
```

## License

MIT
json_tokenizer_vocab.json
ADDED
The diff for this file is too large to render. See raw diff.
tokenizer_config.json
ADDED
@@ -0,0 +1,43 @@
{
  "added_tokens_decoder": {
    "0": {
      "content": "<pad>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "1": {
      "content": "<s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "2": {
      "content": "</s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "15": {
      "content": "<unk>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "backend": "custom",
  "bos_token": "<s>",
  "eos_token": "</s>",
  "model_max_length": 1000000000000000019884624838656,
  "pad_token": "<pad>",
  "tokenizer_class": "JSONPreTrainedTokenizer",
  "unk_token": "<unk>"
}