---
license: apache-2.0
tags:
- chinese
- english
- causal-lm
- trainable-embeddings
- conceptual-demo
- transformer
pipeline_tag: text-generation
library_name: transformers
---
# demo_bvv_unfrozen_zh

This repository contains the model and associated resources from the papers listed in the Citation section below.
## Model summary

demo_bvv_unfrozen_zh is a 500M-parameter Causal Language Model (LM) trained as an open proof-of-concept for the "frozen embeddings" paradigm. This version uses fully trainable token embeddings (the standard setup) and serves as a baseline for direct comparison with the corresponding frozen-embedding model Bochkov/demo_bvv_zh.
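For context, "freezing" embeddings in a standard PyTorch/`transformers` setup just means excluding the token-embedding table from optimization. The snippet below is a minimal sketch of that idea (assuming the custom model class exposes the standard `get_input_embeddings()` accessor); it is not the training code actually used for the frozen counterpart:

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    'Bochkov/demo_bvv_unfrozen_zh', trust_remote_code=True
)

# In the frozen-embedding paradigm, the token-embedding table is excluded
# from optimization; all other weights train as usual.
model.get_input_embeddings().weight.requires_grad_(False)

# The optimizer then only sees the remaining trainable parameters.
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=3e-4
)
```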
- Architecture: 16-layer Transformer, 1024 hidden dim, 32 attention heads, rotary positional encoding (a rough parameter count is sketched below)
- Vocabulary: custom Unicode-based, 131,072 tokens (Unicode/visual characters + frequent n-grams)
- Embeddings: unfrozen (fully trainable, the classic setup)
- Pretraining data: ~9B tokens of mixed English + Chinese (Wikipedia, SQuAD 2.0, TriviaQA, Natural Questions, etc.), with 10% SFT (instruction/factual Q&A) mixed in
- Purpose: compare the learning capacity and generalization of trainable- vs. frozen-embedding LMs on small data
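As a sanity check on the stated size, the dimensions above land near 0.5B parameters. The back-of-the-envelope count below is a sketch only; the 4x MLP expansion and untied input/output embeddings are assumptions, not confirmed details of this model:

```python
# Rough parameter count from the card's stated dimensions.
vocab, d_model, n_layers = 131_072, 1024, 16

embed = vocab * d_model                # input embedding table
lm_head = vocab * d_model              # output projection (assumed untied)
attn = 4 * d_model * d_model           # Q, K, V, O projections per layer
mlp = 2 * d_model * (4 * d_model)      # up + down projections (assumed 4x expansion)
per_layer = attn + mlp

total = embed + lm_head + n_layers * per_layer
print(f"~{total / 1e9:.2f}B parameters")  # ~0.47B, consistent with the 500M figure
```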
## Intended use
- Academic and engineering demonstration.
- Proof-of-concept for multilingual/fusion/frozen-embedding research.
- NOT intended or suitable for production use or as a source of factual knowledge (the pretraining corpus is only ~9B tokens).
## Model comparison (frozen vs. trainable embeddings)
| Model | Total Params | MMLU avg (%) | BLEU en-zh (%) | BLEU zh-en (%) |
|---|---|---|---|---|
| Bochkov/demo_bvv_zh (frozen) | 0.5B | 19.4 | 1.41 | 7.78 |
| Bochkov/demo_bvv_unfrozen_zh (baseline) | 0.5B | 14.0 | 1.65 | 5.93 |
## Example usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer (trust_remote_code is required for the custom architecture).
model = AutoModelForCausalLM.from_pretrained(
    'Bochkov/demo_bvv_unfrozen_zh',
    trust_remote_code=True,
).to('cuda')
tokenizer = AutoTokenizer.from_pretrained('Bochkov/demo_bvv_unfrozen_zh')

# Encode a prompt and sample a continuation.
inputs = tokenizer("Hello, world! ", return_tensors="pt").to('cuda')
outputs = model.generate(
    **inputs,
    max_new_tokens=100,
    temperature=0.8,
    top_k=50,
    top_p=0.95,
    do_sample=True,
)
print(tokenizer.decode(outputs[0]))
```
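Because the model is trained on mixed English + Chinese data, Chinese prompts work the same way; for example:

```python
# "Hello, world!" in Chinese; same sampling settings as above.
inputs = tokenizer("你好，世界！", return_tensors="pt").to('cuda')
outputs = model.generate(
    **inputs,
    max_new_tokens=100,
    temperature=0.8,
    top_k=50,
    top_p=0.95,
    do_sample=True,
)
print(tokenizer.decode(outputs[0]))
```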
## Citation
If you find this work helpful or inspiring, please consider citing the associated papers:
```bibtex
@article{bochkov2025emergent,
  title={Emergent Semantics Beyond Token Embeddings: Transformer {LM}s with Frozen Visual Unicode Representations},
  author={Andrey Bochkov},
  journal={Transactions on Machine Learning Research},
  issn={2835-8856},
  year={2025},
  url={https://openreview.net/forum?id=Odh8IynO1o}
}

@misc{bochkov2025growingtransformersmodularcomposition,
  title={Growing Transformers: Modular Composition and Layer-wise Expansion on a Frozen Substrate},
  author={A. Bochkov},
  year={2025},
  eprint={2507.07129},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2507.07129}
}
```