# demo_bvv_moe: Mixture-of-Experts LLM with Frozen Shared Embeddings (Russian + Chinese, Demo-Scale)

This repository contains the model and associated resources from the papers cited below.

## Model Summary
- Model size: ~0.9B parameters
- Languages: Russian, Chinese, some English
- Frozen, Unicode/visual token embeddings: All tokens (for all supported languages) share the same frozen embedding matrix, based on Unicode and visual forms, not statistical co-occurrence.
- Direct Mixture-of-Experts merge: Two language models (Russian- and Chinese-oriented) are combined without retraining via simple logits averaging, made possible by the strictly shared embeddings.
- Demo-scale: Trained on a modest dataset (9B tokens), with a small SFT fraction (~10%), intended to illustrate the principle, not to maximize absolute scores.
- Comparison available: Separately released standard (unfrozen embeddings) models for direct comparison of convergence and generalization.
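The defining trick is that the embedding matrix is a fixed function of token identity (Unicode code points and visual glyph forms in the actual models) and is never updated by gradient descent. The sketch below illustrates only the freezing mechanics with a deterministic seeded matrix standing in for the real Unicode/visual construction; the function name and parameters are illustrative, not the repository's API.

```python
import torch
import torch.nn as nn

def deterministic_frozen_embedding(vocab_size: int, dim: int) -> nn.Embedding:
    """Build an embedding matrix that depends only on token identity.

    A fixed seed stands in for the real Unicode/visual-form construction:
    the weights are reproducible and carry no corpus statistics.
    """
    gen = torch.Generator().manual_seed(0)
    weight = torch.randn(vocab_size, dim, generator=gen)
    emb = nn.Embedding(vocab_size, dim, _weight=weight)
    emb.weight.requires_grad_(False)  # frozen: excluded from gradient updates
    return emb

emb = deterministic_frozen_embedding(vocab_size=256, dim=32)
```

Because every model built this way shares the exact same matrix, two independently trained models remain token-compatible by construction.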
demo_bvv_moe is a demonstration-scale Mixture-of-Experts (MoE) decoder-only causal language model combining two independently trained models (Russian and Chinese) with strictly frozen, shared visual/Unicode-based token embeddings.
- Each "expert" was pre-trained on a small subordinate corpus (English-Russian, English-Chinese) with ~9B total tokens, mixing 10% SFT-like samples, using the same, fully frozen embedding matrix for all languages.
- After separate training, the two models were seamlessly merged at the transformer block level using a "mean logits" MoE fusion approach β thanks to the shared frozen token embeddings, no retraining/alignment of embeddings was needed.
- This model is a conceptual/research artifact, designed to illustrate that frozen, non-semantic embeddings enable combining multilingual LMs into a working MoE model without catastrophic loss of performance.
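The "mean logits" fusion described above can be sketched in a few lines. This is a minimal illustration, not the repository's implementation: the two `nn.Sequential` stand-ins play the role of the Russian- and Chinese-oriented experts, which in the real model share one frozen vocabulary and embedding matrix.

```python
import torch
import torch.nn as nn

class MeanLogitsMoE(nn.Module):
    """Merge two causal LMs over the same vocabulary by averaging logits.

    Because both experts share one frozen embedding matrix, their output
    distributions live in the same token space and can be averaged directly.
    """

    def __init__(self, expert_a: nn.Module, expert_b: nn.Module):
        super().__init__()
        self.experts = nn.ModuleList([expert_a, expert_b])

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        logits = [expert(input_ids) for expert in self.experts]
        return torch.stack(logits).mean(dim=0)

# Tiny stand-ins for the two experts (shared vocab size).
vocab, dim = 100, 16
expert_ru = nn.Sequential(nn.Embedding(vocab, dim), nn.Linear(dim, vocab))
expert_zh = nn.Sequential(nn.Embedding(vocab, dim), nn.Linear(dim, vocab))
moe = MeanLogitsMoE(expert_ru, expert_zh)
out = moe(torch.tensor([[1, 2, 3]]))  # logits of shape (batch, seq, vocab)
```

No parameter of either expert is touched during the merge; the fusion is purely a forward-pass combination.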
## Intended Purpose
This model is not an end-user chatbot solution.
Its purpose is:
- To demonstrate new possibilities in LM architecture:
- Multilingual/multimodal MoE with frozen, shared embeddings
- Modular, "plug-and-play" scaling and mixing of LMs
- Comparison between frozen and unfrozen/learnable embeddings in real convergence
- As a reference implementation for research communities investigating model unification, low-resource language mixing, or studying where "meaning" emerges inside LLM architectures.
## Evaluation

| Benchmark | Accuracy |
|-----------|----------|
| MMLU | 23.44% ± 0.28% |
| ARC-e | 23.74% ± 1.02% |
| ARC-c | 25.28% ± 2.07% |
| C-SENSE | 19.69% ± 1.13% |
| SQuAD | 19.73% ± 1.45% |
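Assuming the ± figures are standard errors of the mean over per-example 0/1 scores (the usual convention in evaluation harnesses; the model card does not state this), they can be reproduced from raw results like so. The function name and the sample data are hypothetical.

```python
import math

def acc_with_stderr(correct: list[int]) -> tuple[float, float]:
    """Mean accuracy and its standard error over binary per-example scores."""
    n = len(correct)
    p = sum(correct) / n
    se = math.sqrt(p * (1 - p) / n)  # binomial standard error
    return p, se

# Hypothetical per-example outcomes; real numbers come from the benchmark runs.
p, se = acc_with_stderr([1, 0, 0, 1, 0, 0, 0, 1])
```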
This work demonstrates that transformer blocks, not token embeddings, carry the semantic burden in LLMs: a step toward modular, fusable, multilingual LMs.

## Example Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model = AutoModelForCausalLM.from_pretrained('Bochkov/demo_bvv_moe', trust_remote_code=True).to('cuda')
tokenizer = AutoTokenizer.from_pretrained('Bochkov/demo_bvv_moe')

inputs = tokenizer("Hello, мир! ", return_tensors="pt").to('cuda')
outputs = model.generate(
    **inputs,
    max_new_tokens=100,
    temperature=0.8,
    top_k=50,
    top_p=0.95,
    do_sample=True,
)
print(tokenizer.decode(outputs[0]))
```
## Citation & Concept
If you find this work helpful or inspiring, please consider citing the associated papers:
```bibtex
@article{bochkov2025emergent,
  title={Emergent Semantics Beyond Token Embeddings: Transformer {LM}s with Frozen Visual Unicode Representations},
  author={Andrey Bochkov},
  journal={Transactions on Machine Learning Research},
  issn={2835-8856},
  year={2025},
  url={https://openreview.net/forum?id=Odh8IynO1o},
}

@misc{bochkov2025growingtransformersmodularcomposition,
  title={Growing Transformers: Modular Composition and Layer-wise Expansion on a Frozen Substrate},
  author={A. Bochkov},
  year={2025},
  eprint={2507.07129},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2507.07129},
}
```