---
license: apache-2.0
tags:
- prime-rl
- moe
- test-model
library_name: transformers
---
# glm4-moe-tiny

A small (~543M parameter) GLM-4 MoE model for testing only. It is generally compatible with vLLM and HuggingFace Transformers but is meant to be used with [prime-rl](https://github.com/PrimeIntellect-ai/prime-rl). Fine-tuned on [PrimeIntellect/Reverse-Text-SFT](https://huggingface.co/datasets/PrimeIntellect/Reverse-Text-SFT) to provide a non-trivial distribution for KL divergence during RL.

## Quick Start

```bash
uv run rl @ configs/ci/integration/rl_moe/glm4_moe.toml
```

See the [Testing MoE at Small Scale](https://github.com/PrimeIntellect-ai/prime-rl/blob/main/docs/testing-moe-at-small-scale.md) guide for full instructions.

## Model Details

| Parameter | Value |
|-----------|-------|
| Hidden size | 1024 |
| Layers | 24 |
| Experts | 8 |
| Active experts | 4 |
| Parameters | ~543M |

## Links

- [prime-rl](https://github.com/PrimeIntellect-ai/prime-rl) - RL training framework
- [PrimeIntellect](https://www.primeintellect.ai/) - Building infrastructure for decentralized AI
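
## Loading with Transformers

Since the model is generally compatible with HuggingFace Transformers, a quick sanity check can be done with the standard `AutoModelForCausalLM` / `AutoTokenizer` APIs. The sketch below is a minimal example, not part of the prime-rl workflow; the repo id `PrimeIntellect/glm4-moe-tiny` is an assumption, so adjust it to wherever this checkpoint is actually hosted.

```python
# Minimal sketch: load the checkpoint with Transformers and generate a few tokens.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PrimeIntellect/glm4-moe-tiny"  # assumed repo id; replace with the actual one

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The model was fine-tuned on a reverse-text SFT dataset, so any short prompt works for a smoke test.
inputs = tokenizer("Reverse this text: hello world", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```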