---
library_name: transformers
tags:
- prime-rl
- verifiers
- prime-intellect
- reinforcement-learning
- reasoning
- agentic
- mixture-of-experts
license: mit
language:
- en
base_model:
- zai-org/GLM-4.5-Air-Base
pipeline_tag: text-generation
---

# INTELLECT-3

🚀 State-of-the-art 100B+ parameter Mixture-of-Experts model trained with large-scale reinforcement learning

📚 Trained with [prime-rl](https://github.com/PrimeIntellect-ai/prime-rl) infra and [verifiers](https://github.com/PrimeIntellect-ai/verifiers) environments | 🌐 Environments on the [Environments Hub](https://app.primeintellect.ai/dashboard/environments)
📖 Read the Technical Report | 💬 Join our Discord

## Introduction

**INTELLECT-3** is a 106B-parameter (A12B) Mixture-of-Experts reasoning model post-trained from [GLM-4.5-Air-Base](https://huggingface.co/zai-org/GLM-4.5-Air-Base) using supervised fine-tuning (SFT) followed by large-scale reinforcement learning (RL). Training was performed with [prime-rl](https://github.com/PrimeIntellect-ai/prime-rl) using environments built with the [verifiers](https://github.com/PrimeIntellect-ai/verifiers) library. All training and evaluation environments are available on the [Environments Hub](https://app.primeintellect.ai/dashboard/environments). The model, training frameworks, and environments are open-sourced under fully permissive licenses (MIT and Apache 2.0). For more details, see the [technical report](PAPER_LINK_PLACEHOLDER).

## Evaluation

INTELLECT-3 achieves best-in-class performance on math, coding, and reasoning benchmarks:

| Benchmark | Score |
|-----------|-------|
| AIME 2025 | 88.0 |
| LiveCodeBench v6 | 69.3 |
| GPQA Diamond | 74.4 |
| HLE | 14.6 |

## Model Variants

| Model | HuggingFace |
|-------|-------------|
| INTELLECT-3 | [PrimeIntellect/INTELLECT-3](https://huggingface.co/PrimeIntellect/INTELLECT-3) |
| INTELLECT-3-FP8 | [PrimeIntellect/INTELLECT-3-FP8](https://huggingface.co/PrimeIntellect/INTELLECT-3-FP8) |

## Serving with vLLM

The BF16 version can be served on 2x H200 GPUs:

```bash
vllm serve PrimeIntellect/INTELLECT-3 \
  --tensor-parallel-size 2 \
  --tool-call-parser qwen3_coder \
  --reasoning-parser deepseek_r1
```

The FP8 version can be served on a single H200:

```bash
vllm serve PrimeIntellect/INTELLECT-3-FP8 \
  --tool-call-parser qwen3_coder \
  --reasoning-parser deepseek_r1
```

## Citation

```bibtex
@misc{intellect3,
  title={INTELLECT-3: Technical Report},
  author={Prime Intellect Team},
  year={2025},
  url={https://huggingface.co/PrimeIntellect/INTELLECT-3}
}
```
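## Querying the Server

Once one of the `vllm serve` commands above is running, it exposes an OpenAI-compatible HTTP API (by default at `http://localhost:8000/v1`). Below is a minimal client sketch using only the Python standard library; the host/port and sampling parameters (`temperature`, `max_tokens`) are assumptions for illustration, not values prescribed by this model card:

```python
import json
from urllib.request import Request, urlopen

# Standard OpenAI-style chat-completions payload. The model name matches
# the BF16 serve command above; swap in PrimeIntellect/INTELLECT-3-FP8
# if you launched the FP8 server instead.
payload = {
    "model": "PrimeIntellect/INTELLECT-3",
    "messages": [
        {"role": "user", "content": "Prove that the square root of 2 is irrational."}
    ],
    "temperature": 0.6,   # illustrative sampling settings
    "max_tokens": 4096,
}

body = json.dumps(payload).encode("utf-8")

# Assumes the vLLM server from the commands above is listening locally
# on the default port 8000.
req = Request(
    "http://localhost:8000/v1/chat/completions",
    data=body,
    headers={"Content-Type": "application/json"},
)

# Uncomment once the server is up:
# with urlopen(req) as resp:
#     reply = json.loads(resp.read())
#     print(reply["choices"][0]["message"]["content"])
```

The same endpoint also works with the official `openai` Python client by pointing `base_url` at the server.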