Nemotron-Cascade 2: Post-Training LLMs with Cascade RL and Multi-Domain On-Policy Distillation
Abstract
Nemotron-Cascade 2 is a 30B parameter Mixture-of-Experts model with 3B activated parameters that achieves exceptional reasoning and agentic capabilities, matching frontier open models despite its compact size and demonstrating high intelligence density.
We introduce Nemotron-Cascade 2, an open 30B MoE model with 3B activated parameters that delivers best-in-class reasoning and strong agentic capabilities. Despite its compact size, its mathematical and coding reasoning performance approaches that of frontier open models. It is the second open-weight LLM, after DeepSeekV3.2-Speciale-671B-A37B, to achieve Gold Medal-level performance at the 2025 International Mathematical Olympiad (IMO), the International Olympiad in Informatics (IOI), and the ICPC World Finals, demonstrating remarkably high intelligence density with 20x fewer parameters. Relative to Nemotron-Cascade 1, the key technical advancements are as follows. After SFT on a meticulously curated dataset, we substantially expand Cascade RL to cover a much broader spectrum of reasoning and agentic domains. Furthermore, we introduce multi-domain on-policy distillation from the strongest intermediate teacher model for each domain throughout the Cascade RL process, allowing us to efficiently recover from benchmark regressions and sustain strong performance gains along the way. We release the collection of model checkpoints and training data.
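To make the training recipe concrete, here is a minimal conceptual sketch of how sequential Cascade RL might interleave with per-domain on-policy distillation as the abstract describes. This is not the paper's published code: the domain list and the helpers `run_rl_stage`, `evaluate`, and `distill_on_policy` are all hypothetical placeholders standing in for the actual pipeline.

```python
# Conceptual sketch only -- the paper does not publish this loop verbatim.
# All names below (DOMAINS, run_rl_stage, evaluate, distill_on_policy) are
# hypothetical placeholders for the pipeline the abstract describes:
# cascaded RL over domains, with on-policy distillation from the strongest
# intermediate checkpoint per domain to recover regressions along the way.

DOMAINS = ["math", "code", "science", "agentic_tool_use", "instruction_following"]

def cascade_rl(student, sft_checkpoint):
    best_teacher = {}  # strongest intermediate checkpoint seen per domain
    for domain in DOMAINS:
        # 1) RL stage on the current domain (e.g., verifiable rewards).
        student = run_rl_stage(student, domain)

        # 2) Track the best checkpoint per domain; these later serve as
        #    the per-domain teachers for distillation.
        for d in DOMAINS:
            incumbent = best_teacher.get(d, sft_checkpoint)
            if evaluate(student, d) > evaluate(incumbent, d):
                best_teacher[d] = student

        # 3) Multi-domain on-policy distillation: the student samples its
        #    own rollouts and is trained toward each domain teacher's
        #    token-level distribution, recovering earlier-domain regressions
        #    without rerunning full RL on those domains.
        student = distill_on_policy(student, teachers=best_teacher)
    return student
```

The key design point the abstract emphasizes is step 3: because distillation targets come from the student's own on-policy samples, regressions introduced by later RL stages can be repaired cheaply rather than by repeating earlier stages.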
Community
We release the Nemotron-Cascade-2-30B-A3B model and training data at: https://huggingface.co/collections/nvidia/nemotron-cascade-2
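A minimal usage sketch with Hugging Face `transformers` is shown below. The repo id is an assumption inferred from the collection URL and model name; consult the model card for the exact id and the recommended generation settings.

```python
# Minimal usage sketch. The repo id below is assumed from the collection
# URL (nvidia/nemotron-cascade-2) and may differ; check the model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nvidia/Nemotron-Cascade-2-30B-A3B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "Prove that sqrt(2) is irrational."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=1024)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```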
The following papers, found by Librarian Bot via the Semantic Scholar API, are similar to this paper:
- CHIMERA: Compact Synthetic Data for Generalizable LLM Reasoning (2026)
- To Mix or To Merge: Toward Multi-Domain Reinforcement Learning for Large Language Models (2026)
- Step 3.5 Flash: Open Frontier-Level Intelligence with 11B Active Parameters (2026)
- Nanbeige4.1-3B: A Small General Model that Reasons, Aligns, and Acts (2026)
- Unlocking Data Value in Finance: A Study on Distillation and Difficulty-Aware Training (2026)
- CORD: Bridging the Audio-Text Reasoning Gap via Weighted On-policy Cross-modal Distillation (2026)
- Reinforcement-aware Knowledge Distillation for LLM Reasoning (2026)