---
library_name: transformers
license: other
license_name: nvidia-open-model-license
license_link: >-
https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/
pipeline_tag: text-generation
language:
- en
tags:
- nvidia
- nemotron-terminal
- terminal
- code-agent
- SFT
- pytorch
---
# Nemotron-Terminal Model Family
**Nemotron-Terminal** is a family of models specialized for autonomous terminal interaction, fine-tuned from Qwen3 base models (8B, 14B, and 32B). Developed by NVIDIA, these models are trained on [Nemotron-Terminal-Corpus](https://huggingface.co/datasets/nvidia/Nemotron-Terminal-Corpus), a large-scale open-source dataset for terminal tasks, and achieve performance that rivals frontier models many times their size.
## Model Variants
We release the following variants of the Nemotron-Terminal family:
- Nemotron-Terminal-8B
- Nemotron-Terminal-14B
- Nemotron-Terminal-32B
## Performance on Terminal-Bench 2.0
The Nemotron-Terminal family delivers large accuracy gains over its Qwen3 baselines on Terminal-Bench 2.0.
| Model | Size | Qwen3 Base Accuracy | Nemotron-Terminal Accuracy |
| :--- | :---: | :---: | :---: |
| Nemotron-Terminal-8B | 8B | 2.47% | **13.0%** |
| Nemotron-Terminal-14B | 14B | 4.04% | **20.2%** |
| Nemotron-Terminal-32B | 32B | 3.37% | **27.4%** |
## Usage
The models are trained using the **Terminus 2** scaffolding and emit a structured JSON response on each turn.
For evaluation on Terminal-Bench 2.0, we encourage using the Terminus 2 scaffolding to stay consistent with training.
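A minimal inference sketch with `transformers` is below. The repository id, prompt, and generation settings are illustrative assumptions, not part of the official scaffolding; adapt them to the released checkpoints and the Terminus 2 harness.

```python
# Hedged sketch: one agent turn with a Nemotron-Terminal checkpoint.
# MODEL_ID is an assumed repo id based on the family name.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "nvidia/Nemotron-Terminal-8B"  # assumption; substitute the real repo id

def generate_turn(prompt: str, model_id: str = MODEL_ID, max_new_tokens: int = 512) -> str:
    """Return the raw model completion for one turn (expected to be JSON)."""
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.bfloat16, device_map="auto"
    )
    messages = [{"role": "user", "content": prompt}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(input_ids, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)
```

In practice the Terminus 2 harness manages the prompt construction and terminal state; this helper only shows the bare model call.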
### Expected Output Format
```json
{
"analysis": "Analysis of the current terminal state...",
"plan": "Step-by-step plan for the next command...",
"commands": [
{
"keystrokes": "ls -la\n",
"duration": 0.1
}
],
"task_complete": false
}
```
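When driving the model outside the Terminus 2 harness, the JSON turn above can be parsed and minimally validated before executing any keystrokes. A sketch using only the standard library follows; the field names come from the format above, while the helper name and checks are our own.

```python
# Hedged sketch: decode and minimally validate one Nemotron-Terminal JSON turn.
import json

REQUIRED_KEYS = {"analysis", "plan", "commands", "task_complete"}

def parse_turn(raw: str) -> dict:
    """Decode the model's JSON response and check the expected fields."""
    turn = json.loads(raw)
    missing = REQUIRED_KEYS - turn.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    for cmd in turn["commands"]:
        if "keystrokes" not in cmd:
            raise ValueError("each command needs a 'keystrokes' string")
    return turn

# Example response matching the format documented above.
raw = json.dumps({
    "analysis": "Empty shell prompt.",
    "plan": "List the working directory.",
    "commands": [{"keystrokes": "ls -la\n", "duration": 0.1}],
    "task_complete": False,
})
turn = parse_turn(raw)
```

The `keystrokes` strings are what the scaffolding would feed to the pseudo-terminal, so validating them before execution is a cheap safety check.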
## 📜 Citation
If you use these models or the accompanying dataset in your research, please cite the following work:
```bibtex
@misc{pi2026dataengineeringscalingllm,
title={On Data Engineering for Scaling LLM Terminal Capabilities},
author={Renjie Pi and Grace Lam and Mohammad Shoeybi and Pooya Jannaty and Bryan Catanzaro and Wei Ping},
year={2026},
eprint={2602.21193},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2602.21193},
}
```