---
license: mit
base_model:
- Qwen/Qwen3-4B-Instruct-2507
---
## LiteCoder-4b-Terminal-preview
**LiteCoder-4b-Terminal-preview** is part of our series of models specialized in terminal-based interactions and stems from our recent efforts to develop capable small and medium-sized code agent models. The model is fine-tuned from `Qwen3-4B-Instruct-2507` on the [LiteCoder-SFT-Terminal-preview](https://huggingface.co/datasets/Lite-Coder/LiteCoder-SFT-Terminal-preview) dataset.
**Notably, this model achieves competitive results using fewer than 1,000 training samples.** By relying on a fully synthetic data pipeline, without converting any existing datasets, we achieved significant gains on the challenging Terminal Bench, matching the performance of leading open-source models with extreme data efficiency.
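
A minimal inference sketch using 🤗 Transformers follows. The prompt and generation settings are illustrative placeholders, not tuned recommendations:

```python
# Minimal inference sketch for LiteCoder-4b-Terminal-preview.
# The prompt and max_new_tokens below are illustrative, not tuned settings.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Lite-Coder/LiteCoder-4b-Terminal-preview"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [
    {"role": "user", "content": "List all files larger than 100 MB under /var/log."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```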
## Released Artifacts
All artifacts below were released on 2025/12/17.

| **Artifact** | **Type** | **Link** |
| --- | --- | --- |
| LiteCoder-4b-Terminal-preview | Model | https://huggingface.co/Lite-Coder/LiteCoder-4b-Terminal-preview |
| LiteCoder-SFT-Terminal-preview | Dataset | https://huggingface.co/datasets/Lite-Coder/LiteCoder-SFT-Terminal-preview |
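
The SFT dataset can be inspected with 🤗 Datasets. A quick sketch; the `"train"` split name is an assumption, and the column names depend on the dataset's actual schema:

```python
# Quick look at the SFT dataset; the "train" split name is an assumption.
from datasets import load_dataset

ds = load_dataset("Lite-Coder/LiteCoder-SFT-Terminal-preview", split="train")
print(ds)     # number of rows and column names
print(ds[0])  # one raw sample
```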
## Results
Our models achieve competitive results on **Terminal Bench**, significantly outperforming general-purpose models of similar (and even larger) sizes.

**Terminal Bench 1.0 Performance**

| **Model** | **Agent** | **Score** |
| --- | --- | --- |
| **LiteCoder-30a3b-Terminal-preview** | Terminus 2 | **18.75%** |
| Qwen3-30B-A3B-Nex-N1 | Terminus 2 | 18.75% |
| **LiteCoder-4b-Terminal-preview** | Terminus 2 | **13.75%** |
| Qwen3-30B-A3B-Instruct | Terminus 2 | 12.5% |
| Qwen3-4B-Instruct | Terminus 2 | 5.0% |

**Terminal Bench 2.0 Performance**

| **Model** | **Agent** | **Score** |
| --- | --- | --- |
| **LiteCoder-30a3b-Terminal-preview** | Terminus 2 | **5.6%** |
| **LiteCoder-4b-Terminal-preview** | Terminus 2 | **3.3%** |
| Qwen3-32B | Terminus 2 | 1.9% |
| InternLM3-8B-Nex-N1 | Terminus 2 | 0% |
| Qwen3-8B | Terminus 2 | 0% |
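
To plug the model into a terminal agent harness such as Terminus 2, one common setup is to expose it through an OpenAI-compatible endpoint. A sketch, assuming a local server is already running (e.g. via vLLM); the endpoint URL, prompt, and harness wiring are all placeholders:

```python
# Sketch: querying the model through an OpenAI-compatible endpoint, as an
# agent harness like Terminus 2 typically would. Assumes a local server is
# already running, e.g.:  vllm serve Lite-Coder/LiteCoder-4b-Terminal-preview
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="Lite-Coder/LiteCoder-4b-Terminal-preview",
    messages=[{"role": "user", "content": "Show disk usage per directory, sorted."}],
    max_tokens=512,
)
print(response.choices[0].message.content)
```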
## Citation
```bibtex
@misc{litecoder2025,
  title={LiteCoder: Advancing Small and Medium-sized Code Agents},
  author={Xiaoxuan Peng and Xinyu Lu and Kaiqi Zhang and Taosong Fang and Boxi Cao and Yaojie Lu},
  year={2025},
}
```
## Future Directions
- **Scaling Environments:** Expanding the diversity of Docker environments and teacher models to improve generalization.
- **Agentic RL:** Implementing Reinforcement Learning specifically for multi-turn agentic workflows.
## Team & Contributions
- **Xiaoxuan Peng:** Main Contributor
- **[Xinyu Lu](https://scholar.google.com/citations?user=_OsLG8EAAAAJ&hl=zh-CN):** Project Lead
- **Kaiqi Zhang:** Contributor
- **Taosong Fang:** Contributor
- **Boxi Cao:** Contributor
- **Yaojie Lu:** Contributor
## Acknowledgements
LiteCoder builds upon multiple open-source projects, including [Harbor](https://github.com/laude-institute/harbor). The models are trained using [AutoAlign](https://github.com/icip-cas/AutoAlign).
## Join Us
Join the discussion on our [Discord](https://discord.gg/EX9qZe8B).