---
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
tags:
- agent
- text-generation-inference
---

# AgentCPM-Report: Gemini-2.5-pro-DeepResearch Level Local DeepResearch

This repository contains **AgentCPM-Report**, an 8B-parameter deep research agent introduced in the paper [AgentCPM-Report: Interleaving Drafting and Deepening for Open-Ended Deep Research](https://arxiv.org/abs/2602.06540). AgentCPM-Report uses a **Writing As Reasoning Policy (WARP)** to dynamically revise outlines during report generation, alternating between evidence-based drafting and reasoning-driven deepening to produce high-quality, long-form research reports.

## Links & Resources

### 📊 AgentCPM-Report Models

- **[AgentCPM-Report](https://huggingface.co/openbmb/AgentCPM-Report)**: the Gemini-2.5-pro-DeepResearch-level local DeepResearch model
- **[AgentCPM-Report-GGUF](https://huggingface.co/openbmb/AgentCPM-Report-GGUF)**: the GGUF version of AgentCPM-Report

### 🤖 AgentCPM-Explore Models

- **[AgentCPM-Explore](https://huggingface.co/openbmb/AgentCPM-Explore)**: the first open-source 4B-parameter agent model to appear on 8 widely used long-horizon agent benchmarks
- **[AgentCPM-Explore-GGUF](https://huggingface.co/openbmb/AgentCPM-Explore-GGUF)**: the GGUF version of AgentCPM-Explore

### 💻 Code & Framework

- **[AgentCPM](https://github.com/OpenBMB/AgentCPM)**: the code for the AgentCPM series
- **[UltraRAG](https://github.com/OpenBMB/UltraRAG)**: a RAG framework: less code, a lower barrier, faster deployment

## News

- [2026-01-20] 🚀🚀🚀 We open-sourced AgentCPM-Report, built on MiniCPM4.1-8B, which matches top closed-source commercial systems such as Gemini-2.5-pro-DeepResearch in report generation.

## Overview

AgentCPM-Report is an open-source large language model agent jointly developed by [THUNLP](https://nlp.csai.tsinghua.edu.cn), Renmin University of China ([RUCBM](https://github.com/RUCBM)), and [ModelBest](https://modelbest.cn/en). It is built on the 8B-parameter [MiniCPM4.1](https://github.com/OpenBMB/MiniCPM) base model, accepts user instructions as input, and autonomously generates long-form reports.
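The interleaved drafting-and-deepening loop described above can be sketched as a toy control policy. Everything below (`retrieve_evidence`, `deepen_outline`, the one-level deepening rule) is a hypothetical illustration of the idea, not the actual WARP implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Outline:
    sections: list                 # ordered section titles; revised as writing proceeds
    drafts: dict = field(default_factory=dict)

def retrieve_evidence(topic: str, section: str) -> str:
    # Stand-in for a retrieval round (e.g., a query against a local RAG index).
    return f"evidence on '{section}' for '{topic}'"

def draft_section(section: str, evidence: str) -> str:
    # Evidence-based drafting: turn retrieved material into section prose.
    return f"## {section}\n{evidence}"

def deepen_outline(outline: Outline, drafted: str) -> None:
    # Reasoning-driven deepening: after drafting a section, the agent may
    # decide the outline needs a finer-grained follow-up section. A single
    # level of deepening keeps this toy loop finite.
    if " / " not in drafted:
        i = outline.sections.index(drafted)
        outline.sections.insert(i + 1, f"{drafted} / deeper dive")

def write_report(topic: str, outline: Outline) -> str:
    # Interleave drafting and deepening: draft the next pending section,
    # then revisit the outline, instead of fixing the outline up front.
    while any(s not in outline.drafts for s in outline.sections):
        section = next(s for s in outline.sections if s not in outline.drafts)
        evidence = retrieve_evidence(topic, section)
        outline.drafts[section] = draft_section(section, evidence)
        deepen_outline(outline, section)
    return "\n\n".join(outline.drafts[s] for s in outline.sections)

report = write_report("local deep research", Outline(sections=["Methods"]))
```

The key design point this toy loop captures is that the outline is mutable state revised after each drafting step, rather than a plan fixed before writing begins.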
Key highlights:

- **Extreme Performance, Minimal Footprint**: Through an average of 40 rounds of deep retrieval and nearly 100 rounds of chain-of-thought reasoning, it achieves comprehensive information mining and restructuring, enabling edge-side models to produce logically rigorous, deeply insightful long-form articles of tens of thousands of words. With just 8 billion parameters, it delivers performance on par with top-tier closed-source systems on deep research tasks.
- **Physical Isolation, Local Security**: Designed for high-privacy scenarios, it supports fully offline, agile local deployment, eliminating the risk of cloud data leaks. Leveraging our UltraRAG framework, it efficiently mounts and understands your local private knowledge base, securely transforming core confidential data into valuable professional decision-making reports without the data ever leaving its domain.

## Demo Cases
**You can watch our demo video here: [Demo](https://www.youtube.com/watch?v=d5XWONt0PWo) 🔗**

## Quick Start

### Docker Deployment
**You can watch our tutorial video here: [Tutorial](https://www.youtube.com/watch?v=ze8qJRrass4) 🔗**

We provide a minimal one-click `docker-compose` deployment integrated with UltraRAG, including the RAG framework UltraRAG 2.0, the model inference framework vLLM, and the vector database Milvus. If you want CPU inference, we also provide a llama.cpp-based version for GGUF models: just switch `docker-compose.yml` to `docker-compose.cpu.yml`.

```bash
git clone git@github.com:OpenBMB/UltraRAG.git
cd UltraRAG
git checkout agentcpm-report-demo
cd agentcpm-report-demo
cp env.example .env
docker-compose -f docker-compose.yml up -d --build
docker-compose -f docker-compose.yml logs -f ultrarag-ui
```

The first startup pulls images, downloads the model, and configures the environment, which takes about 30 minutes. Then open `http://localhost:5050`; if you can see the UI, your deployment succeeded. Follow the UI instructions to upload local files, chunk them, and build indexes; then, in the Chat section, select AgentCPM-Report in the pipeline to start your workflow.

(Optional) You can import [Wiki2024](https://modelscope.cn/datasets/UltraRAG/UltraRAG_Benchmark/tree/master/corpus/wiki24) as the writing database.

You can read more tutorials about AgentCPM-Report in the [documentation](https://ultrarag.openbmb.cn/pages/en/demo/deepresearch).

## Evaluation

Experiments on DeepResearch Bench, DeepConsult, and DeepResearch Gym demonstrate that AgentCPM-Report outperforms leading closed-source systems, with substantial gains in Insight. Detailed benchmark results can be found in the associated research paper.

## Acknowledgements

This project would not be possible without the support and contributions of the open-source community.
During development, we referred to and used multiple excellent open-source frameworks, models, and data resources, including [verl](https://github.com/volcengine/verl), [UltraRAG](https://github.com/OpenBMB/UltraRAG), [MiniCPM4.1](https://github.com/OpenBMB/MiniCPM), and [SurveyGo](https://surveygo.modelbest.cn/).

## Citation

If **AgentCPM-Report** is helpful for your research, please cite it as follows:

```bibtex
@misc{li2026agentcpmreport,
  title={AgentCPM-Report: Interleaving Drafting and Deepening for Open-Ended Deep Research},
  author={Yishan Li and Wentong Chen and Yukun Yan and Mingwei Li and Sen Mei and Xiaorong Wang and Kunpeng Liu and Xin Cong and Shuo Wang and Zhong Zhang and Yaxi Lu and Zhenghao Liu and Yankai Lin and Zhiyuan Liu and Maosong Sun},
  year={2026},
  eprint={2602.06540},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
  url={https://arxiv.org/abs/2602.06540},
}
```