BrowseComp Reproducibility
Hi Xiaomi MiMo team, thank you so much for open-sourcing such impressive models and sharing your research!
A question regarding the reproducibility of the MiMo V2 Flash general-agent benchmarks: how can the BrowseComp evaluation results be replicated? Is the search-agent and context-management framework you used for the BrowseComp evaluation open-source, or do you plan to open-source it?
May I also ask the same question about the code agent used for SWE-Bench Verified? Are there any plans to open-source that framework?
Thanks again for your great work! 🙏