MM-ACT: Learn from Multimodal Parallel Generation to Act Paper • 2512.00975 • Published Nov 30, 2025 • 6
ARM-Thinker: Reinforcing Multimodal Generative Reward Models with Agentic Tool Use and Visual Reasoning Paper • 2512.05111 • Published Dec 4, 2025 • 49
Human-Agent Collaborative Paper-to-Page Crafting for Under $0.1 Paper • 2510.19600 • Published Oct 22, 2025 • 69
InternSVG: Towards Unified SVG Tasks with Multimodal Large Language Models Paper • 2510.11341 • Published Oct 13, 2025 • 35
Vlaser: Vision-Language-Action Model with Synergistic Embodied Reasoning Paper • 2510.11027 • Published Oct 13, 2025 • 23
MCPMark: A Benchmark for Stress-Testing Realistic and Comprehensive MCP Use Paper • 2509.24002 • Published Sep 28, 2025 • 174
InternVL3.5: Advancing Open-Source Multimodal Models in Versatility, Reasoning, and Efficiency Paper • 2508.18265 • Published Aug 25, 2025 • 212
ArchCAD-400K: An Open Large-Scale Architectural CAD Dataset and New Baseline for Panoptic Symbol Spotting Paper • 2503.22346 • Published Mar 28, 2025 • 3