Characterizing Mobile SoC for Accelerating Heterogeneous LLM Inference
Abstract
Heterogeneous parallel processing across GPU and NPU accelerators in mobile systems significantly improves LLM inference speed while maintaining system efficiency.
With the rapid advancement of artificial intelligence technologies such as ChatGPT, AI agents, and video generation, contemporary mobile systems have begun integrating these AI capabilities on local devices to enhance privacy and reduce response latency. To meet the computational demands of AI tasks, current mobile SoCs are equipped with diverse AI accelerators, including GPUs and Neural Processing Units (NPUs). However, these heterogeneous processors have not been comprehensively characterized, and existing designs typically leverage only a single AI accelerator for LLM inference, leading to suboptimal use of computational resources and memory bandwidth. In this paper, we first summarize the key performance characteristics of the heterogeneous processors and of SoC memory bandwidth. Drawing on these observations, we propose heterogeneous parallel mechanisms that fully exploit the computational power and memory bandwidth of both the GPU and the NPU. We further design a fast synchronization mechanism between heterogeneous processors that leverages the unified memory architecture. Building on these techniques, we present HeteroInfer, the fastest LLM inference engine on mobile devices that supports GPU-NPU heterogeneous execution. Evaluation shows that HeteroInfer delivers a 1.34x to 6.02x end-to-end speedup over state-of-the-art GPU-only and NPU-only LLM engines, while introducing negligible interference with other applications.
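The abstract names two ideas without implementation detail: splitting one operator's work across two processors, and synchronizing them cheaply through the SoC's unified memory. As a minimal sketch of that pattern, the C++ example below row-partitions a matrix-vector product between two host threads standing in for the GPU and NPU dispatchers, and replaces a heavyweight driver-level fence with an atomic counter polled in cache-coherent shared memory. The static split ratio and all identifiers here are hypothetical illustrations, not the paper's actual partitioning policy or runtime API.

```cpp
// Hypothetical sketch of GPU-NPU tensor partitioning with a polling sync
// flag in unified memory. Two host threads stand in for the accelerators.
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

// Row-partitioned matrix-vector product: each "accelerator" computes a slice.
static void matvec_slice(const std::vector<float>& W, const std::vector<float>& x,
                         std::vector<float>& y, int cols, int row_begin, int row_end,
                         std::atomic<int>& done_counter) {
  for (int r = row_begin; r < row_end; ++r) {
    float acc = 0.0f;
    for (int c = 0; c < cols; ++c) acc += W[r * cols + c] * x[c];
    y[r] = acc;
  }
  // Publish completion; on a cache-coherent unified-memory SoC, the other
  // processor observes this store without a driver round trip.
  done_counter.fetch_add(1, std::memory_order_release);
}

int main() {
  const int rows = 8, cols = 4;
  std::vector<float> W(rows * cols, 1.0f), x(cols, 2.0f), y(rows, 0.0f);

  // Static split (assumption): a real engine would size each slice from the
  // profiled relative throughput of the GPU and NPU.
  const int gpu_rows = 5;
  std::atomic<int> done{0};

  std::thread gpu(matvec_slice, std::cref(W), std::cref(x), std::ref(y),
                  cols, 0, gpu_rows, std::ref(done));
  std::thread npu(matvec_slice, std::cref(W), std::cref(x), std::ref(y),
                  cols, gpu_rows, rows, std::ref(done));

  // Fast synchronization: spin on the shared counter instead of blocking
  // in a heavyweight fence until both slices are published.
  while (done.load(std::memory_order_acquire) < 2) { /* poll */ }

  gpu.join(); npu.join();
  for (int r = 0; r < rows; ++r) std::printf("y[%d] = %.1f\n", r, y[r]);
  return 0;
}
```

On real hardware the two slices would be submitted to GPU and NPU command queues rather than host threads, but the release/acquire handshake over shared memory is the kind of lightweight cross-processor synchronization the abstract alludes to.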
Community
The following papers, similar to this one, were recommended by the Semantic Scholar API:
- HeRo: Adaptive Orchestration of Agentic RAG on Heterogeneous Mobile SoC (2026)
- EdgeFlow: Fast Cold Starts for LLMs on Mobile Devices (2026)
- MATCHA: Efficient Deployment of Deep Neural Networks on Multi-Accelerator Heterogeneous Edge SoCs (2026)
- Tessera: Unlocking Heterogeneous GPUs through Kernel-Granularity Disaggregation (2026)
- Understand and Accelerate Memory Processing Pipeline for Disaggregated LLM Inference (2026)
- FlashMem: Supporting Modern DNN Workloads on Mobile with GPU Memory Hierarchy Optimizations (2026)
- SkipOPU: An FPGA-based Overlay Processor for Large Language Models with Dynamically Allocated Computation (2026)