arxiv:2601.03708

MHRC-Bench: A Multilingual Hardware Repository-Level Code Completion Benchmark

Published on Jan 7
Abstract

Researchers introduce MHRC-Bench, the first repository-level benchmark for multilingual hardware code completion that covers three major hardware design styles and includes code-structure-level annotations derived from syntax tree analysis.

AI-generated summary

Large language models (LLMs) have achieved strong performance on code completion tasks in general-purpose programming languages. However, existing repository-level code completion benchmarks focus almost exclusively on software code and largely overlook hardware description languages. In this work, we present MHRC-Bench, consisting of MHRC-Bench-Train and MHRC-Bench-Eval, the first benchmark designed for multilingual hardware code completion at the repository level. Our benchmark covers three major hardware design coding styles, and each completion target is annotated with code-structure-level and hardware-oriented semantic labels derived from concrete syntax tree analysis. We conduct a comprehensive evaluation of models on MHRC-Bench-Eval; the results and analysis demonstrate the effectiveness of MHRC-Bench.
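To make the task format concrete, the sketch below shows one plausible shape for a repository-level hardware completion target with the two annotation types the abstract describes (a code-structure label and a hardware-oriented semantic label). This is a toy illustration, not the paper's pipeline: the field names, labels, and masking scheme are hypothetical, and the real benchmark derives its labels from concrete syntax tree analysis rather than line masking.

```python
# Toy illustration (hypothetical format, not MHRC-Bench's actual schema):
# mask one line of a Verilog module as the completion target and attach
# a code-structure label and a hardware-oriented semantic label.

verilog_module = """\
module counter (input clk, input rst, output reg [3:0] q);
  always @(posedge clk) begin
    if (rst) q <= 4'b0000;
    else     q <= q + 1;
  end
endmodule
"""

def make_completion_task(source, target_line, structure_label, semantic_label):
    """Mask one line of `source` as the completion target and attach
    the two kinds of labels described in the abstract."""
    lines = source.splitlines()
    ground_truth = lines[target_line]
    lines[target_line] = "    // <COMPLETE HERE>"
    return {
        "prefix": "\n".join(lines[: target_line + 1]),
        "suffix": "\n".join(lines[target_line + 1 :]),
        "ground_truth": ground_truth,
        # In the paper these come from concrete syntax tree analysis;
        # here they are hand-assigned placeholder values.
        "structure_label": structure_label,
        "semantic_label": semantic_label,
    }

task = make_completion_task(
    verilog_module,
    target_line=3,  # the `else` branch of the counter update
    structure_label="if_else_statement",
    semantic_label="sequential_logic",
)
print(task["ground_truth"].strip())  # → else     q <= q + 1;
```

A model would then be given `prefix` and `suffix` (plus repository context) and scored against `ground_truth`, with results broken down by the attached labels.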

