A Controlled Diagnostic Study of Hardware-Induced Distortions in Hardware-Aware Training
Abstract
TL;DR: A diagnostic framework models hardware non-idealities as structured perturbations of the forward operator, determining which are compensable by hardware-aware training and guiding hardware-software co-design decisions.
Hardware-aware training (HAT) is widely used to improve the robustness of neural networks on non-ideal AI accelerators, such as analog in-memory computing (IMC) systems. However, not all hardware-induced distortions are equally compensable by training. This paper presents a diagnostic framework that models hardware non-idealities as structured perturbations of the forward operator and evaluates their compatibility with gradient-based optimization. We analyze six representative perturbation classes (read noise, variability, drift, stuck-at faults, IR-drop, and ADC discretization) and identify three key diagnostics: gradient expectation consistency, bounded gradient variance, and non-degenerate sensitivity. Our results show a clear separation between perturbations that can be compensated by HAT and those that consistently break optimization. This provides practical guidance for hardware-software co-design, clarifying which non-idealities can be addressed at the training level and which require circuit-, architecture-, or calibration-level mitigation. This study should be interpreted as a controlled empirical analysis under vanilla forward-perturbation HAT, rather than as a universal theory of hardware-aware training.
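The contrast between compensable and non-compensable perturbations can be illustrated with a minimal sketch. The code below (a hypothetical NumPy toy, not the authors' implementation) models two of the perturbation classes named in the abstract for a single linear layer: multiplicative read noise, whose zero mean keeps the expected HAT gradient close to the clean gradient (the gradient expectation consistency diagnostic), and a stuck-at fault, which zeroes the gradient through the faulty cell (a degenerate sensitivity that training cannot recover).

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.05  # read-noise standard deviation (illustrative value)

def hat_gradient(W, x, y, n_samples=5000):
    """Monte-Carlo estimate of the HAT gradient of the squared error
    L = 0.5 * ||W_eff @ x - y||^2 under multiplicative read noise,
    where W_eff = W * (1 + eps), eps ~ N(0, sigma^2), redrawn per
    forward pass (vanilla forward-perturbation HAT)."""
    g = np.zeros_like(W)
    for _ in range(n_samples):
        scale = 1.0 + rng.normal(0.0, sigma, size=W.shape)
        r = (W * scale) @ x - y       # residual of the noisy forward pass
        g += np.outer(r, x) * scale   # chain rule through W_eff
    return g / n_samples

W = rng.normal(size=(3, 4))
x = rng.normal(size=4)
y = np.zeros(3)

# Read noise: expected HAT gradient matches the clean gradient up to an
# O(sigma^2) bias and Monte-Carlo error, so optimization can compensate.
g_hat = hat_gradient(W, x, y)
g_clean = np.outer(W @ x - y, x)

# Stuck-at fault: cell (0, 0) is pinned to zero, so the forward output
# never depends on W[0, 0] and its gradient vanishes identically --
# the loss is flat along that coordinate no matter how long we train.
mask = np.ones_like(W)
mask[0, 0] = 0.0
g_fault = np.outer((W * mask) @ x - y, x) * mask
```

The same structure carries over to the other classes: drift and IR-drop deform the forward operator in a state-dependent way, while ADC discretization introduces zero-gradient plateaus, which is why the three diagnostics, rather than a single noise magnitude, determine trainability.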